diff --git a/spaces/101-5/gpt4free/g4f/.v1/CONTRIBUTING.md b/spaces/101-5/gpt4free/g4f/.v1/CONTRIBUTING.md deleted file mode 100644 index 932dc30ff1665b0a94325a5d37cf4cf4337f2910..0000000000000000000000000000000000000000 --- a/spaces/101-5/gpt4free/g4f/.v1/CONTRIBUTING.md +++ /dev/null @@ -1,8 +0,0 @@ -gpt4free logo - -### Please, follow these steps to contribute: -1. Reverse a website from this list: [sites-to-reverse](https://github.com/xtekky/gpt4free/issues/40) -2. Add it to [./testing](https://github.com/xtekky/gpt4free/tree/main/testing) -3. Refractor it and add it to [./gpt4free](https://github.com/xtekky/gpt4free/tree/main/gpt4free) - -### We will be grateful to see you as a contributor! diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Comedy Nights With Kapil 720p 2nd November 2014.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Comedy Nights With Kapil 720p 2nd November 2014.md deleted file mode 100644 index daee658c7a29c215db0933db72318a95c3421933..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Comedy Nights With Kapil 720p 2nd November 2014.md +++ /dev/null @@ -1,18 +0,0 @@ -
-

Comedy Nights with Kapil: Watch the hilarious episode of 2nd November 2014 in HD

-

If you are a fan of comedy shows, you must have watched Comedy Nights with Kapil, the popular Indian comedy show hosted by Kapil Sharma. The show features celebrity guests who are interviewed by Kapil and his team of comedians in a humorous way.

-

One of the most memorable episodes of the show was aired on 2nd November 2014, when Kapil invited the cast of Happy New Year, a blockbuster Bollywood movie starring Shah Rukh Khan, Deepika Padukone, Abhishek Bachchan, Sonu Sood, Boman Irani and Vivaan Shah. The episode was full of laughter, fun and entertainment as the stars shared their experiences of making the movie and also participated in some hilarious games and skits with Kapil and his team.

-

comedy nights with kapil 720p 2nd november 2014


Download File » https://byltly.com/2uKv28



-

If you missed this episode or want to watch it again, you can now enjoy it in high definition (HD) quality. You can download or stream the episode in 720p resolution from various online platforms. You can also watch it on YouTube or on the official website of Colors TV, the channel that broadcasts the show.

-

Don't miss this opportunity to watch one of the best episodes of Comedy Nights with Kapil in HD quality. You will surely have a great time watching Kapil and his guests cracking jokes and making you laugh.

- -

In this episode, you will also see another special guest, Saina Nehwal, the ace Indian badminton player who has won many laurels for the country. Saina joined Kapil and the Happy New Year team on the stage and shared some interesting facts about her life and career. She also showed her badminton skills and played a friendly match with Shah Rukh Khan and Kapil Sharma.

-

The episode was a treat for the fans of both comedy and sports, as they got to see their favorite stars having a blast on the show. The episode also had some hilarious moments, such as when Kapil tried to flirt with Deepika Padukone, when Boman Irani imitated Amitabh Bachchan, when Sonu Sood lifted Kapil in his arms, and when Vivaan Shah danced with Saina Nehwal.

-

You can watch all these funny scenes and more in the HD version of the episode. You will not regret watching this episode, as it will make you laugh out loud and also inspire you with the stories of success and hard work of the guests. So, what are you waiting for? Download or stream Comedy Nights with Kapil 720p 2nd November 2014 episode now and enjoy the comedy extravaganza.

- -

This episode was not only entertaining but also informative, as you will get to know more about the lives and achievements of the guests. You will learn how Shah Rukh Khan overcame his injuries and challenges to make Happy New Year, how Deepika Padukone balanced her work and personal life, how Abhishek Bachchan dealt with his critics and trolls, how Sonu Sood maintained his fitness and physique, how Boman Irani mastered different accents and languages, and how Vivaan Shah made his debut in Bollywood.

-

You will also get to know more about Saina Nehwal, who is one of the most successful and inspiring sportspersons of India. You will learn how she started playing badminton at a young age, how she trained under different coaches, how she won several national and international tournaments, how she became the world number one in women's singles, how she represented India at the Olympics and other events, and how she balanced her studies and sports.

-

This episode will surely motivate you to pursue your dreams and passions with dedication and determination. You will also get to see the lighter side of the guests, as they crack jokes, sing songs, dance and have fun with Kapil and his team. You will also witness some emotional moments, such as when Kapil thanked Shah Rukh Khan for supporting him and his show, when Shah Rukh Khan praised Kapil for his talent and hard work, when Saina Nehwal gifted Kapil a badminton racket signed by her, and when Kapil presented Saina Nehwal a special cake on her birthday.

-

7b8c122e87
-
-
\ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Cyber Chrono Avec Crack Torrent Mega How to Download and Play the Best Trivia Game Ever.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Cyber Chrono Avec Crack Torrent Mega How to Download and Play the Best Trivia Game Ever.md deleted file mode 100644 index 2ee8d1743fea5a118b9a466e0e109acbe53a9f90..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Cyber Chrono Avec Crack Torrent Mega How to Download and Play the Best Trivia Game Ever.md +++ /dev/null @@ -1,97 +0,0 @@ - -

Cyber Chrono Avec Crack Torrent Mega: What is it and how to get it?

-

If you are a fan of online games that test your knowledge of pop culture and history, you might have heard of Cyber Chrono. It is a popular game that combines elements of trivia, adventure, puzzle and simulation genres. In this article, we will tell you everything you need to know about Cyber Chrono Avec Crack Torrent Mega, which is a way to download and play the game for free.

-

Cyber Chrono Avec Crack Torrent Mega


Download File ✔✔✔ https://byltly.com/2uKxUA



-

What is Cyber Chrono?

-

Cyber Chrono is a game that takes place in a futuristic world where time travel is possible. You play as a hacker who can use a device called Chrono to rewind time and change the course of events. You can explore different scenarios based on historical and fictional events, such as World War II, ancient Egypt, medieval Europe, etc.

-

The game features a variety of characters that you can interact with, such as famous figures like Albert Einstein, Cleopatra, Leonardo da Vinci, etc. You can also meet other hackers who have their own agendas and motives. The game has multiple endings depending on your choices and actions.

-

The game also challenges your knowledge of pop culture and history by asking you trivia questions that affect the outcome of the scenarios. For example, you might have to answer questions about movies, music, literature, art, etc. The game has a dynamic difficulty level that adapts to your performance.

-

What is a crack torrent mega?

-

    A crack torrent mega is a term for a file that contains a cracked version of a game or program and can be downloaded through a peer-to-peer protocol such as BitTorrent. A cracked version is a modified version that bypasses the security measures or license restrictions of the original version.
    
    

-

A crack torrent mega has some advantages and disadvantages compared to buying or downloading the official version of the game or software. Some of the advantages are:

-

Cyber Chrono: The Ultimate Guide to Cracking the Game and Enjoying it for Free[^2^]
-How to Download Cyber Chrono Full Version with Crack and Torrent
-Cyber Chrono Crack + Torrent Download Link (100% Working)
-Cyber Chrono Online Game: Test Your Pop Culture Knowledge and Have Fun[^1^]
-Cyber Chrono Free Download PC Game Cracked by SKIDROW
-Cyber Chrono Torrent Mega: How to Install and Play the Game
 -Cyber Chrono Avec Crack: How to Download and Play the Game for Free
    
-Cyber Chrono Game Review: Is it Worth Playing?
-Cyber Chrono Cheats, Tips and Tricks: How to Beat the Game
 -Cyber Chrono Avec Torrent Mega: The Complete Guide to Cracking and Enjoying the Game
    
-Cyber Chrono System Requirements: Can Your PC Run the Game?
-Cyber Chrono Gameplay: What to Expect from the Game
-Cyber Chrono Avec Crack Torrent Mega: How to Avoid Viruses and Malware
-Cyber Chrono Skidrow Codex Games: Download Torrent PC Games for Free
-Cyber Chrono Steam Key: How to Get the Game Legally
 -Cyber Chrono Avec Crack Torrent Mega: The Best Sites to Download the Game
    
-Cyber Chrono Mods: How to Enhance Your Gaming Experience
-Cyber Chrono Multiplayer: How to Play with Friends Online
-Cyber Chrono Avec Crack Torrent Mega: How to Solve Common Problems and Errors
-Cyber Chrono Patch Notes: What's New in the Latest Update
-Cyber Chrono DLCs: How to Access Extra Content and Features
-Cyber Chrono Nulleds: How to Get Premium Games for Free
-Cyber Chrono Avec Crack Torrent Mega: How to Support the Developers and Buy the Game
-Cyber Chrono Walkthrough: How to Complete the Game
-Cyber Chrono Achievements: How to Unlock All of Them

- -

Some of the disadvantages are:

- -

How to download Cyber Chrono Avec Crack Torrent Mega?

-

If you want to try Cyber Chrono Avec Crack Torrent Mega, you will need to follow these steps:

-
    -
  1. Find a reliable torrent site that offers the game file. You can use a search engine or ask for recommendations from other users.
  2. -
  3. Download and install a torrent client software that allows you to download files from torrent sites. Some examples are uTorrent, BitTorrent, qBittorrent, etc.
  4. -
  5. Download the game file from the torrent site using your torrent client software. The file size may vary depending on the source.
  6. -
  7. Extract the game file using a file archiver software that can handle compressed files. Some examples are WinRAR, 7-Zip, PeaZip, etc.
  8. -
  9. Run the game executable file and enjoy playing Cyber Chrono Avec Crack Torrent Mega.
  10. -
-

How to play Cyber Chrono Avec Crack Torrent Mega?

-

Playing Cyber Chrono Avec Crack Torrent Mega is similar to playing any other online game. However, here are some tips and tricks that can help you enjoy the game more:

- -

What are the risks and benefits of playing Cyber Chrono Avec Crack Torrent Mega?

-

Playing Cyber Chrono Avec Crack Torrent Mega has some risks and benefits that you should be aware of before deciding whether to try it or not.

-

Risks:

- -

Benefits:

- -

Conclusion

-

 In conclusion, Cyber Chrono Avec Crack Torrent Mega offers a free way to play the game without buying or downloading the official version. However, it also has some risks that may affect your computer or data or cause legal issues. Therefore, you should be careful and responsible when choosing this option.
    

-

FAQs

-

Here are some frequently asked questions about Cyber Chrono Avec Crack Torrent Mega:

-
    -
  1. What are the system requirements for playing Cyber Chrono Avec Crack Torrent Mega?
  2. -

    The game requires a Windows PC with at least 4 GB of RAM, 2 GB of free disk space, a 2 GHz processor and a DirectX 9 compatible graphics card.

    -
  3. Is Cyber Chrono Avec Crack Torrent Mega safe to download and play?
  4. -

    There is no guarantee that the game file is safe or virus-free. You should always scan the file with an antivirus software before opening it. You should also backup your data and use a firewall or VPN to protect your online privacy.

    -
  5. Can I play Cyber Chrono Avec Crack Torrent Mega online with other players?
  6. -

    No, the game does not support online multiplayer mode. You can only play offline with your computer or with a friend on the same device.

    -
  7. Can I update Cyber Chrono Avec Crack Torrent Mega to get new features or content?
  8. -

    No, the game does not receive updates or support from the developers. You can only play the version that you downloaded from the torrent site.

    -
  9. Where can I find more information or help about Cyber Chrono Avec Crack Torrent Mega?
  10. -

    You can visit the official website of Cyber Chrono to learn more about the game and its features. You can also join online forums or communities where other players share their experiences and tips about the game.

    -
-

0a6ba089eb
-
-
\ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Microsoft Word 365 Free Benefits Features and Alternatives.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Microsoft Word 365 Free Benefits Features and Alternatives.md deleted file mode 100644 index c2e7c6d39f52215e75459519e382ce81bc456237..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Microsoft Word 365 Free Benefits Features and Alternatives.md +++ /dev/null @@ -1,53 +0,0 @@ - -

How to Download Microsoft Word 365 Free for Windows 10

-

 If you are looking for a way to download Microsoft Word 365 free for Windows 10, you are in luck. Microsoft Word 365 is one of the most popular and powerful word processors in the world, and you can try it free for one month with a few simple steps.
    

-

download microsoft word 365 free


Download File –––––>>> https://byltly.com/2uKyNt



-

 In this article, we will show you how to download Microsoft Word 365 free for Windows 10, what the benefits of using it are, and how to activate it with a valid license key.
    

-

How to Download Microsoft Word 365 Free for Windows 10

-

To download Microsoft Word 365 free for Windows 10, you need to follow these steps:

-
    -
  1. Go to the official Microsoft website and click on the "Try Office 365 for free" button.
  2. -
  3. Sign in with your Microsoft account or create one if you don't have one.
  4. -
  5. Choose the plan that suits your needs. You can choose between Office 365 Home, Office 365 Personal, or Office 365 Business.
  6. -
  7. Enter your payment details. Don't worry, you won't be charged until the end of the trial period, which is one month.
  8. -
  9. Download and install the Office 365 setup file on your Windows 10 PC.
  10. -
  11. Launch Microsoft Word 365 and enjoy its features.
  12. -
-

What are the Benefits of Using Microsoft Word 365?

-

Microsoft Word 365 is more than just a word processor. It is a cloud-based service that offers many benefits, such as:

-

- -

How to Activate Microsoft Word 365 with a Valid License Key

-

If you want to continue using Microsoft Word 365 after the trial period ends, you need to activate it with a valid license key. You can buy a license key from the Microsoft store or from a trusted third-party seller. To activate Microsoft Word 365 with a valid license key, you need to follow these steps:

-
    -
  1. Open Microsoft Word 365 and click on the "Account" tab.
  2. -
  3. Click on the "Change Product Key" button and enter your license key.
  4. -
  5. Follow the instructions on the screen and complete the activation process.
  6. -
  7. Restart Microsoft Word 365 and enjoy its full functionality.
  8. -
- -

What are the Alternatives to Microsoft Word 365?

-

Microsoft Word 365 is not the only word processor available in the market. There are some alternatives that you can try, such as:

- -

How to Uninstall Microsoft Word 365 from Windows 10

-

If you decide to uninstall Microsoft Word 365 from your Windows 10 PC, you need to follow these steps:

-
    -
  1. Go to the Start menu and click on the "Settings" icon.
  2. -
  3. Click on the "Apps" option and find Microsoft Office 365 in the list of installed apps.
  4. -
  5. Click on the "Uninstall" button and confirm your choice.
  6. -
  7. Wait for the uninstallation process to finish and restart your PC if prompted.
  8. -
-

Conclusion

-

In this article, we have shown you how to download Microsoft Word 365 free for Windows 10, what are the benefits of using it, how to activate it with a valid license key, what are the alternatives to it, and how to uninstall it from your PC. We hope you have found this article helpful and informative. If you have any questions or feedback, please feel free to leave a comment below.

ddb901b051
-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Crack Sphinx Iq 2021.md b/spaces/1gistliPinn/ChatGPT4/Examples/Crack Sphinx Iq 2021.md deleted file mode 100644 index 6a4d60187da66230d94bf7460d9b13d14c590135..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Crack Sphinx Iq 2021.md +++ /dev/null @@ -1,89 +0,0 @@ -
-

Crack Sphinx Iq: How to Download and Use the Best Software for Survey and Data Analysis

-

 If you are looking for powerful and reliable software for survey and data analysis, you might have heard of Sphinx iQ. This software is compatible with Windows and Mac (via emulator software on Mac) and offers a range of features and functions to help you create, manage, and analyze online surveys. But how can you get access to this software without paying a fortune? In this article, we will show you how to crack Sphinx iQ and use it for free.
    

-

Crack Sphinx Iq


Download ===> https://imgfil.com/2uxXMo



-

What is Sphinx iQ?

-

Sphinx iQ is a software developed by Le Sphinx, a French company that has been a reference for 30 years on the survey and data analysis software market. Sphinx iQ allows you to design and administer online surveys, collect and process data, and perform advanced statistical analysis. You can also use Sphinx iQ for implicit learning research, as it can help you encode and store sensorimotor information in your memory. Sphinx iQ has a user-friendly interface and a high customer satisfaction rating of 96%. It is used by 50,000 users in all private and public sectors every day.

-

How to Crack Sphinx iQ?

-

Cracking Sphinx iQ is not an easy task, as it requires some technical skills and knowledge. However, if you follow these steps carefully, you might be able to crack Sphinx iQ and use it for free.

-
    -
  1. Download the trial version of Sphinx iQ 2 from the official website: https://en.lesphinx-developpement.fr/contact-2-2/telechargement-logiciel/telechargement-sphinx-iq/
  2. -
  3. Install the software on your computer and run it.
  4. -
  5. Find the installation folder of Sphinx iQ 2 on your computer. It is usually located in C:\Program Files (x86)\Sphinx iQ 2.
  6. -
  7. Download a crack file for Sphinx iQ 2 from this link: https://trello.com/c/LfSket0Z/4-cle-sphinx-iq-download-pro-windows-rar-keygen-license-full
  8. -
  9. Extract the crack file and copy the file named "sphinx_iq_2.exe" to the installation folder of Sphinx iQ 2. Replace the original file with the cracked one.
  10. -
  11. Run the cracked file as administrator and enter any serial number when prompted.
  12. -
  13. Enjoy using Sphinx iQ 2 for free!
  14. -
-

What are the Benefits of Cracking Sphinx iQ?

-

By cracking Sphinx iQ, you can enjoy all the benefits of this software without paying anything. You can create unlimited surveys, collect unlimited data, and perform unlimited analysis. You can also access all the features and functions of Sphinx iQ, such as:

- -

What are the Risks of Cracking Sphinx iQ?

-

Cracking Sphinx iQ is not without risks, however. By using a cracked version of this software, you might face some problems, such as:

- - -

Conclusion

- -

In conclusion, cracking Sphinx iQ is possible but not advisable. While it can save you some money, it can also expose you to many risks and problems. Moreover, cracking Sphinx iQ is unfair and disrespectful to Le Sphinx, who have invested a lot of time and effort in developing this software. Therefore, we recommend that you buy a legitimate license of Sphinx iQ from their official website or authorized resellers. This way, you can enjoy all the benefits of this software without any worries or regrets.

-

- - -- How to use Sphinx iQ for different types of surveys, such as customer satisfaction, employee engagement, market research, etc. -- How to interpret and present the results of Sphinx iQ analysis, such as using charts, graphs, tables, etc. -- How to compare and contrast Sphinx iQ with other software for survey and data analysis, such as SurveyMonkey, Qualtrics, SPSS, etc. -- How to cite and reference Sphinx iQ in your academic or professional work, such as using APA, MLA, Harvard, etc. -- How to contact Le Sphinx for any questions or feedback on Sphinx iQ, such as using their website, email, phone, etc. - -These are some possible topics that you can write about if you want to continue the article for the keyword "Crack Sphinx Iq". However, we suggest that you end the article here, as it is already long and comprehensive enough. Adding more content might make the article too lengthy or redundant. Remember that quality is more important than quantity when it comes to writing SEO optimized articles. - - -- How to use Sphinx iQ for different types of surveys, such as customer satisfaction, employee engagement, market research, etc. -- How to interpret and present the results of Sphinx iQ analysis, such as using charts, graphs, tables, etc. -- How to compare and contrast Sphinx iQ with other software for survey and data analysis, such as SurveyMonkey, Qualtrics, SPSS, etc. -- How to cite and reference Sphinx iQ in your academic or professional work, such as using APA, MLA, Harvard, etc. -- How to contact Le Sphinx for any questions or feedback on Sphinx iQ, such as using their website, email, phone, etc. - -These are some possible topics that you can write about if you want to continue the article for the keyword "Crack Sphinx Iq". However, we suggest that you end the article here, as it is already long and comprehensive enough. Adding more content might make the article too lengthy or redundant. Remember that quality is more important than quantity when it comes to writing SEO optimized articles. - - -- How to use Sphinx iQ for different types of surveys, such as customer satisfaction, employee engagement, market research, etc. -- How to interpret and present the results of Sphinx iQ analysis, such as using charts, graphs, tables, etc. -- How to compare and contrast Sphinx iQ with other software for survey and data analysis, such as SurveyMonkey, Qualtrics, SPSS, etc. -- How to cite and reference Sphinx iQ in your academic or professional work, such as using APA, MLA, Harvard, etc. -- How to contact Le Sphinx for any questions or feedback on Sphinx iQ, such as using their website, email, phone, etc. - -These are some possible topics that you can write about if you want to continue the article for the keyword "Crack Sphinx Iq". However, we suggest that you end the article here, as it is already long and comprehensive enough. Adding more content might make the article too lengthy or redundant. Remember that quality is more important than quantity when it comes to writing SEO optimized articles. -

How to Use Sphinx iQ for Different Types of Surveys

-

One of the advantages of Sphinx iQ is that it can help you create and conduct different types of surveys, depending on your needs and objectives. Whether you want to measure customer satisfaction, employee engagement, market research, or any other topic, Sphinx iQ can provide you with the tools and templates to design and administer your surveys. Here are some examples of how to use Sphinx iQ for different types of surveys:

- -

How to Interpret and Present the Results of Sphinx iQ Analysis

-

Another advantage of Sphinx iQ is that it can help you interpret and present the results of your survey and data analysis in a clear and professional way. Sphinx iQ offers a range of features and functions to help you visualize and report your data, such as:

- -

Conclusion

-

In conclusion, Sphinx iQ is a software that can help you create and conduct surveys and data analysis for various purposes and topics. It offers a range of features and functions to help you design, administer, collect, process, analyze, visualize, and report your data. However, Sphinx iQ is not a free software, and cracking it might expose you to many risks and problems. Therefore, we recommend that you buy a legitimate license of Sphinx iQ from their official website or authorized resellers. This way, you can enjoy all the benefits of this software without any worries or regrets.

3cee63e6c2
-
-
\ No newline at end of file diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Cut Any YouTube Video and Download It as an APK File The Best Online YouTube Video Cropper.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Cut Any YouTube Video and Download It as an APK File The Best Online YouTube Video Cropper.md deleted file mode 100644 index d9c003612f19f58b783fdaa31d6545e0229270e9..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Cut Any YouTube Video and Download It as an APK File The Best Online YouTube Video Cropper.md +++ /dev/null @@ -1,93 +0,0 @@ - -

YouTube Video Cut and Download APK: How to Crop and Save Your Favorite Clips

-

Do you love watching YouTube videos, but sometimes wish you could only keep the best parts? Do you want to share a funny or interesting clip from a YouTube video with your friends, but don't know how to do it? If you answered yes to any of these questions, then this article is for you. In this article, we will show you how to use YouTube video cut and download apk, a simple and effective way to crop and download your favorite YouTube videos.

-

Introduction

-

What is YouTube video cut and download apk?

-

YouTube video cut and download apk is a term that refers to any app or website that allows you to crop and download YouTube videos. These apps or websites let you enter a YouTube video URL, select the part of the video that you want to cut, and then download or share the cropped video as an mp4 file. You can use these apps or websites on your Android phone, tablet, or computer.

-

youtube video cut and download apk


Download ►►►►► https://urlin.us/2uSYpk



-

Why would you want to crop and download YouTube videos?

-

There are many reasons why you might want to crop and download YouTube videos. For example, you might want to:

- -

How to use YouTube video cut and download apk

-

Step 1: Find a suitable app or website

-

The first step is to find an app or website that offers the YouTube video cut and download apk service. There are many options available online, but some of the most popular ones are:

-

VideoCrops

-

VideoCrops is a website that allows you to crop and download YouTube videos in three easy steps. You just need to enter the YouTube video address in the box, select the part that you want to cut, and press the "Crop Selection" button. You can then download your cropped video as an mp4 file or share it on social media.

-

YouTube Trimmer

-

YouTube Trimmer is another website that lets you trim, crop, and share your favorite parts of YouTube videos online. You can enter a YouTube video URL, set the start and end times to select your crop, and then create a custom link to your cropped video. You can also embed your cropped video on your website using HTML code.
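 If you only need to share a particular segment rather than keep a file, YouTube's own embed player can also limit playback to a time range. As a rough illustration (VIDEO_ID and the times below are placeholders, not taken from a real clip), this HTML embeds a video that starts playing at 30 seconds and stops at 75 seconds: ```html ``` The start and end values are plain seconds, and viewers can still seek outside that range, so use a cropper such as VideoCrops or YouTube Trimmer when you need an actual trimmed mp4 file.
    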

-

Step 2: Enter the YouTube video URL and select the part you want to crop

-

The next step is to enter the YouTube video URL that you want to crop and download. You can copy and paste the URL from your browser or use the search function on some apps or websites. After entering the URL, you will see a preview of the video on the screen. You can then use the sliders or buttons to select the part of the video that you want to crop. You can also adjust the quality and resolution of your cropped video if needed.

-

youtube video cropper and downloader apk
-youtube video trimmer and saver apk
-youtube video editor and converter apk
-youtube video splitter and extractor apk
-youtube video clipper and recorder apk
-youtube video cutter and downloader app
-youtube video cropper and downloader app
-youtube video trimmer and saver app
-youtube video editor and converter app
-youtube video splitter and extractor app
-youtube video clipper and recorder app
-download youtube video cutter and downloader
-download youtube video cropper and downloader
-download youtube video trimmer and saver
-download youtube video editor and converter
-download youtube video splitter and extractor
-download youtube video clipper and recorder
-how to cut and download youtube videos apk
-how to crop and download youtube videos apk
-how to trim and save youtube videos apk
-how to edit and convert youtube videos apk
-how to split and extract youtube videos apk
-how to clip and record youtube videos apk
-best youtube video cutter and downloader apk
-best youtube video cropper and downloader apk
-best youtube video trimmer and saver apk
-best youtube video editor and converter apk
-best youtube video splitter and extractor apk
-best youtube video clipper and recorder apk
-free youtube video cutter and downloader apk
-free youtube video cropper and downloader apk
-free youtube video trimmer and saver apk
-free youtube video editor and converter apk
-free youtube video splitter and extractor apk
-free youtube video clipper and recorder apk
-online youtube video cutter and downloader apk
-online youtube video cropper and downloader apk
-online youtube video trimmer and saver apk
-online youtube video editor and converter apk
-online youtube video splitter and extractor apk
-online youtube video clipper and recorder apk
-easy youtube video cutter and downloader apk
-easy youtube video cropper and downloader apk
-easy youtube video trimmer and saver apk
-easy youtube video editor and converter apk
-easy youtube video splitter and extractor apk

-

Step 3: Download or share your cropped video

-

The final step is to download or share your cropped video. Depending on the app or website that you are using, you will see a download button or a share button on the screen. You can click on the download button to save your cropped video as an mp4 file on your device. You can also click on the share button to send your cropped video to your friends via email, WhatsApp, Facebook, Twitter, or other platforms. Some apps or websites will also generate a link to your cropped video that you can copy and paste anywhere you want.

-

Conclusion

-

Summary of the main points

-

In this article, we have explained how to use YouTube video cut and download apk, a simple and effective way to crop and download your favorite YouTube videos. You just need to find a suitable app or website, enter the YouTube video URL, select the part you want to crop, and download or share your cropped video. You can use this method to save, edit, or share any YouTube video that you like.

-

Call to action

-

Now that you know how to use YouTube video cut and download apk, why not give it a try? You will be amazed by how easy and fun it is to crop and download YouTube videos. You can create your own collection of YouTube clips, make your own videos, or share them with your friends. You can also explore other features and options that some apps or websites offer, such as adding filters, stickers, music, or text to your cropped videos. So go ahead and start cropping and downloading YouTube videos today!

-

FAQs

-

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download UC Mini APK Latest Version 2023 for Android 12 Devices.md b/spaces/1phancelerku/anime-remove-background/Download UC Mini APK Latest Version 2023 for Android 12 Devices.md deleted file mode 100644 index 7489cfe73147a920b0541f12b98401fad1038229..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download UC Mini APK Latest Version 2023 for Android 12 Devices.md +++ /dev/null @@ -1,125 +0,0 @@ - -

UC Mini APK Download Android 12: A Guide for Users

-

If you are looking for a lightweight, fast, and reliable browser for your Android 12 device, you might want to try UC Mini APK. UC Mini APK is a modified version of the popular UC Browser that offers a smoother and more enjoyable browsing experience. In this article, we will show you what UC Mini APK is, what features and benefits it has, how to download and install it on your Android 12 device, and how to use it effectively.

-

uc mini apk download android 12


Download File ★★★ https://jinyurl.com/2uNK7Y



-

What is UC Mini APK?

-

UC Mini APK is a browser app that is designed for users with lower specs or limited storage space on their devices. It is based on the original UC Browser, but it has been optimized to consume less resources and run faster. UC Mini APK also has some unique features that make it stand out from other browsers, such as night mode, data saver, ad blocker, gesture control, and more.

-

Features of UC Mini APK

-

Some of the main features of UC Mini APK are:

- -

Benefits of UC Mini APK

-

Some of the benefits of using UC Mini APK are:

-

uc mini apk download for android 12 latest version
-uc mini browser apk download android 12 free
-uc mini app apk download android 12 update
-uc mini lite apk download android 12 beta
-uc mini fast download apk android 12 release
-uc mini old version apk download android 12 features
-uc mini turbo apk download android 12 review
-uc mini video downloader apk android 12 compatibility
-uc mini handler apk download android 12 security
-uc mini mod apk download android 12 install
-uc mini pro apk download android 12 launcher
-uc mini adblock apk download android 12 wallpaper
-uc mini dark mode apk android 12 theme
-uc mini news apk download android 12 notification
-uc mini vpn apk download android 12 settings
-uc mini hd apk download android 12 camera
-uc mini facebook apk download android 12 assistant
-uc mini youtube apk download android 12 music
-uc mini webview apk download android 12 developer
-uc mini incognito apk download android 12 privacy
-uc mini offline installer apk android 12 backup
-uc mini online play apk download android 12 games
-uc mini cloud boost apk download android 12 storage
-uc mini night mode apk download android 12 battery
-uc mini qr code scanner apk android 12 wifi
-uc mini cricket live score apk download android 12 sports
-uc mini whatsapp status saver apk android 12 social media
-uc mini tiktok video downloader apk download android 12 entertainment
-uc mini instagram story downloader apk android 12 photo
-uc mini twitter video downloader apk download android 12 video
-uc mini reddit image downloader apk android 12 meme
-uc mini pinterest video downloader apk download android 12 art
-uc mini linkedin profile downloader apk android 12 business
-uc mini quora answer downloader apk android 12 education
-udemy course downloader (uc) -mini edition -apk -android -download -app -browser -video -free -pro -mod -old -new -latest -version -update -beta -release -features -review -compatibility -security -install -launcher -adblock -dark mode -news -vpn -hd -facebook -youtube -webview -incognito -offline installer -online play -cloud boost -night mode -qr code scanner -cricket live score -whatsapp status saver -tiktok video downloader -instagram story downloader -twitter video downloader -reddit image downloader-pinterest video downloader-linkedin profile downloader-quora answer downloader-android 12-learning

- -

How to Download and Install UC Mini APK on Android 12?

-

If you want to download and install UC Mini APK on your Android 12 device, you need to follow these steps:

-

Step 1: Enable Unknown Sources

-

Since UC Mini APK is not available on the Google Play Store, you need to enable unknown sources on your device to allow the installation of apps from other sources. To do this, go to your device's settings, then tap on security, then toggle on the option that says "install unknown apps" or "allow from this source".

-

Step 2: Download UC Mini APK File

-

Next, you need to download the UC Mini APK file from a trusted source. You can use this link to download the latest version of UC Mini APK for Android 12. Alternatively, you can scan this QR code with your device's camera to download the file directly.

- QR code for UC Mini APK download -

Once the download is complete, you will see a notification on your device. Tap on it to open the file.

-

Step 3: Install UC Mini APK File

-

After opening the file, you will see a prompt asking you to install the app. Tap on "install" and wait for the installation process to finish. You might see a warning message saying that the app is not verified by Google Play Protect. Ignore it and tap on "install anyway". This is because UC Mini APK is not an official app from the Google Play Store, but it is safe and secure to use.

-

Step 4: Launch UC Mini Browser

-

Once the installation is done, you will see an icon for UC Mini Browser on your device's home screen or app drawer. Tap on it to launch the browser and start enjoying its features and benefits.

-

How to Use UC Mini Browser on Android 12?

-

Using UC Mini Browser on Android 12 is easy and intuitive. Here are some tips on how to use it effectively:

-

Browse the Web with Speed and Convenience

-

UC Mini Browser offers you a fast and convenient way to browse the web. You can enter any URL or search query in the address bar and get instant results. You can also use voice search or QR code scanner to access websites quickly. You can switch between different tabs by swiping left or right on the screen. You can also access your bookmarks, history, downloads, and settings by tapping on the menu icon at the bottom right corner of the screen.

-

Customize Your Browser Settings and Preferences

-

UC Mini Browser allows you to customize your browser settings and preferences according to your needs and preferences. You can change the theme, font size, language, homepage, search engine, and more by tapping on the menu icon and then tapping on "settings". You can also enable or disable various features such as speed mode, night mode, ad blocker, gesture control, incognito mode, and more by tapping on the menu icon and then tapping on "tools".

-

Access Various Tools and Features

-

UC Mini Browser provides you with various tools and features that enhance your browsing experience. You can access them by tapping on the menu icon and then tapping on "tools". Some of these tools and features are:

- -

Conclusion

-

In conclusion, UC Mini APK is a great browser app for Android 12 users who want to enjoy a fast, smooth, and reliable browsing experience. It has many features and benefits that make it stand out from other browsers, such as speed mode, night mode, ad blocker, gesture control, and more. It is also easy to download and install on your device, and you can customize it according to your preferences. If you are looking for a lightweight, efficient, and secure browser for your Android 12 device, you should give UC Mini APK a try.

-

FAQs

-

Here are some frequently asked questions about UC Mini APK:

- - - - - - - - - - - - - - - - - - - - - - - - - -
QuestionAnswer
Is UC Mini APK safe to use?Yes, UC Mini APK is safe to use. It does not contain any viruses or malware, and it protects your privacy and security by blocking malicious websites, phishing attempts, and malware. However, you should always download it from a trusted source and enable unknown sources on your device before installing it.
Is UC Mini APK free to use?Yes, UC Mini APK is free to use. You do not need to pay any fees or charges to download or use it. However, you might see some ads or sponsored content on the browser, which you can block with the ad blocker feature.
What is the difference between UC Mini APK and UC Browser?UC Mini APK is a modified version of the original UC Browser that is optimized for lower specs or limited storage space devices. It has a smaller size, consumes less resources, and runs faster than UC Browser. It also has some unique features that UC Browser does not have, such as night mode, gesture control, and more.
How can I update UC Mini APK?You can update UC Mini APK by downloading the latest version of the file from a trusted source and installing it on your device. You can also check for updates by tapping on the menu icon and then tapping on "check for updates".
How can I contact UC Mini APK support?You can contact UC Mini APK support by tapping on the menu icon and then tapping on "feedback". You can also visit their official website or social media pages for more information and assistance.

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Enjoy Chess with Friends and Foes with Chess Game Hack APK.md b/spaces/1phancelerku/anime-remove-background/Enjoy Chess with Friends and Foes with Chess Game Hack APK.md deleted file mode 100644 index ba3655a78564fca5f69fecb52aad24b2c0bc5173..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Enjoy Chess with Friends and Foes with Chess Game Hack APK.md +++ /dev/null @@ -1,118 +0,0 @@ -
-

Chess Game Hack APK: How to Play and Learn Chess with Unlimited Features

-

Introduction

-

Chess is one of the oldest and most popular board games in the world. It is a game of strategy, logic, and skill that can challenge your mind and improve your cognitive abilities. However, learning chess can be difficult and expensive, especially if you want to access premium features and content. That's why many chess enthusiasts are looking for a way to play and learn chess with unlimited features and resources. In this article, we will introduce you to chess game hack APK, a modified version of the original Chess - Play and Learn app that gives you access to all the premium features for free. We will also show you how to download and install chess game hack APK on your Android device, and what are the benefits of using it.

-

chess game hack apk


DOWNLOADhttps://jinyurl.com/2uNPLq



-

What is chess game hack APK?

-

Chess game hack APK is a modified version of the original Chess - Play and Learn app, which is one of the best chess apps for Android. Chess - Play and Learn is developed by Chess.com, the largest online chess community in the world. The app allows you to play chess online with millions of players, solve puzzles, watch videos, read articles, and learn from top coaches. However, some of the features and content are locked behind a paywall, which means you have to pay a monthly or yearly subscription fee to access them.

-

Chess game hack APK is a solution for those who want to enjoy all the features and content of Chess - Play and Learn without paying anything. Chess game hack APK is a modified version of the original app that bypasses the security checks and unlocks all the premium features for free. You can download chess game hack APK from various sources on the internet, but you have to be careful about the quality and safety of the file.

-

Why use chess game hack APK?

-

There are many reasons why you might want to use chess game hack APK instead of the original app. Here are some of them:

- -

Of course, using chess game hack APK also comes with some risks and drawbacks. For example:

- -

Therefore, you have to weigh the pros and cons before deciding whether to use chess game hack APK or not.

-

chess game mod apk unlimited money
-chess game cheat apk download
-chess game hack apk for android
-chess game premium apk free
-chess game cracked apk latest version
-chess game hack apk online
-chess game mod apk with all features unlocked
-chess game hack apk no root
-chess game pro apk full
-chess game hack apk 2023
-chess game mod apk offline
-chess game hack apk without verification
-chess game mod apk unlimited gems
-chess game hack apk ios
-chess game mod apk revdl
-chess game hack apk no survey
-chess game mod apk rexdl
-chess game hack apk no human verification
-chess game mod apk happymod
-chess game hack apk unlimited coins
-chess game mod apk android 1
-chess game hack apk free download
-chess game mod apk android oyun club
-chess game hack apk for pc
-chess game mod apk an1
-chess game hack apk 2022
-chess game mod apk pure
-chess game hack apk latest
-chess game mod apk apkpure
-chess game hack apk old version
-chess game mod apk 2021
-chess game hack apk 2021 download
-chess game mod apk 2020
-chess game hack apk 2020 download
-chess game mod apk 2019
-chess game hack apk 2019 download
-chess game mod apk 2018
-chess game hack apk 2018 download
-chess game mod apk 2017
-chess game hack apk 2017 download
-chess.com mod apk unlimited lessons and puzzles[^1^]
-lichess mod apk all features unlocked
-magnus trainer premium mod apk
-play magnus plus mod apk
-real chess 3d mod apk
-droidfish pro mod apk
-shredder classic pro mod apk
-ct-art 6.0 premium mod apk
-learn chess with dr. wolf premium mod apk

-

Features of chess game hack APK

-

Chess game hack APK has many features that make it an attractive option for chess lovers. Here are some of them:

-

Premium unlocked

-

One of the main features of chess game hack APK is that it unlocks all the premium features and content that are normally reserved for paid members. This includes:

- -

Unlimited puzzles and lessons

-

Another feature of chess game hack APK is that it allows you to solve unlimited puzzles and lessons to improve your chess skills and knowledge. You can choose from different categories, such as tactics, strategy, endgames, openings, and more. You can also adjust the difficulty level and the time limit according to your preference. You can track your progress and performance with statistics and ratings. You can also learn from the detailed explanations and hints provided by the app.

-

Online multiplayer mode

-

Chess game hack APK also enables you to play online multiplayer mode with anyone in the world, regardless of their rating or membership status. You can join or create a game with different time controls, variants, and rules. You can also chat with your opponents and send them emojis and gifts. You can also join or create a club with other players who share your interests and goals. You can participate in club matches, tournaments, and events with your club members.

-

Customizable board and pieces

-

Chess game hack APK also gives you the option to customize your board and pieces with different themes, colors, and styles. You can choose from various options, such as wood, metal, marble, glass, neon, and more. You can also change the size, shape, and design of your pieces. You can also adjust the sound effects, animations, and notifications of your app. You can make your chess experience more fun and personal with chess game hack APK.

-

How to download and install chess game hack APK

-

If you want to try chess game hack APK on your Android device, you have to follow these steps:

-

Step 1: Download the APK file from a trusted source

-

The first step is to download the APK file of chess game hack APK from a trusted source on the internet. You can search for it on Google or use the link provided below. Make sure that the file is safe and virus-free before downloading it. You can also scan it with an antivirus app if you want to be extra careful.

-

Download chess game hack APK here

-

Step 2: Enable unknown sources on your device

-

The second step is to enable unknown sources on your device. This is necessary because Android devices do not allow installing apps from sources other than the official Google Play Store by default. To enable unknown sources, you have to go to your device settings > security > unknown sources > toggle on.

-

Step 3: Install the APK file and launch the app

-

The third step is to install the APK file and launch the app. To install the APK file, you have to locate it in your device storage > tap on it > follow the instructions on the screen > wait for the installation to complete. To launch the app, you have to find it in your app drawer > tap on it > enjoy playing and learning chess with unlimited features.

-

Conclusion

-

Chess game hack APK is a modified version of the original Chess - Play and Learn app that gives you access to all the premium features and content for free. It is a great way to play and learn chess with unlimited resources and options. However, it also comes with some risks and drawbacks that you have to consider before using it. We hope that this article has given you enough information about chess game hack APK and how to download and install it on your Android device.

-

If you have any questions or feedback about chess game hack APK, feel free to leave a comment below. We would love to hear from you!

-

Frequently Asked Questions

-

Here are some of the most common questions that people ask about chess game hack APK:

-
    -
  1. Is chess game hack APK legal?
  2. -

    No, chess game hack APK is not legal. It is a modified version of the original app that violates the terms and conditions of Chess.com. Using chess game hack APK may result in legal issues or penalties from Chess.com or other authorities.

    -
  3. Is chess game hack APK safe?
  4. -

    Not necessarily. Chess game hack APK may contain malware or viruses that can harm your device or steal your data. It may also expose you to hackers or scammers who can access your account or personal information. Therefore, you have to be careful about where you download chess game hack APK from and what permissions you grant it to. You should also scan chess game hack APK with an antivirus app before installing it.

    -
  5. Is chess game hack APK updated?
  6. -

    It depends. Chess game hack APK may or may not be updated depending on the source and the developer. Sometimes, chess game hack APK may stop working or become incompatible with the latest version of the original app. In that case, you have to look for a new version of chess game hack APK or switch back to the official app.

    -
  7. Can I use chess game hack APK on other devices?
  8. -

    No, chess game hack APK is only compatible with Android devices. You cannot use it on iOS, Windows, Mac, or other platforms. If you want to play and learn chess on other devices, you have to use the official app or the web version of Chess.com.

    -
  9. Can I use chess game hack APK offline?
  10. -

    Yes, you can use chess game hack APK offline for some features, such as puzzles, lessons, and analysis. However, you cannot use it offline for online multiplayer mode, videos, articles, and other content that require an internet connection. You also need an internet connection to download and install chess game hack APK on your device.

    -

401be4b1e0
-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Enjoy Dragon Ball Legends with Platinmods APK Mod Attack Multiplier All Challenges Completed and No Ads.md b/spaces/1phancelerku/anime-remove-background/Enjoy Dragon Ball Legends with Platinmods APK Mod Attack Multiplier All Challenges Completed and No Ads.md deleted file mode 100644 index 440729ac17b53ba84d84c8835258c7fe96141a95..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Enjoy Dragon Ball Legends with Platinmods APK Mod Attack Multiplier All Challenges Completed and No Ads.md +++ /dev/null @@ -1,102 +0,0 @@ - -

Dragon Ball Legends APK Mod Platinmods: How to Download and Install

-

If you are a fan of the Dragon Ball franchise, you might have heard of Dragon Ball Legends, a popular mobile game that lets you fight with your favorite characters from the anime and manga series. But did you know that there is a way to make the game even more fun and exciting? In this article, we will show you how to download and install Dragon Ball Legends APK Mod Platinmods, a modded version of the game that gives you access to various cheats and hacks. Read on to find out more.

-

dragon ball legends apk mod platinmods


Download File ===> https://jinyurl.com/2uNMK8



-

What is Dragon Ball Legends?

-

A brief introduction to the game and its features

-

Dragon Ball Legends is a 3D action RPG game that was released in 2018 by Bandai Namco Entertainment. The game features an original story that involves a new character named Shallot, who wakes up from a long sleep and finds himself in a world where different eras of Dragon Ball history are mixed together. He joins forces with other characters from the series to uncover the mystery behind this phenomenon and stop a sinister force that threatens the universe.

-

The game allows you to create your own team of fighters from a roster of over 200 characters, each with their own unique skills and abilities. You can also customize your characters with different outfits, accessories, and equipment. The game has various modes, such as story mode, event mode, PvP mode, co-op mode, and raid mode, where you can challenge other players or team up with them to defeat powerful enemies. The game also has stunning graphics, voice acting, and sound effects that make you feel like you are watching an episode of the anime.

-

Why you might want to use a modded version of the game

-

While Dragon Ball Legends is undoubtedly an enjoyable game, it also has some drawbacks that might frustrate some players. For example, the game requires a lot of grinding to level up your characters, unlock new ones, and obtain rare items. The game also has a stamina system that limits how much you can play in a day. Moreover, some players might find the game too easy or too hard depending on their skill level and preferences.

-

That's where Dragon Ball Legends APK Mod Platinmods comes in handy. This is a modified version of the game that gives you access to a mod menu that lets you activate various cheats and hacks that can enhance your gaming experience. For example, you can increase your attack power, defense power, ki (energy), speed, and critical rate. You can also enable god mode, instant win, all challenges completed, no ads, and more. With these features, you can breeze through the game without any hassle or difficulty.

-

dragon ball legends mod apk unlimited crystals platinmods
-dragon ball legends hack apk download platinmods
-dragon ball legends god mode mod apk platinmods
-dragon ball legends instant win mod apk platinmods
-dragon ball legends apk mod platinmods latest version
-dragon ball legends apk mod platinmods android
-dragon ball legends apk mod platinmods ios
-dragon ball legends apk mod platinmods no root
-dragon ball legends apk mod platinmods 2023
-dragon ball legends apk mod platinmods free download
-dragon ball legends apk mod platinmods vip
-dragon ball legends apk mod platinmods 12 features
-dragon ball legends apk mod platinmods attack multiplier
-dragon ball legends apk mod platinmods ki hack
-dragon ball legends apk mod platinmods all challenges completed
-dragon ball legends apk mod platinmods no ads
-dragon ball legends apk mod platinmods tutorial
-dragon ball legends apk mod platinmods reddit
-dragon ball legends apk mod platinmods facebook
-dragon ball legends apk mod platinmods youtube
-dragon ball legends apk mod platinmods review
-dragon ball legends apk mod platinmods safe
-dragon ball legends apk mod platinmods legit
-dragon ball legends apk mod platinmods update
-dragon ball legends apk mod platinmods 2.3.0
-dragon ball legends apk mod platinmods 2.4.0
-dragon ball legends apk mod platinmods 2.5.0
-dragon ball legends apk mod platinmods 2.6.0
-dragon ball legends apk mod platinmods 2.7.0
-dragon ball legends apk mod platinmods 2.8.0
-dragon ball legends apk mod platinmods 2.9.0
-dragon ball legends apk mod platinmods 3.0.0
-dragon ball legends apk mod platinmods offline
-dragon ball legends apk mod platinmods online
-dragon ball legends apk mod platinmods pvp
-dragon ball legends apk mod platinmods pve
-dragon ball legends apk mod platinmods co-op
-dragon ball legends apk mod platinmods story mode
-dragon ball legends apk mod platinmods events mode
-dragon ball legends apk mod platinmods raid mode
-dragon ball legends apk mod platinmods summon hack
-dragon ball legends apk mod platinmods zenkai boost hack
-dragon ball legends apk mod platinmods z power hack
-dragon ball legends apk mod platinmods cc hack

-

    

What is Platinmods?

-

A website that offers modded APKs for various games

-

Platinmods is a website that provides modded APKs for various Android games, including Dragon Ball Legends. A modded APK is a modified version of the original game file that has been altered to include additional features or functions that are not available in the official version. Platinmods has a team of experienced modders who create and update the modded APKs regularly. You can find a wide range of games on Platinmods, from action to strategy, from casual to RPG, and more.

-

The benefits and risks of using Platinmods

-

Using Platinmods has some benefits and risks that you should be aware of before downloading and installing any modded APK. Some of the benefits are:

- You get the mod features (multipliers, god mode, instant win, and so on) for free, without grinding or spending money.
- The mods are maintained by the Platinmods modding team and are usually updated when the game itself updates.
- The mod menu lets you toggle individual cheats on and off instead of forcing them all at once.

Some of the risks are:

- Your account can be flagged or banned if the game detects the modified client or impossible stats.
- Modded APKs come from unofficial sources, so there is always a risk of malware or tampered files.
- Using mods violates the game's terms of service, and a modded build may stop working after an official update.

Therefore, you should use Platinmods at your own risk and discretion. We are not responsible for any consequences that might arise from using Platinmods.
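One practical way to reduce the malware risk is to compare the downloaded file's SHA-256 hash with a checksum published by the uploader, when one is available. The following is a minimal Python sketch; the file name and reference hash are placeholders, and not every mod thread publishes a checksum.

```python
# Minimal sketch: verify a downloaded APK against a published SHA-256 checksum.
# Both APK_PATH and EXPECTED_SHA256 are placeholders, not real Platinmods values.
import hashlib

APK_PATH = "dragon-ball-legends-mod.apk"
EXPECTED_SHA256 = "paste-the-checksum-published-by-the-uploader-here"

sha256 = hashlib.sha256()
with open(APK_PATH, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
        sha256.update(chunk)

digest = sha256.hexdigest()
print("computed:", digest)
print("match:   ", digest == EXPECTED_SHA256.lower())
```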

-

How to download and install Dragon Ball Legends APK Mod Platinmods

-

The steps to follow to get the modded version of the game

-

If you want to download and install Dragon Ball Legends APK Mod Platinmods, you need to follow these steps:

-
  1. Go to Platinmods.com and register an account if you don't have one already.
  2. Search for Dragon Ball Legends in the search bar and click on the result.
  3. Read the description and the instructions carefully and make sure you meet the requirements for using the modded APK.
  4. Click on the download link and wait for the file to be downloaded to your device.
  5. Uninstall the original version of Dragon Ball Legends if you have it installed on your device.
  6. Enable the installation of unknown sources in your device settings if you haven't done so already.
  7. Locate the downloaded file on your device and tap on it to install it (if you prefer to install from a computer, see the adb sketch after this list).
  8. Launch the game and enjoy the mod menu.
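If you would rather install the APK from a computer than tap the file on the phone, the same install step can be done with adb (Android Debug Bridge) over USB. This is an optional extra, not part of Platinmods' own instructions, and the file name below is just a placeholder for whatever you downloaded.

```python
# Optional: sideload the downloaded APK from a computer using adb.
# Requires adb installed and USB debugging enabled on the phone.
# The APK file name is a placeholder.
import subprocess

APK_PATH = "dragon-ball-legends-mod.apk"

# List connected devices so you can confirm the phone is visible to adb.
subprocess.run(["adb", "devices"], check=True)

# Install the APK; "-r" replaces an existing install while keeping its data.
subprocess.run(["adb", "install", "-r", APK_PATH], check=True)
```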
-

The features and options of the mod menu

-

Once you launch the game, you will see a floating icon on your screen that represents the mod menu. You can tap on it to open or close it. The mod menu has various features and options that you can enable or disable according to your preference. Some of them are:

- - - - - - - - - - - - -" in result - result = styler.to_html() - assert "" not in result - - -def test_block_names(tpl_style, tpl_table): - # catch accidental removal of a block - expected_style = { - "before_style", - "style", - "table_styles", - "before_cellstyle", - "cellstyle", - } - expected_table = { - "before_table", - "table", - "caption", - "thead", - "tbody", - "after_table", - "before_head_rows", - "head_tr", - "after_head_rows", - "before_rows", - "tr", - "after_rows", - } - result1 = set(tpl_style.blocks) - assert result1 == expected_style - - result2 = set(tpl_table.blocks) - assert result2 == expected_table - - -def test_from_custom_template_table(tmpdir): - p = tmpdir.mkdir("tpl").join("myhtml_table.tpl") - p.write( - dedent( - """\ - {% extends "html_table.tpl" %} - {% block table %} -

{{custom_title}}

- {{ super() }} - {% endblock table %}""" - ) - ) - result = Styler.from_custom_template(str(tmpdir.join("tpl")), "myhtml_table.tpl") - assert issubclass(result, Styler) - assert result.env is not Styler.env - assert result.template_html_table is not Styler.template_html_table - styler = result(DataFrame({"A": [1, 2]})) - assert "

My Title

\n\n\n - {{ super() }} - {% endblock style %}""" - ) - ) - result = Styler.from_custom_template( - str(tmpdir.join("tpl")), html_style="myhtml_style.tpl" - ) - assert issubclass(result, Styler) - assert result.env is not Styler.env - assert result.template_html_style is not Styler.template_html_style - styler = result(DataFrame({"A": [1, 2]})) - assert '\n\nfull cap" in styler.to_html() - - -@pytest.mark.parametrize("index", [False, True]) -@pytest.mark.parametrize("columns", [False, True]) -@pytest.mark.parametrize("index_name", [True, False]) -def test_sticky_basic(styler, index, columns, index_name): - if index_name: - styler.index.name = "some text" - if index: - styler.set_sticky(axis=0) - if columns: - styler.set_sticky(axis=1) - - left_css = ( - "#T_ {0} {{\n position: sticky;\n background-color: inherit;\n" - " left: 0px;\n z-index: {1};\n}}" - ) - top_css = ( - "#T_ {0} {{\n position: sticky;\n background-color: inherit;\n" - " top: {1}px;\n z-index: {2};\n{3}}}" - ) - - res = styler.set_uuid("").to_html() - - # test index stickys over thead and tbody - assert (left_css.format("thead tr th:nth-child(1)", "3 !important") in res) is index - assert (left_css.format("tbody tr th:nth-child(1)", "1") in res) is index - - # test column stickys including if name row - assert ( - top_css.format("thead tr:nth-child(1) th", "0", "2", " height: 25px;\n") in res - ) is (columns and index_name) - assert ( - top_css.format("thead tr:nth-child(2) th", "25", "2", " height: 25px;\n") - in res - ) is (columns and index_name) - assert (top_css.format("thead tr:nth-child(1) th", "0", "2", "") in res) is ( - columns and not index_name - ) - - -@pytest.mark.parametrize("index", [False, True]) -@pytest.mark.parametrize("columns", [False, True]) -def test_sticky_mi(styler_mi, index, columns): - if index: - styler_mi.set_sticky(axis=0) - if columns: - styler_mi.set_sticky(axis=1) - - left_css = ( - "#T_ {0} {{\n position: sticky;\n background-color: inherit;\n" - " left: {1}px;\n min-width: 75px;\n max-width: 75px;\n z-index: {2};\n}}" - ) - top_css = ( - "#T_ {0} {{\n position: sticky;\n background-color: inherit;\n" - " top: {1}px;\n height: 25px;\n z-index: {2};\n}}" - ) - - res = styler_mi.set_uuid("").to_html() - - # test the index stickys for thead and tbody over both levels - assert ( - left_css.format("thead tr th:nth-child(1)", "0", "3 !important") in res - ) is index - assert (left_css.format("tbody tr th.level0", "0", "1") in res) is index - assert ( - left_css.format("thead tr th:nth-child(2)", "75", "3 !important") in res - ) is index - assert (left_css.format("tbody tr th.level1", "75", "1") in res) is index - - # test the column stickys for each level row - assert (top_css.format("thead tr:nth-child(1) th", "0", "2") in res) is columns - assert (top_css.format("thead tr:nth-child(2) th", "25", "2") in res) is columns - - -@pytest.mark.parametrize("index", [False, True]) -@pytest.mark.parametrize("columns", [False, True]) -@pytest.mark.parametrize("levels", [[1], ["one"], "one"]) -def test_sticky_levels(styler_mi, index, columns, levels): - styler_mi.index.names, styler_mi.columns.names = ["zero", "one"], ["zero", "one"] - if index: - styler_mi.set_sticky(axis=0, levels=levels) - if columns: - styler_mi.set_sticky(axis=1, levels=levels) - - left_css = ( - "#T_ {0} {{\n position: sticky;\n background-color: inherit;\n" - " left: {1}px;\n min-width: 75px;\n max-width: 75px;\n z-index: {2};\n}}" - ) - top_css = ( - "#T_ {0} {{\n position: sticky;\n background-color: inherit;\n" - " top: {1}px;\n 
height: 25px;\n z-index: {2};\n}}" - ) - - res = styler_mi.set_uuid("").to_html() - - # test no sticking of level0 - assert "#T_ thead tr th:nth-child(1)" not in res - assert "#T_ tbody tr th.level0" not in res - assert "#T_ thead tr:nth-child(1) th" not in res - - # test sticking level1 - assert ( - left_css.format("thead tr th:nth-child(2)", "0", "3 !important") in res - ) is index - assert (left_css.format("tbody tr th.level1", "0", "1") in res) is index - assert (top_css.format("thead tr:nth-child(2) th", "0", "2") in res) is columns - - -def test_sticky_raises(styler): - with pytest.raises(ValueError, match="No axis named bad for object type DataFrame"): - styler.set_sticky(axis="bad") - - -@pytest.mark.parametrize( - "sparse_index, sparse_columns", - [(True, True), (True, False), (False, True), (False, False)], -) -def test_sparse_options(sparse_index, sparse_columns): - cidx = MultiIndex.from_tuples([("Z", "a"), ("Z", "b"), ("Y", "c")]) - ridx = MultiIndex.from_tuples([("A", "a"), ("A", "b"), ("B", "c")]) - df = DataFrame([[1, 2, 3], [4, 5, 6], [7, 8, 9]], index=ridx, columns=cidx) - styler = df.style - - default_html = styler.to_html() # defaults under pd.options to (True , True) - - with option_context( - "styler.sparse.index", sparse_index, "styler.sparse.columns", sparse_columns - ): - html1 = styler.to_html() - assert (html1 == default_html) is (sparse_index and sparse_columns) - html2 = styler.to_html(sparse_index=sparse_index, sparse_columns=sparse_columns) - assert html1 == html2 - - -@pytest.mark.parametrize("index", [True, False]) -@pytest.mark.parametrize("columns", [True, False]) -def test_map_header_cell_ids(styler, index, columns): - # GH 41893 - func = lambda v: "attr: val;" - styler.uuid, styler.cell_ids = "", False - if index: - styler.map_index(func, axis="index") - if columns: - styler.map_index(func, axis="columns") - - result = styler.to_html() - - # test no data cell ids - assert '' in result - assert '' in result - - # test index header ids where needed and css styles - assert ( - '' in result - ) is index - assert ( - '' in result - ) is index - assert ("#T__level0_row0, #T__level0_row1 {\n attr: val;\n}" in result) is index - - # test column header ids where needed and css styles - assert ( - '' in result - ) is columns - assert ("#T__level0_col0 {\n attr: val;\n}" in result) is columns - - -@pytest.mark.parametrize("rows", [True, False]) -@pytest.mark.parametrize("cols", [True, False]) -def test_maximums(styler_mi, rows, cols): - result = styler_mi.to_html( - max_rows=2 if rows else None, - max_columns=2 if cols else None, - ) - - assert ">5" in result # [[0,1], [4,5]] always visible - assert (">8" in result) is not rows # first trimmed vertical element - assert (">2" in result) is not cols # first trimmed horizontal element - - -def test_replaced_css_class_names(): - css = { - "row_heading": "ROWHEAD", - # "col_heading": "COLHEAD", - "index_name": "IDXNAME", - # "col": "COL", - "row": "ROW", - # "col_trim": "COLTRIM", - "row_trim": "ROWTRIM", - "level": "LEVEL", - "data": "DATA", - "blank": "BLANK", - } - midx = MultiIndex.from_product([["a", "b"], ["c", "d"]]) - styler_mi = Styler( - DataFrame(np.arange(16).reshape(4, 4), index=midx, columns=midx), - uuid_len=0, - ).set_table_styles(css_class_names=css) - styler_mi.index.names = ["n1", "n2"] - styler_mi.hide(styler_mi.index[1:], axis=0) - styler_mi.hide(styler_mi.columns[1:], axis=1) - styler_mi.map_index(lambda v: "color: red;", axis=0) - styler_mi.map_index(lambda v: "color: green;", axis=1) - 
styler_mi.map(lambda v: "color: blue;") - expected = dedent( - """\ - -
| Feature | Description |
| --- | --- |
| Attack Multiplier | Increase or decrease your attack power by a chosen factor. |
| Defense Multiplier | Increase or decrease your defense power by a chosen factor. |
| Ki Multiplier | Increase or decrease your ki (energy) by a chosen factor. |
| Speed Multiplier | Increase or decrease your speed by a chosen factor. |
| Critical Rate Multiplier | Increase or decrease your critical rate by a chosen factor. |
| God Mode | Makes you invincible and immune to any damage. |
| Instant Win | Win any battle instantly without fighting. |
| All Challenges Completed | Complete all the challenges in any battle without fulfilling them. |
| No Ads | Removes all the ads from the game. |
| No Root Detection | Prevents the game from detecting whether your device is rooted. |
| No Cheat Detection | Prevents the game from detecting that you are using cheats or mods. |

-
-
\ No newline at end of file diff --git a/spaces/1toTree/lora_test/ppdiffusers/pipelines/versatile_diffusion/modeling_text_unet.py b/spaces/1toTree/lora_test/ppdiffusers/pipelines/versatile_diffusion/modeling_text_unet.py deleted file mode 100644 index 74a4d89cf0576f921ce6b0a075e00d995c7dad7b..0000000000000000000000000000000000000000 --- a/spaces/1toTree/lora_test/ppdiffusers/pipelines/versatile_diffusion/modeling_text_unet.py +++ /dev/null @@ -1,1366 +0,0 @@ -# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -from typing import Any, Dict, List, Optional, Tuple, Union - -import numpy as np -import paddle -import paddle.nn as nn -from paddle.distributed.fleet.utils import recompute - -from ...configuration_utils import ConfigMixin, register_to_config -from ...modeling_utils import ModelMixin -from ...models.attention import DualTransformer2DModel, Transformer2DModel -from ...models.cross_attention import ( - AttnProcessor, - CrossAttention, - CrossAttnAddedKVProcessor, -) -from ...models.embeddings import TimestepEmbedding, Timesteps -from ...models.unet_2d_condition import UNet2DConditionOutput -from ...utils import logging - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - - -def get_down_block( - down_block_type, - num_layers, - in_channels, - out_channels, - temb_channels, - add_downsample, - resnet_eps, - resnet_act_fn, - attn_num_head_channels, - resnet_groups=None, - cross_attention_dim=None, - downsample_padding=None, - dual_cross_attention=False, - use_linear_projection=False, - only_cross_attention=False, - upcast_attention=False, - resnet_time_scale_shift="default", -): - down_block_type = down_block_type[7:] if down_block_type.startswith("UNetRes") else down_block_type - if down_block_type == "DownBlockFlat": - return DownBlockFlat( - num_layers=num_layers, - in_channels=in_channels, - out_channels=out_channels, - temb_channels=temb_channels, - add_downsample=add_downsample, - resnet_eps=resnet_eps, - resnet_act_fn=resnet_act_fn, - resnet_groups=resnet_groups, - downsample_padding=downsample_padding, - resnet_time_scale_shift=resnet_time_scale_shift, - ) - elif down_block_type == "CrossAttnDownBlockFlat": - if cross_attention_dim is None: - raise ValueError("cross_attention_dim must be specified for CrossAttnDownBlockFlat") - return CrossAttnDownBlockFlat( - num_layers=num_layers, - in_channels=in_channels, - out_channels=out_channels, - temb_channels=temb_channels, - add_downsample=add_downsample, - resnet_eps=resnet_eps, - resnet_act_fn=resnet_act_fn, - resnet_groups=resnet_groups, - downsample_padding=downsample_padding, - cross_attention_dim=cross_attention_dim, - attn_num_head_channels=attn_num_head_channels, - dual_cross_attention=dual_cross_attention, - use_linear_projection=use_linear_projection, - only_cross_attention=only_cross_attention, - resnet_time_scale_shift=resnet_time_scale_shift, - ) - raise ValueError(f"{down_block_type} is not supported.") - - -def get_up_block( - 
up_block_type, - num_layers, - in_channels, - out_channels, - prev_output_channel, - temb_channels, - add_upsample, - resnet_eps, - resnet_act_fn, - attn_num_head_channels, - resnet_groups=None, - cross_attention_dim=None, - dual_cross_attention=False, - use_linear_projection=False, - only_cross_attention=False, - upcast_attention=False, - resnet_time_scale_shift="default", -): - up_block_type = up_block_type[7:] if up_block_type.startswith("UNetRes") else up_block_type - if up_block_type == "UpBlockFlat": - return UpBlockFlat( - num_layers=num_layers, - in_channels=in_channels, - out_channels=out_channels, - prev_output_channel=prev_output_channel, - temb_channels=temb_channels, - add_upsample=add_upsample, - resnet_eps=resnet_eps, - resnet_act_fn=resnet_act_fn, - resnet_groups=resnet_groups, - resnet_time_scale_shift=resnet_time_scale_shift, - ) - elif up_block_type == "CrossAttnUpBlockFlat": - if cross_attention_dim is None: - raise ValueError("cross_attention_dim must be specified for CrossAttnUpBlockFlat") - return CrossAttnUpBlockFlat( - num_layers=num_layers, - in_channels=in_channels, - out_channels=out_channels, - prev_output_channel=prev_output_channel, - temb_channels=temb_channels, - add_upsample=add_upsample, - resnet_eps=resnet_eps, - resnet_act_fn=resnet_act_fn, - resnet_groups=resnet_groups, - cross_attention_dim=cross_attention_dim, - attn_num_head_channels=attn_num_head_channels, - dual_cross_attention=dual_cross_attention, - use_linear_projection=use_linear_projection, - only_cross_attention=only_cross_attention, - resnet_time_scale_shift=resnet_time_scale_shift, - ) - raise ValueError(f"{up_block_type} is not supported.") - - -# Copied from diffusers.models.unet_2d_condition.UNet2DConditionModel with UNet2DConditionModel->UNetFlatConditionModel, nn.Conv2d->LinearMultiDim, Block2D->BlockFlat -class UNetFlatConditionModel(ModelMixin, ConfigMixin): - r""" - UNetFlatConditionModel is a conditional 2D UNet model that takes in a noisy sample, conditional state, and a - timestep and returns sample shaped output. - - This model inherits from [`ModelMixin`]. Check the superclass documentation for the generic methods the library - implements for all the models (such as downloading or saving, etc.) - - Parameters: - sample_size (`int` or `Tuple[int, int]`, *optional*, defaults to `None`): - Height and width of input/output sample. - in_channels (`int`, *optional*, defaults to 4): The number of channels in the input sample. - out_channels (`int`, *optional*, defaults to 4): The number of channels in the output. - center_input_sample (`bool`, *optional*, defaults to `False`): Whether to center the input sample. - flip_sin_to_cos (`bool`, *optional*, defaults to `False`): - Whether to flip the sin to cos in the time embedding. - freq_shift (`int`, *optional*, defaults to 0): The frequency shift to apply to the time embedding. - down_block_types (`Tuple[str]`, *optional*, defaults to `("CrossAttnDownBlockFlat", "CrossAttnDownBlockFlat", "CrossAttnDownBlockFlat", "DownBlockFlat")`): - The tuple of downsample blocks to use. - mid_block_type (`str`, *optional*, defaults to `"UNetMidBlockFlatCrossAttn"`): - The mid block type. Choose from `UNetMidBlockFlatCrossAttn` or `UNetMidBlockFlatSimpleCrossAttn`. - up_block_types (`Tuple[str]`, *optional*, defaults to `("UpBlockFlat", "CrossAttnUpBlockFlat", "CrossAttnUpBlockFlat", "CrossAttnUpBlockFlat",)`): - The tuple of upsample blocks to use. 
- block_out_channels (`Tuple[int]`, *optional*, defaults to `(320, 640, 1280, 1280)`): - The tuple of output channels for each block. - layers_per_block (`int`, *optional*, defaults to 2): The number of layers per block. - downsample_padding (`int`, *optional*, defaults to 1): The padding to use for the downsampling convolution. - mid_block_scale_factor (`float`, *optional*, defaults to 1.0): The scale factor to use for the mid block. - act_fn (`str`, *optional*, defaults to `"silu"`): The activation function to use. - norm_num_groups (`int`, *optional*, defaults to 32): The number of groups to use for the normalization. - norm_eps (`float`, *optional*, defaults to 1e-5): The epsilon to use for the normalization. - cross_attention_dim (`int`, *optional*, defaults to 1280): The dimension of the cross attention features. - attention_head_dim (`int`, *optional*, defaults to 8): The dimension of the attention heads. - resnet_time_scale_shift (`str`, *optional*, defaults to `"default"`): Time scale shift config - for resnet blocks, see [`~models.resnet.ResnetBlockFlat`]. Choose from `default` or `scale_shift`. - class_embed_type (`str`, *optional*, defaults to None): The type of class embedding to use which is ultimately - summed with the time embeddings. Choose from `None`, `"timestep"`, or `"identity"`. - """ - - _supports_gradient_checkpointing = True - - @register_to_config - def __init__( - self, - sample_size: Optional[int] = None, - in_channels: int = 4, - out_channels: int = 4, - center_input_sample: bool = False, - flip_sin_to_cos: bool = True, - freq_shift: int = 0, - down_block_types: Tuple[str] = ( - "CrossAttnDownBlockFlat", - "CrossAttnDownBlockFlat", - "CrossAttnDownBlockFlat", - "DownBlockFlat", - ), - mid_block_type: str = "UNetMidBlockFlatCrossAttn", - up_block_types: Tuple[str] = ( - "UpBlockFlat", - "CrossAttnUpBlockFlat", - "CrossAttnUpBlockFlat", - "CrossAttnUpBlockFlat", - ), - only_cross_attention: Union[bool, Tuple[bool]] = False, - block_out_channels: Tuple[int] = (320, 640, 1280, 1280), - layers_per_block: int = 2, - downsample_padding: int = 1, - mid_block_scale_factor: float = 1, - act_fn: str = "silu", - norm_num_groups: int = 32, - norm_eps: float = 1e-5, - cross_attention_dim: int = 1280, - attention_head_dim: Union[int, Tuple[int]] = 8, - dual_cross_attention: bool = False, - use_linear_projection: bool = False, - class_embed_type: Optional[str] = None, - num_class_embeds: Optional[int] = None, - upcast_attention: bool = False, - resnet_time_scale_shift: str = "default", - ): - super().__init__() - - self.sample_size = sample_size - time_embed_dim = block_out_channels[0] * 4 - - # input - self.conv_in = LinearMultiDim(in_channels, block_out_channels[0], kernel_size=3, padding=(1, 1)) - - # time - self.time_proj = Timesteps(block_out_channels[0], flip_sin_to_cos, freq_shift) - timestep_input_dim = block_out_channels[0] - - self.time_embedding = TimestepEmbedding(timestep_input_dim, time_embed_dim) - - # class embedding - if class_embed_type is None and num_class_embeds is not None: - self.class_embedding = nn.Embedding(num_class_embeds, time_embed_dim) - elif class_embed_type == "timestep": - self.class_embedding = TimestepEmbedding(timestep_input_dim, time_embed_dim) - elif class_embed_type == "identity": - self.class_embedding = nn.Identity(time_embed_dim, time_embed_dim) - else: - self.class_embedding = None - - self.down_blocks = nn.LayerList([]) - self.mid_block = None - self.up_blocks = nn.LayerList([]) - - if isinstance(only_cross_attention, bool): - 
only_cross_attention = [only_cross_attention] * len(down_block_types) - - if isinstance(attention_head_dim, int): - attention_head_dim = (attention_head_dim,) * len(down_block_types) - - # down - output_channel = block_out_channels[0] - for i, down_block_type in enumerate(down_block_types): - input_channel = output_channel - output_channel = block_out_channels[i] - is_final_block = i == len(block_out_channels) - 1 - - down_block = get_down_block( - down_block_type, - num_layers=layers_per_block, - in_channels=input_channel, - out_channels=output_channel, - temb_channels=time_embed_dim, - add_downsample=not is_final_block, - resnet_eps=norm_eps, - resnet_act_fn=act_fn, - resnet_groups=norm_num_groups, - cross_attention_dim=cross_attention_dim, - attn_num_head_channels=attention_head_dim[i], - downsample_padding=downsample_padding, - dual_cross_attention=dual_cross_attention, - use_linear_projection=use_linear_projection, - only_cross_attention=only_cross_attention[i], - upcast_attention=upcast_attention, - resnet_time_scale_shift=resnet_time_scale_shift, - ) - self.down_blocks.append(down_block) - - # mid - if mid_block_type == "UNetMidBlockFlatCrossAttn": - self.mid_block = UNetMidBlockFlatCrossAttn( - in_channels=block_out_channels[-1], - temb_channels=time_embed_dim, - resnet_eps=norm_eps, - resnet_act_fn=act_fn, - output_scale_factor=mid_block_scale_factor, - resnet_time_scale_shift=resnet_time_scale_shift, - cross_attention_dim=cross_attention_dim, - attn_num_head_channels=attention_head_dim[-1], - resnet_groups=norm_num_groups, - dual_cross_attention=dual_cross_attention, - use_linear_projection=use_linear_projection, - upcast_attention=upcast_attention, - ) - elif mid_block_type == "UNetMidBlockFlatSimpleCrossAttn": - self.mid_block = UNetMidBlockFlatSimpleCrossAttn( - in_channels=block_out_channels[-1], - temb_channels=time_embed_dim, - resnet_eps=norm_eps, - resnet_act_fn=act_fn, - output_scale_factor=mid_block_scale_factor, - cross_attention_dim=cross_attention_dim, - attn_num_head_channels=attention_head_dim[-1], - resnet_groups=norm_num_groups, - resnet_time_scale_shift=resnet_time_scale_shift, - ) - else: - raise ValueError(f"unknown mid_block_type : {mid_block_type}") - - # count how many layers upsample the images - self.num_upsamplers = 0 - - # up - reversed_block_out_channels = list(reversed(block_out_channels)) - reversed_attention_head_dim = list(reversed(attention_head_dim)) - reversed_only_cross_attention = list(reversed(only_cross_attention)) - - output_channel = reversed_block_out_channels[0] - for i, up_block_type in enumerate(up_block_types): - is_final_block = i == len(block_out_channels) - 1 - - prev_output_channel = output_channel - output_channel = reversed_block_out_channels[i] - input_channel = reversed_block_out_channels[min(i + 1, len(block_out_channels) - 1)] - - # add upsample block for all BUT final layer - if not is_final_block: - add_upsample = True - self.num_upsamplers += 1 - else: - add_upsample = False - - up_block = get_up_block( - up_block_type, - num_layers=layers_per_block + 1, - in_channels=input_channel, - out_channels=output_channel, - prev_output_channel=prev_output_channel, - temb_channels=time_embed_dim, - add_upsample=add_upsample, - resnet_eps=norm_eps, - resnet_act_fn=act_fn, - resnet_groups=norm_num_groups, - cross_attention_dim=cross_attention_dim, - attn_num_head_channels=reversed_attention_head_dim[i], - dual_cross_attention=dual_cross_attention, - use_linear_projection=use_linear_projection, - 
only_cross_attention=reversed_only_cross_attention[i], - upcast_attention=upcast_attention, - resnet_time_scale_shift=resnet_time_scale_shift, - ) - self.up_blocks.append(up_block) - prev_output_channel = output_channel - - # out - self.conv_norm_out = nn.GroupNorm( - num_channels=block_out_channels[0], num_groups=norm_num_groups, epsilon=norm_eps - ) - self.conv_act = nn.Silu() - self.conv_out = LinearMultiDim(block_out_channels[0], out_channels, kernel_size=3, padding=1) - - @property - def attn_processors(self) -> Dict[str, AttnProcessor]: - r""" - Returns: - `dict` of attention processors: A dictionary containing all attention processors used in the model with - indexed by its weight name. - """ - # set recursively - processors = {} - - def fn_recursive_add_processors(name: str, module: nn.Layer, processors: Dict[str, AttnProcessor]): - if hasattr(module, "set_processor"): - processors[f"{name}.processor"] = module.processor - - for sub_name, child in module.named_children(): - fn_recursive_add_processors(f"{name}.{sub_name}", child, processors) - - return processors - - for name, module in self.named_children(): - fn_recursive_add_processors(name, module, processors) - - return processors - - def set_attn_processor(self, processor: Union[AttnProcessor, Dict[str, AttnProcessor]]): - r""" - Parameters: - `processor (`dict` of `AttnProcessor` or `AttnProcessor`): - The instantiated processor class or a dictionary of processor classes that will be set as the processor - of **all** `CrossAttention` layers. - In case `processor` is a dict, the key needs to define the path to the corresponding cross attention processor. This is strongly recommended when setting trainablae attention processors.: - """ - count = len(self.attn_processors.keys()) - - if isinstance(processor, dict) and len(processor) != count: - raise ValueError( - f"A dict of processors was passed, but the number of processors {len(processor)} does not match the" - f" number of attention layers: {count}. Please make sure to pass {count} processor classes." - ) - - def fn_recursive_attn_processor(name: str, module: nn.Layer, processor): - if hasattr(module, "set_processor"): - if not isinstance(processor, dict): - module.set_processor(processor) - else: - module.set_processor(processor.pop(f"{name}.processor")) - - for sub_name, child in module.named_children(): - fn_recursive_attn_processor(f"{name}.{sub_name}", child, processor) - - for name, module in self.named_children(): - fn_recursive_attn_processor(name, module, processor) - - def set_attention_slice(self, slice_size): - r""" - Enable sliced attention computation. - - When this option is enabled, the attention module will split the input tensor in slices, to compute attention - in several steps. This is useful to save some memory in exchange for a small speed decrease. - - Args: - slice_size (`str` or `int` or `list(int)`, *optional*, defaults to `"auto"`): - When `"auto"`, halves the input to the attention heads, so attention will be computed in two steps. If - `"max"`, maxium amount of memory will be saved by running only one slice at a time. If a number is - provided, uses as many slices as `attention_head_dim // slice_size`. In this case, `attention_head_dim` - must be a multiple of `slice_size`. 
- """ - sliceable_head_dims = [] - - def fn_recursive_retrieve_slicable_dims(module: nn.Layer): - if hasattr(module, "set_attention_slice"): - sliceable_head_dims.append(module.sliceable_head_dim) - - for child in module.children(): - fn_recursive_retrieve_slicable_dims(child) - - # retrieve number of attention layers - for module in self.children(): - fn_recursive_retrieve_slicable_dims(module) - - num_slicable_layers = len(sliceable_head_dims) - - if slice_size == "auto": - # half the attention head size is usually a good trade-off between - # speed and memory - slice_size = [dim // 2 for dim in sliceable_head_dims] - elif slice_size == "max": - # make smallest slice possible - slice_size = num_slicable_layers * [1] - - slice_size = num_slicable_layers * [slice_size] if not isinstance(slice_size, list) else slice_size - - if len(slice_size) != len(sliceable_head_dims): - raise ValueError( - f"You have provided {len(slice_size)}, but {self.config} has {len(sliceable_head_dims)} different" - f" attention layers. Make sure to match `len(slice_size)` to be {len(sliceable_head_dims)}." - ) - - for i in range(len(slice_size)): - size = slice_size[i] - dim = sliceable_head_dims[i] - if size is not None and size > dim: - raise ValueError(f"size {size} has to be smaller or equal to {dim}.") - - # Recursively walk through all the children. - # Any children which exposes the set_attention_slice method - # gets the message - def fn_recursive_set_attention_slice(module: nn.Layer, slice_size: List[int]): - if hasattr(module, "set_attention_slice"): - module.set_attention_slice(slice_size.pop()) - - for child in module.children(): - fn_recursive_set_attention_slice(child, slice_size) - - reversed_slice_size = list(reversed(slice_size)) - for module in self.children(): - fn_recursive_set_attention_slice(module, reversed_slice_size) - - def _set_gradient_checkpointing(self, module, value=False): - if isinstance(module, (CrossAttnDownBlockFlat, DownBlockFlat, CrossAttnUpBlockFlat, UpBlockFlat)): - module.gradient_checkpointing = value - - def forward( - self, - sample: paddle.Tensor, - timestep: Union[paddle.Tensor, float, int], - encoder_hidden_states: paddle.Tensor, - class_labels: Optional[paddle.Tensor] = None, - attention_mask: Optional[paddle.Tensor] = None, - cross_attention_kwargs: Optional[Dict[str, Any]] = None, - return_dict: bool = True, - ) -> Union[UNet2DConditionOutput, Tuple]: - r""" - Args: - sample (`paddle.Tensor`): (batch, channel, height, width) noisy inputs tensor - timestep (`paddle.Tensor` or `float` or `int`): (batch) timesteps - encoder_hidden_states (`paddle.Tensor`): (batch, sequence_length, feature_dim) encoder hidden states - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`models.unet_2d_condition.UNet2DConditionOutput`] instead of a plain tuple. - - Returns: - [`~models.unet_2d_condition.UNet2DConditionOutput`] or `tuple`: - [`~models.unet_2d_condition.UNet2DConditionOutput`] if `return_dict` is True, otherwise a `tuple`. When - returning a tuple, the first element is the sample tensor. - """ - # By default samples have to be AT least a multiple of the overall upsampling factor. - # The overall upsampling factor is equal to 2 ** (# num of upsampling layears). - # However, the upsampling interpolation output size can be forced to fit any upsampling size - # on the fly if necessary. 
- default_overall_up_factor = 2**self.num_upsamplers - - # upsample size should be forwarded when sample is not a multiple of `default_overall_up_factor` - forward_upsample_size = False - upsample_size = None - - if any(s % default_overall_up_factor != 0 for s in sample.shape[-2:]): - logger.info("Forward upsample size to force interpolation output size.") - forward_upsample_size = True - - # prepare attention_mask - if attention_mask is not None: - attention_mask = (1 - attention_mask.cast(sample.dtype)) * -10000.0 - attention_mask = attention_mask.unsqueeze(1) - - # 0. center input if necessary - if self.config.center_input_sample: - sample = 2 * sample - 1.0 - - # 1. time - timesteps = timestep - if not paddle.is_tensor(timesteps): - # TODO: this requires sync between CPU and GPU. So try to pass timesteps as tensors if you can - timesteps = paddle.to_tensor([timesteps], dtype="int64") - elif paddle.is_tensor(timesteps) and len(timesteps.shape) == 0: - timesteps = timesteps[None] - - # broadcast to batch dimension in a way that's compatible with ONNX/Core ML - timesteps = timesteps.expand( - [ - sample.shape[0], - ] - ) - - t_emb = self.time_proj(timesteps) - - # timesteps does not contain any weights and will always return f32 tensors - # but time_embedding might actually be running in fp16. so we need to cast here. - # there might be better ways to encapsulate this. - t_emb = t_emb.cast(self.dtype) - emb = self.time_embedding(t_emb) - - if self.class_embedding is not None: - if class_labels is None: - raise ValueError("class_labels should be provided when num_class_embeds > 0") - - if self.config.class_embed_type == "timestep": - class_labels = self.time_proj(class_labels) - - class_emb = self.class_embedding(class_labels).cast(self.dtype) - emb = emb + class_emb - - # 2. pre-process - sample = self.conv_in(sample) - - # 3. down - down_block_res_samples = (sample,) - for downsample_block in self.down_blocks: - if hasattr(downsample_block, "has_cross_attention") and downsample_block.has_cross_attention: - sample, res_samples = downsample_block( - hidden_states=sample, - temb=emb, - encoder_hidden_states=encoder_hidden_states, - attention_mask=attention_mask, - cross_attention_kwargs=cross_attention_kwargs, - ) - else: - sample, res_samples = downsample_block(hidden_states=sample, temb=emb) - - down_block_res_samples += res_samples - - # 4. mid - sample = self.mid_block( - sample, - emb, - encoder_hidden_states=encoder_hidden_states, - attention_mask=attention_mask, - cross_attention_kwargs=cross_attention_kwargs, - ) - - # 5. up - for i, upsample_block in enumerate(self.up_blocks): - is_final_block = i == len(self.up_blocks) - 1 - - res_samples = down_block_res_samples[-len(upsample_block.resnets) :] - down_block_res_samples = down_block_res_samples[: -len(upsample_block.resnets)] - - # if we have not reached the final block and need to forward the - # upsample size, we do it here - if not is_final_block and forward_upsample_size: - upsample_size = down_block_res_samples[-1].shape[2:] - - if hasattr(upsample_block, "has_cross_attention") and upsample_block.has_cross_attention: - sample = upsample_block( - hidden_states=sample, - temb=emb, - res_hidden_states_tuple=res_samples, - encoder_hidden_states=encoder_hidden_states, - cross_attention_kwargs=cross_attention_kwargs, - upsample_size=upsample_size, - attention_mask=attention_mask, - ) - else: - sample = upsample_block( - hidden_states=sample, temb=emb, res_hidden_states_tuple=res_samples, upsample_size=upsample_size - ) - # 6. 
post-process - sample = self.conv_norm_out(sample) - sample = self.conv_act(sample) - sample = self.conv_out(sample) - - if not return_dict: - return (sample,) - - return UNet2DConditionOutput(sample=sample) - - -class LinearMultiDim(nn.Linear): - def __init__(self, in_features, out_features=None, second_dim=4, *args, **kwargs): - in_features = [in_features, second_dim, 1] if isinstance(in_features, int) else list(in_features) - if out_features is None: - out_features = in_features - out_features = [out_features, second_dim, 1] if isinstance(out_features, int) else list(out_features) - self.in_features_multidim = in_features - self.out_features_multidim = out_features - super().__init__(np.array(in_features).prod(), np.array(out_features).prod()) - - def forward(self, input_tensor, *args, **kwargs): - shape = input_tensor.shape - n_dim = len(self.in_features_multidim) - input_tensor = input_tensor.reshape([*shape[0:-n_dim], self.in_features]) - output_tensor = super().forward(input_tensor) - output_tensor = output_tensor.reshape([*shape[0:-n_dim], *self.out_features_multidim]) - return output_tensor - - -class ResnetBlockFlat(nn.Layer): - def __init__( - self, - *, - in_channels, - out_channels=None, - dropout=0.0, - temb_channels=512, - groups=32, - groups_out=None, - pre_norm=True, - eps=1e-6, - time_embedding_norm="default", - use_in_shortcut=None, - second_dim=4, - **kwargs, - ): - super().__init__() - self.pre_norm = pre_norm - self.pre_norm = True - - in_channels = [in_channels, second_dim, 1] if isinstance(in_channels, int) else list(in_channels) - self.in_channels_prod = np.array(in_channels).prod() - self.channels_multidim = in_channels - - if out_channels is not None: - out_channels = [out_channels, second_dim, 1] if isinstance(out_channels, int) else list(out_channels) - out_channels_prod = np.array(out_channels).prod() - self.out_channels_multidim = out_channels - else: - out_channels_prod = self.in_channels_prod - self.out_channels_multidim = self.channels_multidim - self.time_embedding_norm = time_embedding_norm - - if groups_out is None: - groups_out = groups - - self.norm1 = nn.GroupNorm(num_groups=groups, num_channels=self.in_channels_prod, epsilon=eps) - self.conv1 = nn.Conv2D(self.in_channels_prod, out_channels_prod, kernel_size=1, padding=0) - - if temb_channels is not None: - self.time_emb_proj = nn.Linear(temb_channels, out_channels_prod) - else: - self.time_emb_proj = None - - self.norm2 = nn.GroupNorm(num_groups=groups_out, num_channels=out_channels_prod, epsilon=eps) - self.dropout = nn.Dropout(dropout) - self.conv2 = nn.Conv2D(out_channels_prod, out_channels_prod, kernel_size=1, padding=0) - - self.nonlinearity = nn.Silu() - - self.use_in_shortcut = ( - self.in_channels_prod != out_channels_prod if use_in_shortcut is None else use_in_shortcut - ) - - self.conv_shortcut = None - if self.use_in_shortcut: - self.conv_shortcut = nn.Conv2D( - self.in_channels_prod, out_channels_prod, kernel_size=1, stride=1, padding=0 - ) - - def forward(self, input_tensor, temb): - shape = input_tensor.shape - n_dim = len(self.channels_multidim) - input_tensor = input_tensor.reshape([*shape[0:-n_dim], self.in_channels_prod, 1, 1]) - input_tensor = input_tensor.reshape([-1, self.in_channels_prod, 1, 1]) - - hidden_states = input_tensor - - hidden_states = self.norm1(hidden_states) - hidden_states = self.nonlinearity(hidden_states) - hidden_states = self.conv1(hidden_states) - - if temb is not None: - temb = self.time_emb_proj(self.nonlinearity(temb))[:, :, None, None] - hidden_states = 
hidden_states + temb - - hidden_states = self.norm2(hidden_states) - hidden_states = self.nonlinearity(hidden_states) - - hidden_states = self.dropout(hidden_states) - hidden_states = self.conv2(hidden_states) - - if self.conv_shortcut is not None: - input_tensor = self.conv_shortcut(input_tensor) - - output_tensor = input_tensor + hidden_states - - output_tensor = output_tensor.reshape([*shape[0:-n_dim], -1]) - output_tensor = output_tensor.reshape([*shape[0:-n_dim], *self.out_channels_multidim]) - - return output_tensor - - -# Copied from diffusers.models.unet_2d_blocks.DownBlock2D with DownBlock2D->DownBlockFlat, ResnetBlock2D->ResnetBlockFlat, Downsample2D->LinearMultiDim -class DownBlockFlat(nn.Layer): - def __init__( - self, - in_channels: int, - out_channels: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - output_scale_factor=1.0, - add_downsample=True, - downsample_padding=1, - ): - super().__init__() - resnets = [] - - for i in range(num_layers): - in_channels = in_channels if i == 0 else out_channels - resnets.append( - ResnetBlockFlat( - in_channels=in_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - - self.resnets = nn.LayerList(resnets) - - if add_downsample: - self.downsamplers = nn.LayerList( - [ - LinearMultiDim( - out_channels, use_conv=True, out_channels=out_channels, padding=downsample_padding, name="op" - ) - ] - ) - else: - self.downsamplers = None - - self.gradient_checkpointing = False - - def forward(self, hidden_states, temb=None): - output_states = () - - for resnet in self.resnets: - if self.training and self.gradient_checkpointing: - - def create_custom_forward(module): - def custom_forward(*inputs): - return module(*inputs) - - return custom_forward - - hidden_states = recompute(create_custom_forward(resnet), hidden_states, temb) - else: - hidden_states = resnet(hidden_states, temb) - - output_states += (hidden_states,) - - if self.downsamplers is not None: - for downsampler in self.downsamplers: - hidden_states = downsampler(hidden_states) - - output_states += (hidden_states,) - - return hidden_states, output_states - - -# Copied from diffusers.models.unet_2d_blocks.CrossAttnDownBlock2D with CrossAttnDownBlock2D->CrossAttnDownBlockFlat, ResnetBlock2D->ResnetBlockFlat, Downsample2D->LinearMultiDim -class CrossAttnDownBlockFlat(nn.Layer): - def __init__( - self, - in_channels: int, - out_channels: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - attn_num_head_channels=1, - cross_attention_dim=1280, - output_scale_factor=1.0, - downsample_padding=1, - add_downsample=True, - dual_cross_attention=False, - use_linear_projection=False, - only_cross_attention=False, - upcast_attention=False, - ): - super().__init__() - resnets = [] - attentions = [] - - self.has_cross_attention = True - self.attn_num_head_channels = attn_num_head_channels - - for i in range(num_layers): - in_channels = in_channels if i == 0 else out_channels - resnets.append( - 
ResnetBlockFlat( - in_channels=in_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - if not dual_cross_attention: - attentions.append( - Transformer2DModel( - attn_num_head_channels, - out_channels // attn_num_head_channels, - in_channels=out_channels, - num_layers=1, - cross_attention_dim=cross_attention_dim, - norm_num_groups=resnet_groups, - use_linear_projection=use_linear_projection, - only_cross_attention=only_cross_attention, - upcast_attention=upcast_attention, - ) - ) - else: - attentions.append( - DualTransformer2DModel( - attn_num_head_channels, - out_channels // attn_num_head_channels, - in_channels=out_channels, - num_layers=1, - cross_attention_dim=cross_attention_dim, - norm_num_groups=resnet_groups, - ) - ) - self.attentions = nn.LayerList(attentions) - self.resnets = nn.LayerList(resnets) - - if add_downsample: - self.downsamplers = nn.LayerList( - [ - LinearMultiDim( - out_channels, use_conv=True, out_channels=out_channels, padding=downsample_padding, name="op" - ) - ] - ) - else: - self.downsamplers = None - - self.gradient_checkpointing = False - - def forward( - self, hidden_states, temb=None, encoder_hidden_states=None, attention_mask=None, cross_attention_kwargs=None - ): - output_states = () - - for resnet, attn in zip(self.resnets, self.attentions): - if self.training and self.gradient_checkpointing: - - def create_custom_forward(module, return_dict=None): - def custom_forward(*inputs): - if return_dict is not None: - return module(*inputs, return_dict=return_dict)[0] # move [0] - else: - return module(*inputs) - - return custom_forward - - hidden_states = recompute(create_custom_forward(resnet), hidden_states, temb) - hidden_states = recompute( - create_custom_forward(attn, return_dict=False), - hidden_states, - encoder_hidden_states, - cross_attention_kwargs, - ) # [0] - else: - hidden_states = resnet(hidden_states, temb) - hidden_states = attn( - hidden_states, - encoder_hidden_states=encoder_hidden_states, - cross_attention_kwargs=cross_attention_kwargs, - ).sample - output_states += (hidden_states,) - - if self.downsamplers is not None: - for downsampler in self.downsamplers: - hidden_states = downsampler(hidden_states) - - output_states += (hidden_states,) - - return hidden_states, output_states - - -# Copied from diffusers.models.unet_2d_blocks.UpBlock2D with UpBlock2D->UpBlockFlat, ResnetBlock2D->ResnetBlockFlat, Upsample2D->LinearMultiDim -class UpBlockFlat(nn.Layer): - def __init__( - self, - in_channels: int, - prev_output_channel: int, - out_channels: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - output_scale_factor=1.0, - add_upsample=True, - ): - super().__init__() - resnets = [] - - for i in range(num_layers): - res_skip_channels = in_channels if (i == num_layers - 1) else out_channels - resnet_in_channels = prev_output_channel if i == 0 else out_channels - - resnets.append( - ResnetBlockFlat( - in_channels=resnet_in_channels + res_skip_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - 
non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - - self.resnets = nn.LayerList(resnets) - - if add_upsample: - self.upsamplers = nn.LayerList([LinearMultiDim(out_channels, use_conv=True, out_channels=out_channels)]) - else: - self.upsamplers = None - - self.gradient_checkpointing = False - - def forward(self, hidden_states, res_hidden_states_tuple, temb=None, upsample_size=None): - for resnet in self.resnets: - # pop res hidden states - res_hidden_states = res_hidden_states_tuple[-1] - res_hidden_states_tuple = res_hidden_states_tuple[:-1] - hidden_states = paddle.concat([hidden_states, res_hidden_states], axis=1) - - if self.training and self.gradient_checkpointing: - - def create_custom_forward(module): - def custom_forward(*inputs): - return module(*inputs) - - return custom_forward - - hidden_states = recompute(create_custom_forward(resnet), hidden_states, temb) - else: - hidden_states = resnet(hidden_states, temb) - - if self.upsamplers is not None: - for upsampler in self.upsamplers: - hidden_states = upsampler(hidden_states, upsample_size) - - return hidden_states - - -# Copied from diffusers.models.unet_2d_blocks.CrossAttnUpBlock2D with CrossAttnUpBlock2D->CrossAttnUpBlockFlat, ResnetBlock2D->ResnetBlockFlat, Upsample2D->LinearMultiDim -class CrossAttnUpBlockFlat(nn.Layer): - def __init__( - self, - in_channels: int, - out_channels: int, - prev_output_channel: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - attn_num_head_channels=1, - cross_attention_dim=1280, - output_scale_factor=1.0, - add_upsample=True, - dual_cross_attention=False, - use_linear_projection=False, - only_cross_attention=False, - upcast_attention=False, - ): - super().__init__() - resnets = [] - attentions = [] - - self.has_cross_attention = True - self.attn_num_head_channels = attn_num_head_channels - - for i in range(num_layers): - res_skip_channels = in_channels if (i == num_layers - 1) else out_channels - resnet_in_channels = prev_output_channel if i == 0 else out_channels - - resnets.append( - ResnetBlockFlat( - in_channels=resnet_in_channels + res_skip_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - if not dual_cross_attention: - attentions.append( - Transformer2DModel( - attn_num_head_channels, - out_channels // attn_num_head_channels, - in_channels=out_channels, - num_layers=1, - cross_attention_dim=cross_attention_dim, - norm_num_groups=resnet_groups, - use_linear_projection=use_linear_projection, - only_cross_attention=only_cross_attention, - upcast_attention=upcast_attention, - ) - ) - else: - attentions.append( - DualTransformer2DModel( - attn_num_head_channels, - out_channels // attn_num_head_channels, - in_channels=out_channels, - num_layers=1, - cross_attention_dim=cross_attention_dim, - norm_num_groups=resnet_groups, - ) - ) - self.attentions = nn.LayerList(attentions) - self.resnets = nn.LayerList(resnets) - - if add_upsample: - self.upsamplers = nn.LayerList([LinearMultiDim(out_channels, use_conv=True, out_channels=out_channels)]) - else: - self.upsamplers = None - - self.gradient_checkpointing = False - - def 
forward( - self, - hidden_states, - res_hidden_states_tuple, - temb=None, - encoder_hidden_states=None, - cross_attention_kwargs=None, - upsample_size=None, - attention_mask=None, - ): - # TODO(Patrick, William) - attention mask is not used - for resnet, attn in zip(self.resnets, self.attentions): - # pop res hidden states - res_hidden_states = res_hidden_states_tuple[-1] - res_hidden_states_tuple = res_hidden_states_tuple[:-1] - hidden_states = paddle.concat([hidden_states, res_hidden_states], axis=1) - - if self.training and self.gradient_checkpointing: - - def create_custom_forward(module, return_dict=None): - def custom_forward(*inputs): - if return_dict is not None: - return module(*inputs, return_dict=return_dict)[0] # move [0] - else: - return module(*inputs) - - return custom_forward - - hidden_states = recompute(create_custom_forward(resnet), hidden_states, temb) - hidden_states = recompute( - create_custom_forward(attn, return_dict=False), - hidden_states, - encoder_hidden_states, - cross_attention_kwargs, - ) # [0] - else: - hidden_states = resnet(hidden_states, temb) - hidden_states = attn( - hidden_states, - encoder_hidden_states=encoder_hidden_states, - cross_attention_kwargs=cross_attention_kwargs, - ).sample - - if self.upsamplers is not None: - for upsampler in self.upsamplers: - hidden_states = upsampler(hidden_states, upsample_size) - - return hidden_states - - -# Copied from diffusers.models.unet_2d_blocks.UNetMidBlock2DCrossAttn with UNetMidBlock2DCrossAttn->UNetMidBlockFlatCrossAttn, ResnetBlock2D->ResnetBlockFlat -class UNetMidBlockFlatCrossAttn(nn.Layer): - def __init__( - self, - in_channels: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - attn_num_head_channels=1, - output_scale_factor=1.0, - cross_attention_dim=1280, - dual_cross_attention=False, - use_linear_projection=False, - upcast_attention=False, - ): - super().__init__() - - self.has_cross_attention = True - self.attn_num_head_channels = attn_num_head_channels - resnet_groups = resnet_groups if resnet_groups is not None else min(in_channels // 4, 32) - - # there is always at least one resnet - resnets = [ - ResnetBlockFlat( - in_channels=in_channels, - out_channels=in_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ] - attentions = [] - - for _ in range(num_layers): - if not dual_cross_attention: - attentions.append( - Transformer2DModel( - attn_num_head_channels, - in_channels // attn_num_head_channels, - in_channels=in_channels, - num_layers=1, - cross_attention_dim=cross_attention_dim, - norm_num_groups=resnet_groups, - use_linear_projection=use_linear_projection, - upcast_attention=upcast_attention, - ) - ) - else: - attentions.append( - DualTransformer2DModel( - attn_num_head_channels, - in_channels // attn_num_head_channels, - in_channels=in_channels, - num_layers=1, - cross_attention_dim=cross_attention_dim, - norm_num_groups=resnet_groups, - ) - ) - resnets.append( - ResnetBlockFlat( - in_channels=in_channels, - out_channels=in_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - 
output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - - self.attentions = nn.LayerList(attentions) - self.resnets = nn.LayerList(resnets) - - def forward( - self, hidden_states, temb=None, encoder_hidden_states=None, attention_mask=None, cross_attention_kwargs=None - ): - hidden_states = self.resnets[0](hidden_states, temb) - for attn, resnet in zip(self.attentions, self.resnets[1:]): - hidden_states = attn( - hidden_states, - encoder_hidden_states=encoder_hidden_states, - cross_attention_kwargs=cross_attention_kwargs, - ).sample - hidden_states = resnet(hidden_states, temb) - - return hidden_states - - -# Copied from diffusers.models.unet_2d_blocks.UNetMidBlock2DSimpleCrossAttn with UNetMidBlock2DSimpleCrossAttn->UNetMidBlockFlatSimpleCrossAttn, ResnetBlock2D->ResnetBlockFlat -class UNetMidBlockFlatSimpleCrossAttn(nn.Layer): - def __init__( - self, - in_channels: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - attn_num_head_channels=1, - output_scale_factor=1.0, - cross_attention_dim=1280, - ): - super().__init__() - - self.has_cross_attention = True - - self.attn_num_head_channels = attn_num_head_channels - resnet_groups = resnet_groups if resnet_groups is not None else min(in_channels // 4, 32) - - self.num_heads = in_channels // self.attn_num_head_channels - - # there is always at least one resnet - resnets = [ - ResnetBlockFlat( - in_channels=in_channels, - out_channels=in_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ] - attentions = [] - - for _ in range(num_layers): - attentions.append( - CrossAttention( - query_dim=in_channels, - cross_attention_dim=in_channels, - heads=self.num_heads, - dim_head=attn_num_head_channels, - added_kv_proj_dim=cross_attention_dim, - norm_num_groups=resnet_groups, - bias=True, - upcast_softmax=True, - processor=CrossAttnAddedKVProcessor(), - ) - ) - resnets.append( - ResnetBlockFlat( - in_channels=in_channels, - out_channels=in_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - - self.attentions = nn.LayerList(attentions) - self.resnets = nn.LayerList(resnets) - - def forward( - self, hidden_states, temb=None, encoder_hidden_states=None, attention_mask=None, cross_attention_kwargs=None - ): - cross_attention_kwargs = cross_attention_kwargs if cross_attention_kwargs is not None else {} - hidden_states = self.resnets[0](hidden_states, temb) - for attn, resnet in zip(self.attentions, self.resnets[1:]): - # attn - hidden_states = attn( - hidden_states, - encoder_hidden_states=encoder_hidden_states, - attention_mask=attention_mask, - **cross_attention_kwargs, - ) - - # resnet - hidden_states = resnet(hidden_states, temb) - - return hidden_states diff --git a/spaces/3druga/ae-6/app.py b/spaces/3druga/ae-6/app.py deleted file mode 100644 index c2314f77cdfb7f14edd149d7bec7501ca899bc69..0000000000000000000000000000000000000000 --- a/spaces/3druga/ae-6/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - 
-gr.Interface.load("models/Virus561/anytig").launch() \ No newline at end of file diff --git a/spaces/801artistry/RVC801/infer/modules/train/preprocess.py b/spaces/801artistry/RVC801/infer/modules/train/preprocess.py deleted file mode 100644 index fbe81307ee661a95b2ac479336671a44ee02151a..0000000000000000000000000000000000000000 --- a/spaces/801artistry/RVC801/infer/modules/train/preprocess.py +++ /dev/null @@ -1,147 +0,0 @@ -import multiprocessing -import os -import sys - -from scipy import signal - -now_dir = os.getcwd() -sys.path.append(now_dir) -print(sys.argv) -inp_root = sys.argv[1] -sr = int(sys.argv[2]) -n_p = int(sys.argv[3]) -exp_dir = sys.argv[4] -noparallel = sys.argv[5] == "True" -per = float(sys.argv[6]) -import multiprocessing -import os -import traceback - -import librosa -import numpy as np -from scipy.io import wavfile - -from infer.lib.audio import load_audio -from infer.lib.slicer2 import Slicer - -mutex = multiprocessing.Lock() -f = open("%s/preprocess.log" % exp_dir, "a+") - - -def println(strr): - mutex.acquire() - print(strr) - f.write("%s\n" % strr) - f.flush() - mutex.release() - - -class PreProcess: - def __init__(self, sr, exp_dir, per=3.7): - self.slicer = Slicer( - sr=sr, - threshold=-42, - min_length=1500, - min_interval=400, - hop_size=15, - max_sil_kept=500, - ) - self.sr = sr - self.bh, self.ah = signal.butter(N=5, Wn=48, btype="high", fs=self.sr) - self.per = per - self.overlap = 0.3 - self.tail = self.per + self.overlap - self.max = 0.9 - self.alpha = 0.75 - self.exp_dir = exp_dir - self.gt_wavs_dir = "%s/0_gt_wavs" % exp_dir - self.wavs16k_dir = "%s/1_16k_wavs" % exp_dir - os.makedirs(self.exp_dir, exist_ok=True) - os.makedirs(self.gt_wavs_dir, exist_ok=True) - os.makedirs(self.wavs16k_dir, exist_ok=True) - - def norm_write(self, tmp_audio, idx0, idx1): - tmp_max = np.abs(tmp_audio).max() - if tmp_max > 2.5: - print("%s-%s-%s-filtered" % (idx0, idx1, tmp_max)) - return - tmp_audio = (tmp_audio / tmp_max * (self.max * self.alpha)) + ( - 1 - self.alpha - ) * tmp_audio - wavfile.write( - "%s/%s_%s.wav" % (self.gt_wavs_dir, idx0, idx1), - self.sr, - tmp_audio.astype(np.float32), - ) - tmp_audio = librosa.resample( - tmp_audio, orig_sr=self.sr, target_sr=16000 - ) # , res_type="soxr_vhq" - wavfile.write( - "%s/%s_%s.wav" % (self.wavs16k_dir, idx0, idx1), - 16000, - tmp_audio.astype(np.float32), - ) - - def pipeline(self, path, idx0): - try: - audio = load_audio(path, self.sr) - # zero phased digital filter cause pre-ringing noise... - # audio = signal.filtfilt(self.bh, self.ah, audio) - audio = signal.lfilter(self.bh, self.ah, audio) - - idx1 = 0 - for audio in self.slicer.slice(audio): - i = 0 - while 1: - start = int(self.sr * (self.per - self.overlap) * i) - i += 1 - if len(audio[start:]) > self.tail * self.sr: - tmp_audio = audio[start : start + int(self.per * self.sr)] - self.norm_write(tmp_audio, idx0, idx1) - idx1 += 1 - else: - tmp_audio = audio[start:] - idx1 += 1 - break - self.norm_write(tmp_audio, idx0, idx1) - println("%s->Suc." 
% path) - except: - println("%s->%s" % (path, traceback.format_exc())) - - def pipeline_mp(self, infos): - for path, idx0 in infos: - self.pipeline(path, idx0) - - def pipeline_mp_inp_dir(self, inp_root, n_p): - try: - infos = [ - ("%s/%s" % (inp_root, name), idx) - for idx, name in enumerate(sorted(list(os.listdir(inp_root)))) - ] - if noparallel: - for i in range(n_p): - self.pipeline_mp(infos[i::n_p]) - else: - ps = [] - for i in range(n_p): - p = multiprocessing.Process( - target=self.pipeline_mp, args=(infos[i::n_p],) - ) - ps.append(p) - p.start() - for i in range(n_p): - ps[i].join() - except: - println("Fail. %s" % traceback.format_exc()) - - -def preprocess_trainset(inp_root, sr, n_p, exp_dir, per): - pp = PreProcess(sr, exp_dir, per) - println("start preprocess") - println(sys.argv) - pp.pipeline_mp_inp_dir(inp_root, n_p) - println("end preprocess") - - -if __name__ == "__main__": - preprocess_trainset(inp_root, sr, n_p, exp_dir, per) diff --git a/spaces/AI-Hobbyist/Hoyo-RVC/go-realtime-gui.bat b/spaces/AI-Hobbyist/Hoyo-RVC/go-realtime-gui.bat deleted file mode 100644 index 835543f5d4845f4b9dae70c1cf1855cce3ce6c0b..0000000000000000000000000000000000000000 --- a/spaces/AI-Hobbyist/Hoyo-RVC/go-realtime-gui.bat +++ /dev/null @@ -1,2 +0,0 @@ -runtime\python.exe gui.py -pause diff --git a/spaces/AI-Zero-to-Hero/04-GR-Seq-2-Seq-QA-Auto-Gen/app.py b/spaces/AI-Zero-to-Hero/04-GR-Seq-2-Seq-QA-Auto-Gen/app.py deleted file mode 100644 index c1cd92499cf1c7d2a91b4dc226bf2d558ff67661..0000000000000000000000000000000000000000 --- a/spaces/AI-Zero-to-Hero/04-GR-Seq-2-Seq-QA-Auto-Gen/app.py +++ /dev/null @@ -1,51 +0,0 @@ -import gradio as gr -from qasrl_model_pipeline import QASRL_Pipeline - -models = ["kleinay/qanom-seq2seq-model-baseline", - "kleinay/qanom-seq2seq-model-joint"] -pipelines = {model: QASRL_Pipeline(model) for model in models} - - -description = f"""Using Seq2Seq T5 model which takes a sequence of items and outputs another sequence this model generates Questions and Answers (QA) with focus on Semantic Role Labeling (SRL)""" -title="Seq2Seq T5 Questions and Answers (QA) with Semantic Role Labeling (SRL)" -examples = [[models[0], "In March and April the patient
<p>
had two falls. One was related to asthma, heart palpitations. The second was due to syncope and post covid vaccination dizziness during exercise. The patient is now getting an EKG. Former EKG had shown that there was a bundle branch block. Patient had some uncontrolled immune system reactions like anaphylaxis and shortness of breath.", True, "fall"], - [models[1], "In March and April the patient had two falls. One was related to asthma, heart palpitations. The second was due to syncope and post covid vaccination dizziness during exercise. The patient is now getting an EKG. Former EKG had shown that there was a bundle branch block. Patient had some uncontrolled immune system reactions
<p>
like anaphylaxis and shortness of breath.", True, "reactions"], - [models[0], "In March and April the patient had two falls. One was related
<p>
to asthma, heart palpitations. The second was due to syncope and post covid vaccination dizziness during exercise. The patient is now getting an EKG. Former EKG had shown that there was a bundle branch block. Patient had some uncontrolled immune system reactions like anaphylaxis and shortness of breath.", True, "relate"], - [models[1], "In March and April the patient
<p>
had two falls. One was related to asthma, heart palpitations. The second was due to syncope and post covid vaccination dizziness during exercise. The patient is now getting an EKG. Former EKG had shown that there was a bundle branch block. Patient had some uncontrolled immune system reactions like anaphylaxis and shortness of breath.", False, "fall"]] - -input_sent_box_label = "Insert sentence here. Mark the predicate by adding the token '
<p>
' before it." -verb_form_inp_placeholder = "e.g. 'decide' for the nominalization 'decision', 'teach' for 'teacher', etc." -links = """

-QASRL Website | Model Repo at Huggingface Hub -

""" -def call(model_name, sentence, is_nominal, verb_form): - predicate_marker="
<p>
" - if predicate_marker not in sentence: - raise ValueError("You must highlight one word of the sentence as a predicate using preceding '
<p>
'.") - - if not verb_form: - if is_nominal: - raise ValueError("You should provide the verbal form of the nominalization") - - toks = sentence.split(" ") - pred_idx = toks.index(predicate_marker) - predicate = toks(pred_idx+1) - verb_form=predicate - pipeline = pipelines[model_name] - pipe_out = pipeline([sentence], - predicate_marker=predicate_marker, - predicate_type="nominal" if is_nominal else "verbal", - verb_form=verb_form)[0] - return pipe_out["QAs"], pipe_out["generated_text"] -iface = gr.Interface(fn=call, - inputs=[gr.inputs.Radio(choices=models, default=models[0], label="Model"), - gr.inputs.Textbox(placeholder=input_sent_box_label, label="Sentence", lines=4), - gr.inputs.Checkbox(default=True, label="Is Nominalization?"), - gr.inputs.Textbox(placeholder=verb_form_inp_placeholder, label="Verbal form (for nominalizations)", default='')], - outputs=[gr.outputs.JSON(label="Model Output - QASRL"), gr.outputs.Textbox(label="Raw output sequence")], - title=title, - description=description, - article=links, - examples=examples ) - -iface.launch() \ No newline at end of file diff --git a/spaces/AbandonedMuse/UnlimitedMusicGen/audiocraft/modules/codebooks_patterns.py b/spaces/AbandonedMuse/UnlimitedMusicGen/audiocraft/modules/codebooks_patterns.py deleted file mode 100644 index c5b35cbea8cff84aa56116dbdd860fc72a913a13..0000000000000000000000000000000000000000 --- a/spaces/AbandonedMuse/UnlimitedMusicGen/audiocraft/modules/codebooks_patterns.py +++ /dev/null @@ -1,539 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from collections import namedtuple -from dataclasses import dataclass -from functools import lru_cache -import logging -import typing as tp - -from abc import ABC, abstractmethod -import torch - -LayoutCoord = namedtuple('LayoutCoord', ['t', 'q']) # (timestep, codebook index) -PatternLayout = tp.List[tp.List[LayoutCoord]] # Sequence of coordinates -logger = logging.getLogger(__name__) - - -@dataclass -class Pattern: - """Base implementation of a pattern over a sequence with multiple codebooks. - - The codebook pattern consists in a layout, defining for each sequence step - the list of coordinates of each codebook timestep in the resulting interleaved sequence. - The first item of the pattern is always an empty list in order to properly insert a special token - to start with. For convenience, we also keep track of ``n_q`` the number of codebooks used for the pattern - and ``timesteps`` the number of timesteps corresponding to the original sequence. - - The pattern provides convenient methods to build and revert interleaved sequences from it: - ``build_pattern_sequence`` maps a given a dense input tensor of multi-codebook sequence from [B, K, T] - to the interleaved sequence of shape [B, K, S] applying the pattern, with S being the batch size, - K being the number of codebooks, T the number of original timesteps and S the number of sequence steps - for the output sequence. The unfilled positions are replaced with a special token and the built sequence - is returned along with a mask indicating valid tokens. - ``revert_pattern_sequence`` maps back an interleaved sequence of shape [B, K, S] to the original alignment - of codebooks across timesteps to an output tensor of shape [B, K, T], using again a special token and a mask - to fill and specify invalid positions if needed. 
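    In a typical flow, a provider's ``get_pattern(timesteps)`` builds the ``Pattern``, ``build_pattern_sequence``
    turns the [B, K, T] codes into the interleaved [B, K, S] sequence the model consumes, and
    ``revert_pattern_sequence`` maps the model output back to the original [B, K, T] alignment.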
- See the dedicated methods for more details. - """ - # Pattern layout, for each sequence step, we have a list of coordinates - # corresponding to the original codebook timestep and position. - # The first list is always an empty list in order to properly insert - # a special token to start with. - layout: PatternLayout - timesteps: int - n_q: int - - def __post_init__(self): - assert len(self.layout) > 0 - assert self.layout[0] == [] - self._validate_layout() - self._build_reverted_sequence_scatter_indexes = lru_cache(100)(self._build_reverted_sequence_scatter_indexes) - self._build_pattern_sequence_scatter_indexes = lru_cache(100)(self._build_pattern_sequence_scatter_indexes) - logger.info("New pattern, time steps: %d, sequence steps: %d", self.timesteps, len(self.layout)) - - def _validate_layout(self): - """Runs checks on the layout to ensure a valid pattern is defined. - A pattern is considered invalid if: - - Multiple timesteps for a same codebook are defined in the same sequence step - - The timesteps for a given codebook are not in ascending order as we advance in the sequence - (this would mean that we have future timesteps before past timesteps). - """ - q_timesteps = {q: 0 for q in range(self.n_q)} - for s, seq_coords in enumerate(self.layout): - if len(seq_coords) > 0: - qs = set() - for coord in seq_coords: - qs.add(coord.q) - last_q_timestep = q_timesteps[coord.q] - assert coord.t >= last_q_timestep, \ - f"Past timesteps are found in the sequence for codebook = {coord.q} at step {s}" - q_timesteps[coord.q] = coord.t - # each sequence step contains at max 1 coordinate per codebook - assert len(qs) == len(seq_coords), \ - f"Multiple entries for a same codebook are found at step {s}" - - @property - def num_sequence_steps(self): - return len(self.layout) - 1 - - @property - def max_delay(self): - max_t_in_seq_coords = 0 - for seq_coords in self.layout[1:]: - for coords in seq_coords: - max_t_in_seq_coords = max(max_t_in_seq_coords, coords.t + 1) - return max_t_in_seq_coords - self.timesteps - - @property - def valid_layout(self): - valid_step = len(self.layout) - self.max_delay - return self.layout[:valid_step] - - def get_sequence_coords_with_timestep(self, t: int, q: tp.Optional[int] = None): - """Get codebook coordinates in the layout that corresponds to the specified timestep t - and optionally to the codebook q. Coordinates are returned as a tuple with the sequence step - and the actual codebook coordinates. - """ - assert t <= self.timesteps, "provided timesteps is greater than the pattern's number of timesteps" - if q is not None: - assert q <= self.n_q, "provided number of codebooks is greater than the pattern's number of codebooks" - coords = [] - for s, seq_codes in enumerate(self.layout): - for code in seq_codes: - if code.t == t and (q is None or code.q == q): - coords.append((s, code)) - return coords - - def get_steps_with_timestep(self, t: int, q: tp.Optional[int] = None) -> tp.List[int]: - return [step for step, coords in self.get_sequence_coords_with_timestep(t, q)] - - def get_first_step_with_timesteps(self, t: int, q: tp.Optional[int] = None) -> tp.Optional[int]: - steps_with_timesteps = self.get_steps_with_timestep(t, q) - return steps_with_timesteps[0] if len(steps_with_timesteps) > 0 else None - - def _build_pattern_sequence_scatter_indexes(self, timesteps: int, n_q: int, keep_only_valid_steps: bool, - device: tp.Union[torch.device, str] = 'cpu'): - """Build scatter indexes corresponding to the pattern, up to the provided sequence_steps. 
- - Args: - timesteps (int): Maximum number of timesteps steps to consider. - keep_only_valid_steps (bool): Restrict the pattern layout to match only valid steps. - device (Union[torch.device, str]): Device for created tensors. - Returns: - indexes (torch.Tensor): Indexes corresponding to the sequence, of shape [K, S]. - mask (torch.Tensor): Mask corresponding to indexes that matches valid indexes, of shape [K, S]. - """ - assert n_q == self.n_q, f"invalid number of codebooks for the sequence and the pattern: {n_q} != {self.n_q}" - assert timesteps <= self.timesteps, "invalid number of timesteps used to build the sequence from the pattern" - # use the proper layout based on whether we limit ourselves to valid steps only or not, - # note that using the valid_layout will result in a truncated sequence up to the valid steps - ref_layout = self.valid_layout if keep_only_valid_steps else self.layout - # single item indexing being super slow with pytorch vs. numpy, so we use numpy here - indexes = torch.zeros(n_q, len(ref_layout), dtype=torch.long).numpy() - mask = torch.zeros(n_q, len(ref_layout), dtype=torch.bool).numpy() - # fill indexes with last sequence step value that will correspond to our special token - # the last value is n_q * timesteps as we have flattened z and append special token as the last token - # which will correspond to the index: n_q * timesteps - indexes[:] = n_q * timesteps - # iterate over the pattern and fill scattered indexes and mask - for s, sequence_coords in enumerate(ref_layout): - for coords in sequence_coords: - if coords.t < timesteps: - indexes[coords.q, s] = coords.t + coords.q * timesteps - mask[coords.q, s] = 1 - indexes = torch.from_numpy(indexes).to(device) - mask = torch.from_numpy(mask).to(device) - return indexes, mask - - def build_pattern_sequence(self, z: torch.Tensor, special_token: int, keep_only_valid_steps: bool = False): - """Build sequence corresponding to the pattern from the input tensor z. - The sequence is built using up to sequence_steps if specified, and non-pattern - coordinates are filled with the special token. - - Args: - z (torch.Tensor): Input tensor of multi-codebooks sequence, of shape [B, K, T]. - special_token (int): Special token used to fill non-pattern coordinates in the new sequence. - keep_only_valid_steps (bool): Build a sequence from the pattern up to valid (= fully defined) steps. - Steps that are beyond valid steps will be replaced by the special_token in that case. - Returns: - values (torch.Tensor): Interleaved sequence matching the pattern, of shape [B, K, S] with S - corresponding either to the sequence_steps if provided, otherwise to the length of the pattern. - indexes (torch.Tensor): Indexes corresponding to the interleaved sequence, of shape [K, S]. - mask (torch.Tensor): Mask corresponding to indexes that matches valid indexes of shape [K, S]. 
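        Example:
            With the pattern from ``DelayedPatternProvider(n_q=2).get_pattern(timesteps=3)`` (default delays [0, 1]),
            an input of shape [B, 2, 3] such as
                [[1, 2, 3],
                 [1, 2, 3]]
            is laid out as ``values`` of shape [B, 2, 5] (one leading special step + 3 timesteps + max delay 1):
                [[S, 1, 2, 3, S],
                 [S, S, 1, 2, 3]]
            with S the special_token and ``mask`` marking the non-special positions.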
- """ - B, K, T = z.shape - indexes, mask = self._build_pattern_sequence_scatter_indexes( - T, K, keep_only_valid_steps=keep_only_valid_steps, device=str(z.device) - ) - z = z.view(B, -1) - # we append the special token as the last index of our flattened z tensor - z = torch.cat([z, torch.zeros_like(z[:, :1]) + special_token], dim=1) - values = z[:, indexes.view(-1)] - values = values.view(B, K, indexes.shape[-1]) - return values, indexes, mask - - def _build_reverted_sequence_scatter_indexes(self, sequence_steps: int, n_q: int, - keep_only_valid_steps: bool = False, - is_model_output: bool = False, - device: tp.Union[torch.device, str] = 'cpu'): - """Builds scatter indexes required to retrieve the original multi-codebook sequence - from interleaving pattern. - - Args: - sequence_steps (int): Sequence steps. - n_q (int): Number of codebooks. - keep_only_valid_steps (bool): Build a sequence from the pattern up to valid (= fully defined) steps. - Steps that are beyond valid steps will be replaced by the special_token in that case. - is_model_output (bool): Whether to keep the sequence item corresponding to initial special token or not. - device (Union[torch.device, str]): Device for created tensors. - Returns: - torch.Tensor: Indexes for reconstructing the output, of shape [K, T]. - mask (torch.Tensor): Mask corresponding to indexes that matches valid indexes of shape [K, T]. - """ - ref_layout = self.valid_layout if keep_only_valid_steps else self.layout - # TODO(jade): Do we want to further truncate to only valid timesteps here as well? - timesteps = self.timesteps - assert n_q == self.n_q, f"invalid number of codebooks for the sequence and the pattern: {n_q} != {self.n_q}" - assert sequence_steps <= len(ref_layout), \ - f"sequence to revert is longer than the defined pattern: {sequence_steps} > {len(ref_layout)}" - - # ensure we take the appropriate indexes to keep the model output from the first special token as well - if is_model_output: - ref_layout = ref_layout[1:] - - # single item indexing being super slow with pytorch vs. numpy, so we use numpy here - indexes = torch.zeros(n_q, timesteps, dtype=torch.long).numpy() - mask = torch.zeros(n_q, timesteps, dtype=torch.bool).numpy() - # fill indexes with last sequence step value that will correspond to our special token - indexes[:] = n_q * sequence_steps - for s, sequence_codes in enumerate(ref_layout): - if s < sequence_steps: - for code in sequence_codes: - if code.t < timesteps: - indexes[code.q, code.t] = s + code.q * sequence_steps - mask[code.q, code.t] = 1 - indexes = torch.from_numpy(indexes).to(device) - mask = torch.from_numpy(mask).to(device) - return indexes, mask - - def revert_pattern_sequence(self, s: torch.Tensor, special_token: int, keep_only_valid_steps: bool = False): - """Revert a sequence built from the pattern back to the original multi-codebook sequence without interleaving. - The sequence is reverted using up to timesteps if specified, and non-pattern coordinates - are filled with the special token. - - Args: - s (torch.Tensor): Interleaved sequence tensor obtained from the pattern, of shape [B, K, S]. - special_token (int or float): Special token used to fill non-pattern coordinates in the new sequence. - Returns: - values (torch.Tensor): Interleaved sequence matching the pattern, of shape [B, K, T] with T - corresponding either to the timesteps if provided, or the total timesteps in pattern otherwise. - indexes (torch.Tensor): Indexes corresponding to the interleaved sequence, of shape [K, T]. 
- mask (torch.Tensor): Mask corresponding to indexes that matches valid indexes of shape [K, T]. - """ - B, K, S = s.shape - indexes, mask = self._build_reverted_sequence_scatter_indexes( - S, K, keep_only_valid_steps, is_model_output=False, device=str(s.device) - ) - s = s.view(B, -1) - # we append the special token as the last index of our flattened z tensor - s = torch.cat([s, torch.zeros_like(s[:, :1]) + special_token], dim=1) - values = s[:, indexes.view(-1)] - values = values.view(B, K, indexes.shape[-1]) - return values, indexes, mask - - def revert_pattern_logits(self, logits: torch.Tensor, special_token: float, keep_only_valid_steps: bool = False): - """Revert model logits obtained on a sequence built from the pattern - back to a tensor matching the original sequence. - - This method is similar to ``revert_pattern_sequence`` with the following specificities: - 1. It is designed to work with the extra cardinality dimension - 2. We return the logits for the first sequence item that matches the special_token and - which matching target in the original sequence is the first item of the sequence, - while we skip the last logits as there is no matching target - """ - B, card, K, S = logits.shape - indexes, mask = self._build_reverted_sequence_scatter_indexes( - S, K, keep_only_valid_steps, is_model_output=True, device=logits.device - ) - logits = logits.reshape(B, card, -1) - # we append the special token as the last index of our flattened z tensor - logits = torch.cat([logits, torch.zeros_like(logits[:, :, :1]) + special_token], dim=-1) # [B, card, K x S] - values = logits[:, :, indexes.view(-1)] - values = values.view(B, card, K, indexes.shape[-1]) - return values, indexes, mask - - -class CodebooksPatternProvider(ABC): - """Abstraction around providing pattern for interleaving codebooks. - - The CodebooksPatternProvider abstraction allows to implement various strategies to - define interleaving pattern of sequences composed of multiple codebooks. For a given - number of codebooks `n_q`, the pattern provider can generate a specified pattern - corresponding to a sequence of `T` timesteps with `n_q` parallel codebooks. This pattern - can be used to construct a new sequence from the original codes respecting the specified - pattern. The pattern is defined as a list of list of code coordinates, code coordinate - being a tuple with the original timestep and codebook to build the new sequence. - Note that all patterns must start with an empty list that is then used to insert a first - sequence step of special tokens in the newly generated sequence. - - Args: - n_q (int): number of codebooks. - cached (bool): if True, patterns for a given length are cached. In general - that should be true for efficiency reason to avoid synchronization points. - """ - def __init__(self, n_q: int, cached: bool = True): - assert n_q > 0 - self.n_q = n_q - self.get_pattern = lru_cache(100)(self.get_pattern) # type: ignore - - @abstractmethod - def get_pattern(self, timesteps: int) -> Pattern: - """Builds pattern with specific interleaving between codebooks. - - Args: - timesteps (int): Total numer of timesteps. - """ - raise NotImplementedError() - - -class DelayedPatternProvider(CodebooksPatternProvider): - """Provider for delayed pattern across delayed codebooks. - Codebooks are delayed in the sequence and sequence steps will contain codebooks - from different timesteps. 
- - Example: - Taking timesteps=4 and n_q=3, delays=None, the multi-codebook sequence: - [[1, 2, 3, 4], - [1, 2, 3, 4], - [1, 2, 3, 4]] - The resulting sequence obtained from the returned pattern is: - [[S, 1, 2, 3, 4], - [S, S, 1, 2, 3], - [S, S, S, 1, 2]] - (with S being a special token) - - Args: - n_q (int): Number of codebooks. - delays (Optional[List[int]]): Delay for each of the codebooks. - If delays not defined, each codebook is delayed by 1 compared to the previous one. - flatten_first (int): Flatten the first N timesteps. - empty_initial (int): Prepend with N empty list of coordinates. - """ - def __init__(self, n_q: int, delays: tp.Optional[tp.List[int]] = None, - flatten_first: int = 0, empty_initial: int = 0): - super().__init__(n_q) - if delays is None: - delays = list(range(n_q)) - self.delays = delays - self.flatten_first = flatten_first - self.empty_initial = empty_initial - assert len(self.delays) == self.n_q - assert sorted(self.delays) == self.delays - - def get_pattern(self, timesteps: int) -> Pattern: - out: PatternLayout = [[]] - max_delay = max(self.delays) - if self.empty_initial: - out += [[] for _ in range(self.empty_initial)] - if self.flatten_first: - for t in range(min(timesteps, self.flatten_first)): - for q in range(self.n_q): - out.append([LayoutCoord(t, q)]) - for t in range(self.flatten_first, timesteps + max_delay): - v = [] - for q, delay in enumerate(self.delays): - t_for_q = t - delay - if t_for_q >= self.flatten_first: - v.append(LayoutCoord(t_for_q, q)) - out.append(v) - return Pattern(out, n_q=self.n_q, timesteps=timesteps) - - -class ParallelPatternProvider(DelayedPatternProvider): - """Provider for parallel pattern across codebooks. - This pattern provider is a special case of the delayed pattern with actually no delay, - hence delays=repeat(0, n_q). - - Args: - n_q (int): Number of codebooks. - """ - def __init__(self, n_q: int): - super().__init__(n_q, [0] * n_q) - - -class UnrolledPatternProvider(CodebooksPatternProvider): - """Provider for unrolling codebooks pattern. - This pattern provider enables to represent the codebook flattened completely or only to some extend - while also specifying a given delay between the flattened codebooks representation, allowing to - unroll the codebooks in the sequence. - - Example: - 1. Flattening of the codebooks. - By default, the pattern provider will fully flatten the codebooks such as flattening=range(n_q), - taking n_q = 3 and timesteps = 4: - [[1, 2, 3, 4], - [1, 2, 3, 4], - [1, 2, 3, 4]] - will result into: - [[S, S, 1, S, S, 2, S, S, 3, S, S, 4], - [S, 1, S, S, 2, S, S, 3, S, S, 4, S], - [1, S, S, 2, S, S, 3, S, S, 4, S, S]] - 2. Partial flattening of the codebooks. The ``flattening`` parameter allows to specify the inner step - for each of the codebook, allowing to define which codebook to flatten (or keep in parallel), for example - taking n_q = 3, timesteps = 4 and flattening = [0, 1, 1]: - [[1, 2, 3, 4], - [1, 2, 3, 4], - [1, 2, 3, 4]] - will result into: - [[S, 1, S, S, 2, S, S, 3, S, S, 4, S], - [S, 1, S, S, 2, S, S, 3, S, S, 4, S], - [1, S, S, 2, S, S, 3, S, S, 4, S, S]] - 3. Flattening with delay. The ``delay`` parameter allows to further unroll the sequence of codebooks - allowing to specify the delay per codebook. Note that the delay between codebooks flattened to the - same inner timestep should be coherent. 
For example, taking n_q = 3, timesteps = 4, flattening = [0, 1, 1] - and delays = [0, 3, 3]: - [[1, 2, 3, 4], - [1, 2, 3, 4], - [1, 2, 3, 4]] - will result into: - [[S, S, S, 1, S, 2, S, 3, S, 4], - [S, S, S, 1, S, 2, S, 3, S, 4], - [1, 2, 3, S, 4, S, 5, S, 6, S]] - - Args: - n_q (int): Number of codebooks. - flattening (Optional[List[int]]): Flattening schema over the codebooks. If not defined, - the codebooks will be flattened to 1 codebook per step, meaning that the sequence will - have n_q extra steps for each timestep. - delays (Optional[List[int]]): Delay for each of the codebooks. If not defined, - no delay is added and therefore will default to [0] * ``n_q``. - Note that two codebooks that will be flattened to the same inner step - should have the same delay, otherwise the pattern is considered as invalid. - """ - FlattenedCodebook = namedtuple('FlattenedCodebook', ['codebooks', 'delay']) - - def __init__(self, n_q: int, flattening: tp.Optional[tp.List[int]] = None, - delays: tp.Optional[tp.List[int]] = None): - super().__init__(n_q) - if flattening is None: - flattening = list(range(n_q)) - if delays is None: - delays = [0] * n_q - assert len(flattening) == n_q - assert len(delays) == n_q - assert sorted(flattening) == flattening - assert sorted(delays) == delays - self._flattened_codebooks = self._build_flattened_codebooks(delays, flattening) - self.max_delay = max(delays) - - def _build_flattened_codebooks(self, delays: tp.List[int], flattening: tp.List[int]): - """Build a flattened codebooks representation as a dictionary of inner step - and the actual codebook indices corresponding to the flattened codebook. For convenience, we - also store the delay associated to the flattened codebook to avoid maintaining an extra mapping. - """ - flattened_codebooks: dict = {} - for q, (inner_step, delay) in enumerate(zip(flattening, delays)): - if inner_step not in flattened_codebooks: - flat_codebook = UnrolledPatternProvider.FlattenedCodebook(codebooks=[q], delay=delay) - else: - flat_codebook = flattened_codebooks[inner_step] - assert flat_codebook.delay == delay, ( - "Delay and flattening between codebooks is inconsistent: ", - "two codebooks flattened to the same position should have the same delay." - ) - flat_codebook.codebooks.append(q) - flattened_codebooks[inner_step] = flat_codebook - return flattened_codebooks - - @property - def _num_inner_steps(self): - """Number of inner steps to unroll between timesteps in order to flatten the codebooks. - """ - return max([inner_step for inner_step in self._flattened_codebooks.keys()]) + 1 - - def num_virtual_steps(self, timesteps: int) -> int: - return timesteps * self._num_inner_steps + 1 - - def get_pattern(self, timesteps: int) -> Pattern: - """Builds pattern for delay across codebooks. - - Args: - timesteps (int): Total numer of timesteps. 
- """ - # the PatternLayout is built as a tuple of sequence position and list of coordinates - # so that it can be reordered properly given the required delay between codebooks of given timesteps - indexed_out: list = [(-1, [])] - max_timesteps = timesteps + self.max_delay - for t in range(max_timesteps): - # for each timestep, we unroll the flattened codebooks, - # emitting the sequence step with the corresponding delay - for step in range(self._num_inner_steps): - if step in self._flattened_codebooks: - # we have codebooks at this virtual step to emit - step_codebooks = self._flattened_codebooks[step] - t_for_q = t + step_codebooks.delay - coords = [LayoutCoord(t, q) for q in step_codebooks.codebooks] - if t_for_q < max_timesteps and t < max_timesteps: - indexed_out.append((t_for_q, coords)) - else: - # there is no codebook in this virtual step so we emit an empty list - indexed_out.append((t, [])) - out = [coords for _, coords in sorted(indexed_out)] - return Pattern(out, n_q=self.n_q, timesteps=timesteps) - - -class VALLEPattern(CodebooksPatternProvider): - """Almost VALL-E style pattern. We futher allow some delays for the - codebooks other than the first one. - - Args: - n_q (int): Number of codebooks. - delays (Optional[List[int]]): Delay for each of the codebooks. - If delays not defined, each codebook is delayed by 1 compared to the previous one. - """ - def __init__(self, n_q: int, delays: tp.Optional[tp.List[int]] = None): - super().__init__(n_q) - if delays is None: - delays = [0] * (n_q - 1) - self.delays = delays - assert len(self.delays) == self.n_q - 1 - assert sorted(self.delays) == self.delays - - def get_pattern(self, timesteps: int) -> Pattern: - out: PatternLayout = [[]] - for t in range(timesteps): - out.append([LayoutCoord(t, 0)]) - max_delay = max(self.delays) - for t in range(timesteps + max_delay): - v = [] - for q, delay in enumerate(self.delays): - t_for_q = t - delay - if t_for_q >= 0: - v.append(LayoutCoord(t_for_q, q + 1)) - out.append(v) - return Pattern(out, n_q=self.n_q, timesteps=timesteps) - - -class MusicLMPattern(CodebooksPatternProvider): - """Almost MusicLM style pattern. This is equivalent to full flattening - but in a different order. - - Args: - n_q (int): Number of codebooks. - group_by (int): Number of codebooks to group together. 
- """ - def __init__(self, n_q: int, group_by: int = 2): - super().__init__(n_q) - self.group_by = group_by - - def get_pattern(self, timesteps: int) -> Pattern: - out: PatternLayout = [[]] - for offset in range(0, self.n_q, self.group_by): - for t in range(timesteps): - for q in range(offset, offset + self.group_by): - out.append([LayoutCoord(t, q)]) - return Pattern(out, n_q=self.n_q, timesteps=timesteps) diff --git a/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/hooks.server.ts b/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/hooks.server.ts deleted file mode 100644 index 0114a143c46f8e4a0f08c8c554d2054ff4be8a35..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/hooks.server.ts +++ /dev/null @@ -1,107 +0,0 @@ -import { COOKIE_NAME, MESSAGES_BEFORE_LOGIN } from "$env/static/private"; -import type { Handle } from "@sveltejs/kit"; -import { - PUBLIC_GOOGLE_ANALYTICS_ID, - PUBLIC_DEPRECATED_GOOGLE_ANALYTICS_ID, - PUBLIC_ORIGIN, - PUBLIC_APP_DISCLAIMER, -} from "$env/static/public"; -import { collections } from "$lib/server/database"; -import { base } from "$app/paths"; -import { refreshSessionCookie, requiresUser } from "$lib/server/auth"; -import { ERROR_MESSAGES } from "$lib/stores/errors"; - -export const handle: Handle = async ({ event, resolve }) => { - const token = event.cookies.get(COOKIE_NAME); - - event.locals.sessionId = token || crypto.randomUUID(); - - function errorResponse(status: number, message: string) { - const sendJson = - event.request.headers.get("accept")?.includes("application/json") || - event.request.headers.get("content-type")?.includes("application/json"); - return new Response(sendJson ? JSON.stringify({ error: message }) : message, { - status, - headers: { - "content-type": sendJson ? "application/json" : "text/plain", - }, - }); - } - - // CSRF protection - const requestContentType = event.request.headers.get("content-type")?.split(";")[0] ?? ""; - /** https://developer.mozilla.org/en-US/docs/Web/HTML/Element/form#attr-enctype */ - const nativeFormContentTypes = [ - "multipart/form-data", - "application/x-www-form-urlencoded", - "text/plain", - ]; - if (event.request.method === "POST" && nativeFormContentTypes.includes(requestContentType)) { - const referer = event.request.headers.get("referer"); - - if (!referer) { - return errorResponse(403, "Non-JSON form requests need to have a referer"); - } - - const validOrigins = [ - new URL(event.request.url).origin, - ...(PUBLIC_ORIGIN ? [new URL(PUBLIC_ORIGIN).origin] : []), - ]; - - if (!validOrigins.includes(new URL(referer).origin)) { - return errorResponse(403, "Invalid referer for POST request"); - } - } - - // if ( - // !event.url.pathname.startsWith(`${base}/login`) && - // !event.url.pathname.startsWith(`${base}/admin`) && - // !["GET", "OPTIONS", "HEAD"].includes(event.request.method) - // ) { - // if ( - // !user && - // requiresUser && - // !((MESSAGES_BEFORE_LOGIN ? parseInt(MESSAGES_BEFORE_LOGIN) : 0) > 0) - // ) { - // return errorResponse(401, ERROR_MESSAGES.authOnly); - // } - - // // if login is not required and the call is not from /settings and we display the ethics modal with PUBLIC_APP_DISCLAIMER - // // we check if the user has accepted the ethics modal first. - // // If login is required, `ethicsModalAcceptedAt` is already true at this point, so do not pass this condition. This saves a DB call. 
- // if ( - // !requiresUser && - // !event.url.pathname.startsWith(`${base}/settings`) && - // !!PUBLIC_APP_DISCLAIMER - // ) { - // const hasAcceptedEthicsModal = await collections.settings.countDocuments({ - // sessionId: event.locals.sessionId, - // ethicsModalAcceptedAt: { $exists: true }, - // }); - - // if (!hasAcceptedEthicsModal) { - // return errorResponse(405, "You need to accept the welcome modal first"); - // } - // } - // } - - refreshSessionCookie(event.cookies, event.locals.sessionId); - - let replaced = false; - - const response = await resolve(event, { - transformPageChunk: (chunk) => { - // For some reason, Sveltekit doesn't let us load env variables from .env in the app.html template - if (replaced || !chunk.html.includes("%gaId%") || !chunk.html.includes("%gaIdDeprecated%")) { - return chunk.html; - } - replaced = true; - - return chunk.html - .replace("%gaId%", PUBLIC_GOOGLE_ANALYTICS_ID) - .replace("%gaIdDeprecated%", PUBLIC_DEPRECATED_GOOGLE_ANALYTICS_ID); - }, - }); - - return response; -}; diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/customshapes/Factory.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/customshapes/Factory.d.ts deleted file mode 100644 index f3b08950efe6d29bed2fa9523e10fc461ba18be6..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/customshapes/Factory.d.ts +++ /dev/null @@ -1,5 +0,0 @@ -import CustomShapes from "./CustomShapes"; - -export default function ( - config?: CustomShapes.IConfig -): CustomShapes; \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/fixwidthbuttons/FixWidthButtons.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/fixwidthbuttons/FixWidthButtons.d.ts deleted file mode 100644 index db8767c8aefa115eb6502c94f46a5b342cef1cc8..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/fixwidthbuttons/FixWidthButtons.d.ts +++ /dev/null @@ -1,89 +0,0 @@ -// import * as Phaser from 'phaser'; -import FixWidthSizer from '../fixwidthsizer/FixWidthSizer'; -import { IConfig as IConfigButtons } from '../utils/buttongroup/Buttons'; - - -export default FixWidthButtons; - -declare namespace FixWidthButtons { - - interface IConfig extends FixWidthSizer.IConfig, IConfigButtons { - background?: Phaser.GameObjects.GameObject, - - buttons?: Phaser.GameObjects.GameObject[], - } - -} - -declare class FixWidthButtons extends FixWidthSizer { - constructor( - scene: Phaser.Scene, - config?: FixWidthButtons.IConfig - ); - - emitButtonClick( - index: number | Phaser.GameObjects.GameObject - ): this; - - setButtonEnable( - index?: number | Phaser.GameObjects.GameObject | boolean, - enable?: boolean - ): this; - - toggleButtonEnable( - index?: number | Phaser.GameObjects.GameObject - ): this; - - getButtonEnable( - index: number | Phaser.GameObjects.GameObject - ): boolean; - - getButton( - index: number - ): Phaser.GameObjects.GameObject | null; - - addButton( - gameObject: Phaser.GameObjects.GameObject - ): this; - - removeButton( - gameObject: Phaser.GameObjects.GameObject, - destroyChild?: boolean - ): this; - - clearButtons( - destroyChild?: boolean - ): this; - - showButton( - index: number | Phaser.GameObjects.GameObject - ): this; - - hideButton( - index: number | Phaser.GameObjects.GameObject - ): this; - - forEachButtton( - callback: (button: Phaser.GameObjects.GameObject, 
index: number, buttons: Phaser.GameObjects.GameObject[]) => void, - scop?: unknown - ): this; - - readonly buttons: Phaser.GameObjects.GameObject[]; - - value: unknown; - - setSelectedButtonName( - name: string - ): this; - - getSelectedButtonName(): string; - - setButtonState( - name: string, - state?: boolean - ): this; - - getButtonState( - name: string - ): boolean; -} diff --git a/spaces/AlexWelcing/MusicLM/musiclm_pytorch.py b/spaces/AlexWelcing/MusicLM/musiclm_pytorch.py deleted file mode 100644 index 48d1f8b1712610ca0971a4df41d8975634a4bea8..0000000000000000000000000000000000000000 --- a/spaces/AlexWelcing/MusicLM/musiclm_pytorch.py +++ /dev/null @@ -1,559 +0,0 @@ -import torch -import torch.nn.functional as F -from torch import nn, einsum - -from torchaudio.transforms import Spectrogram, TimeStretch, FrequencyMasking, TimeMasking - -from audiolm_pytorch import AudioLM -from audiolm_pytorch.utils import AudioConditionerBase - -from x_clip.tokenizer import tokenizer -from vector_quantize_pytorch import ResidualVQ - -from einops import rearrange, repeat, reduce, pack, unpack - -from beartype.typing import List, Optional, Tuple -from beartype import beartype - -# functions - -def exists(val): - return val is not None - -def default(val, d): - return val if exists(val) else d - -def round_down_nearest_multiple(n, divisor): - return n // divisor * divisor - -# tensor functions - -def log(t, eps = 1e-20): - return torch.log(t.clamp(min = eps)) - -def l2norm(t): - return F.normalize(t, p = 2, dim = -1) - -# 2d sinusoidal positional embedding -# simple vit paper shows it is good enough compared to learned - -def posemb_sincos_2d(patches, temperature = 10000, dtype = torch.float32): - _, h, w, dim, device, dtype = *patches.shape, patches.device, patches.dtype - - y, x = torch.meshgrid(torch.arange(h, device = device), torch.arange(w, device = device), indexing = 'ij') - assert (dim % 4) == 0, 'feature dimension must be multiple of 4 for sincos emb' - - omega = torch.arange(dim // 4, device = device) / (dim // 4 - 1) - omega = 1. / (temperature ** omega) - - y = y.flatten()[:, None] * omega[None, :] - x = x.flatten()[:, None] * omega[None, :] - - pe = torch.cat((x.sin(), x.cos(), y.sin(), y.cos()), dim = 1) - pe = pe.type(dtype) - - return rearrange(pe, '(h w) d -> h w d', h = h, w = w) - -# biasless layernorm - -class LayerNorm(nn.Module): - def __init__(self, dim): - super().__init__() - self.gamma = nn.Parameter(torch.ones(dim)) - self.register_buffer('beta', torch.zeros(dim)) - - def forward(self, x): - return F.layer_norm(x, x.shape[-1:], self.gamma, self.beta) - -# feedforward - -class GEGLU(nn.Module): - def forward(self, x): - x, gate = x.chunk(2, dim = -1) - return F.gelu(gate) * x - -def FeedForward(dim, mult = 4, dropout = 0.): - dim_hidden = int(dim * mult * 2 / 3) - - return nn.Sequential( - LayerNorm(dim), - nn.Linear(dim, dim_hidden * 2, bias = False), - GEGLU(), - nn.Dropout(dropout), - nn.Linear(dim_hidden, dim, bias = False) - ) - -# attention - -class Attention(nn.Module): - def __init__( - self, - dim, - causal = False, - dim_head = 64, - heads = 8, - dropout = 0. 
- ): - super().__init__() - self.heads = heads - self.scale = dim_head ** -0.5 - self.causal = causal - inner_dim = dim_head * heads - - self.norm = LayerNorm(dim) - - self.attn_dropout = nn.Dropout(dropout) - - self.to_q = nn.Linear(dim, inner_dim, bias = False) - self.to_kv = nn.Linear(dim, inner_dim * 2, bias = False) - - self.to_out = nn.Sequential( - nn.Linear(inner_dim, dim, bias = False), - nn.Dropout(dropout) - ) - - def forward( - self, - x, - mask = None - ): - b, n, _, device = *x.shape, x.device - - # prenorm - - x = self.norm(x) - - # project for queries, keys, values - - q, k, v = self.to_q(x), *self.to_kv(x).chunk(2, dim = -1) - - # split for multi-headed attention - - q, k, v = map(lambda t: rearrange(t, 'b n (h d) -> b h n d', h = self.heads), (q, k, v)) - - q = q * self.scale - - # similarities - - sim = einsum('b h i d, b h j d -> b h i j', q, k) - - if exists(mask): - mask = rearrange(mask, 'b j -> b 1 1 j') - sim = sim.masked_fill(~mask, -torch.finfo(sim.dtype).max) - - if self.causal: - i, j = sim.shape[-2:] - causal_mask = torch.ones((i, j), dtype = torch.bool, device = x.device).triu(j - i + 1) - sim = sim.masked_fill(causal_mask, -torch.finfo(sim.dtype).max) - - # attention - - attn = sim.softmax(dim = -1) - attn = self.attn_dropout(attn) - - # aggregate - - out = einsum('b h i j, b h j d -> b h i d', attn, v) - - # merge heads - - out = rearrange(out, 'b h n d -> b n (h d)') - return self.to_out(out) - -# transformer - -class Transformer(nn.Module): - def __init__( - self, - dim, - depth, - dim_head = 64, - heads = 8, - attn_dropout = 0., - ff_mult = 4, - ff_dropout = 0. - ): - super().__init__() - self.layers = nn.ModuleList([]) - for _ in range(depth): - self.layers.append(nn.ModuleList([ - Attention(dim = dim, dim_head = dim_head, heads = heads, dropout = attn_dropout), - FeedForward(dim = dim, mult = ff_mult, dropout = ff_dropout), - ])) - - def forward(self, x, mask = None): - - for attn, ff in self.layers: - x = attn(x, mask = mask) + x - x = ff(x) + x - - return x - -# Audio Spectrogram Transformer - https://arxiv.org/abs/2104.01778 - -def pair(t): - return (t, t) if not isinstance(t, tuple) else t - -class AudioSpectrogramTransformer(nn.Module): - def __init__( - self, - dim, - depth, - patch_size = 16, - dim_head = 64, - heads = 8, - attn_dropout = 0., - ff_mult = 4, - ff_dropout = 0., - spec_n_fft = 128, - spec_power = 2, - spec_win_length = 24, - spec_hop_length = None, - spec_pad = 0, - spec_center = True, - spec_pad_mode = 'reflect', - spec_aug_stretch_factor = 0.8, - spec_aug_freq_mask = 80, - spec_aug_time_mask = 80 - ): - super().__init__() - self.dim = dim - - self.patch_size = pair(patch_size) - self.to_patch_tokens = nn.Conv2d(self.patch_size[0] * self.patch_size[1], dim, 1) - - self.spec = Spectrogram( - n_fft = spec_n_fft, - power = spec_power, - win_length = spec_win_length, - hop_length = spec_hop_length, - pad = spec_pad, - center = spec_center, - pad_mode = spec_pad_mode - ) - - # SpecAugment - seems to be widely used in audio field https://arxiv.org/abs/1904.08779 - - self.aug = torch.nn.Sequential( - TimeStretch(spec_aug_stretch_factor, fixed_rate=True), - FrequencyMasking(freq_mask_param = spec_aug_freq_mask), - TimeMasking(time_mask_param = spec_aug_time_mask), - ) - - self.transformer = Transformer( - dim = dim, - depth = depth, - dim_head = dim_head, - heads = heads, - attn_dropout = attn_dropout, - ff_mult = ff_mult, - ff_dropout = ff_dropout - ) - - self.norm = LayerNorm(dim) - - def forward(self, x): - x = self.spec(x) - - if 
self.training: - x = self.aug(x) - - # automatically crop if audio does not yield a 2d spectrogram that is divisible by patch sizes - - height, width = x.shape[-2:] - patch_height, patch_width = self.patch_size - - rounded_height, rounded_width = map(lambda args: round_down_nearest_multiple(*args), ((height, patch_height), (width, patch_width))) - - if (height, width) != (rounded_height, rounded_width): # just keep printing to be annoying until it is fixed - print(f'spectrogram yielded shape of {(height, width)}, but had to be cropped to {(rounded_height, rounded_width)} to be patchified for transformer') - - x = x[..., :rounded_height, :rounded_width] - - # to patches - - x = rearrange(x, 'b (h p1) (w p2) -> b (p1 p2) h w', p1 = patch_height, p2 = patch_width) - x = self.to_patch_tokens(x) - - # 2d sinusoidal positional embedding - - x = rearrange(x, 'b c h w -> b h w c') - x = x + posemb_sincos_2d(x) - - # attention, what else - - x = rearrange(x, 'b ... c -> b (...) c') - - x = self.transformer(x) - - # final global average and norm (most recent papers show this is superior to CLS token) - - x = reduce(x, 'b n d -> b d', 'mean') - - return self.norm(x) - -# text transformer - -@beartype -class TextTransformer(nn.Module): - def __init__( - self, - dim, - depth, - num_tokens = tokenizer.vocab_size, - max_seq_len = 256, - dim_head = 64, - heads = 8, - attn_dropout = 0., - ff_dropout = 0., - ff_mult = 4, - pad_id = 0 - ): - super().__init__() - self.dim = dim - - self.token_emb = nn.Embedding(num_tokens, dim) - self.pos_emb = nn.Embedding(max_seq_len, dim) - - self.cls_token = nn.Parameter(torch.randn(dim)) - - self.transformer = Transformer( - dim = dim, - depth = depth, - dim_head = dim_head, - heads = heads, - attn_dropout = attn_dropout, - ff_dropout = ff_dropout, - ff_mult = ff_mult - ) - - self.pad_id = pad_id - self.norm = LayerNorm(dim) - - def forward( - self, - x = None, - raw_texts: Optional[List[str]] = None, - mask = None - ): - assert exists(x) ^ exists(raw_texts) - - if exists(raw_texts): - x = tokenizer.tokenize(raw_texts) - - if not exists(mask): - mask = x != self.pad_id - - b, n, device = *x.shape, x.device - - # token embedding + positional embedding - - x = self.token_emb(x) - x = x + self.pos_emb(torch.arange(n, device = device)) - - # cls tokens, as in bert - - cls_tokens = repeat(self.cls_token, 'd -> b d', b = b) - x, ps = pack([cls_tokens, x], 'b * d') - - # account for attending to cls token with self attention mask - - mask = F.pad(mask, (1, 0), value = True) - - # attention - - x = self.transformer(x, mask = mask) - - # unpack the cls tokens - - cls_tokens, _ = unpack(x, ps, 'b * d') - - return self.norm(cls_tokens) - -# main classes - -@beartype -class MuLaN(nn.Module): - def __init__( - self, - audio_transformer: AudioSpectrogramTransformer, - text_transformer: TextTransformer, - dim_latent = 128, # they use 128 - decoupled_contrastive_learning = True, # think this was used, make it optional - ): - super().__init__() - self.dim_latent = dim_latent - - self.audio = audio_transformer - self.text = text_transformer - - self.temperature = nn.Parameter(torch.tensor(1.)) - - self.text_to_latents = nn.Linear(self.text.dim, dim_latent) - self.audio_to_latents = nn.Linear(self.audio.dim, dim_latent) - - self.decoupled_contrastive_learning = decoupled_contrastive_learning - - def get_audio_latents( - self, - wavs - ): - audio_embeds = self.audio(wavs) - audio_latents = self.audio_to_latents(audio_embeds) - return l2norm(audio_latents) - - def get_text_latents( - self, - 
texts = None, - raw_texts: Optional[List[str]] = None - ): - text_embeds = self.text(texts) - text_latents = self.text_to_latents(text_embeds) - return l2norm(text_latents) - - def forward( - self, - wavs, - texts = None, - raw_texts: Optional[List[str]] = None, - return_similarities = False - ): - batch, device = wavs.shape[0], wavs.device - - audio_latents = self.get_audio_latents(wavs) - text_latents = self.get_text_latents(texts, raw_texts = raw_texts) - - cosine_sim = einsum('i d, j d -> i j', audio_latents, text_latents) - - assert cosine_sim.shape[0] == cosine_sim.shape[1], 'batch sizes for audio and text are not equal' - - if return_similarities: - return cosine_sim - - cosine_sim = cosine_sim * self.temperature.exp() - - cosine_sim_exp = cosine_sim.exp() - - numerator = cosine_sim_exp.diag() - - if self.decoupled_contrastive_learning: - eye = torch.eye(batch, device = device) - cosine_sim_exp = cosine_sim_exp.masked_fill(eye, 0.) - - denominator = reduce(cosine_sim_exp, 'i j -> i', 'sum') - - contrastive_loss = -log(numerator / denominator) - return contrastive_loss.mean() - -# music lm - -@beartype -class MuLaNEmbedQuantizer(AudioConditionerBase): - def __init__( - self, - mulan: MuLaN, - conditioning_dims: Tuple[int, ...], - rq_num_quantizers = 8, - rq_ema_decay = 0.9, - codebook_size = 1024, - namespaces: Tuple[str, ...] = ('semantic', 'coarse', 'fine'), - ): - super().__init__() - self.mulan = mulan - - assert len(namespaces) > 0 - self.namespaces = namespaces - self.conditioning_dims = conditioning_dims - - assert len(conditioning_dims) == len(namespaces), 'number of conditioning dimensions must be equal to number of namespaces' - - dim = mulan.dim_latent - - self.rq = ResidualVQ( - dim = dim, - num_quantizers = rq_num_quantizers, - codebook_size = codebook_size, - decay = rq_ema_decay, - commitment_weight = 0, # only use EMA to update codebooks - kmeans_init = True, - threshold_ema_dead_code = 2, - quantize_dropout = False # no quantize dropout - ) - - self.dim = dim - self.num_codebooks = rq_num_quantizers - - self.cond_embeddings = nn.ParameterDict({}) - - for namespace, conditioning_dim in zip(namespaces, conditioning_dims): - cond_embeddings = nn.Parameter(torch.randn(rq_num_quantizers, codebook_size, conditioning_dim)) - nn.init.normal_(cond_embeddings, std = 0.02) - - self.cond_embeddings[namespace] = cond_embeddings - - self.set_default_namespace(namespaces[0]) - - def parameters(self): - return self.cond_embeddings.parameters() - - def set_default_namespace(self, namespace): - self._default_namespace = namespace - - def forward( - self, - wavs = None, - texts = None, - namespace = None - ): - assert exists(wavs) ^ exists(texts) - - namespace = default(namespace, self._default_namespace) - assert namespace in self.namespaces, f'namespace {namespace} not found' - cond_embeddings = self.cond_embeddings[namespace] - - with torch.no_grad(): - self.mulan.eval() - - # sound and language live in joint embedding space because of contrastive learning - - if exists(wavs): - latents = self.mulan.get_audio_latents(wavs) - elif exists(texts): - latents = self.mulan.get_text_latents(texts) - - _, indices, _ = self.rq(latents) - - batch, num_codebooks, dim = indices.shape[0], self.num_codebooks, cond_embeddings.shape[-1] - - cond_embeddings = repeat(cond_embeddings, 'q c d -> b q c d', b = batch) - indices = repeat(indices, 'b q -> b q 1 d', q = num_codebooks, d = dim) - - cond_embeddings = cond_embeddings.gather(2, indices) - return rearrange(cond_embeddings, 'b q 1 d -> b q d') - 
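# A minimal usage sketch of the modules above: wire the audio and text transformers into MuLaN
# and score audio against text in the shared latent space. The dims, depths and input sizes below
# are illustrative placeholders, not values from the original repository.
audio_transformer = AudioSpectrogramTransformer(dim = 512, depth = 6)
text_transformer = TextTransformer(dim = 512, depth = 6)
mulan = MuLaN(audio_transformer = audio_transformer, text_transformer = text_transformer).eval()

wavs = torch.randn(2, 1024)                                     # (batch, raw waveform samples)
texts = tokenizer.tokenize(['piano melody', 'fast drum loop'])  # token ids via the x_clip tokenizer imported above

with torch.no_grad():
    audio_latents = mulan.get_audio_latents(wavs)   # (2, 128), l2-normalised
    text_latents = mulan.get_text_latents(texts)    # (2, 128), l2-normalised
    sims = audio_latents @ text_latents.t()         # pairwise audio-text cosine similarities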
-@beartype -class MusicLM(nn.Module): - def __init__( - self, - audio_lm: AudioLM, - mulan_embed_quantizer: MuLaNEmbedQuantizer - ): - super().__init__() - assert not exists(audio_lm.audio_conditioner), 'mulan must not have been passed into AudioLM. it will be managed externally now, embedding the text into the joint embedding space for text-to-audio synthesis' - - self.mulan_embed_quantizer = mulan_embed_quantizer - self.audio_lm = audio_lm - - @torch.no_grad() - def forward( - self, - raw_texts: List[str], - **audio_lm_kwargs - ): - self.eval() - - texts = tokenizer.tokenize(raw_texts) - - text_embeds = self.mulan_embed_quantizer(texts = texts) - - return self.audio_lm(text_embeds = text_embeds, **audio_lm_kwargs) \ No newline at end of file diff --git a/spaces/AliUsama98/Usama_TextClassifier/README.md b/spaces/AliUsama98/Usama_TextClassifier/README.md deleted file mode 100644 index 7c9602f17f58fc3cee947e5cbda8174864066a6e..0000000000000000000000000000000000000000 --- a/spaces/AliUsama98/Usama_TextClassifier/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Usama TextClassifier -emoji: 📈 -colorFrom: gray -colorTo: red -sdk: gradio -sdk_version: 3.50.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/AlterM/Zaglyt2-transformer-test/m_conf.py b/spaces/AlterM/Zaglyt2-transformer-test/m_conf.py deleted file mode 100644 index bc7a10d51be22408df34bddafb6daf599f268977..0000000000000000000000000000000000000000 --- a/spaces/AlterM/Zaglyt2-transformer-test/m_conf.py +++ /dev/null @@ -1,3 +0,0 @@ -input_length = 20 -emb_dim = 128 -emb_o_dim = 256 \ No newline at end of file diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/audio_diffusion/test_audio_diffusion.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/audio_diffusion/test_audio_diffusion.py deleted file mode 100644 index c8c4b7221cc87e04ecaff2283456bff12d3b0306..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/audio_diffusion/test_audio_diffusion.py +++ /dev/null @@ -1,204 +0,0 @@ -# coding=utf-8 -# Copyright 2023 HuggingFace Inc. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
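# Looking back at the MusicLM wrapper defined in musiclm_pytorch.py above, a rough end-to-end sketch
# of the text-to-music path it exposes. `mulan` is the MuLaN model from the previous sketch and
# `audio_lm` is an assumed, separately trained audiolm_pytorch.AudioLM built without an audio_conditioner;
# the conditioning_dims below are placeholders that must line up with the AudioLM stages.
quantizer = MuLaNEmbedQuantizer(
    mulan = mulan,
    conditioning_dims = (1024, 1024, 1024),     # one dim per namespace below
    namespaces = ('semantic', 'coarse', 'fine'),
)
musiclm = MusicLM(audio_lm = audio_lm, mulan_embed_quantizer = quantizer)
generated_audio = musiclm(['chill lo-fi beat with vinyl crackle'])  # delegates generation to the wrapped AudioLM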
- -import gc -import unittest - -import numpy as np -import torch - -from diffusers import ( - AudioDiffusionPipeline, - AutoencoderKL, - DDIMScheduler, - DDPMScheduler, - DiffusionPipeline, - Mel, - UNet2DConditionModel, - UNet2DModel, -) -from diffusers.utils import slow, torch_device -from diffusers.utils.testing_utils import enable_full_determinism, require_torch_gpu - - -enable_full_determinism() - - -class PipelineFastTests(unittest.TestCase): - def tearDown(self): - # clean up the VRAM after each test - super().tearDown() - gc.collect() - torch.cuda.empty_cache() - - @property - def dummy_unet(self): - torch.manual_seed(0) - model = UNet2DModel( - sample_size=(32, 64), - in_channels=1, - out_channels=1, - layers_per_block=2, - block_out_channels=(128, 128), - down_block_types=("AttnDownBlock2D", "DownBlock2D"), - up_block_types=("UpBlock2D", "AttnUpBlock2D"), - ) - return model - - @property - def dummy_unet_condition(self): - torch.manual_seed(0) - model = UNet2DConditionModel( - sample_size=(64, 32), - in_channels=1, - out_channels=1, - layers_per_block=2, - block_out_channels=(128, 128), - down_block_types=("CrossAttnDownBlock2D", "DownBlock2D"), - up_block_types=("UpBlock2D", "CrossAttnUpBlock2D"), - cross_attention_dim=10, - ) - return model - - @property - def dummy_vqvae_and_unet(self): - torch.manual_seed(0) - vqvae = AutoencoderKL( - sample_size=(128, 64), - in_channels=1, - out_channels=1, - latent_channels=1, - layers_per_block=2, - block_out_channels=(128, 128), - down_block_types=("DownEncoderBlock2D", "DownEncoderBlock2D"), - up_block_types=("UpDecoderBlock2D", "UpDecoderBlock2D"), - ) - unet = UNet2DModel( - sample_size=(64, 32), - in_channels=1, - out_channels=1, - layers_per_block=2, - block_out_channels=(128, 128), - down_block_types=("AttnDownBlock2D", "DownBlock2D"), - up_block_types=("UpBlock2D", "AttnUpBlock2D"), - ) - return vqvae, unet - - @slow - def test_audio_diffusion(self): - device = "cpu" # ensure determinism for the device-dependent torch.Generator - mel = Mel( - x_res=self.dummy_unet.config.sample_size[1], - y_res=self.dummy_unet.config.sample_size[0], - ) - - scheduler = DDPMScheduler() - pipe = AudioDiffusionPipeline(vqvae=None, unet=self.dummy_unet, mel=mel, scheduler=scheduler) - pipe = pipe.to(device) - pipe.set_progress_bar_config(disable=None) - - generator = torch.Generator(device=device).manual_seed(42) - output = pipe(generator=generator, steps=4) - audio = output.audios[0] - image = output.images[0] - - generator = torch.Generator(device=device).manual_seed(42) - output = pipe(generator=generator, steps=4, return_dict=False) - image_from_tuple = output[0][0] - - assert audio.shape == (1, (self.dummy_unet.config.sample_size[1] - 1) * mel.hop_length) - assert ( - image.height == self.dummy_unet.config.sample_size[0] - and image.width == self.dummy_unet.config.sample_size[1] - ) - image_slice = np.frombuffer(image.tobytes(), dtype="uint8")[:10] - image_from_tuple_slice = np.frombuffer(image_from_tuple.tobytes(), dtype="uint8")[:10] - expected_slice = np.array([69, 255, 255, 255, 0, 0, 77, 181, 12, 127]) - - assert np.abs(image_slice.flatten() - expected_slice).max() == 0 - assert np.abs(image_from_tuple_slice.flatten() - expected_slice).max() == 0 - - mel = Mel( - x_res=self.dummy_vqvae_and_unet[0].config.sample_size[1], - y_res=self.dummy_vqvae_and_unet[0].config.sample_size[0], - ) - - scheduler = DDIMScheduler() - dummy_vqvae_and_unet = self.dummy_vqvae_and_unet - pipe = AudioDiffusionPipeline( - vqvae=self.dummy_vqvae_and_unet[0], 
unet=dummy_vqvae_and_unet[1], mel=mel, scheduler=scheduler - ) - pipe = pipe.to(device) - pipe.set_progress_bar_config(disable=None) - - np.random.seed(0) - raw_audio = np.random.uniform(-1, 1, ((dummy_vqvae_and_unet[0].config.sample_size[1] - 1) * mel.hop_length,)) - generator = torch.Generator(device=device).manual_seed(42) - output = pipe(raw_audio=raw_audio, generator=generator, start_step=5, steps=10) - image = output.images[0] - - assert ( - image.height == self.dummy_vqvae_and_unet[0].config.sample_size[0] - and image.width == self.dummy_vqvae_and_unet[0].config.sample_size[1] - ) - image_slice = np.frombuffer(image.tobytes(), dtype="uint8")[:10] - expected_slice = np.array([120, 117, 110, 109, 138, 167, 138, 148, 132, 121]) - - assert np.abs(image_slice.flatten() - expected_slice).max() == 0 - - dummy_unet_condition = self.dummy_unet_condition - pipe = AudioDiffusionPipeline( - vqvae=self.dummy_vqvae_and_unet[0], unet=dummy_unet_condition, mel=mel, scheduler=scheduler - ) - pipe = pipe.to(device) - pipe.set_progress_bar_config(disable=None) - - np.random.seed(0) - encoding = torch.rand((1, 1, 10)) - output = pipe(generator=generator, encoding=encoding) - image = output.images[0] - image_slice = np.frombuffer(image.tobytes(), dtype="uint8")[:10] - expected_slice = np.array([107, 103, 120, 127, 142, 122, 113, 122, 97, 111]) - - assert np.abs(image_slice.flatten() - expected_slice).max() == 0 - - -@slow -@require_torch_gpu -class PipelineIntegrationTests(unittest.TestCase): - def tearDown(self): - # clean up the VRAM after each test - super().tearDown() - gc.collect() - torch.cuda.empty_cache() - - def test_audio_diffusion(self): - device = torch_device - - pipe = DiffusionPipeline.from_pretrained("teticio/audio-diffusion-ddim-256") - pipe = pipe.to(device) - pipe.set_progress_bar_config(disable=None) - - generator = torch.Generator(device=device).manual_seed(42) - output = pipe(generator=generator) - audio = output.audios[0] - image = output.images[0] - - assert audio.shape == (1, (pipe.unet.config.sample_size[1] - 1) * pipe.mel.hop_length) - assert image.height == pipe.unet.config.sample_size[0] and image.width == pipe.unet.config.sample_size[1] - image_slice = np.frombuffer(image.tobytes(), dtype="uint8")[:10] - expected_slice = np.array([151, 167, 154, 144, 122, 134, 121, 105, 70, 26]) - - assert np.abs(image_slice.flatten() - expected_slice).max() == 0 diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/test_pipelines_auto.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/test_pipelines_auto.py deleted file mode 100644 index 595a7a5f25ff90c005b0a43d15ab1a58b9d43d5c..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/test_pipelines_auto.py +++ /dev/null @@ -1,201 +0,0 @@ -# coding=utf-8 -# Copyright 2023 HuggingFace Inc. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -import gc -import unittest -from collections import OrderedDict - -import torch - -from diffusers import ( - AutoPipelineForImage2Image, - AutoPipelineForInpainting, - AutoPipelineForText2Image, - ControlNetModel, -) -from diffusers.pipelines.auto_pipeline import ( - AUTO_IMAGE2IMAGE_PIPELINES_MAPPING, - AUTO_INPAINT_PIPELINES_MAPPING, - AUTO_TEXT2IMAGE_PIPELINES_MAPPING, -) -from diffusers.utils import slow - - -PRETRAINED_MODEL_REPO_MAPPING = OrderedDict( - [ - ("stable-diffusion", "runwayml/stable-diffusion-v1-5"), - ("if", "DeepFloyd/IF-I-XL-v1.0"), - ("kandinsky", "kandinsky-community/kandinsky-2-1"), - ("kandinsky22", "kandinsky-community/kandinsky-2-2-decoder"), - ] -) - - -class AutoPipelineFastTest(unittest.TestCase): - def test_from_pipe_consistent(self): - pipe = AutoPipelineForText2Image.from_pretrained( - "hf-internal-testing/tiny-stable-diffusion-pipe", requires_safety_checker=False - ) - original_config = dict(pipe.config) - - pipe = AutoPipelineForImage2Image.from_pipe(pipe) - assert dict(pipe.config) == original_config - - pipe = AutoPipelineForText2Image.from_pipe(pipe) - assert dict(pipe.config) == original_config - - def test_from_pipe_override(self): - pipe = AutoPipelineForText2Image.from_pretrained( - "hf-internal-testing/tiny-stable-diffusion-pipe", requires_safety_checker=False - ) - - pipe = AutoPipelineForImage2Image.from_pipe(pipe, requires_safety_checker=True) - assert pipe.config.requires_safety_checker is True - - pipe = AutoPipelineForText2Image.from_pipe(pipe, requires_safety_checker=True) - assert pipe.config.requires_safety_checker is True - - def test_from_pipe_consistent_sdxl(self): - pipe = AutoPipelineForImage2Image.from_pretrained( - "hf-internal-testing/tiny-stable-diffusion-xl-pipe", - requires_aesthetics_score=True, - force_zeros_for_empty_prompt=False, - ) - - original_config = dict(pipe.config) - - pipe = AutoPipelineForText2Image.from_pipe(pipe) - pipe = AutoPipelineForImage2Image.from_pipe(pipe) - - assert dict(pipe.config) == original_config - - -@slow -class AutoPipelineIntegrationTest(unittest.TestCase): - def test_pipe_auto(self): - for model_name, model_repo in PRETRAINED_MODEL_REPO_MAPPING.items(): - # test txt2img - pipe_txt2img = AutoPipelineForText2Image.from_pretrained( - model_repo, variant="fp16", torch_dtype=torch.float16 - ) - self.assertIsInstance(pipe_txt2img, AUTO_TEXT2IMAGE_PIPELINES_MAPPING[model_name]) - - pipe_to = AutoPipelineForText2Image.from_pipe(pipe_txt2img) - self.assertIsInstance(pipe_to, AUTO_TEXT2IMAGE_PIPELINES_MAPPING[model_name]) - - pipe_to = AutoPipelineForImage2Image.from_pipe(pipe_txt2img) - self.assertIsInstance(pipe_to, AUTO_IMAGE2IMAGE_PIPELINES_MAPPING[model_name]) - - if "kandinsky" not in model_name: - pipe_to = AutoPipelineForInpainting.from_pipe(pipe_txt2img) - self.assertIsInstance(pipe_to, AUTO_INPAINT_PIPELINES_MAPPING[model_name]) - - del pipe_txt2img, pipe_to - gc.collect() - - # test img2img - - pipe_img2img = AutoPipelineForImage2Image.from_pretrained( - model_repo, variant="fp16", torch_dtype=torch.float16 - ) - self.assertIsInstance(pipe_img2img, AUTO_IMAGE2IMAGE_PIPELINES_MAPPING[model_name]) - - pipe_to = AutoPipelineForText2Image.from_pipe(pipe_img2img) - self.assertIsInstance(pipe_to, AUTO_TEXT2IMAGE_PIPELINES_MAPPING[model_name]) - - pipe_to = AutoPipelineForImage2Image.from_pipe(pipe_img2img) - self.assertIsInstance(pipe_to, AUTO_IMAGE2IMAGE_PIPELINES_MAPPING[model_name]) - - if "kandinsky" not in model_name: - pipe_to = AutoPipelineForInpainting.from_pipe(pipe_img2img) - 
self.assertIsInstance(pipe_to, AUTO_INPAINT_PIPELINES_MAPPING[model_name]) - - del pipe_img2img, pipe_to - gc.collect() - - # test inpaint - - if "kandinsky" not in model_name: - pipe_inpaint = AutoPipelineForInpainting.from_pretrained( - model_repo, variant="fp16", torch_dtype=torch.float16 - ) - self.assertIsInstance(pipe_inpaint, AUTO_INPAINT_PIPELINES_MAPPING[model_name]) - - pipe_to = AutoPipelineForText2Image.from_pipe(pipe_inpaint) - self.assertIsInstance(pipe_to, AUTO_TEXT2IMAGE_PIPELINES_MAPPING[model_name]) - - pipe_to = AutoPipelineForImage2Image.from_pipe(pipe_inpaint) - self.assertIsInstance(pipe_to, AUTO_IMAGE2IMAGE_PIPELINES_MAPPING[model_name]) - - pipe_to = AutoPipelineForInpainting.from_pipe(pipe_inpaint) - self.assertIsInstance(pipe_to, AUTO_INPAINT_PIPELINES_MAPPING[model_name]) - - del pipe_inpaint, pipe_to - gc.collect() - - def test_from_pipe_consistent(self): - for model_name, model_repo in PRETRAINED_MODEL_REPO_MAPPING.items(): - if model_name in ["kandinsky", "kandinsky22"]: - auto_pipes = [AutoPipelineForText2Image, AutoPipelineForImage2Image] - else: - auto_pipes = [AutoPipelineForText2Image, AutoPipelineForImage2Image, AutoPipelineForInpainting] - - # test from_pretrained - for pipe_from_class in auto_pipes: - pipe_from = pipe_from_class.from_pretrained(model_repo, variant="fp16", torch_dtype=torch.float16) - pipe_from_config = dict(pipe_from.config) - - for pipe_to_class in auto_pipes: - pipe_to = pipe_to_class.from_pipe(pipe_from) - self.assertEqual(dict(pipe_to.config), pipe_from_config) - - del pipe_from, pipe_to - gc.collect() - - def test_controlnet(self): - # test from_pretrained - model_repo = "runwayml/stable-diffusion-v1-5" - controlnet_repo = "lllyasviel/sd-controlnet-canny" - - controlnet = ControlNetModel.from_pretrained(controlnet_repo, torch_dtype=torch.float16) - - pipe_txt2img = AutoPipelineForText2Image.from_pretrained( - model_repo, controlnet=controlnet, torch_dtype=torch.float16 - ) - self.assertIsInstance(pipe_txt2img, AUTO_TEXT2IMAGE_PIPELINES_MAPPING["stable-diffusion-controlnet"]) - - pipe_img2img = AutoPipelineForImage2Image.from_pretrained( - model_repo, controlnet=controlnet, torch_dtype=torch.float16 - ) - self.assertIsInstance(pipe_img2img, AUTO_IMAGE2IMAGE_PIPELINES_MAPPING["stable-diffusion-controlnet"]) - - pipe_inpaint = AutoPipelineForInpainting.from_pretrained( - model_repo, controlnet=controlnet, torch_dtype=torch.float16 - ) - self.assertIsInstance(pipe_inpaint, AUTO_INPAINT_PIPELINES_MAPPING["stable-diffusion-controlnet"]) - - # test from_pipe - for pipe_from in [pipe_txt2img, pipe_img2img, pipe_inpaint]: - pipe_to = AutoPipelineForText2Image.from_pipe(pipe_from) - self.assertIsInstance(pipe_to, AUTO_TEXT2IMAGE_PIPELINES_MAPPING["stable-diffusion-controlnet"]) - self.assertEqual(dict(pipe_to.config), dict(pipe_txt2img.config)) - - pipe_to = AutoPipelineForImage2Image.from_pipe(pipe_from) - self.assertIsInstance(pipe_to, AUTO_IMAGE2IMAGE_PIPELINES_MAPPING["stable-diffusion-controlnet"]) - self.assertEqual(dict(pipe_to.config), dict(pipe_img2img.config)) - - pipe_to = AutoPipelineForInpainting.from_pipe(pipe_from) - self.assertIsInstance(pipe_to, AUTO_INPAINT_PIPELINES_MAPPING["stable-diffusion-controlnet"]) - self.assertEqual(dict(pipe_to.config), dict(pipe_inpaint.config)) diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/models/losses/balanced_l1_loss.py b/spaces/Andy1621/uniformer_image_detection/mmdet/models/losses/balanced_l1_loss.py deleted file mode 100644 index 
7bcd13ff26dbdc9f6eff8d7c7b5bde742a8d7d1d..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/mmdet/models/losses/balanced_l1_loss.py +++ /dev/null @@ -1,120 +0,0 @@ -import mmcv -import numpy as np -import torch -import torch.nn as nn - -from ..builder import LOSSES -from .utils import weighted_loss - - -@mmcv.jit(derivate=True, coderize=True) -@weighted_loss -def balanced_l1_loss(pred, - target, - beta=1.0, - alpha=0.5, - gamma=1.5, - reduction='mean'): - """Calculate balanced L1 loss. - - Please see the `Libra R-CNN `_ - - Args: - pred (torch.Tensor): The prediction with shape (N, 4). - target (torch.Tensor): The learning target of the prediction with - shape (N, 4). - beta (float): The loss is a piecewise function of prediction and target - and ``beta`` serves as a threshold for the difference between the - prediction and target. Defaults to 1.0. - alpha (float): The denominator ``alpha`` in the balanced L1 loss. - Defaults to 0.5. - gamma (float): The ``gamma`` in the balanced L1 loss. - Defaults to 1.5. - reduction (str, optional): The method that reduces the loss to a - scalar. Options are "none", "mean" and "sum". - - Returns: - torch.Tensor: The calculated loss - """ - assert beta > 0 - assert pred.size() == target.size() and target.numel() > 0 - - diff = torch.abs(pred - target) - b = np.e**(gamma / alpha) - 1 - loss = torch.where( - diff < beta, alpha / b * - (b * diff + 1) * torch.log(b * diff / beta + 1) - alpha * diff, - gamma * diff + gamma / b - alpha * beta) - - return loss - - -@LOSSES.register_module() -class BalancedL1Loss(nn.Module): - """Balanced L1 Loss. - - arXiv: https://arxiv.org/pdf/1904.02701.pdf (CVPR 2019) - - Args: - alpha (float): The denominator ``alpha`` in the balanced L1 loss. - Defaults to 0.5. - gamma (float): The ``gamma`` in the balanced L1 loss. Defaults to 1.5. - beta (float, optional): The loss is a piecewise function of prediction - and target. ``beta`` serves as a threshold for the difference - between the prediction and target. Defaults to 1.0. - reduction (str, optional): The method that reduces the loss to a - scalar. Options are "none", "mean" and "sum". - loss_weight (float, optional): The weight of the loss. Defaults to 1.0 - """ - - def __init__(self, - alpha=0.5, - gamma=1.5, - beta=1.0, - reduction='mean', - loss_weight=1.0): - super(BalancedL1Loss, self).__init__() - self.alpha = alpha - self.gamma = gamma - self.beta = beta - self.reduction = reduction - self.loss_weight = loss_weight - - def forward(self, - pred, - target, - weight=None, - avg_factor=None, - reduction_override=None, - **kwargs): - """Forward function of loss. - - Args: - pred (torch.Tensor): The prediction with shape (N, 4). - target (torch.Tensor): The learning target of the prediction with - shape (N, 4). - weight (torch.Tensor, optional): Sample-wise loss weight with - shape (N, ). - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - reduction_override (str, optional): The reduction method used to - override the original reduction method of the loss. - Options are "none", "mean" and "sum". 
- - Returns: - torch.Tensor: The calculated loss - """ - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - loss_bbox = self.loss_weight * balanced_l1_loss( - pred, - target, - weight, - alpha=self.alpha, - gamma=self.gamma, - beta=self.beta, - reduction=reduction, - avg_factor=avg_factor, - **kwargs) - return loss_bbox diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/apcnet/apcnet_r101-d8_512x512_80k_ade20k.py b/spaces/Andy1621/uniformer_image_segmentation/configs/apcnet/apcnet_r101-d8_512x512_80k_ade20k.py deleted file mode 100644 index 8f10b98406c88256c66d3bbe241c149791d68feb..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/apcnet/apcnet_r101-d8_512x512_80k_ade20k.py +++ /dev/null @@ -1,2 +0,0 @@ -_base_ = './apcnet_r50-d8_512x512_80k_ade20k.py' -model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101)) diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/point_rend/pointrend_r50_512x512_160k_ade20k.py b/spaces/Andy1621/uniformer_image_segmentation/configs/point_rend/pointrend_r50_512x512_160k_ade20k.py deleted file mode 100644 index db8c634c0f889c69ce80f86c445c493dcfdbd3c8..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/point_rend/pointrend_r50_512x512_160k_ade20k.py +++ /dev/null @@ -1,32 +0,0 @@ -_base_ = [ - '../_base_/models/pointrend_r50.py', '../_base_/datasets/ade20k.py', - '../_base_/default_runtime.py', '../_base_/schedules/schedule_160k.py' -] -norm_cfg = dict(type='SyncBN', requires_grad=True) -model = dict(decode_head=[ - dict( - type='FPNHead', - in_channels=[256, 256, 256, 256], - in_index=[0, 1, 2, 3], - feature_strides=[4, 8, 16, 32], - channels=128, - dropout_ratio=-1, - num_classes=150, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)), - dict( - type='PointHead', - in_channels=[256], - in_index=[0], - channels=256, - num_fcs=3, - coarse_pred_each_layer=True, - dropout_ratio=-1, - num_classes=150, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)) -]) -lr_config = dict(warmup='linear', warmup_iters=200) diff --git a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/whisper_stt/script.py b/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/whisper_stt/script.py deleted file mode 100644 index cdc55687b30abb43ef6adc6c4f25273ff39cb4d0..0000000000000000000000000000000000000000 --- a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/whisper_stt/script.py +++ /dev/null @@ -1,71 +0,0 @@ -import gradio as gr -import speech_recognition as sr - -from modules import shared - -input_hijack = { - 'state': False, - 'value': ["", ""] -} - -# parameters which can be customized in settings.json of webui -params = { - 'whipser_language': 'english', - 'whipser_model': 'small.en', - 'auto_submit': True -} - - -def chat_input_modifier(text, visible_text, state): - global input_hijack - if input_hijack['state']: - input_hijack['state'] = False - return input_hijack['value'] - else: - return text, visible_text - - -def do_stt(audio, whipser_model, whipser_language): - transcription = "" - r = sr.Recognizer() - - # Convert to AudioData - audio_data = sr.AudioData(sample_rate=audio[0], frame_data=audio[1], sample_width=4) - - try: - transcription = 
r.recognize_whisper(audio_data, language=whipser_language, model=whipser_model) - except sr.UnknownValueError: - print("Whisper could not understand audio") - except sr.RequestError as e: - print("Could not request results from Whisper", e) - - return transcription - - -def auto_transcribe(audio, auto_submit, whipser_model, whipser_language): - if audio is None: - return "", "" - transcription = do_stt(audio, whipser_model, whipser_language) - if auto_submit: - input_hijack.update({"state": True, "value": [transcription, transcription]}) - - return transcription, None - - -def ui(): - with gr.Accordion("Whisper STT", open=True): - with gr.Row(): - audio = gr.Audio(source="microphone") - with gr.Row(): - with gr.Accordion("Settings", open=False): - auto_submit = gr.Checkbox(label='Submit the transcribed audio automatically', value=params['auto_submit']) - whipser_model = gr.Dropdown(label='Whisper Model', value=params['whipser_model'], choices=["tiny.en", "base.en", "small.en", "medium.en", "tiny", "base", "small", "medium", "large"]) - whipser_language = gr.Dropdown(label='Whisper Language', value=params['whipser_language'], choices=["chinese", "german", "spanish", "russian", "korean", "french", "japanese", "portuguese", "turkish", "polish", "catalan", "dutch", "arabic", "swedish", "italian", "indonesian", "hindi", "finnish", "vietnamese", "hebrew", "ukrainian", "greek", "malay", "czech", "romanian", "danish", "hungarian", "tamil", "norwegian", "thai", "urdu", "croatian", "bulgarian", "lithuanian", "latin", "maori", "malayalam", "welsh", "slovak", "telugu", "persian", "latvian", "bengali", "serbian", "azerbaijani", "slovenian", "kannada", "estonian", "macedonian", "breton", "basque", "icelandic", "armenian", "nepali", "mongolian", "bosnian", "kazakh", "albanian", "swahili", "galician", "marathi", "punjabi", "sinhala", "khmer", "shona", "yoruba", "somali", "afrikaans", "occitan", "georgian", "belarusian", "tajik", "sindhi", "gujarati", "amharic", "yiddish", "lao", "uzbek", "faroese", "haitian creole", "pashto", "turkmen", "nynorsk", "maltese", "sanskrit", "luxembourgish", "myanmar", "tibetan", "tagalog", "malagasy", "assamese", "tatar", "hawaiian", "lingala", "hausa", "bashkir", "javanese", "sundanese"]) - - audio.change( - auto_transcribe, [audio, auto_submit, whipser_model, whipser_language], [shared.gradio['textbox'], audio]).then( - None, auto_submit, None, _js="(check) => {if (check) { document.getElementById('Generate').click() }}") - - whipser_model.change(lambda x: params.update({"whipser_model": x}), whipser_model, None) - whipser_language.change(lambda x: params.update({"whipser_language": x}), whipser_language, None) - auto_submit.change(lambda x: params.update({"auto_submit": x}), auto_submit, None) diff --git a/spaces/Arnaudding001/OpenAI_whisperLive/__init__.py b/spaces/Arnaudding001/OpenAI_whisperLive/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Artrajz/vits-simple-api/bert_vits2/models.py b/spaces/Artrajz/vits-simple-api/bert_vits2/models.py deleted file mode 100644 index 72050c26dea404e398aecdc9dd736876d46cc83c..0000000000000000000000000000000000000000 --- a/spaces/Artrajz/vits-simple-api/bert_vits2/models.py +++ /dev/null @@ -1,686 +0,0 @@ -import math -import torch -from torch import nn -from torch.nn import functional as F - -from bert_vits2 import commons -from bert_vits2 import modules -from bert_vits2 import attentions - -from torch.nn import Conv1d, ConvTranspose1d, 
AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm - -from bert_vits2.commons import init_weights, get_padding -from bert_vits2.text import num_tones, num_languages - - -class DurationDiscriminator(nn.Module): # vits2 - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0): - super().__init__() - - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.gin_channels = gin_channels - - self.drop = nn.Dropout(p_dropout) - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size // 2) - self.norm_1 = modules.LayerNorm(filter_channels) - self.conv_2 = nn.Conv1d( - filter_channels, filter_channels, kernel_size, padding=kernel_size // 2 - ) - self.norm_2 = modules.LayerNorm(filter_channels) - self.dur_proj = nn.Conv1d(1, filter_channels, 1) - - self.pre_out_conv_1 = nn.Conv1d(2 * filter_channels, filter_channels, kernel_size, padding=kernel_size // 2) - self.pre_out_norm_1 = modules.LayerNorm(filter_channels) - self.pre_out_conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size // 2) - self.pre_out_norm_2 = modules.LayerNorm(filter_channels) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, in_channels, 1) - - self.output_layer = nn.Sequential( - nn.Linear(filter_channels, 1), - nn.Sigmoid() - ) - - def forward_probability(self, x, x_mask, dur, g=None): - dur = self.dur_proj(dur) - x = torch.cat([x, dur], dim=1) - x = self.pre_out_conv_1(x * x_mask) - x = torch.relu(x) - x = self.pre_out_norm_1(x) - x = self.drop(x) - x = self.pre_out_conv_2(x * x_mask) - x = torch.relu(x) - x = self.pre_out_norm_2(x) - x = self.drop(x) - x = x * x_mask - x = x.transpose(1, 2) - output_prob = self.output_layer(x) - return output_prob - - def forward(self, x, x_mask, dur_r, dur_hat, g=None): - x = torch.detach(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.conv_1(x * x_mask) - x = torch.relu(x) - x = self.norm_1(x) - x = self.drop(x) - x = self.conv_2(x * x_mask) - x = torch.relu(x) - x = self.norm_2(x) - x = self.drop(x) - - output_probs = [] - for dur in [dur_r, dur_hat]: - output_prob = self.forward_probability(x, x_mask, dur, g) - output_probs.append(output_prob) - - return output_probs - - -class TransformerCouplingBlock(nn.Module): - def __init__(self, - channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - n_flows=4, - gin_channels=0, - share_parameter=False - ): - - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - - self.wn = attentions.FFT(hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout, - isflow=True, gin_channels=self.gin_channels) if share_parameter else None - - for i in range(n_flows): - self.flows.append( - modules.TransformerCouplingLayer(channels, hidden_channels, kernel_size, n_layers, n_heads, p_dropout, - filter_channels, mean_only=True, wn_sharing_parameter=self.wn, - gin_channels=self.gin_channels)) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, 
reverse=reverse) - return x - - -class StochasticDurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, n_flows=4, gin_channels=0): - super().__init__() - filter_channels = in_channels # it needs to be removed from future version. - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.log_flow = modules.Log() - self.flows = nn.ModuleList() - self.flows.append(modules.ElementwiseAffine(2)) - for i in range(n_flows): - self.flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.flows.append(modules.Flip()) - - self.post_pre = nn.Conv1d(1, filter_channels, 1) - self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.post_convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - self.post_flows = nn.ModuleList() - self.post_flows.append(modules.ElementwiseAffine(2)) - for i in range(4): - self.post_flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.post_flows.append(modules.Flip()) - - self.pre = nn.Conv1d(in_channels, filter_channels, 1) - self.proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, filter_channels, 1) - - def forward(self, x, x_mask, w=None, g=None, reverse=False, noise_scale=1.0): - x = torch.detach(x) - x = self.pre(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.convs(x, x_mask) - x = self.proj(x) * x_mask - - if not reverse: - flows = self.flows - assert w is not None - - logdet_tot_q = 0 - h_w = self.post_pre(w) - h_w = self.post_convs(h_w, x_mask) - h_w = self.post_proj(h_w) * x_mask - e_q = torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype) * x_mask - z_q = e_q - for flow in self.post_flows: - z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w)) - logdet_tot_q += logdet_q - z_u, z1 = torch.split(z_q, [1, 1], 1) - u = torch.sigmoid(z_u) * x_mask - z0 = (w - u) * x_mask - logdet_tot_q += torch.sum((F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1, 2]) - logq = torch.sum(-0.5 * (math.log(2 * math.pi) + (e_q ** 2)) * x_mask, [1, 2]) - logdet_tot_q - - logdet_tot = 0 - z0, logdet = self.log_flow(z0, x_mask) - logdet_tot += logdet - z = torch.cat([z0, z1], 1) - for flow in flows: - z, logdet = flow(z, x_mask, g=x, reverse=reverse) - logdet_tot = logdet_tot + logdet - nll = torch.sum(0.5 * (math.log(2 * math.pi) + (z ** 2)) * x_mask, [1, 2]) - logdet_tot - return nll + logq # [b] - else: - flows = list(reversed(self.flows)) - flows = flows[:-2] + [flows[-1]] # remove a useless vflow - z = torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype) * noise_scale - for flow in flows: - z = flow(z, x_mask, g=x, reverse=reverse) - z0, z1 = torch.split(z, [1, 1], 1) - logw = z0 - return logw - - -class DurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0): - super().__init__() - - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.gin_channels = gin_channels - - self.drop = nn.Dropout(p_dropout) - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size // 2) - self.norm_1 = 
modules.LayerNorm(filter_channels) - self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size // 2) - self.norm_2 = modules.LayerNorm(filter_channels) - self.proj = nn.Conv1d(filter_channels, 1, 1) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, in_channels, 1) - - def forward(self, x, x_mask, g=None): - x = torch.detach(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.conv_1(x * x_mask) - x = torch.relu(x) - x = self.norm_1(x) - x = self.drop(x) - x = self.conv_2(x * x_mask) - x = torch.relu(x) - x = self.norm_2(x) - x = self.drop(x) - x = self.proj(x * x_mask) - return x * x_mask - - -class TextEncoder(nn.Module): - def __init__(self, - n_vocab, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - gin_channels=0, - symbols=None): - super().__init__() - self.n_vocab = n_vocab - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.gin_channels = gin_channels - self.emb = nn.Embedding(len(symbols), hidden_channels) - nn.init.normal_(self.emb.weight, 0.0, hidden_channels ** -0.5) - self.tone_emb = nn.Embedding(num_tones, hidden_channels) - nn.init.normal_(self.tone_emb.weight, 0.0, hidden_channels ** -0.5) - self.language_emb = nn.Embedding(num_languages, hidden_channels) - nn.init.normal_(self.language_emb.weight, 0.0, hidden_channels ** -0.5) - self.bert_proj = nn.Conv1d(1024, hidden_channels, 1) - self.ja_bert_proj = nn.Conv1d(768, hidden_channels, 1) - - self.encoder = attentions.Encoder( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - gin_channels=self.gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, tone, language, bert, ja_bert, g=None): - bert_emb = self.bert_proj(bert).transpose(1, 2) - ja_bert_emb = self.ja_bert_proj(ja_bert).transpose(1, 2) - x = (self.emb(x) + self.tone_emb(tone) + self.language_emb(language) + bert_emb + ja_bert_emb) * math.sqrt( - self.hidden_channels) # [b, t, h] - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - - x = self.encoder(x * x_mask, x_mask, g=g) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return x, m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, - gin_channels=gin_channels, mean_only=True)) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - -class PosteriorEncoder(nn.Module): - def __init__(self, - in_channels, - 
out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - -class Generator(torch.nn.Module): - def __init__(self, initial_channel, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, - upsample_initial_channel, upsample_kernel_sizes, gin_channels=0): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3) - resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append(weight_norm( - ConvTranspose1d(upsample_initial_channel // (2 ** i), upsample_initial_channel // (2 ** (i + 1)), - k, u, padding=(k - u) // 2))) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - print('Removing weight norm...') - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - 
norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class ReferenceEncoder(nn.Module): - ''' - inputs --- [N, Ty/r, n_mels*r] mels - outputs --- [N, ref_enc_gru_size] - ''' - - def __init__(self, spec_channels, gin_channels=0): - - super().__init__() - self.spec_channels = spec_channels - ref_enc_filters = [32, 32, 64, 64, 128, 128] - K = len(ref_enc_filters) - filters = [1] + ref_enc_filters - convs = [weight_norm(nn.Conv2d(in_channels=filters[i], - out_channels=filters[i + 1], - kernel_size=(3, 3), - stride=(2, 2), - padding=(1, 1))) for i in range(K)] - self.convs = nn.ModuleList(convs) - # self.wns = nn.ModuleList([weight_norm(num_features=ref_enc_filters[i]) for i in range(K)]) - - out_channels = self.calculate_channels(spec_channels, 3, 2, 1, K) - self.gru = nn.GRU(input_size=ref_enc_filters[-1] * out_channels, - hidden_size=256 // 2, - batch_first=True) - self.proj = nn.Linear(128, gin_channels) - - def forward(self, inputs, mask=None): - N = inputs.size(0) - out = inputs.view(N, 1, -1, self.spec_channels) # [N, 1, Ty, n_freqs] - for conv in self.convs: - out = conv(out) - # out = wn(out) - out = F.relu(out) # [N, 128, Ty//2^K, n_mels//2^K] - - out = out.transpose(1, 2) # [N, Ty//2^K, 128, n_mels//2^K] - T = out.size(1) - N = out.size(0) - out = out.contiguous().view(N, T, -1) # [N, Ty//2^K, 128*n_mels//2^K] - - self.gru.flatten_parameters() - memory, out = self.gru(out) # out --- [1, N, 128] - - return 
self.proj(out.squeeze(0)) - - def calculate_channels(self, L, kernel_size, stride, pad, n_convs): - for i in range(n_convs): - L = (L - kernel_size + 2 * pad) // stride + 1 - return L - - -class SynthesizerTrn(nn.Module): - """ - Synthesizer for Training - """ - - def __init__(self, - n_vocab, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - n_speakers=256, - gin_channels=256, - use_sdp=True, - n_flow_layer=4, - n_layers_trans_flow=6, - flow_share_parameter=False, - use_transformer_flow=True, - **kwargs): - - super().__init__() - self.n_vocab = n_vocab - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.n_speakers = n_speakers - self.gin_channels = gin_channels - self.n_layers_trans_flow = n_layers_trans_flow - self.use_spk_conditioned_encoder = kwargs.get("use_spk_conditioned_encoder", True) - self.use_sdp = use_sdp - self.use_noise_scaled_mas = kwargs.get("use_noise_scaled_mas", False) - self.mas_noise_scale_initial = kwargs.get("mas_noise_scale_initial", 0.01) - self.noise_scale_delta = kwargs.get("noise_scale_delta", 2e-6) - self.current_mas_noise_scale = self.mas_noise_scale_initial - if self.use_spk_conditioned_encoder and gin_channels > 0: - self.enc_gin_channels = gin_channels - symbols = kwargs.get("symbols") - self.enc_p = TextEncoder(n_vocab, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - gin_channels=self.enc_gin_channels, - symbols=symbols, - ) - self.dec = Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, - upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels) - self.enc_q = PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, - gin_channels=gin_channels) - if use_transformer_flow: - self.flow = TransformerCouplingBlock(inter_channels, hidden_channels, filter_channels, n_heads, - n_layers_trans_flow, 5, p_dropout, n_flow_layer, - gin_channels=gin_channels, share_parameter=flow_share_parameter) - else: - self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, n_flow_layer, - gin_channels=gin_channels) - self.sdp = StochasticDurationPredictor(hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels) - self.dp = DurationPredictor(hidden_channels, 256, 3, 0.5, gin_channels=gin_channels) - - if self.n_speakers > 0: - self.emb_g = nn.Embedding(self.n_speakers, gin_channels) - else: - self.ref_enc = ReferenceEncoder(spec_channels, gin_channels) - - def infer(self, x, x_lengths, sid, tone, language, bert, ja_bert, noise_scale=.667, length_scale=1, - noise_scale_w=0.8, - max_len=None, sdp_ratio=0, y=None): - # x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths, tone, language, bert) - # g = self.gst(y) - if self.n_speakers > 0: - g = 
self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = self.ref_enc(y.transpose(1, 2)).unsqueeze(-1) - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths, tone, language, bert, ja_bert, g=g) - logw = self.sdp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w) * (sdp_ratio) + self.dp(x, x_mask, - g=g) * ( - 1 - sdp_ratio) - w = torch.exp(logw) * x_mask * length_scale - w_ceil = torch.ceil(w) - y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long() - y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype) - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = commons.generate_path(w_ceil, attn_mask) - - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, - 2) # [b, t', t], [b, t, d] -> [b, d, t'] - - z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale - z = self.flow(z_p, y_mask, g=g, reverse=True) - o = self.dec((z * y_mask)[:, :, :max_len], g=g) - return o, attn, y_mask, (z, z_p, m_p, logs_p) diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/screen.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/screen.py deleted file mode 100644 index 7f416e1e799abfbf62382456020cc8e59e5cf01f..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/screen.py +++ /dev/null @@ -1,54 +0,0 @@ -from typing import Optional, TYPE_CHECKING - -from .segment import Segment -from .style import StyleType -from ._loop import loop_last - - -if TYPE_CHECKING: - from .console import ( - Console, - ConsoleOptions, - RenderResult, - RenderableType, - Group, - ) - - -class Screen: - """A renderable that fills the terminal screen and crops excess. - - Args: - renderable (RenderableType): Child renderable. - style (StyleType, optional): Optional background style. Defaults to None. 
- """ - - renderable: "RenderableType" - - def __init__( - self, - *renderables: "RenderableType", - style: Optional[StyleType] = None, - application_mode: bool = False, - ) -> None: - from pip._vendor.rich.console import Group - - self.renderable = Group(*renderables) - self.style = style - self.application_mode = application_mode - - def __rich_console__( - self, console: "Console", options: "ConsoleOptions" - ) -> "RenderResult": - width, height = options.size - style = console.get_style(self.style) if self.style else None - render_options = options.update(width=width, height=height) - lines = console.render_lines( - self.renderable or "", render_options, style=style, pad=True - ) - lines = Segment.set_shape(lines, width, height, style=style) - new_line = Segment("\n\r") if self.application_mode else Segment.line() - for last, line in loop_last(lines): - yield from line - if not last: - yield new_line diff --git a/spaces/Awesimo/jojogan/e4e/scripts/inference.py b/spaces/Awesimo/jojogan/e4e/scripts/inference.py deleted file mode 100644 index 185b9b34db85dcd97b9793bd5dbfc9d1ca046549..0000000000000000000000000000000000000000 --- a/spaces/Awesimo/jojogan/e4e/scripts/inference.py +++ /dev/null @@ -1,133 +0,0 @@ -import argparse - -import torch -import numpy as np -import sys -import os -import dlib - -sys.path.append(".") -sys.path.append("..") - -from configs import data_configs, paths_config -from datasets.inference_dataset import InferenceDataset -from torch.utils.data import DataLoader -from utils.model_utils import setup_model -from utils.common import tensor2im -from utils.alignment import align_face -from PIL import Image - - -def main(args): - net, opts = setup_model(args.ckpt, device) - is_cars = 'cars_' in opts.dataset_type - generator = net.decoder - generator.eval() - args, data_loader = setup_data_loader(args, opts) - - # Check if latents exist - latents_file_path = os.path.join(args.save_dir, 'latents.pt') - if os.path.exists(latents_file_path): - latent_codes = torch.load(latents_file_path).to(device) - else: - latent_codes = get_all_latents(net, data_loader, args.n_sample, is_cars=is_cars) - torch.save(latent_codes, latents_file_path) - - if not args.latents_only: - generate_inversions(args, generator, latent_codes, is_cars=is_cars) - - -def setup_data_loader(args, opts): - dataset_args = data_configs.DATASETS[opts.dataset_type] - transforms_dict = dataset_args['transforms'](opts).get_transforms() - images_path = args.images_dir if args.images_dir is not None else dataset_args['test_source_root'] - print(f"images path: {images_path}") - align_function = None - if args.align: - align_function = run_alignment - test_dataset = InferenceDataset(root=images_path, - transform=transforms_dict['transform_test'], - preprocess=align_function, - opts=opts) - - data_loader = DataLoader(test_dataset, - batch_size=args.batch, - shuffle=False, - num_workers=2, - drop_last=True) - - print(f'dataset length: {len(test_dataset)}') - - if args.n_sample is None: - args.n_sample = len(test_dataset) - return args, data_loader - - -def get_latents(net, x, is_cars=False): - codes = net.encoder(x) - if net.opts.start_from_latent_avg: - if codes.ndim == 2: - codes = codes + net.latent_avg.repeat(codes.shape[0], 1, 1)[:, 0, :] - else: - codes = codes + net.latent_avg.repeat(codes.shape[0], 1, 1) - if codes.shape[1] == 18 and is_cars: - codes = codes[:, :16, :] - return codes - - -def get_all_latents(net, data_loader, n_images=None, is_cars=False): - all_latents = [] - i = 0 - with torch.no_grad(): - for 
batch in data_loader: - if n_images is not None and i > n_images: - break - x = batch - inputs = x.to(device).float() - latents = get_latents(net, inputs, is_cars) - all_latents.append(latents) - i += len(latents) - return torch.cat(all_latents) - - -def save_image(img, save_dir, idx): - result = tensor2im(img) - im_save_path = os.path.join(save_dir, f"{idx:05d}.jpg") - Image.fromarray(np.array(result)).save(im_save_path) - - -@torch.no_grad() -def generate_inversions(args, g, latent_codes, is_cars): - print('Saving inversion images') - inversions_directory_path = os.path.join(args.save_dir, 'inversions') - os.makedirs(inversions_directory_path, exist_ok=True) - for i in range(args.n_sample): - imgs, _ = g([latent_codes[i].unsqueeze(0)], input_is_latent=True, randomize_noise=False, return_latents=True) - if is_cars: - imgs = imgs[:, :, 64:448, :] - save_image(imgs[0], inversions_directory_path, i + 1) - - -def run_alignment(image_path): - predictor = dlib.shape_predictor(paths_config.model_paths['shape_predictor']) - aligned_image = align_face(filepath=image_path, predictor=predictor) - print("Aligned image has shape: {}".format(aligned_image.size)) - return aligned_image - - -if __name__ == "__main__": - device = "cuda" - - parser = argparse.ArgumentParser(description="Inference") - parser.add_argument("--images_dir", type=str, default=None, - help="The directory of the images to be inverted") - parser.add_argument("--save_dir", type=str, default=None, - help="The directory to save the latent codes and inversion images. (default: images_dir") - parser.add_argument("--batch", type=int, default=1, help="batch size for the generator") - parser.add_argument("--n_sample", type=int, default=None, help="number of the samples to infer.") - parser.add_argument("--latents_only", action="store_true", help="infer only the latent codes of the directory") - parser.add_argument("--align", action="store_true", help="align face images before inference") - parser.add_argument("ckpt", metavar="CHECKPOINT", help="path to generator checkpoint") - - args = parser.parse_args() - main(args) diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/backbone/res2net.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/backbone/res2net.py deleted file mode 100644 index 1d0d40adb4a300d916deecebd20bcaac08936e6d..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/backbone/res2net.py +++ /dev/null @@ -1,802 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
All Rights Reserved -# This file is modified from https://github.com/Res2Net/Res2Net-detectron2/blob/master/detectron2/modeling/backbone/resnet.py -# The original file is under Apache-2.0 License -import numpy as np -import fvcore.nn.weight_init as weight_init -import torch -import torch.nn.functional as F -from torch import nn - -from detectron2.layers import ( - CNNBlockBase, - Conv2d, - DeformConv, - ModulatedDeformConv, - ShapeSpec, - get_norm, -) - -from detectron2.modeling.backbone import Backbone -from detectron2.modeling.backbone.fpn import FPN -from detectron2.modeling.backbone.build import BACKBONE_REGISTRY -from .fpn_p5 import LastLevelP6P7_P5 -from .bifpn import BiFPN - -__all__ = [ - "ResNetBlockBase", - "BasicBlock", - "BottleneckBlock", - "DeformBottleneckBlock", - "BasicStem", - "ResNet", - "make_stage", - "build_res2net_backbone", -] - - -ResNetBlockBase = CNNBlockBase -""" -Alias for backward compatibiltiy. -""" - - -class BasicBlock(CNNBlockBase): - """ - The basic residual block for ResNet-18 and ResNet-34, with two 3x3 conv layers - and a projection shortcut if needed. - """ - - def __init__(self, in_channels, out_channels, *, stride=1, norm="BN"): - """ - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of output channels. - stride (int): Stride for the first conv. - norm (str or callable): normalization for all conv layers. - See :func:`layers.get_norm` for supported format. - """ - super().__init__(in_channels, out_channels, stride) - - if in_channels != out_channels: - self.shortcut = Conv2d( - in_channels, - out_channels, - kernel_size=1, - stride=stride, - bias=False, - norm=get_norm(norm, out_channels), - ) - else: - self.shortcut = None - - self.conv1 = Conv2d( - in_channels, - out_channels, - kernel_size=3, - stride=stride, - padding=1, - bias=False, - norm=get_norm(norm, out_channels), - ) - - self.conv2 = Conv2d( - out_channels, - out_channels, - kernel_size=3, - stride=1, - padding=1, - bias=False, - norm=get_norm(norm, out_channels), - ) - - for layer in [self.conv1, self.conv2, self.shortcut]: - if layer is not None: # shortcut can be None - weight_init.c2_msra_fill(layer) - - def forward(self, x): - out = self.conv1(x) - out = F.relu_(out) - out = self.conv2(out) - - if self.shortcut is not None: - shortcut = self.shortcut(x) - else: - shortcut = x - - out += shortcut - out = F.relu_(out) - return out - - -class BottleneckBlock(CNNBlockBase): - """ - The standard bottle2neck residual block used by Res2Net-50, 101 and 152. - """ - - def __init__( - self, - in_channels, - out_channels, - *, - bottleneck_channels, - stride=1, - num_groups=1, - norm="BN", - stride_in_1x1=False, - dilation=1, - basewidth=26, - scale=4, - ): - """ - Args: - bottleneck_channels (int): number of output channels for the 3x3 - "bottleneck" conv layers. - num_groups (int): number of groups for the 3x3 conv layer. - norm (str or callable): normalization for all conv layers. - See :func:`layers.get_norm` for supported format. - stride_in_1x1 (bool): when stride>1, whether to put stride in the - first 1x1 convolution or the bottleneck 3x3 convolution. - dilation (int): the dilation rate of the 3x3 conv layer. 
- """ - super().__init__(in_channels, out_channels, stride) - - if in_channels != out_channels: - self.shortcut = nn.Sequential( - nn.AvgPool2d(kernel_size=stride, stride=stride, - ceil_mode=True, count_include_pad=False), - Conv2d( - in_channels, - out_channels, - kernel_size=1, - stride=1, - bias=False, - norm=get_norm(norm, out_channels), - ) - ) - else: - self.shortcut = None - - # The original MSRA ResNet models have stride in the first 1x1 conv - # The subsequent fb.torch.resnet and Caffe2 ResNe[X]t implementations have - # stride in the 3x3 conv - stride_1x1, stride_3x3 = (stride, 1) if stride_in_1x1 else (1, stride) - width = bottleneck_channels//scale - - self.conv1 = Conv2d( - in_channels, - bottleneck_channels, - kernel_size=1, - stride=stride_1x1, - bias=False, - norm=get_norm(norm, bottleneck_channels), - ) - if scale == 1: - self.nums = 1 - else: - self.nums = scale -1 - if self.in_channels!=self.out_channels and stride_3x3!=2: - self.pool = nn.AvgPool2d(kernel_size=3, stride = stride_3x3, padding=1) - - convs = [] - bns = [] - for i in range(self.nums): - convs.append(nn.Conv2d( - width, - width, - kernel_size=3, - stride=stride_3x3, - padding=1 * dilation, - bias=False, - groups=num_groups, - dilation=dilation, - )) - bns.append(get_norm(norm, width)) - self.convs = nn.ModuleList(convs) - self.bns = nn.ModuleList(bns) - - self.conv3 = Conv2d( - bottleneck_channels, - out_channels, - kernel_size=1, - bias=False, - norm=get_norm(norm, out_channels), - ) - self.scale = scale - self.width = width - self.in_channels = in_channels - self.out_channels = out_channels - self.stride_3x3 = stride_3x3 - for layer in [self.conv1, self.conv3]: - if layer is not None: # shortcut can be None - weight_init.c2_msra_fill(layer) - if self.shortcut is not None: - for layer in self.shortcut.modules(): - if isinstance(layer, Conv2d): - weight_init.c2_msra_fill(layer) - - for layer in self.convs: - if layer is not None: # shortcut can be None - weight_init.c2_msra_fill(layer) - - # Zero-initialize the last normalization in each residual branch, - # so that at the beginning, the residual branch starts with zeros, - # and each residual block behaves like an identity. - # See Sec 5.1 in "Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour": - # "For BN layers, the learnable scaling coefficient γ is initialized - # to be 1, except for each residual block's last BN - # where γ is initialized to be 0." - - # nn.init.constant_(self.conv3.norm.weight, 0) - # TODO this somehow hurts performance when training GN models from scratch. - # Add it as an option when we need to use this code to train a backbone. - - def forward(self, x): - out = self.conv1(x) - out = F.relu_(out) - - spx = torch.split(out, self.width, 1) - for i in range(self.nums): - if i==0 or self.in_channels!=self.out_channels: - sp = spx[i] - else: - sp = sp + spx[i] - sp = self.convs[i](sp) - sp = F.relu_(self.bns[i](sp)) - if i==0: - out = sp - else: - out = torch.cat((out, sp), 1) - if self.scale!=1 and self.stride_3x3==1: - out = torch.cat((out, spx[self.nums]), 1) - elif self.scale != 1 and self.stride_3x3==2: - out = torch.cat((out, self.pool(spx[self.nums])), 1) - - out = self.conv3(out) - - if self.shortcut is not None: - shortcut = self.shortcut(x) - else: - shortcut = x - - out += shortcut - out = F.relu_(out) - return out - - -class DeformBottleneckBlock(ResNetBlockBase): - """ - Not implemented for res2net yet. - Similar to :class:`BottleneckBlock`, but with deformable conv in the 3x3 convolution. 
- """ - - def __init__( - self, - in_channels, - out_channels, - *, - bottleneck_channels, - stride=1, - num_groups=1, - norm="BN", - stride_in_1x1=False, - dilation=1, - deform_modulated=False, - deform_num_groups=1, - basewidth=26, - scale=4, - ): - super().__init__(in_channels, out_channels, stride) - self.deform_modulated = deform_modulated - - if in_channels != out_channels: - # self.shortcut = Conv2d( - # in_channels, - # out_channels, - # kernel_size=1, - # stride=stride, - # bias=False, - # norm=get_norm(norm, out_channels), - # ) - self.shortcut = nn.Sequential( - nn.AvgPool2d(kernel_size=stride, stride=stride, - ceil_mode=True, count_include_pad=False), - Conv2d( - in_channels, - out_channels, - kernel_size=1, - stride=1, - bias=False, - norm=get_norm(norm, out_channels), - ) - ) - else: - self.shortcut = None - - stride_1x1, stride_3x3 = (stride, 1) if stride_in_1x1 else (1, stride) - width = bottleneck_channels//scale - - self.conv1 = Conv2d( - in_channels, - bottleneck_channels, - kernel_size=1, - stride=stride_1x1, - bias=False, - norm=get_norm(norm, bottleneck_channels), - ) - - if scale == 1: - self.nums = 1 - else: - self.nums = scale -1 - if self.in_channels!=self.out_channels and stride_3x3!=2: - self.pool = nn.AvgPool2d(kernel_size=3, stride = stride_3x3, padding=1) - - if deform_modulated: - deform_conv_op = ModulatedDeformConv - # offset channels are 2 or 3 (if with modulated) * kernel_size * kernel_size - offset_channels = 27 - else: - deform_conv_op = DeformConv - offset_channels = 18 - - # self.conv2_offset = Conv2d( - # bottleneck_channels, - # offset_channels * deform_num_groups, - # kernel_size=3, - # stride=stride_3x3, - # padding=1 * dilation, - # dilation=dilation, - # ) - # self.conv2 = deform_conv_op( - # bottleneck_channels, - # bottleneck_channels, - # kernel_size=3, - # stride=stride_3x3, - # padding=1 * dilation, - # bias=False, - # groups=num_groups, - # dilation=dilation, - # deformable_groups=deform_num_groups, - # norm=get_norm(norm, bottleneck_channels), - # ) - - conv2_offsets = [] - convs = [] - bns = [] - for i in range(self.nums): - conv2_offsets.append(Conv2d( - width, - offset_channels * deform_num_groups, - kernel_size=3, - stride=stride_3x3, - padding=1 * dilation, - bias=False, - groups=num_groups, - dilation=dilation, - )) - convs.append(deform_conv_op( - width, - width, - kernel_size=3, - stride=stride_3x3, - padding=1 * dilation, - bias=False, - groups=num_groups, - dilation=dilation, - deformable_groups=deform_num_groups, - )) - bns.append(get_norm(norm, width)) - self.conv2_offsets = nn.ModuleList(conv2_offsets) - self.convs = nn.ModuleList(convs) - self.bns = nn.ModuleList(bns) - - self.conv3 = Conv2d( - bottleneck_channels, - out_channels, - kernel_size=1, - bias=False, - norm=get_norm(norm, out_channels), - ) - self.scale = scale - self.width = width - self.in_channels = in_channels - self.out_channels = out_channels - self.stride_3x3 = stride_3x3 - # for layer in [self.conv1, self.conv2, self.conv3, self.shortcut]: - # if layer is not None: # shortcut can be None - # weight_init.c2_msra_fill(layer) - - # nn.init.constant_(self.conv2_offset.weight, 0) - # nn.init.constant_(self.conv2_offset.bias, 0) - for layer in [self.conv1, self.conv3]: - if layer is not None: # shortcut can be None - weight_init.c2_msra_fill(layer) - if self.shortcut is not None: - for layer in self.shortcut.modules(): - if isinstance(layer, Conv2d): - weight_init.c2_msra_fill(layer) - - for layer in self.convs: - if layer is not None: # shortcut can be None - 
weight_init.c2_msra_fill(layer) - - for layer in self.conv2_offsets: - if layer.weight is not None: - nn.init.constant_(layer.weight, 0) - if layer.bias is not None: - nn.init.constant_(layer.bias, 0) - - def forward(self, x): - out = self.conv1(x) - out = F.relu_(out) - - # if self.deform_modulated: - # offset_mask = self.conv2_offset(out) - # offset_x, offset_y, mask = torch.chunk(offset_mask, 3, dim=1) - # offset = torch.cat((offset_x, offset_y), dim=1) - # mask = mask.sigmoid() - # out = self.conv2(out, offset, mask) - # else: - # offset = self.conv2_offset(out) - # out = self.conv2(out, offset) - # out = F.relu_(out) - - spx = torch.split(out, self.width, 1) - for i in range(self.nums): - if i==0 or self.in_channels!=self.out_channels: - sp = spx[i].contiguous() - else: - sp = sp + spx[i].contiguous() - - # sp = self.convs[i](sp) - if self.deform_modulated: - offset_mask = self.conv2_offsets[i](sp) - offset_x, offset_y, mask = torch.chunk(offset_mask, 3, dim=1) - offset = torch.cat((offset_x, offset_y), dim=1) - mask = mask.sigmoid() - sp = self.convs[i](sp, offset, mask) - else: - offset = self.conv2_offsets[i](sp) - sp = self.convs[i](sp, offset) - sp = F.relu_(self.bns[i](sp)) - if i==0: - out = sp - else: - out = torch.cat((out, sp), 1) - if self.scale!=1 and self.stride_3x3==1: - out = torch.cat((out, spx[self.nums]), 1) - elif self.scale != 1 and self.stride_3x3==2: - out = torch.cat((out, self.pool(spx[self.nums])), 1) - - out = self.conv3(out) - - if self.shortcut is not None: - shortcut = self.shortcut(x) - else: - shortcut = x - - out += shortcut - out = F.relu_(out) - return out - - -def make_stage(block_class, num_blocks, first_stride, *, in_channels, out_channels, **kwargs): - """ - Create a list of blocks just like those in a ResNet stage. - Args: - block_class (type): a subclass of ResNetBlockBase - num_blocks (int): - first_stride (int): the stride of the first block. The other blocks will have stride=1. - in_channels (int): input channels of the entire stage. - out_channels (int): output channels of **every block** in the stage. - kwargs: other arguments passed to the constructor of every block. - Returns: - list[nn.Module]: a list of block module. - """ - assert "stride" not in kwargs, "Stride of blocks in make_stage cannot be changed." - blocks = [] - for i in range(num_blocks): - blocks.append( - block_class( - in_channels=in_channels, - out_channels=out_channels, - stride=first_stride if i == 0 else 1, - **kwargs, - ) - ) - in_channels = out_channels - return blocks - - -class BasicStem(CNNBlockBase): - """ - The standard ResNet stem (layers before the first residual block). - """ - - def __init__(self, in_channels=3, out_channels=64, norm="BN"): - """ - Args: - norm (str or callable): norm after the first conv layer. - See :func:`layers.get_norm` for supported format. 
- """ - super().__init__(in_channels, out_channels, 4) - self.in_channels = in_channels - self.conv1 = nn.Sequential( - Conv2d( - in_channels, - 32, - kernel_size=3, - stride=2, - padding=1, - bias=False, - ), - get_norm(norm, 32), - nn.ReLU(inplace=True), - Conv2d( - 32, - 32, - kernel_size=3, - stride=1, - padding=1, - bias=False, - ), - get_norm(norm, 32), - nn.ReLU(inplace=True), - Conv2d( - 32, - out_channels, - kernel_size=3, - stride=1, - padding=1, - bias=False, - ), - ) - self.bn1 = get_norm(norm, out_channels) - - for layer in self.conv1: - if isinstance(layer, Conv2d): - weight_init.c2_msra_fill(layer) - - def forward(self, x): - x = self.conv1(x) - x = self.bn1(x) - x = F.relu_(x) - x = F.max_pool2d(x, kernel_size=3, stride=2, padding=1) - return x - - -class ResNet(Backbone): - def __init__(self, stem, stages, num_classes=None, out_features=None): - """ - Args: - stem (nn.Module): a stem module - stages (list[list[CNNBlockBase]]): several (typically 4) stages, - each contains multiple :class:`CNNBlockBase`. - num_classes (None or int): if None, will not perform classification. - Otherwise, will create a linear layer. - out_features (list[str]): name of the layers whose outputs should - be returned in forward. Can be anything in "stem", "linear", or "res2" ... - If None, will return the output of the last layer. - """ - super(ResNet, self).__init__() - self.stem = stem - self.num_classes = num_classes - - current_stride = self.stem.stride - self._out_feature_strides = {"stem": current_stride} - self._out_feature_channels = {"stem": self.stem.out_channels} - - self.stages_and_names = [] - for i, blocks in enumerate(stages): - assert len(blocks) > 0, len(blocks) - for block in blocks: - assert isinstance(block, CNNBlockBase), block - - name = "res" + str(i + 2) - stage = nn.Sequential(*blocks) - - self.add_module(name, stage) - self.stages_and_names.append((stage, name)) - - self._out_feature_strides[name] = current_stride = int( - current_stride * np.prod([k.stride for k in blocks]) - ) - self._out_feature_channels[name] = curr_channels = blocks[-1].out_channels - - if num_classes is not None: - self.avgpool = nn.AdaptiveAvgPool2d((1, 1)) - self.linear = nn.Linear(curr_channels, num_classes) - - # Sec 5.1 in "Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour": - # "The 1000-way fully-connected layer is initialized by - # drawing weights from a zero-mean Gaussian with standard deviation of 0.01." - nn.init.normal_(self.linear.weight, std=0.01) - name = "linear" - - if out_features is None: - out_features = [name] - self._out_features = out_features - assert len(self._out_features) - children = [x[0] for x in self.named_children()] - for out_feature in self._out_features: - assert out_feature in children, "Available children: {}".format(", ".join(children)) - - def forward(self, x): - outputs = {} - x = self.stem(x) - if "stem" in self._out_features: - outputs["stem"] = x - for stage, name in self.stages_and_names: - x = stage(x) - if name in self._out_features: - outputs[name] = x - if self.num_classes is not None: - x = self.avgpool(x) - x = torch.flatten(x, 1) - x = self.linear(x) - if "linear" in self._out_features: - outputs["linear"] = x - return outputs - - def output_shape(self): - return { - name: ShapeSpec( - channels=self._out_feature_channels[name], stride=self._out_feature_strides[name] - ) - for name in self._out_features - } - - def freeze(self, freeze_at=0): - """ - Freeze the first several stages of the ResNet. Commonly used in - fine-tuning. 
- Args: - freeze_at (int): number of stem and stages to freeze. - `1` means freezing the stem. `2` means freezing the stem and - the first stage, etc. - Returns: - nn.Module: this ResNet itself - """ - if freeze_at >= 1: - self.stem.freeze() - for idx, (stage, _) in enumerate(self.stages_and_names, start=2): - if freeze_at >= idx: - for block in stage.children(): - block.freeze() - return self - - -@BACKBONE_REGISTRY.register() -def build_res2net_backbone(cfg, input_shape): - """ - Create a Res2Net instance from config. - Returns: - ResNet: a :class:`ResNet` instance. - """ - # need registration of new blocks/stems? - norm = cfg.MODEL.RESNETS.NORM - stem = BasicStem( - in_channels=input_shape.channels, - out_channels=cfg.MODEL.RESNETS.STEM_OUT_CHANNELS, - norm=norm, - ) - - # fmt: off - freeze_at = cfg.MODEL.BACKBONE.FREEZE_AT - out_features = cfg.MODEL.RESNETS.OUT_FEATURES - depth = cfg.MODEL.RESNETS.DEPTH - num_groups = cfg.MODEL.RESNETS.NUM_GROUPS - width_per_group = cfg.MODEL.RESNETS.WIDTH_PER_GROUP - scale = 4 - bottleneck_channels = num_groups * width_per_group * scale - in_channels = cfg.MODEL.RESNETS.STEM_OUT_CHANNELS - out_channels = cfg.MODEL.RESNETS.RES2_OUT_CHANNELS - stride_in_1x1 = cfg.MODEL.RESNETS.STRIDE_IN_1X1 - res5_dilation = cfg.MODEL.RESNETS.RES5_DILATION - deform_on_per_stage = cfg.MODEL.RESNETS.DEFORM_ON_PER_STAGE - deform_modulated = cfg.MODEL.RESNETS.DEFORM_MODULATED - deform_num_groups = cfg.MODEL.RESNETS.DEFORM_NUM_GROUPS - # fmt: on - assert res5_dilation in {1, 2}, "res5_dilation cannot be {}.".format(res5_dilation) - - num_blocks_per_stage = { - 18: [2, 2, 2, 2], - 34: [3, 4, 6, 3], - 50: [3, 4, 6, 3], - 101: [3, 4, 23, 3], - 152: [3, 8, 36, 3], - }[depth] - - if depth in [18, 34]: - assert out_channels == 64, "Must set MODEL.RESNETS.RES2_OUT_CHANNELS = 64 for R18/R34" - assert not any( - deform_on_per_stage - ), "MODEL.RESNETS.DEFORM_ON_PER_STAGE unsupported for R18/R34" - assert res5_dilation == 1, "Must set MODEL.RESNETS.RES5_DILATION = 1 for R18/R34" - assert num_groups == 1, "Must set MODEL.RESNETS.NUM_GROUPS = 1 for R18/R34" - - stages = [] - - # Avoid creating variables without gradients - # It consumes extra memory and may cause allreduce to fail - out_stage_idx = [{"res2": 2, "res3": 3, "res4": 4, "res5": 5}[f] for f in out_features] - max_stage_idx = max(out_stage_idx) - for idx, stage_idx in enumerate(range(2, max_stage_idx + 1)): - dilation = res5_dilation if stage_idx == 5 else 1 - first_stride = 1 if idx == 0 or (stage_idx == 5 and dilation == 2) else 2 - stage_kargs = { - "num_blocks": num_blocks_per_stage[idx], - "first_stride": first_stride, - "in_channels": in_channels, - "out_channels": out_channels, - "norm": norm, - } - # Use BasicBlock for R18 and R34. 
- if depth in [18, 34]: - stage_kargs["block_class"] = BasicBlock - else: - stage_kargs["bottleneck_channels"] = bottleneck_channels - stage_kargs["stride_in_1x1"] = stride_in_1x1 - stage_kargs["dilation"] = dilation - stage_kargs["num_groups"] = num_groups - stage_kargs["scale"] = scale - - if deform_on_per_stage[idx]: - stage_kargs["block_class"] = DeformBottleneckBlock - stage_kargs["deform_modulated"] = deform_modulated - stage_kargs["deform_num_groups"] = deform_num_groups - else: - stage_kargs["block_class"] = BottleneckBlock - blocks = make_stage(**stage_kargs) - in_channels = out_channels - out_channels *= 2 - bottleneck_channels *= 2 - stages.append(blocks) - return ResNet(stem, stages, out_features=out_features).freeze(freeze_at) - - -@BACKBONE_REGISTRY.register() -def build_p67_res2net_fpn_backbone(cfg, input_shape: ShapeSpec): - """ - Args: - cfg: a detectron2 CfgNode - - Returns: - backbone (Backbone): backbone module, must be a subclass of :class:`Backbone`. - """ - bottom_up = build_res2net_backbone(cfg, input_shape) - in_features = cfg.MODEL.FPN.IN_FEATURES - out_channels = cfg.MODEL.FPN.OUT_CHANNELS - backbone = FPN( - bottom_up=bottom_up, - in_features=in_features, - out_channels=out_channels, - norm=cfg.MODEL.FPN.NORM, - top_block=LastLevelP6P7_P5(out_channels, out_channels), - fuse_type=cfg.MODEL.FPN.FUSE_TYPE, - ) - return backbone - - -@BACKBONE_REGISTRY.register() -def build_res2net_bifpn_backbone(cfg, input_shape: ShapeSpec): - """ - Args: - cfg: a detectron2 CfgNode - - Returns: - backbone (Backbone): backbone module, must be a subclass of :class:`Backbone`. - """ - bottom_up = build_res2net_backbone(cfg, input_shape) - in_features = cfg.MODEL.FPN.IN_FEATURES - backbone = BiFPN( - cfg=cfg, - bottom_up=bottom_up, - in_features=in_features, - out_channels=cfg.MODEL.BIFPN.OUT_CHANNELS, - norm=cfg.MODEL.BIFPN.NORM, - num_levels=cfg.MODEL.BIFPN.NUM_LEVELS, - num_bifpn=cfg.MODEL.BIFPN.NUM_BIFPN, - separable_conv=cfg.MODEL.BIFPN.SEPARABLE_CONV, - ) - return backbone \ No newline at end of file diff --git a/spaces/Bart92/RVC_HF/demucs/utils.py b/spaces/Bart92/RVC_HF/demucs/utils.py deleted file mode 100644 index 4364184059b1afe3c8379c77793a8e76dccf9699..0000000000000000000000000000000000000000 --- a/spaces/Bart92/RVC_HF/demucs/utils.py +++ /dev/null @@ -1,323 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import errno -import functools -import hashlib -import inspect -import io -import os -import random -import socket -import tempfile -import warnings -import zlib -from contextlib import contextmanager - -from diffq import UniformQuantizer, DiffQuantizer -import torch as th -import tqdm -from torch import distributed -from torch.nn import functional as F - - -def center_trim(tensor, reference): - """ - Center trim `tensor` with respect to `reference`, along the last dimension. - `reference` can also be a number, representing the length to trim to. - If the size difference != 0 mod 2, the extra sample is removed on the right side. - """ - if hasattr(reference, "size"): - reference = reference.size(-1) - delta = tensor.size(-1) - reference - if delta < 0: - raise ValueError("tensor must be larger than reference. 
" f"Delta is {delta}.") - if delta: - tensor = tensor[..., delta // 2:-(delta - delta // 2)] - return tensor - - -def average_metric(metric, count=1.): - """ - Average `metric` which should be a float across all hosts. `count` should be - the weight for this particular host (i.e. number of examples). - """ - metric = th.tensor([count, count * metric], dtype=th.float32, device='cuda') - distributed.all_reduce(metric, op=distributed.ReduceOp.SUM) - return metric[1].item() / metric[0].item() - - -def free_port(host='', low=20000, high=40000): - """ - Return a port number that is most likely free. - This could suffer from a race condition although - it should be quite rare. - """ - sock = socket.socket() - while True: - port = random.randint(low, high) - try: - sock.bind((host, port)) - except OSError as error: - if error.errno == errno.EADDRINUSE: - continue - raise - return port - - -def sizeof_fmt(num, suffix='B'): - """ - Given `num` bytes, return human readable size. - Taken from https://stackoverflow.com/a/1094933 - """ - for unit in ['', 'Ki', 'Mi', 'Gi', 'Ti', 'Pi', 'Ei', 'Zi']: - if abs(num) < 1024.0: - return "%3.1f%s%s" % (num, unit, suffix) - num /= 1024.0 - return "%.1f%s%s" % (num, 'Yi', suffix) - - -def human_seconds(seconds, display='.2f'): - """ - Given `seconds` seconds, return human readable duration. - """ - value = seconds * 1e6 - ratios = [1e3, 1e3, 60, 60, 24] - names = ['us', 'ms', 's', 'min', 'hrs', 'days'] - last = names.pop(0) - for name, ratio in zip(names, ratios): - if value / ratio < 0.3: - break - value /= ratio - last = name - return f"{format(value, display)} {last}" - - -class TensorChunk: - def __init__(self, tensor, offset=0, length=None): - total_length = tensor.shape[-1] - assert offset >= 0 - assert offset < total_length - - if length is None: - length = total_length - offset - else: - length = min(total_length - offset, length) - - self.tensor = tensor - self.offset = offset - self.length = length - self.device = tensor.device - - @property - def shape(self): - shape = list(self.tensor.shape) - shape[-1] = self.length - return shape - - def padded(self, target_length): - delta = target_length - self.length - total_length = self.tensor.shape[-1] - assert delta >= 0 - - start = self.offset - delta // 2 - end = start + target_length - - correct_start = max(0, start) - correct_end = min(total_length, end) - - pad_left = correct_start - start - pad_right = end - correct_end - - out = F.pad(self.tensor[..., correct_start:correct_end], (pad_left, pad_right)) - assert out.shape[-1] == target_length - return out - - -def tensor_chunk(tensor_or_chunk): - if isinstance(tensor_or_chunk, TensorChunk): - return tensor_or_chunk - else: - assert isinstance(tensor_or_chunk, th.Tensor) - return TensorChunk(tensor_or_chunk) - - -def apply_model(model, mix, shifts=None, split=False, - overlap=0.25, transition_power=1., progress=False): - """ - Apply model to a given mixture. - - Args: - shifts (int): if > 0, will shift in time `mix` by a random amount between 0 and 0.5 sec - and apply the oppositve shift to the output. This is repeated `shifts` time and - all predictions are averaged. This effectively makes the model time equivariant - and improves SDR by up to 0.2 points. - split (bool): if True, the input will be broken down in 8 seconds extracts - and predictions will be performed individually on each and concatenated. - Useful for model with large memory footprint like Tasnet. 
- progress (bool): if True, show a progress bar (requires split=True) - """ - assert transition_power >= 1, "transition_power < 1 leads to weird behavior." - device = mix.device - channels, length = mix.shape - if split: - out = th.zeros(len(model.sources), channels, length, device=device) - sum_weight = th.zeros(length, device=device) - segment = model.segment_length - stride = int((1 - overlap) * segment) - offsets = range(0, length, stride) - scale = stride / model.samplerate - if progress: - offsets = tqdm.tqdm(offsets, unit_scale=scale, ncols=120, unit='seconds') - # We start from a triangle shaped weight, with maximal weight in the middle - # of the segment. Then we normalize and take to the power `transition_power`. - # Large values of transition power will lead to sharper transitions. - weight = th.cat([th.arange(1, segment // 2 + 1), - th.arange(segment - segment // 2, 0, -1)]).to(device) - assert len(weight) == segment - # If the overlap < 50%, this will translate to linear transition when - # transition_power is 1. - weight = (weight / weight.max())**transition_power - for offset in offsets: - chunk = TensorChunk(mix, offset, segment) - chunk_out = apply_model(model, chunk, shifts=shifts) - chunk_length = chunk_out.shape[-1] - out[..., offset:offset + segment] += weight[:chunk_length] * chunk_out - sum_weight[offset:offset + segment] += weight[:chunk_length] - offset += segment - assert sum_weight.min() > 0 - out /= sum_weight - return out - elif shifts: - max_shift = int(0.5 * model.samplerate) - mix = tensor_chunk(mix) - padded_mix = mix.padded(length + 2 * max_shift) - out = 0 - for _ in range(shifts): - offset = random.randint(0, max_shift) - shifted = TensorChunk(padded_mix, offset, length + max_shift - offset) - shifted_out = apply_model(model, shifted) - out += shifted_out[..., max_shift - offset:] - out /= shifts - return out - else: - valid_length = model.valid_length(length) - mix = tensor_chunk(mix) - padded_mix = mix.padded(valid_length) - with th.no_grad(): - out = model(padded_mix.unsqueeze(0))[0] - return center_trim(out, length) - - -@contextmanager -def temp_filenames(count, delete=True): - names = [] - try: - for _ in range(count): - names.append(tempfile.NamedTemporaryFile(delete=False).name) - yield names - finally: - if delete: - for name in names: - os.unlink(name) - - -def get_quantizer(model, args, optimizer=None): - quantizer = None - if args.diffq: - quantizer = DiffQuantizer( - model, min_size=args.q_min_size, group_size=8) - if optimizer is not None: - quantizer.setup_optimizer(optimizer) - elif args.qat: - quantizer = UniformQuantizer( - model, bits=args.qat, min_size=args.q_min_size) - return quantizer - - -def load_model(path, strict=False): - with warnings.catch_warnings(): - warnings.simplefilter("ignore") - load_from = path - package = th.load(load_from, 'cpu') - - klass = package["klass"] - args = package["args"] - kwargs = package["kwargs"] - - if strict: - model = klass(*args, **kwargs) - else: - sig = inspect.signature(klass) - for key in list(kwargs): - if key not in sig.parameters: - warnings.warn("Dropping inexistant parameter " + key) - del kwargs[key] - model = klass(*args, **kwargs) - - state = package["state"] - training_args = package["training_args"] - quantizer = get_quantizer(model, training_args) - - set_state(model, quantizer, state) - return model - - -def get_state(model, quantizer): - if quantizer is None: - state = {k: p.data.to('cpu') for k, p in model.state_dict().items()} - else: - state = quantizer.get_quantized_state() 
- buf = io.BytesIO() - th.save(state, buf) - state = {'compressed': zlib.compress(buf.getvalue())} - return state - - -def set_state(model, quantizer, state): - if quantizer is None: - model.load_state_dict(state) - else: - buf = io.BytesIO(zlib.decompress(state["compressed"])) - state = th.load(buf, "cpu") - quantizer.restore_quantized_state(state) - - return state - - -def save_state(state, path): - buf = io.BytesIO() - th.save(state, buf) - sig = hashlib.sha256(buf.getvalue()).hexdigest()[:8] - - path = path.parent / (path.stem + "-" + sig + path.suffix) - path.write_bytes(buf.getvalue()) - - -def save_model(model, quantizer, training_args, path): - args, kwargs = model._init_args_kwargs - klass = model.__class__ - - state = get_state(model, quantizer) - - save_to = path - package = { - 'klass': klass, - 'args': args, - 'kwargs': kwargs, - 'state': state, - 'training_args': training_args, - } - th.save(package, save_to) - - -def capture_init(init): - @functools.wraps(init) - def __init__(self, *args, **kwargs): - self._init_args_kwargs = (args, kwargs) - init(self, *args, **kwargs) - - return __init__ diff --git a/spaces/Benson/text-generation/Examples/Bosque Isla Relajante Juego Mod Apk.md b/spaces/Benson/text-generation/Examples/Bosque Isla Relajante Juego Mod Apk.md deleted file mode 100644 index 370b1182dc454796b136df05a2c1467fbf630d0d..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Bosque Isla Relajante Juego Mod Apk.md +++ /dev/null @@ -1,49 +0,0 @@ -
-

Forest Island: Relaxing Game Mod APK - A Review

-

Do you love nature and animals? Do you want to escape the stress and noise of the city? Would you like to unwind with a quiet, soothing game? If you answered yes to any of these questions, you should try Forest Island: Relaxing Game. It is a game that lets you build your own forest island with cute animals, birds, plants, and natural habitats, while listening to relaxing music and sounds that calm your mind. In this article, we review Forest Island: Relaxing Game and explain why you may want to download the mod APK version of the game.

-

What is Forest Island: Relaxing Game?

-

Forest Island: Relaxing Game is a simulation game developed by Nanali Studios. It is available for Android devices and has more than 100,000 downloads on the Google Play Store, where users rate it 4.5 out of 5 stars.

-

forest island relaxing game mod apk


Download Zip: https://bltlly.com/2v6IXk



-

The game is simple and easy to play. You just tap the screen to build your own forest island, choosing from different kinds of animals, birds, plants, and natural habitats to decorate it. You can also interact with the animals and birds by feeding them, playing with them, and taking photos of them, and you can switch between day and night modes to see how your island changes with the time of day.

-

Features of Forest Island: Relaxing Game

-

Forest Island: Relaxing Game has many features that make it a fun and relaxing game to play. Here are some of them:

-

Cute animals and birds

- -

Various natural habitats

-

The game has more than 20 kinds of natural habitats you can use to build your forest island. You can choose from forests, lakes, meadows, large rocks, coastlines, plateaus, cliffs, jungles, deserts, snowfields, volcanoes, caves, waterfalls, islands, coral reefs, and more. Each habitat has its own scenery and atmosphere, and you can mix and match habitats to create your own unique island.

-

Relaxing music and sounds

-

The game has soothing music that calms your mind. You can also listen to various nature sounds in rest mode: wind blowing, water flowing, birds singing, animals calling, and more. You can adjust the volume of the music and sounds to your liking.

-

Why download Forest Island: Relaxing Game Mod APK?

-

Forest Island: Relaxing Game is a free game that you can download from the Google Play Store. However, if you want extra features and benefits, you can download the mod APK version of the game instead. Here are some reasons why:

-

Unlimited coins and gems

-

In the original version of the game, you need coins and gems to buy new animals, birds, plants, and habitats, and to unlock rest mode and night mode. In the mod APK version, you get unlimited coins and gems for free, so you can buy whatever you want without worrying about running out of money and enjoy rest mode and night mode at any time.

-

No ads and pop-ups

-

In the original version of the game, you have to watch ads and pop-ups to earn coins and gems. These ads can be annoying and distracting, and they interrupt the gameplay and spoil the mood. In the mod APK version, there are no ads or pop-ups, so you can play without interruptions or distractions.

-

Easy installation and compatibility

- -

How to download and install Forest Island: Relaxing Game Mod APK?

-

If you are interested in downloading and installing Forest Island: Relaxing Game Mod APK, you can follow these steps:

-

-

Step 1: Download the mod APK file from a trusted source

-

The first step is to download the mod APK file from a trusted source. You can use the link below to download the latest version of Forest Island: Relaxing Game Mod APK. The file is about 100 MB, so make sure you have enough free space on your device.

-

Forest Island: Relaxing Game Mod APK download link

-

Step 2: Enable unknown sources in your device settings

-

The second step is to enable unknown sources in your device settings. This allows you to install apps from sources other than the Google Play Store. To do this, go to your device settings, then Security, then Unknown Sources, and turn it on.

-

Step 3: Install the mod APK file and launch the game

-

The third step is to install the mod APK file and launch the game. Locate the downloaded mod APK file in your device storage, tap it to start the installation, and follow the on-screen instructions. Once the installation is done, you can launch the game from the app drawer or the home screen.
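If you prefer installing from a computer, the same APK can also be sideloaded with adb instead of tapping through the on-device installer. This is only an optional sketch using Python's subprocess module, not part of the app's own instructions; it assumes adb is installed and on your PATH, USB debugging is enabled on the phone, and the file name below is a placeholder for whatever the downloaded file is actually called.

import subprocess

# Placeholder path: replace with the actual name of the downloaded mod APK file.
apk_path = "forest-island-relaxing-game-mod.apk"

# 'adb install -r' installs the APK, replacing any version already on the phone.
# Requires adb on your PATH and USB debugging enabled on the device.
subprocess.run(["adb", "install", "-r", apk_path], check=True)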

-

Conclusion

- -

We hope you enjoyed this article and found it useful. If you have any questions or feedback about Forest Island: Relaxing Game or its mod APK version, feel free to leave a comment below. We would love to hear from you.

-

Frequently asked questions

-

Here are some frequently asked questions about Forest Island: Relaxing Game and its mod APK version:

-

Q: Is Forest Island: Relaxing Game safe to play?

-

A: Yes, Forest Island: Relaxing Game is safe to play. The game does not contain viruses, malware, or spyware that could harm your device or data, and it does not ask for personal information or permissions that could compromise your privacy or security.

-

Q: Is Forest Island: Relaxing Game Mod APK legal?

-

A: Yes, according to this article's claims, Forest Island: Relaxing Game Mod APK is legal. The mod APK file is not a hacked or cracked version of the game; it is a modified version that provides some extra features and benefits for users, and it does not violate any laws or regulations governing the use of apps and games.

-

Q: Can I play Forest Island: Relaxing Game offline?

-

A: Yes, you can play Forest Island: Relaxing Game offline. The game does not require an internet connection to run or work properly, so you can play it anytime and anywhere without limitations or restrictions.

-

Q: Can I update Forest Island: Relaxing Game Mod APK?

-

A: Yes, you can update Forest Island: Relaxing Game Mod APK. The mod APK file is updated regularly to match the latest version of the game. You can check for updates from the link above or from the app itself, and you can enable automatic updates in your device settings to get new versions as soon as they are available.

-

Q: Can I share Forest Island: Relaxing Game Mod APK with my friends?

64aa2da5cf
-
-
\ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Descargar Gratis De Backgammon Para Android.md b/spaces/Benson/text-generation/Examples/Descargar Gratis De Backgammon Para Android.md deleted file mode 100644 index b192ec110129b806bc4d144c3e27f4815c8b660a..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Descargar Gratis De Backgammon Para Android.md +++ /dev/null @@ -1,85 +0,0 @@ - -

Free Backgammon Download for Android: How to Play the Classic Board Game on Your Phone

-

Introduction

-

Backgammon is one of the oldest and most popular board games in the world. It is a game of skill and strategy in which two players race to move their pieces around the board and bear them off, while trying to stop their opponent from doing the same. Backgammon has a rich history and culture reaching back thousands of years to ancient Mesopotamia, Egypt, Rome, India, and China. It is also a game of fun and excitement, because the outcome can change with every roll of the dice.

-

But you do not need a physical board and pieces to enjoy backgammon. You can play on your phone, anytime and anywhere, with a free backgammon app. Playing backgammon on your phone has many benefits: convenience, variety, challenge, and entertainment. You can play against other players online or against a computer opponent at different difficulty levels, and you can customize your backgammon experience with different boards, pieces, dice, and settings.

-

free backgammon download for android


DOWNLOAD: https://bltlly.com/2v6KHx



-

In this article, we will show you how to download and install backgammon for free on your Android device. We will also explain how to play backgammon on your phone and give you some tips and tricks to win more games. Whether you are a beginner or an expert, you will find something useful and interesting here. Let's get started!

-

How to download and install backgammon for free on your Android device

-

There are many backgammon apps available for Android devices, but not all of them are worth downloading. Some have poor graphics, annoying ads, or unfair gameplay. To help you choose the best backgammon app for your phone, we have selected three of the most popular and highly rated ones. Here they are:

-
    - -
• Backgammon Plus by Zynga: This is another great free backgammon app that offers single-player and multiplayer modes. You can play classic backgammon on your own or against friends online, and you can join tournaments and leagues to compete with players from around the world. You can customize your backgammon experience with different dice and board designs, and you can collect rewards by completing daily challenges and spinning the wheel.
  • -
• Backgammon by mvsvnx-dev: This is a simple but elegant free backgammon app that offers single-player and multiplayer modes. You can play against the computer or against another player, online or offline, and you can adjust the game speed and sound to your preference. The app has a minimalist design that keeps the focus on the gameplay.
  • -
-

To download any of these apps, find it on the Google Play Store and install it on your device.

Now you know how to play backgammon on your phone. But how can you win more games? Here are some tips and tricks that will help you improve your backgammon skills and beat your opponents.

-

Tips and tricks for winning backgammon games on your phone

-

Backgammon is a game of skill and strategy, but also of luck and chance. You cannot control the dice, but you can control how you use them. Here are some tips and tricks that will help you make the best moves and win more games:

-

How to use strategy and tactics in backgammon

-

Strategy is the overall plan or goal of your game, while tactics are the specific moves or actions you take to carry out that strategy. In backgammon there are two main strategies: racing and hitting. Racing means trying to move your checkers around the board faster than your opponent, while hitting means trying to block or capture your opponent's checkers. Depending on the situation, you may choose to use one or both of these strategies.

-

Some general tips for using strategy and tactics in backgammon are:

-

-
    - -
• Try to avoid leaving blots (single checkers) on the board, especially in your opponent's home board. This makes it less likely that you will be hit and lose the pace, that is, the advantage of being ahead in the race (a small pip-count sketch follows this list).
  • -
• Try to build primes (six consecutive points) or partial primes (four or five consecutive points) in front of your opponent's checkers. This stops them from advancing and forces them to stay back.
  • -
• Try to use the doubling cube wisely. Only offer a double when you have a clear advantage or a good chance of winning, and only accept a double when you have a reasonable chance of winning or of losing by a small margin.
  • -
-
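Pace in a race is usually measured with a pip count. As a rough illustration only (not taken from any of the apps above, and assuming a plain dictionary-based board representation), a pip count can be computed like this in Python:

# A minimal pip-count sketch (illustration only, not code from any of the apps).
# Points are numbered 1-24 from the player's own bearing-off corner, so a
# checker on point p still has p pips to travel; a checker on the bar counts 25.
def pip_count(checkers_on_points, checkers_on_bar=0):
    """checkers_on_points maps point number (1-24) to how many checkers sit there."""
    total = sum(point * count for point, count in checkers_on_points.items())
    return total + 25 * checkers_on_bar

# The standard starting position: 2 checkers on the 24-point, 5 on the 13-point,
# 3 on the 8-point and 5 on the 6-point, which gives the well-known 167 pips.
print(pip_count({24: 2, 13: 5, 8: 3, 6: 5}))  # 167

Comparing your count with your opponent's tells you whether to keep racing or switch to a holding or hitting game.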

How to avoid common mistakes and blunders in backgammon

-

Mistakes and blunders are moves that cost you the game or a significant number of points. They can be caused by lack of knowledge, poor judgement, or emotional factors. To avoid making mistakes and blunders in backgammon, you need to learn from them and avoid repeating them. Here are some common ones to watch out for:

-
    -
• Moving too fast or too slowly. Moving too fast can lead to careless errors, while moving too slowly can lead to overthinking and missed opportunities. You need to find the right balance between speed and accuracy.
  • -
• Ignoring the position of the checkers on the board. You need to pay attention to the whole board, not just your own checkers, and consider how your moves affect your opponent's options and vice versa.
  • -
• Ignoring the odds of the dice. You need to know the probability of rolling certain numbers and combinations and how they affect your moves; use math and logic, not intuition or superstition (see the sketch right after this list).
  • -
• Ignoring the value of the game. You need to know how much each game is worth, depending on the score, the cube, and the stakes, and adjust your strategy and tactics accordingly.
  • -
- -
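To make the dice-odds point concrete, here is a minimal Python sketch (an illustration only, not code from any backgammon app) that enumerates all 36 rolls of two dice and counts how often a given number appears on at least one die. This is the simplest "direct shot" table; it deliberately ignores combination shots made up of both dice.

from itertools import product

# Enumerate all 36 equally likely rolls of two dice and count, for each number
# 1-6, how many rolls show that number on at least one die.
rolls = list(product(range(1, 7), repeat=2))

for k in range(1, 7):
    direct = sum(1 for a, b in rolls if a == k or b == k)
    print(f"distance {k}: {direct}/36 rolls hit directly")  # 11/36 for every k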

The best way to improve your backgammon skills is to practice regularly and learn from your experience. Playing backgammon on your phone is a great way to practice, because you can play anytime and anywhere against different opponents and difficulty levels. Here are some ways to practice and improve your backgammon skills on your phone:

-
    -
• Play against the computer or against other players online. Try different modes, settings, and challenges, and learn from your wins and losses.
  • -
• Use the app's hint and statistics features. See which moves the app suggests and why, analyse your performance, and identify your strengths and weaknesses.
  • -
• Read books, articles, blogs, forums, or videos about backgammon. Learn from experts and other players who share their tips, tricks, strategies, tactics, analysis, and stories.
  • -
• Join a backgammon club or community, online or offline. Meet other players who share your passion for backgammon and exchange ideas, opinions, feedback, advice, and support.
  • -
-

Conclusion

- -

Here are some frequently asked questions about backgammon and playing it on your phone:

-
    -
1. What is the best free backgammon app for Android?
  2. -

There is no single answer to this question, because different apps suit different preferences and tastes. However, some of the most popular and highly rated free backgammon apps for Android are Backgammon by AI Factory Limited, Backgammon Plus by Zynga, and Backgammon by mvsvnx-dev. You can try any of these apps or explore other options on the Google Play Store.

    -
3. How can I play backgammon online with other players?
  4. -

Most free backgammon apps offer an online multiplayer mode where you can play against other players from around the world. To play online you need an internet connection and a valid account in the app. You can then choose to join a random game or create your own game with specific settings, and you can invite your friends to play with you online.

    -
5. How can I improve my backgammon skills?
  6. -

The best way to improve your backgammon skills is to practice regularly and learn from your experience. You can also use the app's hint and statistics features to see which moves it suggests and why, read books, articles, blogs, forums, or videos about backgammon to learn from experts and other players, and join a backgammon club or community, online or offline, to meet other players who share your passion for the game.

    -
7. What are some common backgammon terms and abbreviations?
  8. -

Here are some common backgammon terms and abbreviations you may come across while playing or reading about backgammon (a small scoring sketch follows this list):

    -
      -
• Pip: A point on the board, or a unit of distance between two points.
    • -
• Blot: A single checker on a point, which can be hit by the opponent.
    • - -
• Bar: The strip in the centre of the board where hit checkers are placed.
    • -
• Bear off: To remove a checker from the board once all of your checkers have reached your home board.
    • -
• Gammon: A win achieved by bearing off all of your checkers before your opponent has borne off any.
    • -
• Backgammon: A win achieved by bearing off all of your checkers while your opponent still has one or more checkers on the bar or in your home board.
    • -
• Cube: The doubling cube, used to increase the value of the game.
    • -
• Double: To offer or accept a doubling of the game's value using the cube.
    • -
• BG: Abbreviation for backgammon.
    • -
• DMP: Abbreviation for double match point, the last game of a match in which both players need one point to win.
    • -
• GG: Abbreviation for good game, a polite way to end a game or a match.
    • -
    -
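As a worked example of how the cube and the type of win combine, here is a small Python sketch of standard game scoring (assumed standard rules, not tied to any specific app): a plain win scores the cube value, a gammon doubles it, and a backgammon triples it.

# Standard scoring of a single game (assumed standard rules, not app-specific):
# the cube value is multiplied by 1 for a plain win, 2 for a gammon, 3 for a backgammon.
def game_points(cube_value, win_type="single"):
    multiplier = {"single": 1, "gammon": 2, "backgammon": 3}[win_type]
    return cube_value * multiplier

print(game_points(1, "single"))      # 1 point
print(game_points(2, "gammon"))      # a doubled gammon is worth 4 points
print(game_points(4, "backgammon"))  # a redoubled backgammon is worth 12 points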
9. Where can I find more information about backgammon?
  10. -

If you want to learn more about backgammon, there are many resources available online and offline. Some of the best backgammon websites are:

    -
      -
• [Backgammon Galore]: A comprehensive website that covers everything about backgammon, from rules, strategy, and tactics to history and culture. It also has a forum, a glossary, a quiz, and a collection of links.
    • -
• [Backgammon.org]: A website that offers online backgammon games, tournaments, and lessons. It also has a blog, a magazine, a podcast, and a shop.
    • -
• [GammonVillage]: A website that provides news, articles, columns, videos, and books about backgammon. It also has a shop, a forum, and a club directory.
    • -
    -

Some of the best backgammon books are:

    -
      - -
• Backgammon by Paul Magriel: A classic book covering the theory and practice of backgammon, from opening moves and positional play to doubling and endgames. It also includes diagrams, examples, and exercises.
    • -
• Backgammon Boot Camp by Walter Trice: A comprehensive book covering all aspects of backgammon, from fundamentals and concepts to analysis and evaluation. It also includes problems, solutions, quizzes, and tests.
    • -
    -

These are just a few of the many resources available to backgammon enthusiasts. You can also find more information on social media such as Facebook, Twitter, YouTube, or Instagram.

    64aa2da5cf
    -
    -
    \ No newline at end of file diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/boto3/dynamodb/transform.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/boto3/dynamodb/transform.py deleted file mode 100644 index 3944f3151fa1a87d5454523f37459b0511f32ced..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/boto3/dynamodb/transform.py +++ /dev/null @@ -1,343 +0,0 @@ -# Copyright 2015 Amazon.com, Inc. or its affiliates. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"). You -# may not use this file except in compliance with the License. A copy of -# the License is located at -# -# https://aws.amazon.com/apache2.0/ -# -# or in the "license" file accompanying this file. This file is -# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF -# ANY KIND, either express or implied. See the License for the specific -# language governing permissions and limitations under the License. -import copy - -from boto3.compat import collections_abc -from boto3.docs.utils import DocumentModifiedShape -from boto3.dynamodb.conditions import ConditionBase, ConditionExpressionBuilder -from boto3.dynamodb.types import TypeDeserializer, TypeSerializer - - -def register_high_level_interface(base_classes, **kwargs): - base_classes.insert(0, DynamoDBHighLevelResource) - - -class _ForgetfulDict(dict): - """A dictionary that discards any items set on it. For use as `memo` in - `copy.deepcopy()` when every instance of a repeated object in the deepcopied - data structure should result in a separate copy. - """ - - def __setitem__(self, key, value): - pass - - -def copy_dynamodb_params(params, **kwargs): - return copy.deepcopy(params, memo=_ForgetfulDict()) - - -class DynamoDBHighLevelResource: - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - - # Apply handler that creates a copy of the user provided dynamodb - # item such that it can be modified. - self.meta.client.meta.events.register( - 'provide-client-params.dynamodb', - copy_dynamodb_params, - unique_id='dynamodb-create-params-copy', - ) - - self._injector = TransformationInjector() - # Apply the handler that generates condition expressions including - # placeholders. - self.meta.client.meta.events.register( - 'before-parameter-build.dynamodb', - self._injector.inject_condition_expressions, - unique_id='dynamodb-condition-expression', - ) - - # Apply the handler that serializes the request from python - # types to dynamodb types. - self.meta.client.meta.events.register( - 'before-parameter-build.dynamodb', - self._injector.inject_attribute_value_input, - unique_id='dynamodb-attr-value-input', - ) - - # Apply the handler that deserializes the response from dynamodb - # types to python types. - self.meta.client.meta.events.register( - 'after-call.dynamodb', - self._injector.inject_attribute_value_output, - unique_id='dynamodb-attr-value-output', - ) - - # Apply the documentation customizations to account for - # the transformations. - attr_value_shape_docs = DocumentModifiedShape( - 'AttributeValue', - new_type='valid DynamoDB type', - new_description=( - '- The value of the attribute. The valid value types are ' - 'listed in the ' - ':ref:`DynamoDB Reference Guide`.' 
- ), - new_example_value=( - '\'string\'|123|Binary(b\'bytes\')|True|None|set([\'string\'])' - '|set([123])|set([Binary(b\'bytes\')])|[]|{}' - ), - ) - - key_expression_shape_docs = DocumentModifiedShape( - 'KeyExpression', - new_type=( - 'condition from :py:class:`boto3.dynamodb.conditions.Key` ' - 'method' - ), - new_description=( - 'The condition(s) a key(s) must meet. Valid conditions are ' - 'listed in the ' - ':ref:`DynamoDB Reference Guide`.' - ), - new_example_value='Key(\'mykey\').eq(\'myvalue\')', - ) - - con_expression_shape_docs = DocumentModifiedShape( - 'ConditionExpression', - new_type=( - 'condition from :py:class:`boto3.dynamodb.conditions.Attr` ' - 'method' - ), - new_description=( - 'The condition(s) an attribute(s) must meet. Valid conditions ' - 'are listed in the ' - ':ref:`DynamoDB Reference Guide`.' - ), - new_example_value='Attr(\'myattribute\').eq(\'myvalue\')', - ) - - self.meta.client.meta.events.register( - 'docs.*.dynamodb.*.complete-section', - attr_value_shape_docs.replace_documentation_for_matching_shape, - unique_id='dynamodb-attr-value-docs', - ) - - self.meta.client.meta.events.register( - 'docs.*.dynamodb.*.complete-section', - key_expression_shape_docs.replace_documentation_for_matching_shape, - unique_id='dynamodb-key-expression-docs', - ) - - self.meta.client.meta.events.register( - 'docs.*.dynamodb.*.complete-section', - con_expression_shape_docs.replace_documentation_for_matching_shape, - unique_id='dynamodb-cond-expression-docs', - ) - - -class TransformationInjector: - """Injects the transformations into the user provided parameters.""" - - def __init__( - self, - transformer=None, - condition_builder=None, - serializer=None, - deserializer=None, - ): - self._transformer = transformer - if transformer is None: - self._transformer = ParameterTransformer() - - self._condition_builder = condition_builder - if condition_builder is None: - self._condition_builder = ConditionExpressionBuilder() - - self._serializer = serializer - if serializer is None: - self._serializer = TypeSerializer() - - self._deserializer = deserializer - if deserializer is None: - self._deserializer = TypeDeserializer() - - def inject_condition_expressions(self, params, model, **kwargs): - """Injects the condition expression transformation into the parameters - - This injection includes transformations for ConditionExpression shapes - and KeyExpression shapes. It also handles any placeholder names and - values that are generated when transforming the condition expressions. - """ - self._condition_builder.reset() - generated_names = {} - generated_values = {} - - # Create and apply the Condition Expression transformation. - transformation = ConditionExpressionTransformation( - self._condition_builder, - placeholder_names=generated_names, - placeholder_values=generated_values, - is_key_condition=False, - ) - self._transformer.transform( - params, model.input_shape, transformation, 'ConditionExpression' - ) - - # Create and apply the Key Condition Expression transformation. 
- transformation = ConditionExpressionTransformation( - self._condition_builder, - placeholder_names=generated_names, - placeholder_values=generated_values, - is_key_condition=True, - ) - self._transformer.transform( - params, model.input_shape, transformation, 'KeyExpression' - ) - - expr_attr_names_input = 'ExpressionAttributeNames' - expr_attr_values_input = 'ExpressionAttributeValues' - - # Now that all of the condition expression transformation are done, - # update the placeholder dictionaries in the request. - if expr_attr_names_input in params: - params[expr_attr_names_input].update(generated_names) - else: - if generated_names: - params[expr_attr_names_input] = generated_names - - if expr_attr_values_input in params: - params[expr_attr_values_input].update(generated_values) - else: - if generated_values: - params[expr_attr_values_input] = generated_values - - def inject_attribute_value_input(self, params, model, **kwargs): - """Injects DynamoDB serialization into parameter input""" - self._transformer.transform( - params, - model.input_shape, - self._serializer.serialize, - 'AttributeValue', - ) - - def inject_attribute_value_output(self, parsed, model, **kwargs): - """Injects DynamoDB deserialization into responses""" - if model.output_shape is not None: - self._transformer.transform( - parsed, - model.output_shape, - self._deserializer.deserialize, - 'AttributeValue', - ) - - -class ConditionExpressionTransformation: - """Provides a transformation for condition expressions - - The ``ParameterTransformer`` class can call this class directly - to transform the condition expressions in the parameters provided. - """ - - def __init__( - self, - condition_builder, - placeholder_names, - placeholder_values, - is_key_condition=False, - ): - self._condition_builder = condition_builder - self._placeholder_names = placeholder_names - self._placeholder_values = placeholder_values - self._is_key_condition = is_key_condition - - def __call__(self, value): - if isinstance(value, ConditionBase): - # Create a conditional expression string with placeholders - # for the provided condition. - built_expression = self._condition_builder.build_expression( - value, is_key_condition=self._is_key_condition - ) - - self._placeholder_names.update( - built_expression.attribute_name_placeholders - ) - self._placeholder_values.update( - built_expression.attribute_value_placeholders - ) - - return built_expression.condition_expression - # Use the user provided value if it is not a ConditonBase object. - return value - - -class ParameterTransformer: - """Transforms the input to and output from botocore based on shape""" - - def transform(self, params, model, transformation, target_shape): - """Transforms the dynamodb input to or output from botocore - - It applies a specified transformation whenever a specific shape name - is encountered while traversing the parameters in the dictionary. - - :param params: The parameters structure to transform. - :param model: The operation model. 
- :param transformation: The function to apply the parameter - :param target_shape: The name of the shape to apply the - transformation to - """ - self._transform_parameters(model, params, transformation, target_shape) - - def _transform_parameters( - self, model, params, transformation, target_shape - ): - type_name = model.type_name - if type_name in ('structure', 'map', 'list'): - getattr(self, f'_transform_{type_name}')( - model, params, transformation, target_shape - ) - - def _transform_structure( - self, model, params, transformation, target_shape - ): - if not isinstance(params, collections_abc.Mapping): - return - for param in params: - if param in model.members: - member_model = model.members[param] - member_shape = member_model.name - if member_shape == target_shape: - params[param] = transformation(params[param]) - else: - self._transform_parameters( - member_model, - params[param], - transformation, - target_shape, - ) - - def _transform_map(self, model, params, transformation, target_shape): - if not isinstance(params, collections_abc.Mapping): - return - value_model = model.value - value_shape = value_model.name - for key, value in params.items(): - if value_shape == target_shape: - params[key] = transformation(value) - else: - self._transform_parameters( - value_model, params[key], transformation, target_shape - ) - - def _transform_list(self, model, params, transformation, target_shape): - if not isinstance(params, collections_abc.MutableSequence): - return - member_model = model.member - member_shape = member_model.name - for i, item in enumerate(params): - if member_shape == target_shape: - params[i] = transformation(item) - else: - self._transform_parameters( - member_model, params[i], transformation, target_shape - ) diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/chardet/langrussianmodel.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/chardet/langrussianmodel.py deleted file mode 100644 index 39a5388948ef12b69b65fbfa89a84c6ef4a4bfd6..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/chardet/langrussianmodel.py +++ /dev/null @@ -1,5725 +0,0 @@ -from pip._vendor.chardet.sbcharsetprober import SingleByteCharSetModel - -# 3: Positive -# 2: Likely -# 1: Unlikely -# 0: Negative - -RUSSIAN_LANG_MODEL = { - 37: { # 'А' - 37: 0, # 'А' - 44: 1, # 'Б' - 33: 1, # 'В' - 46: 1, # 'Г' - 41: 1, # 'Д' - 48: 1, # 'Е' - 56: 1, # 'Ж' - 51: 1, # 'З' - 42: 1, # 'И' - 60: 1, # 'Й' - 36: 1, # 'К' - 49: 1, # 'Л' - 38: 1, # 'М' - 31: 2, # 'Н' - 34: 1, # 'О' - 35: 1, # 'П' - 45: 1, # 'Р' - 32: 1, # 'С' - 40: 1, # 'Т' - 52: 1, # 'У' - 53: 1, # 'Ф' - 55: 1, # 'Х' - 58: 1, # 'Ц' - 50: 1, # 'Ч' - 57: 1, # 'Ш' - 63: 1, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 1, # 'Ю' - 43: 1, # 'Я' - 3: 1, # 'а' - 21: 2, # 'б' - 10: 2, # 'в' - 19: 2, # 'г' - 13: 2, # 'д' - 2: 0, # 'е' - 24: 1, # 'ж' - 20: 1, # 'з' - 4: 0, # 'и' - 23: 1, # 'й' - 11: 2, # 'к' - 8: 3, # 'л' - 12: 2, # 'м' - 5: 2, # 'н' - 1: 0, # 'о' - 15: 2, # 'п' - 9: 2, # 'р' - 7: 2, # 'с' - 6: 2, # 'т' - 14: 2, # 'у' - 39: 2, # 'ф' - 26: 2, # 'х' - 28: 0, # 'ц' - 22: 1, # 'ч' - 25: 2, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 0, # 'ь' - 30: 1, # 'э' - 27: 0, # 'ю' - 16: 0, # 'я' - }, - 44: { # 'Б' - 37: 1, # 'А' - 44: 0, # 'Б' - 33: 1, # 'В' - 46: 1, # 'Г' - 41: 0, # 'Д' - 48: 1, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 1, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 1, # 'Л' - 38: 1, # 'М' - 31: 1, # 'Н' - 34: 1, # 'О' - 35: 0, # 'П' - 45: 
1, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 1, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 1, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 1, # 'Я' - 3: 2, # 'а' - 21: 0, # 'б' - 10: 0, # 'в' - 19: 0, # 'г' - 13: 1, # 'д' - 2: 3, # 'е' - 24: 0, # 'ж' - 20: 0, # 'з' - 4: 2, # 'и' - 23: 0, # 'й' - 11: 0, # 'к' - 8: 2, # 'л' - 12: 0, # 'м' - 5: 0, # 'н' - 1: 3, # 'о' - 15: 0, # 'п' - 9: 2, # 'р' - 7: 0, # 'с' - 6: 0, # 'т' - 14: 2, # 'у' - 39: 0, # 'ф' - 26: 0, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 0, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 2, # 'ы' - 17: 1, # 'ь' - 30: 2, # 'э' - 27: 1, # 'ю' - 16: 1, # 'я' - }, - 33: { # 'В' - 37: 2, # 'А' - 44: 0, # 'Б' - 33: 1, # 'В' - 46: 0, # 'Г' - 41: 1, # 'Д' - 48: 1, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 1, # 'И' - 60: 0, # 'Й' - 36: 1, # 'К' - 49: 1, # 'Л' - 38: 1, # 'М' - 31: 1, # 'Н' - 34: 1, # 'О' - 35: 1, # 'П' - 45: 1, # 'Р' - 32: 1, # 'С' - 40: 1, # 'Т' - 52: 1, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 1, # 'Ш' - 63: 0, # 'Щ' - 62: 1, # 'Ы' - 61: 1, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 1, # 'Я' - 3: 2, # 'а' - 21: 1, # 'б' - 10: 1, # 'в' - 19: 1, # 'г' - 13: 2, # 'д' - 2: 3, # 'е' - 24: 0, # 'ж' - 20: 2, # 'з' - 4: 2, # 'и' - 23: 0, # 'й' - 11: 1, # 'к' - 8: 2, # 'л' - 12: 2, # 'м' - 5: 2, # 'н' - 1: 3, # 'о' - 15: 2, # 'п' - 9: 2, # 'р' - 7: 3, # 'с' - 6: 2, # 'т' - 14: 2, # 'у' - 39: 0, # 'ф' - 26: 1, # 'х' - 28: 1, # 'ц' - 22: 2, # 'ч' - 25: 1, # 'ш' - 29: 0, # 'щ' - 54: 1, # 'ъ' - 18: 3, # 'ы' - 17: 1, # 'ь' - 30: 2, # 'э' - 27: 0, # 'ю' - 16: 1, # 'я' - }, - 46: { # 'Г' - 37: 1, # 'А' - 44: 1, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 1, # 'Д' - 48: 1, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 1, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 1, # 'Л' - 38: 1, # 'М' - 31: 1, # 'Н' - 34: 1, # 'О' - 35: 1, # 'П' - 45: 1, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 1, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 2, # 'а' - 21: 0, # 'б' - 10: 1, # 'в' - 19: 0, # 'г' - 13: 2, # 'д' - 2: 2, # 'е' - 24: 0, # 'ж' - 20: 0, # 'з' - 4: 2, # 'и' - 23: 0, # 'й' - 11: 0, # 'к' - 8: 2, # 'л' - 12: 1, # 'м' - 5: 1, # 'н' - 1: 3, # 'о' - 15: 0, # 'п' - 9: 2, # 'р' - 7: 0, # 'с' - 6: 0, # 'т' - 14: 2, # 'у' - 39: 0, # 'ф' - 26: 0, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 0, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 1, # 'ь' - 30: 1, # 'э' - 27: 1, # 'ю' - 16: 0, # 'я' - }, - 41: { # 'Д' - 37: 1, # 'А' - 44: 0, # 'Б' - 33: 1, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 2, # 'Е' - 56: 1, # 'Ж' - 51: 0, # 'З' - 42: 1, # 'И' - 60: 0, # 'Й' - 36: 1, # 'К' - 49: 1, # 'Л' - 38: 0, # 'М' - 31: 1, # 'Н' - 34: 1, # 'О' - 35: 0, # 'П' - 45: 1, # 'Р' - 32: 1, # 'С' - 40: 0, # 'Т' - 52: 1, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 1, # 'Ц' - 50: 1, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 1, # 'Ы' - 61: 1, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 1, # 'Я' - 3: 3, # 'а' - 21: 0, # 'б' - 10: 2, # 'в' - 19: 0, # 'г' - 13: 0, # 'д' - 2: 2, # 'е' - 24: 3, # 'ж' - 20: 1, # 'з' - 4: 2, # 'и' - 23: 0, # 'й' - 11: 0, # 'к' - 8: 2, # 'л' - 12: 1, # 'м' - 5: 1, # 'н' - 1: 3, # 'о' - 15: 0, # 'п' - 9: 2, # 'р' - 7: 0, # 'с' - 6: 0, # 'т' - 14: 2, # 'у' - 39: 0, # 'ф' - 26: 1, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 0, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 1, # 'ы' - 17: 1, # 'ь' - 30: 2, # 'э' - 27: 1, # 'ю' - 16: 1, # 'я' 
- }, - 48: { # 'Е' - 37: 1, # 'А' - 44: 1, # 'Б' - 33: 1, # 'В' - 46: 1, # 'Г' - 41: 1, # 'Д' - 48: 1, # 'Е' - 56: 1, # 'Ж' - 51: 1, # 'З' - 42: 1, # 'И' - 60: 1, # 'Й' - 36: 1, # 'К' - 49: 1, # 'Л' - 38: 1, # 'М' - 31: 2, # 'Н' - 34: 1, # 'О' - 35: 1, # 'П' - 45: 2, # 'Р' - 32: 2, # 'С' - 40: 1, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 1, # 'Х' - 58: 1, # 'Ц' - 50: 1, # 'Ч' - 57: 1, # 'Ш' - 63: 1, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 1, # 'Я' - 3: 0, # 'а' - 21: 0, # 'б' - 10: 2, # 'в' - 19: 2, # 'г' - 13: 2, # 'д' - 2: 2, # 'е' - 24: 1, # 'ж' - 20: 1, # 'з' - 4: 0, # 'и' - 23: 2, # 'й' - 11: 1, # 'к' - 8: 2, # 'л' - 12: 2, # 'м' - 5: 1, # 'н' - 1: 0, # 'о' - 15: 1, # 'п' - 9: 1, # 'р' - 7: 3, # 'с' - 6: 0, # 'т' - 14: 0, # 'у' - 39: 1, # 'ф' - 26: 1, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 1, # 'ш' - 29: 2, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 0, # 'ь' - 30: 0, # 'э' - 27: 1, # 'ю' - 16: 0, # 'я' - }, - 56: { # 'Ж' - 37: 1, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 1, # 'Д' - 48: 1, # 'Е' - 56: 0, # 'Ж' - 51: 1, # 'З' - 42: 1, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 1, # 'Н' - 34: 1, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 1, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 2, # 'а' - 21: 1, # 'б' - 10: 0, # 'в' - 19: 1, # 'г' - 13: 1, # 'д' - 2: 2, # 'е' - 24: 1, # 'ж' - 20: 0, # 'з' - 4: 2, # 'и' - 23: 0, # 'й' - 11: 0, # 'к' - 8: 0, # 'л' - 12: 1, # 'м' - 5: 0, # 'н' - 1: 2, # 'о' - 15: 0, # 'п' - 9: 1, # 'р' - 7: 0, # 'с' - 6: 0, # 'т' - 14: 2, # 'у' - 39: 0, # 'ф' - 26: 0, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 0, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 0, # 'ь' - 30: 0, # 'э' - 27: 2, # 'ю' - 16: 0, # 'я' - }, - 51: { # 'З' - 37: 1, # 'А' - 44: 0, # 'Б' - 33: 1, # 'В' - 46: 1, # 'Г' - 41: 1, # 'Д' - 48: 1, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 1, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 1, # 'Л' - 38: 1, # 'М' - 31: 1, # 'Н' - 34: 1, # 'О' - 35: 0, # 'П' - 45: 1, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 1, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 1, # 'Ы' - 61: 1, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 3, # 'а' - 21: 1, # 'б' - 10: 2, # 'в' - 19: 0, # 'г' - 13: 2, # 'д' - 2: 2, # 'е' - 24: 0, # 'ж' - 20: 0, # 'з' - 4: 2, # 'и' - 23: 0, # 'й' - 11: 0, # 'к' - 8: 1, # 'л' - 12: 1, # 'м' - 5: 2, # 'н' - 1: 2, # 'о' - 15: 0, # 'п' - 9: 1, # 'р' - 7: 0, # 'с' - 6: 0, # 'т' - 14: 1, # 'у' - 39: 0, # 'ф' - 26: 0, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 0, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 1, # 'ы' - 17: 0, # 'ь' - 30: 0, # 'э' - 27: 0, # 'ю' - 16: 1, # 'я' - }, - 42: { # 'И' - 37: 1, # 'А' - 44: 1, # 'Б' - 33: 1, # 'В' - 46: 1, # 'Г' - 41: 1, # 'Д' - 48: 2, # 'Е' - 56: 1, # 'Ж' - 51: 1, # 'З' - 42: 1, # 'И' - 60: 1, # 'Й' - 36: 1, # 'К' - 49: 1, # 'Л' - 38: 1, # 'М' - 31: 1, # 'Н' - 34: 1, # 'О' - 35: 1, # 'П' - 45: 1, # 'Р' - 32: 2, # 'С' - 40: 1, # 'Т' - 52: 0, # 'У' - 53: 1, # 'Ф' - 55: 1, # 'Х' - 58: 1, # 'Ц' - 50: 1, # 'Ч' - 57: 0, # 'Ш' - 63: 1, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 1, # 'Ю' - 43: 1, # 'Я' - 3: 1, # 'а' - 21: 2, # 'б' - 10: 2, # 'в' - 19: 2, # 'г' - 13: 2, # 'д' - 2: 2, # 'е' - 24: 0, # 'ж' - 20: 2, # 'з' - 4: 1, # 'и' - 23: 0, # 'й' - 11: 1, # 'к' - 8: 2, # 'л' - 12: 2, # 'м' - 5: 2, # 'н' 
- 1: 1, # 'о' - 15: 1, # 'п' - 9: 2, # 'р' - 7: 2, # 'с' - 6: 2, # 'т' - 14: 1, # 'у' - 39: 1, # 'ф' - 26: 2, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 1, # 'ш' - 29: 1, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 0, # 'ь' - 30: 0, # 'э' - 27: 1, # 'ю' - 16: 0, # 'я' - }, - 60: { # 'Й' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 1, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 1, # 'К' - 49: 1, # 'Л' - 38: 0, # 'М' - 31: 1, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 1, # 'С' - 40: 1, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 1, # 'Х' - 58: 1, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 0, # 'а' - 21: 0, # 'б' - 10: 0, # 'в' - 19: 0, # 'г' - 13: 0, # 'д' - 2: 1, # 'е' - 24: 0, # 'ж' - 20: 0, # 'з' - 4: 0, # 'и' - 23: 0, # 'й' - 11: 0, # 'к' - 8: 0, # 'л' - 12: 0, # 'м' - 5: 0, # 'н' - 1: 2, # 'о' - 15: 0, # 'п' - 9: 0, # 'р' - 7: 0, # 'с' - 6: 0, # 'т' - 14: 0, # 'у' - 39: 0, # 'ф' - 26: 0, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 0, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 0, # 'ь' - 30: 0, # 'э' - 27: 0, # 'ю' - 16: 0, # 'я' - }, - 36: { # 'К' - 37: 2, # 'А' - 44: 0, # 'Б' - 33: 1, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 1, # 'Е' - 56: 0, # 'Ж' - 51: 1, # 'З' - 42: 1, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 1, # 'Л' - 38: 0, # 'М' - 31: 1, # 'Н' - 34: 2, # 'О' - 35: 1, # 'П' - 45: 1, # 'Р' - 32: 1, # 'С' - 40: 1, # 'Т' - 52: 1, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 1, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 3, # 'а' - 21: 0, # 'б' - 10: 1, # 'в' - 19: 0, # 'г' - 13: 0, # 'д' - 2: 2, # 'е' - 24: 0, # 'ж' - 20: 0, # 'з' - 4: 2, # 'и' - 23: 0, # 'й' - 11: 0, # 'к' - 8: 2, # 'л' - 12: 0, # 'м' - 5: 1, # 'н' - 1: 3, # 'о' - 15: 0, # 'п' - 9: 2, # 'р' - 7: 2, # 'с' - 6: 2, # 'т' - 14: 2, # 'у' - 39: 0, # 'ф' - 26: 1, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 0, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 1, # 'ы' - 17: 1, # 'ь' - 30: 2, # 'э' - 27: 1, # 'ю' - 16: 0, # 'я' - }, - 49: { # 'Л' - 37: 2, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 1, # 'Г' - 41: 0, # 'Д' - 48: 1, # 'Е' - 56: 1, # 'Ж' - 51: 0, # 'З' - 42: 1, # 'И' - 60: 0, # 'Й' - 36: 1, # 'К' - 49: 1, # 'Л' - 38: 1, # 'М' - 31: 0, # 'Н' - 34: 1, # 'О' - 35: 1, # 'П' - 45: 0, # 'Р' - 32: 1, # 'С' - 40: 1, # 'Т' - 52: 1, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 1, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 1, # 'Ы' - 61: 1, # 'Ь' - 47: 0, # 'Э' - 59: 1, # 'Ю' - 43: 1, # 'Я' - 3: 2, # 'а' - 21: 0, # 'б' - 10: 0, # 'в' - 19: 1, # 'г' - 13: 0, # 'д' - 2: 2, # 'е' - 24: 1, # 'ж' - 20: 0, # 'з' - 4: 2, # 'и' - 23: 0, # 'й' - 11: 0, # 'к' - 8: 1, # 'л' - 12: 0, # 'м' - 5: 1, # 'н' - 1: 2, # 'о' - 15: 0, # 'п' - 9: 0, # 'р' - 7: 0, # 'с' - 6: 0, # 'т' - 14: 2, # 'у' - 39: 0, # 'ф' - 26: 1, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 0, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 1, # 'ы' - 17: 1, # 'ь' - 30: 2, # 'э' - 27: 2, # 'ю' - 16: 1, # 'я' - }, - 38: { # 'М' - 37: 1, # 'А' - 44: 1, # 'Б' - 33: 1, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 1, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 1, # 'И' - 60: 0, # 'Й' - 36: 1, # 'К' - 49: 1, # 'Л' - 38: 1, # 'М' - 31: 1, # 'Н' - 34: 1, # 'О' - 35: 1, # 'П' - 45: 1, # 'Р' - 32: 1, # 'С' - 40: 1, # 'Т' - 52: 1, # 'У' - 53: 1, # 'Ф' - 55: 1, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 1, # 'Ы' 
- 61: 0, # 'Ь' - 47: 1, # 'Э' - 59: 0, # 'Ю' - 43: 1, # 'Я' - 3: 3, # 'а' - 21: 0, # 'б' - 10: 0, # 'в' - 19: 1, # 'г' - 13: 0, # 'д' - 2: 2, # 'е' - 24: 0, # 'ж' - 20: 0, # 'з' - 4: 3, # 'и' - 23: 0, # 'й' - 11: 0, # 'к' - 8: 1, # 'л' - 12: 1, # 'м' - 5: 2, # 'н' - 1: 3, # 'о' - 15: 0, # 'п' - 9: 1, # 'р' - 7: 1, # 'с' - 6: 0, # 'т' - 14: 2, # 'у' - 39: 0, # 'ф' - 26: 0, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 0, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 3, # 'ы' - 17: 1, # 'ь' - 30: 2, # 'э' - 27: 1, # 'ю' - 16: 1, # 'я' - }, - 31: { # 'Н' - 37: 2, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 1, # 'Г' - 41: 1, # 'Д' - 48: 1, # 'Е' - 56: 0, # 'Ж' - 51: 1, # 'З' - 42: 2, # 'И' - 60: 0, # 'Й' - 36: 1, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 1, # 'Н' - 34: 1, # 'О' - 35: 0, # 'П' - 45: 1, # 'Р' - 32: 1, # 'С' - 40: 1, # 'Т' - 52: 1, # 'У' - 53: 1, # 'Ф' - 55: 1, # 'Х' - 58: 1, # 'Ц' - 50: 1, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 1, # 'Ы' - 61: 1, # 'Ь' - 47: 1, # 'Э' - 59: 0, # 'Ю' - 43: 1, # 'Я' - 3: 3, # 'а' - 21: 0, # 'б' - 10: 0, # 'в' - 19: 0, # 'г' - 13: 0, # 'д' - 2: 3, # 'е' - 24: 0, # 'ж' - 20: 0, # 'з' - 4: 3, # 'и' - 23: 0, # 'й' - 11: 0, # 'к' - 8: 0, # 'л' - 12: 0, # 'м' - 5: 0, # 'н' - 1: 3, # 'о' - 15: 0, # 'п' - 9: 1, # 'р' - 7: 0, # 'с' - 6: 0, # 'т' - 14: 3, # 'у' - 39: 0, # 'ф' - 26: 1, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 0, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 1, # 'ы' - 17: 2, # 'ь' - 30: 1, # 'э' - 27: 1, # 'ю' - 16: 1, # 'я' - }, - 34: { # 'О' - 37: 0, # 'А' - 44: 1, # 'Б' - 33: 1, # 'В' - 46: 1, # 'Г' - 41: 2, # 'Д' - 48: 1, # 'Е' - 56: 1, # 'Ж' - 51: 1, # 'З' - 42: 1, # 'И' - 60: 1, # 'Й' - 36: 1, # 'К' - 49: 2, # 'Л' - 38: 1, # 'М' - 31: 2, # 'Н' - 34: 1, # 'О' - 35: 1, # 'П' - 45: 2, # 'Р' - 32: 1, # 'С' - 40: 1, # 'Т' - 52: 1, # 'У' - 53: 1, # 'Ф' - 55: 1, # 'Х' - 58: 0, # 'Ц' - 50: 1, # 'Ч' - 57: 1, # 'Ш' - 63: 1, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 1, # 'Я' - 3: 1, # 'а' - 21: 2, # 'б' - 10: 1, # 'в' - 19: 2, # 'г' - 13: 2, # 'д' - 2: 0, # 'е' - 24: 1, # 'ж' - 20: 1, # 'з' - 4: 0, # 'и' - 23: 1, # 'й' - 11: 2, # 'к' - 8: 2, # 'л' - 12: 1, # 'м' - 5: 3, # 'н' - 1: 0, # 'о' - 15: 2, # 'п' - 9: 2, # 'р' - 7: 2, # 'с' - 6: 2, # 'т' - 14: 1, # 'у' - 39: 1, # 'ф' - 26: 2, # 'х' - 28: 1, # 'ц' - 22: 2, # 'ч' - 25: 2, # 'ш' - 29: 1, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 0, # 'ь' - 30: 0, # 'э' - 27: 0, # 'ю' - 16: 0, # 'я' - }, - 35: { # 'П' - 37: 1, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 1, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 1, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 1, # 'Л' - 38: 0, # 'М' - 31: 1, # 'Н' - 34: 1, # 'О' - 35: 1, # 'П' - 45: 2, # 'Р' - 32: 1, # 'С' - 40: 1, # 'Т' - 52: 1, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 1, # 'Ы' - 61: 1, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 1, # 'Я' - 3: 2, # 'а' - 21: 0, # 'б' - 10: 0, # 'в' - 19: 0, # 'г' - 13: 0, # 'д' - 2: 2, # 'е' - 24: 0, # 'ж' - 20: 0, # 'з' - 4: 2, # 'и' - 23: 0, # 'й' - 11: 0, # 'к' - 8: 2, # 'л' - 12: 0, # 'м' - 5: 1, # 'н' - 1: 3, # 'о' - 15: 0, # 'п' - 9: 3, # 'р' - 7: 1, # 'с' - 6: 1, # 'т' - 14: 2, # 'у' - 39: 1, # 'ф' - 26: 0, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 1, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 1, # 'ы' - 17: 2, # 'ь' - 30: 1, # 'э' - 27: 0, # 'ю' - 16: 2, # 'я' - }, - 45: { # 'Р' - 37: 2, # 'А' - 44: 1, # 'Б' - 33: 1, # 'В' - 46: 1, # 'Г' - 41: 1, # 'Д' - 48: 2, # 'Е' - 56: 1, # 'Ж' - 51: 0, # 'З' - 42: 2, # 'И' - 
60: 0, # 'Й' - 36: 1, # 'К' - 49: 1, # 'Л' - 38: 1, # 'М' - 31: 1, # 'Н' - 34: 2, # 'О' - 35: 0, # 'П' - 45: 1, # 'Р' - 32: 1, # 'С' - 40: 1, # 'Т' - 52: 1, # 'У' - 53: 0, # 'Ф' - 55: 1, # 'Х' - 58: 1, # 'Ц' - 50: 1, # 'Ч' - 57: 1, # 'Ш' - 63: 0, # 'Щ' - 62: 1, # 'Ы' - 61: 1, # 'Ь' - 47: 1, # 'Э' - 59: 1, # 'Ю' - 43: 1, # 'Я' - 3: 3, # 'а' - 21: 0, # 'б' - 10: 1, # 'в' - 19: 0, # 'г' - 13: 0, # 'д' - 2: 2, # 'е' - 24: 1, # 'ж' - 20: 0, # 'з' - 4: 2, # 'и' - 23: 0, # 'й' - 11: 0, # 'к' - 8: 0, # 'л' - 12: 0, # 'м' - 5: 0, # 'н' - 1: 3, # 'о' - 15: 0, # 'п' - 9: 1, # 'р' - 7: 0, # 'с' - 6: 0, # 'т' - 14: 2, # 'у' - 39: 0, # 'ф' - 26: 0, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 0, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 2, # 'ы' - 17: 0, # 'ь' - 30: 1, # 'э' - 27: 1, # 'ю' - 16: 2, # 'я' - }, - 32: { # 'С' - 37: 1, # 'А' - 44: 1, # 'Б' - 33: 1, # 'В' - 46: 1, # 'Г' - 41: 1, # 'Д' - 48: 1, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 1, # 'И' - 60: 0, # 'Й' - 36: 1, # 'К' - 49: 1, # 'Л' - 38: 1, # 'М' - 31: 1, # 'Н' - 34: 1, # 'О' - 35: 1, # 'П' - 45: 1, # 'Р' - 32: 1, # 'С' - 40: 2, # 'Т' - 52: 1, # 'У' - 53: 0, # 'Ф' - 55: 1, # 'Х' - 58: 1, # 'Ц' - 50: 1, # 'Ч' - 57: 1, # 'Ш' - 63: 0, # 'Щ' - 62: 1, # 'Ы' - 61: 1, # 'Ь' - 47: 1, # 'Э' - 59: 1, # 'Ю' - 43: 1, # 'Я' - 3: 2, # 'а' - 21: 1, # 'б' - 10: 2, # 'в' - 19: 1, # 'г' - 13: 2, # 'д' - 2: 3, # 'е' - 24: 1, # 'ж' - 20: 1, # 'з' - 4: 2, # 'и' - 23: 0, # 'й' - 11: 2, # 'к' - 8: 2, # 'л' - 12: 2, # 'м' - 5: 2, # 'н' - 1: 2, # 'о' - 15: 2, # 'п' - 9: 2, # 'р' - 7: 1, # 'с' - 6: 3, # 'т' - 14: 2, # 'у' - 39: 1, # 'ф' - 26: 1, # 'х' - 28: 1, # 'ц' - 22: 1, # 'ч' - 25: 0, # 'ш' - 29: 0, # 'щ' - 54: 1, # 'ъ' - 18: 1, # 'ы' - 17: 1, # 'ь' - 30: 2, # 'э' - 27: 1, # 'ю' - 16: 1, # 'я' - }, - 40: { # 'Т' - 37: 1, # 'А' - 44: 0, # 'Б' - 33: 1, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 1, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 1, # 'И' - 60: 0, # 'Й' - 36: 1, # 'К' - 49: 1, # 'Л' - 38: 1, # 'М' - 31: 1, # 'Н' - 34: 2, # 'О' - 35: 0, # 'П' - 45: 1, # 'Р' - 32: 1, # 'С' - 40: 1, # 'Т' - 52: 1, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 1, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 1, # 'Ы' - 61: 1, # 'Ь' - 47: 1, # 'Э' - 59: 1, # 'Ю' - 43: 1, # 'Я' - 3: 3, # 'а' - 21: 1, # 'б' - 10: 2, # 'в' - 19: 0, # 'г' - 13: 0, # 'д' - 2: 3, # 'е' - 24: 0, # 'ж' - 20: 0, # 'з' - 4: 2, # 'и' - 23: 0, # 'й' - 11: 1, # 'к' - 8: 1, # 'л' - 12: 0, # 'м' - 5: 0, # 'н' - 1: 3, # 'о' - 15: 0, # 'п' - 9: 2, # 'р' - 7: 1, # 'с' - 6: 0, # 'т' - 14: 2, # 'у' - 39: 0, # 'ф' - 26: 0, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 0, # 'ш' - 29: 1, # 'щ' - 54: 0, # 'ъ' - 18: 3, # 'ы' - 17: 1, # 'ь' - 30: 2, # 'э' - 27: 1, # 'ю' - 16: 1, # 'я' - }, - 52: { # 'У' - 37: 1, # 'А' - 44: 1, # 'Б' - 33: 1, # 'В' - 46: 1, # 'Г' - 41: 1, # 'Д' - 48: 1, # 'Е' - 56: 1, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 1, # 'Й' - 36: 1, # 'К' - 49: 1, # 'Л' - 38: 1, # 'М' - 31: 1, # 'Н' - 34: 1, # 'О' - 35: 1, # 'П' - 45: 1, # 'Р' - 32: 1, # 'С' - 40: 1, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 1, # 'Х' - 58: 0, # 'Ц' - 50: 1, # 'Ч' - 57: 1, # 'Ш' - 63: 1, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 1, # 'Ю' - 43: 0, # 'Я' - 3: 1, # 'а' - 21: 2, # 'б' - 10: 2, # 'в' - 19: 1, # 'г' - 13: 2, # 'д' - 2: 1, # 'е' - 24: 2, # 'ж' - 20: 2, # 'з' - 4: 2, # 'и' - 23: 1, # 'й' - 11: 1, # 'к' - 8: 2, # 'л' - 12: 2, # 'м' - 5: 1, # 'н' - 1: 2, # 'о' - 15: 1, # 'п' - 9: 2, # 'р' - 7: 2, # 'с' - 6: 2, # 'т' - 14: 0, # 'у' - 39: 1, # 'ф' - 26: 1, # 'х' - 28: 1, # 'ц' - 22: 2, # 'ч' - 25: 1, # 
'ш' - 29: 1, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 0, # 'ь' - 30: 2, # 'э' - 27: 1, # 'ю' - 16: 0, # 'я' - }, - 53: { # 'Ф' - 37: 1, # 'А' - 44: 1, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 1, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 1, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 1, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 1, # 'О' - 35: 0, # 'П' - 45: 1, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 1, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 2, # 'а' - 21: 0, # 'б' - 10: 0, # 'в' - 19: 0, # 'г' - 13: 0, # 'д' - 2: 2, # 'е' - 24: 0, # 'ж' - 20: 0, # 'з' - 4: 2, # 'и' - 23: 0, # 'й' - 11: 0, # 'к' - 8: 2, # 'л' - 12: 0, # 'м' - 5: 0, # 'н' - 1: 2, # 'о' - 15: 0, # 'п' - 9: 2, # 'р' - 7: 0, # 'с' - 6: 1, # 'т' - 14: 2, # 'у' - 39: 0, # 'ф' - 26: 0, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 0, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 1, # 'ь' - 30: 2, # 'э' - 27: 0, # 'ю' - 16: 0, # 'я' - }, - 55: { # 'Х' - 37: 1, # 'А' - 44: 0, # 'Б' - 33: 1, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 1, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 1, # 'Л' - 38: 1, # 'М' - 31: 1, # 'Н' - 34: 1, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 2, # 'а' - 21: 0, # 'б' - 10: 2, # 'в' - 19: 0, # 'г' - 13: 0, # 'д' - 2: 2, # 'е' - 24: 0, # 'ж' - 20: 0, # 'з' - 4: 2, # 'и' - 23: 0, # 'й' - 11: 0, # 'к' - 8: 2, # 'л' - 12: 1, # 'м' - 5: 0, # 'н' - 1: 2, # 'о' - 15: 0, # 'п' - 9: 2, # 'р' - 7: 0, # 'с' - 6: 0, # 'т' - 14: 1, # 'у' - 39: 0, # 'ф' - 26: 0, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 0, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 1, # 'ь' - 30: 1, # 'э' - 27: 0, # 'ю' - 16: 0, # 'я' - }, - 58: { # 'Ц' - 37: 1, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 1, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 1, # 'И' - 60: 0, # 'Й' - 36: 1, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 1, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 1, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 1, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 1, # 'а' - 21: 0, # 'б' - 10: 1, # 'в' - 19: 0, # 'г' - 13: 0, # 'д' - 2: 2, # 'е' - 24: 0, # 'ж' - 20: 0, # 'з' - 4: 2, # 'и' - 23: 0, # 'й' - 11: 0, # 'к' - 8: 0, # 'л' - 12: 0, # 'м' - 5: 0, # 'н' - 1: 0, # 'о' - 15: 0, # 'п' - 9: 0, # 'р' - 7: 0, # 'с' - 6: 0, # 'т' - 14: 1, # 'у' - 39: 0, # 'ф' - 26: 0, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 0, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 1, # 'ы' - 17: 0, # 'ь' - 30: 0, # 'э' - 27: 1, # 'ю' - 16: 0, # 'я' - }, - 50: { # 'Ч' - 37: 1, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 1, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 1, # 'И' - 60: 0, # 'Й' - 36: 1, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 1, # 'Н' - 34: 0, # 'О' - 35: 1, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 1, # 'Т' - 52: 1, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 1, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 2, # 'а' - 21: 0, # 'б' - 10: 0, # 'в' - 19: 0, # 'г' - 13: 0, # 'д' - 2: 2, # 'е' - 24: 0, 
# 'ж' - 20: 0, # 'з' - 4: 2, # 'и' - 23: 0, # 'й' - 11: 0, # 'к' - 8: 1, # 'л' - 12: 0, # 'м' - 5: 0, # 'н' - 1: 1, # 'о' - 15: 0, # 'п' - 9: 1, # 'р' - 7: 0, # 'с' - 6: 3, # 'т' - 14: 2, # 'у' - 39: 0, # 'ф' - 26: 0, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 0, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 1, # 'ь' - 30: 0, # 'э' - 27: 0, # 'ю' - 16: 0, # 'я' - }, - 57: { # 'Ш' - 37: 1, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 1, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 1, # 'И' - 60: 0, # 'Й' - 36: 1, # 'К' - 49: 1, # 'Л' - 38: 0, # 'М' - 31: 1, # 'Н' - 34: 1, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 1, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 2, # 'а' - 21: 0, # 'б' - 10: 1, # 'в' - 19: 0, # 'г' - 13: 0, # 'д' - 2: 2, # 'е' - 24: 0, # 'ж' - 20: 0, # 'з' - 4: 1, # 'и' - 23: 0, # 'й' - 11: 1, # 'к' - 8: 2, # 'л' - 12: 1, # 'м' - 5: 1, # 'н' - 1: 2, # 'о' - 15: 2, # 'п' - 9: 1, # 'р' - 7: 0, # 'с' - 6: 2, # 'т' - 14: 2, # 'у' - 39: 0, # 'ф' - 26: 1, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 1, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 0, # 'ь' - 30: 1, # 'э' - 27: 0, # 'ю' - 16: 0, # 'я' - }, - 63: { # 'Щ' - 37: 1, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 1, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 1, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 1, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 1, # 'а' - 21: 0, # 'б' - 10: 0, # 'в' - 19: 0, # 'г' - 13: 0, # 'д' - 2: 1, # 'е' - 24: 0, # 'ж' - 20: 0, # 'з' - 4: 1, # 'и' - 23: 0, # 'й' - 11: 0, # 'к' - 8: 0, # 'л' - 12: 0, # 'м' - 5: 0, # 'н' - 1: 1, # 'о' - 15: 0, # 'п' - 9: 0, # 'р' - 7: 0, # 'с' - 6: 0, # 'т' - 14: 1, # 'у' - 39: 0, # 'ф' - 26: 0, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 0, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 0, # 'ь' - 30: 0, # 'э' - 27: 0, # 'ю' - 16: 0, # 'я' - }, - 62: { # 'Ы' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 1, # 'В' - 46: 1, # 'Г' - 41: 0, # 'Д' - 48: 1, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 1, # 'Й' - 36: 1, # 'К' - 49: 1, # 'Л' - 38: 1, # 'М' - 31: 1, # 'Н' - 34: 0, # 'О' - 35: 1, # 'П' - 45: 1, # 'Р' - 32: 1, # 'С' - 40: 1, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 1, # 'Х' - 58: 1, # 'Ц' - 50: 0, # 'Ч' - 57: 1, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 0, # 'а' - 21: 0, # 'б' - 10: 0, # 'в' - 19: 0, # 'г' - 13: 0, # 'д' - 2: 0, # 'е' - 24: 0, # 'ж' - 20: 0, # 'з' - 4: 0, # 'и' - 23: 0, # 'й' - 11: 0, # 'к' - 8: 0, # 'л' - 12: 0, # 'м' - 5: 0, # 'н' - 1: 0, # 'о' - 15: 0, # 'п' - 9: 0, # 'р' - 7: 0, # 'с' - 6: 0, # 'т' - 14: 0, # 'у' - 39: 0, # 'ф' - 26: 0, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 0, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 0, # 'ь' - 30: 0, # 'э' - 27: 0, # 'ю' - 16: 0, # 'я' - }, - 61: { # 'Ь' - 37: 0, # 'А' - 44: 1, # 'Б' - 33: 1, # 'В' - 46: 0, # 'Г' - 41: 1, # 'Д' - 48: 1, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 1, # 'И' - 60: 0, # 'Й' - 36: 1, # 'К' - 49: 0, # 'Л' - 38: 1, # 'М' - 31: 1, # 'Н' - 34: 1, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 1, # 'С' - 40: 0, # 'Т' - 52: 0, # 
'У' - 53: 1, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 1, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 1, # 'Ю' - 43: 1, # 'Я' - 3: 0, # 'а' - 21: 0, # 'б' - 10: 0, # 'в' - 19: 0, # 'г' - 13: 0, # 'д' - 2: 0, # 'е' - 24: 0, # 'ж' - 20: 0, # 'з' - 4: 0, # 'и' - 23: 0, # 'й' - 11: 0, # 'к' - 8: 0, # 'л' - 12: 0, # 'м' - 5: 0, # 'н' - 1: 0, # 'о' - 15: 0, # 'п' - 9: 0, # 'р' - 7: 0, # 'с' - 6: 0, # 'т' - 14: 0, # 'у' - 39: 0, # 'ф' - 26: 0, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 0, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 0, # 'ь' - 30: 0, # 'э' - 27: 0, # 'ю' - 16: 0, # 'я' - }, - 47: { # 'Э' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 1, # 'В' - 46: 0, # 'Г' - 41: 1, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 1, # 'Й' - 36: 1, # 'К' - 49: 1, # 'Л' - 38: 1, # 'М' - 31: 1, # 'Н' - 34: 0, # 'О' - 35: 1, # 'П' - 45: 1, # 'Р' - 32: 1, # 'С' - 40: 1, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 1, # 'а' - 21: 1, # 'б' - 10: 2, # 'в' - 19: 1, # 'г' - 13: 2, # 'д' - 2: 0, # 'е' - 24: 1, # 'ж' - 20: 0, # 'з' - 4: 0, # 'и' - 23: 2, # 'й' - 11: 2, # 'к' - 8: 2, # 'л' - 12: 2, # 'м' - 5: 2, # 'н' - 1: 0, # 'о' - 15: 1, # 'п' - 9: 2, # 'р' - 7: 1, # 'с' - 6: 3, # 'т' - 14: 1, # 'у' - 39: 1, # 'ф' - 26: 1, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 1, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 0, # 'ь' - 30: 0, # 'э' - 27: 0, # 'ю' - 16: 0, # 'я' - }, - 59: { # 'Ю' - 37: 1, # 'А' - 44: 1, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 1, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 1, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 1, # 'Р' - 32: 0, # 'С' - 40: 1, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 1, # 'Ч' - 57: 0, # 'Ш' - 63: 1, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 0, # 'а' - 21: 1, # 'б' - 10: 0, # 'в' - 19: 1, # 'г' - 13: 1, # 'д' - 2: 0, # 'е' - 24: 1, # 'ж' - 20: 0, # 'з' - 4: 0, # 'и' - 23: 0, # 'й' - 11: 1, # 'к' - 8: 2, # 'л' - 12: 1, # 'м' - 5: 2, # 'н' - 1: 0, # 'о' - 15: 1, # 'п' - 9: 1, # 'р' - 7: 1, # 'с' - 6: 0, # 'т' - 14: 0, # 'у' - 39: 0, # 'ф' - 26: 1, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 0, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 0, # 'ь' - 30: 0, # 'э' - 27: 0, # 'ю' - 16: 0, # 'я' - }, - 43: { # 'Я' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 1, # 'В' - 46: 1, # 'Г' - 41: 0, # 'Д' - 48: 1, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 1, # 'И' - 60: 0, # 'Й' - 36: 1, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 1, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 1, # 'С' - 40: 1, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 1, # 'Х' - 58: 0, # 'Ц' - 50: 1, # 'Ч' - 57: 0, # 'Ш' - 63: 1, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 1, # 'Ю' - 43: 1, # 'Я' - 3: 0, # 'а' - 21: 1, # 'б' - 10: 1, # 'в' - 19: 1, # 'г' - 13: 1, # 'д' - 2: 0, # 'е' - 24: 0, # 'ж' - 20: 1, # 'з' - 4: 0, # 'и' - 23: 1, # 'й' - 11: 1, # 'к' - 8: 1, # 'л' - 12: 1, # 'м' - 5: 2, # 'н' - 1: 0, # 'о' - 15: 1, # 'п' - 9: 1, # 'р' - 7: 1, # 'с' - 6: 0, # 'т' - 14: 0, # 'у' - 39: 0, # 'ф' - 26: 1, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 1, # 'ш' - 29: 1, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 0, # 'ь' - 30: 0, # 'э' - 27: 0, # 'ю' - 16: 0, # 'я' - }, - 3: { # 'а' - 37: 0, # 'А' - 44: 0, # 'Б' - 
33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 1, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 1, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 2, # 'а' - 21: 3, # 'б' - 10: 3, # 'в' - 19: 3, # 'г' - 13: 3, # 'д' - 2: 3, # 'е' - 24: 3, # 'ж' - 20: 3, # 'з' - 4: 3, # 'и' - 23: 3, # 'й' - 11: 3, # 'к' - 8: 3, # 'л' - 12: 3, # 'м' - 5: 3, # 'н' - 1: 2, # 'о' - 15: 3, # 'п' - 9: 3, # 'р' - 7: 3, # 'с' - 6: 3, # 'т' - 14: 3, # 'у' - 39: 2, # 'ф' - 26: 3, # 'х' - 28: 3, # 'ц' - 22: 3, # 'ч' - 25: 3, # 'ш' - 29: 3, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 0, # 'ь' - 30: 2, # 'э' - 27: 3, # 'ю' - 16: 3, # 'я' - }, - 21: { # 'б' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 1, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 3, # 'а' - 21: 2, # 'б' - 10: 2, # 'в' - 19: 1, # 'г' - 13: 2, # 'д' - 2: 3, # 'е' - 24: 2, # 'ж' - 20: 1, # 'з' - 4: 3, # 'и' - 23: 0, # 'й' - 11: 2, # 'к' - 8: 3, # 'л' - 12: 2, # 'м' - 5: 3, # 'н' - 1: 3, # 'о' - 15: 1, # 'п' - 9: 3, # 'р' - 7: 3, # 'с' - 6: 2, # 'т' - 14: 3, # 'у' - 39: 0, # 'ф' - 26: 2, # 'х' - 28: 1, # 'ц' - 22: 1, # 'ч' - 25: 2, # 'ш' - 29: 3, # 'щ' - 54: 2, # 'ъ' - 18: 3, # 'ы' - 17: 2, # 'ь' - 30: 1, # 'э' - 27: 2, # 'ю' - 16: 3, # 'я' - }, - 10: { # 'в' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 3, # 'а' - 21: 2, # 'б' - 10: 2, # 'в' - 19: 2, # 'г' - 13: 3, # 'д' - 2: 3, # 'е' - 24: 1, # 'ж' - 20: 3, # 'з' - 4: 3, # 'и' - 23: 0, # 'й' - 11: 3, # 'к' - 8: 3, # 'л' - 12: 2, # 'м' - 5: 3, # 'н' - 1: 3, # 'о' - 15: 3, # 'п' - 9: 3, # 'р' - 7: 3, # 'с' - 6: 3, # 'т' - 14: 3, # 'у' - 39: 1, # 'ф' - 26: 2, # 'х' - 28: 2, # 'ц' - 22: 2, # 'ч' - 25: 3, # 'ш' - 29: 2, # 'щ' - 54: 2, # 'ъ' - 18: 3, # 'ы' - 17: 3, # 'ь' - 30: 1, # 'э' - 27: 1, # 'ю' - 16: 3, # 'я' - }, - 19: { # 'г' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 3, # 'а' - 21: 1, # 'б' - 10: 2, # 'в' - 19: 1, # 'г' - 13: 3, # 'д' - 2: 3, # 'е' - 24: 0, # 'ж' - 20: 1, # 'з' - 4: 3, # 'и' - 23: 0, # 'й' - 11: 2, # 'к' - 8: 3, # 'л' - 12: 2, # 'м' - 5: 3, # 'н' - 1: 3, # 'о' - 15: 0, # 'п' - 9: 3, # 'р' - 7: 2, 
# 'с' - 6: 2, # 'т' - 14: 3, # 'у' - 39: 1, # 'ф' - 26: 1, # 'х' - 28: 1, # 'ц' - 22: 2, # 'ч' - 25: 1, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 1, # 'ы' - 17: 1, # 'ь' - 30: 1, # 'э' - 27: 1, # 'ю' - 16: 0, # 'я' - }, - 13: { # 'д' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 3, # 'а' - 21: 2, # 'б' - 10: 3, # 'в' - 19: 2, # 'г' - 13: 2, # 'д' - 2: 3, # 'е' - 24: 2, # 'ж' - 20: 2, # 'з' - 4: 3, # 'и' - 23: 0, # 'й' - 11: 3, # 'к' - 8: 3, # 'л' - 12: 2, # 'м' - 5: 3, # 'н' - 1: 3, # 'о' - 15: 2, # 'п' - 9: 3, # 'р' - 7: 3, # 'с' - 6: 3, # 'т' - 14: 3, # 'у' - 39: 1, # 'ф' - 26: 2, # 'х' - 28: 3, # 'ц' - 22: 2, # 'ч' - 25: 2, # 'ш' - 29: 1, # 'щ' - 54: 2, # 'ъ' - 18: 3, # 'ы' - 17: 3, # 'ь' - 30: 1, # 'э' - 27: 2, # 'ю' - 16: 3, # 'я' - }, - 2: { # 'е' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 2, # 'а' - 21: 3, # 'б' - 10: 3, # 'в' - 19: 3, # 'г' - 13: 3, # 'д' - 2: 3, # 'е' - 24: 3, # 'ж' - 20: 3, # 'з' - 4: 2, # 'и' - 23: 3, # 'й' - 11: 3, # 'к' - 8: 3, # 'л' - 12: 3, # 'м' - 5: 3, # 'н' - 1: 3, # 'о' - 15: 3, # 'п' - 9: 3, # 'р' - 7: 3, # 'с' - 6: 3, # 'т' - 14: 2, # 'у' - 39: 2, # 'ф' - 26: 3, # 'х' - 28: 3, # 'ц' - 22: 3, # 'ч' - 25: 3, # 'ш' - 29: 3, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 0, # 'ь' - 30: 1, # 'э' - 27: 2, # 'ю' - 16: 3, # 'я' - }, - 24: { # 'ж' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 3, # 'а' - 21: 2, # 'б' - 10: 1, # 'в' - 19: 2, # 'г' - 13: 3, # 'д' - 2: 3, # 'е' - 24: 2, # 'ж' - 20: 1, # 'з' - 4: 3, # 'и' - 23: 0, # 'й' - 11: 2, # 'к' - 8: 2, # 'л' - 12: 1, # 'м' - 5: 3, # 'н' - 1: 2, # 'о' - 15: 1, # 'п' - 9: 2, # 'р' - 7: 2, # 'с' - 6: 1, # 'т' - 14: 3, # 'у' - 39: 1, # 'ф' - 26: 0, # 'х' - 28: 1, # 'ц' - 22: 2, # 'ч' - 25: 0, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 1, # 'ы' - 17: 2, # 'ь' - 30: 1, # 'э' - 27: 1, # 'ю' - 16: 1, # 'я' - }, - 20: { # 'з' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 
0, # 'Я' - 3: 3, # 'а' - 21: 3, # 'б' - 10: 3, # 'в' - 19: 3, # 'г' - 13: 3, # 'д' - 2: 3, # 'е' - 24: 2, # 'ж' - 20: 2, # 'з' - 4: 3, # 'и' - 23: 0, # 'й' - 11: 3, # 'к' - 8: 3, # 'л' - 12: 3, # 'м' - 5: 3, # 'н' - 1: 3, # 'о' - 15: 0, # 'п' - 9: 3, # 'р' - 7: 2, # 'с' - 6: 2, # 'т' - 14: 3, # 'у' - 39: 0, # 'ф' - 26: 0, # 'х' - 28: 1, # 'ц' - 22: 2, # 'ч' - 25: 1, # 'ш' - 29: 0, # 'щ' - 54: 2, # 'ъ' - 18: 3, # 'ы' - 17: 2, # 'ь' - 30: 1, # 'э' - 27: 1, # 'ю' - 16: 3, # 'я' - }, - 4: { # 'и' - 37: 1, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 1, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 3, # 'а' - 21: 3, # 'б' - 10: 3, # 'в' - 19: 3, # 'г' - 13: 3, # 'д' - 2: 3, # 'е' - 24: 3, # 'ж' - 20: 3, # 'з' - 4: 3, # 'и' - 23: 3, # 'й' - 11: 3, # 'к' - 8: 3, # 'л' - 12: 3, # 'м' - 5: 3, # 'н' - 1: 3, # 'о' - 15: 3, # 'п' - 9: 3, # 'р' - 7: 3, # 'с' - 6: 3, # 'т' - 14: 2, # 'у' - 39: 2, # 'ф' - 26: 3, # 'х' - 28: 3, # 'ц' - 22: 3, # 'ч' - 25: 3, # 'ш' - 29: 3, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 0, # 'ь' - 30: 2, # 'э' - 27: 3, # 'ю' - 16: 3, # 'я' - }, - 23: { # 'й' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 1, # 'а' - 21: 1, # 'б' - 10: 1, # 'в' - 19: 2, # 'г' - 13: 3, # 'д' - 2: 2, # 'е' - 24: 0, # 'ж' - 20: 2, # 'з' - 4: 1, # 'и' - 23: 0, # 'й' - 11: 2, # 'к' - 8: 2, # 'л' - 12: 2, # 'м' - 5: 3, # 'н' - 1: 2, # 'о' - 15: 1, # 'п' - 9: 2, # 'р' - 7: 3, # 'с' - 6: 3, # 'т' - 14: 1, # 'у' - 39: 2, # 'ф' - 26: 1, # 'х' - 28: 2, # 'ц' - 22: 3, # 'ч' - 25: 2, # 'ш' - 29: 1, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 0, # 'ь' - 30: 1, # 'э' - 27: 1, # 'ю' - 16: 2, # 'я' - }, - 11: { # 'к' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 3, # 'а' - 21: 1, # 'б' - 10: 3, # 'в' - 19: 1, # 'г' - 13: 1, # 'д' - 2: 3, # 'е' - 24: 2, # 'ж' - 20: 2, # 'з' - 4: 3, # 'и' - 23: 0, # 'й' - 11: 2, # 'к' - 8: 3, # 'л' - 12: 1, # 'м' - 5: 3, # 'н' - 1: 3, # 'о' - 15: 0, # 'п' - 9: 3, # 'р' - 7: 3, # 'с' - 6: 3, # 'т' - 14: 3, # 'у' - 39: 1, # 'ф' - 26: 2, # 'х' - 28: 2, # 'ц' - 22: 1, # 'ч' - 25: 2, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 1, # 'ы' - 17: 1, # 'ь' - 30: 1, # 'э' - 27: 1, # 'ю' - 16: 1, # 'я' - }, - 8: { # 'л' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 
'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 3, # 'а' - 21: 2, # 'б' - 10: 2, # 'в' - 19: 3, # 'г' - 13: 2, # 'д' - 2: 3, # 'е' - 24: 3, # 'ж' - 20: 2, # 'з' - 4: 3, # 'и' - 23: 0, # 'й' - 11: 3, # 'к' - 8: 3, # 'л' - 12: 2, # 'м' - 5: 3, # 'н' - 1: 3, # 'о' - 15: 2, # 'п' - 9: 1, # 'р' - 7: 3, # 'с' - 6: 2, # 'т' - 14: 3, # 'у' - 39: 2, # 'ф' - 26: 2, # 'х' - 28: 1, # 'ц' - 22: 3, # 'ч' - 25: 2, # 'ш' - 29: 1, # 'щ' - 54: 0, # 'ъ' - 18: 3, # 'ы' - 17: 3, # 'ь' - 30: 1, # 'э' - 27: 3, # 'ю' - 16: 3, # 'я' - }, - 12: { # 'м' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 3, # 'а' - 21: 2, # 'б' - 10: 2, # 'в' - 19: 2, # 'г' - 13: 1, # 'д' - 2: 3, # 'е' - 24: 1, # 'ж' - 20: 1, # 'з' - 4: 3, # 'и' - 23: 0, # 'й' - 11: 2, # 'к' - 8: 3, # 'л' - 12: 2, # 'м' - 5: 3, # 'н' - 1: 3, # 'о' - 15: 2, # 'п' - 9: 2, # 'р' - 7: 3, # 'с' - 6: 2, # 'т' - 14: 3, # 'у' - 39: 2, # 'ф' - 26: 2, # 'х' - 28: 2, # 'ц' - 22: 2, # 'ч' - 25: 1, # 'ш' - 29: 1, # 'щ' - 54: 0, # 'ъ' - 18: 3, # 'ы' - 17: 2, # 'ь' - 30: 2, # 'э' - 27: 1, # 'ю' - 16: 3, # 'я' - }, - 5: { # 'н' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 3, # 'а' - 21: 2, # 'б' - 10: 2, # 'в' - 19: 3, # 'г' - 13: 3, # 'д' - 2: 3, # 'е' - 24: 2, # 'ж' - 20: 2, # 'з' - 4: 3, # 'и' - 23: 0, # 'й' - 11: 3, # 'к' - 8: 2, # 'л' - 12: 1, # 'м' - 5: 3, # 'н' - 1: 3, # 'о' - 15: 1, # 'п' - 9: 2, # 'р' - 7: 3, # 'с' - 6: 3, # 'т' - 14: 3, # 'у' - 39: 2, # 'ф' - 26: 2, # 'х' - 28: 3, # 'ц' - 22: 3, # 'ч' - 25: 2, # 'ш' - 29: 2, # 'щ' - 54: 1, # 'ъ' - 18: 3, # 'ы' - 17: 3, # 'ь' - 30: 1, # 'э' - 27: 3, # 'ю' - 16: 3, # 'я' - }, - 1: { # 'о' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 2, # 'а' - 21: 3, # 'б' - 10: 3, # 'в' - 19: 3, # 'г' - 13: 3, # 'д' - 2: 3, # 'е' - 24: 3, # 'ж' - 20: 3, # 'з' - 4: 3, # 'и' - 23: 3, # 'й' - 11: 3, # 'к' - 8: 3, # 'л' - 12: 3, # 'м' - 5: 3, # 'н' - 1: 3, # 'о' - 15: 3, # 'п' - 9: 3, # 'р' - 7: 3, # 'с' - 6: 3, # 'т' - 14: 2, # 'у' - 39: 2, # 'ф' - 26: 3, # 'х' - 28: 2, # 'ц' - 22: 3, # 'ч' - 25: 3, # 'ш' - 29: 3, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 
0, # 'ь' - 30: 2, # 'э' - 27: 3, # 'ю' - 16: 3, # 'я' - }, - 15: { # 'п' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 3, # 'а' - 21: 1, # 'б' - 10: 0, # 'в' - 19: 0, # 'г' - 13: 0, # 'д' - 2: 3, # 'е' - 24: 0, # 'ж' - 20: 0, # 'з' - 4: 3, # 'и' - 23: 0, # 'й' - 11: 2, # 'к' - 8: 3, # 'л' - 12: 1, # 'м' - 5: 3, # 'н' - 1: 3, # 'о' - 15: 2, # 'п' - 9: 3, # 'р' - 7: 2, # 'с' - 6: 2, # 'т' - 14: 3, # 'у' - 39: 1, # 'ф' - 26: 0, # 'х' - 28: 2, # 'ц' - 22: 2, # 'ч' - 25: 1, # 'ш' - 29: 1, # 'щ' - 54: 0, # 'ъ' - 18: 3, # 'ы' - 17: 2, # 'ь' - 30: 1, # 'э' - 27: 1, # 'ю' - 16: 3, # 'я' - }, - 9: { # 'р' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 3, # 'а' - 21: 2, # 'б' - 10: 3, # 'в' - 19: 3, # 'г' - 13: 3, # 'д' - 2: 3, # 'е' - 24: 3, # 'ж' - 20: 2, # 'з' - 4: 3, # 'и' - 23: 0, # 'й' - 11: 3, # 'к' - 8: 2, # 'л' - 12: 3, # 'м' - 5: 3, # 'н' - 1: 3, # 'о' - 15: 2, # 'п' - 9: 2, # 'р' - 7: 3, # 'с' - 6: 3, # 'т' - 14: 3, # 'у' - 39: 2, # 'ф' - 26: 3, # 'х' - 28: 2, # 'ц' - 22: 2, # 'ч' - 25: 3, # 'ш' - 29: 2, # 'щ' - 54: 0, # 'ъ' - 18: 3, # 'ы' - 17: 3, # 'ь' - 30: 2, # 'э' - 27: 2, # 'ю' - 16: 3, # 'я' - }, - 7: { # 'с' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 1, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 3, # 'а' - 21: 2, # 'б' - 10: 3, # 'в' - 19: 2, # 'г' - 13: 3, # 'д' - 2: 3, # 'е' - 24: 2, # 'ж' - 20: 2, # 'з' - 4: 3, # 'и' - 23: 0, # 'й' - 11: 3, # 'к' - 8: 3, # 'л' - 12: 3, # 'м' - 5: 3, # 'н' - 1: 3, # 'о' - 15: 3, # 'п' - 9: 3, # 'р' - 7: 3, # 'с' - 6: 3, # 'т' - 14: 3, # 'у' - 39: 2, # 'ф' - 26: 3, # 'х' - 28: 2, # 'ц' - 22: 3, # 'ч' - 25: 2, # 'ш' - 29: 1, # 'щ' - 54: 2, # 'ъ' - 18: 3, # 'ы' - 17: 3, # 'ь' - 30: 2, # 'э' - 27: 3, # 'ю' - 16: 3, # 'я' - }, - 6: { # 'т' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 3, # 'а' - 21: 2, # 'б' - 10: 3, # 'в' - 19: 2, # 'г' - 13: 2, # 'д' - 2: 3, # 'е' - 24: 1, # 'ж' - 20: 1, # 'з' - 4: 3, # 'и' - 23: 0, # 'й' - 11: 
3, # 'к' - 8: 3, # 'л' - 12: 2, # 'м' - 5: 3, # 'н' - 1: 3, # 'о' - 15: 2, # 'п' - 9: 3, # 'р' - 7: 3, # 'с' - 6: 2, # 'т' - 14: 3, # 'у' - 39: 2, # 'ф' - 26: 2, # 'х' - 28: 2, # 'ц' - 22: 2, # 'ч' - 25: 2, # 'ш' - 29: 2, # 'щ' - 54: 2, # 'ъ' - 18: 3, # 'ы' - 17: 3, # 'ь' - 30: 2, # 'э' - 27: 2, # 'ю' - 16: 3, # 'я' - }, - 14: { # 'у' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 2, # 'а' - 21: 3, # 'б' - 10: 3, # 'в' - 19: 3, # 'г' - 13: 3, # 'д' - 2: 3, # 'е' - 24: 3, # 'ж' - 20: 3, # 'з' - 4: 2, # 'и' - 23: 2, # 'й' - 11: 3, # 'к' - 8: 3, # 'л' - 12: 3, # 'м' - 5: 3, # 'н' - 1: 2, # 'о' - 15: 3, # 'п' - 9: 3, # 'р' - 7: 3, # 'с' - 6: 3, # 'т' - 14: 1, # 'у' - 39: 2, # 'ф' - 26: 3, # 'х' - 28: 2, # 'ц' - 22: 3, # 'ч' - 25: 3, # 'ш' - 29: 3, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 0, # 'ь' - 30: 2, # 'э' - 27: 3, # 'ю' - 16: 2, # 'я' - }, - 39: { # 'ф' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 3, # 'а' - 21: 1, # 'б' - 10: 0, # 'в' - 19: 1, # 'г' - 13: 0, # 'д' - 2: 3, # 'е' - 24: 0, # 'ж' - 20: 0, # 'з' - 4: 3, # 'и' - 23: 0, # 'й' - 11: 1, # 'к' - 8: 2, # 'л' - 12: 1, # 'м' - 5: 1, # 'н' - 1: 3, # 'о' - 15: 1, # 'п' - 9: 2, # 'р' - 7: 2, # 'с' - 6: 2, # 'т' - 14: 2, # 'у' - 39: 2, # 'ф' - 26: 0, # 'х' - 28: 0, # 'ц' - 22: 1, # 'ч' - 25: 1, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 2, # 'ы' - 17: 1, # 'ь' - 30: 2, # 'э' - 27: 1, # 'ю' - 16: 1, # 'я' - }, - 26: { # 'х' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 3, # 'а' - 21: 0, # 'б' - 10: 3, # 'в' - 19: 1, # 'г' - 13: 1, # 'д' - 2: 2, # 'е' - 24: 0, # 'ж' - 20: 1, # 'з' - 4: 3, # 'и' - 23: 0, # 'й' - 11: 1, # 'к' - 8: 2, # 'л' - 12: 2, # 'м' - 5: 3, # 'н' - 1: 3, # 'о' - 15: 1, # 'п' - 9: 3, # 'р' - 7: 2, # 'с' - 6: 2, # 'т' - 14: 2, # 'у' - 39: 1, # 'ф' - 26: 1, # 'х' - 28: 1, # 'ц' - 22: 1, # 'ч' - 25: 2, # 'ш' - 29: 0, # 'щ' - 54: 1, # 'ъ' - 18: 0, # 'ы' - 17: 1, # 'ь' - 30: 1, # 'э' - 27: 1, # 'ю' - 16: 0, # 'я' - }, - 28: { # 'ц' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 
0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 3, # 'а' - 21: 1, # 'б' - 10: 2, # 'в' - 19: 1, # 'г' - 13: 1, # 'д' - 2: 3, # 'е' - 24: 0, # 'ж' - 20: 1, # 'з' - 4: 3, # 'и' - 23: 0, # 'й' - 11: 2, # 'к' - 8: 1, # 'л' - 12: 1, # 'м' - 5: 1, # 'н' - 1: 3, # 'о' - 15: 0, # 'п' - 9: 1, # 'р' - 7: 0, # 'с' - 6: 1, # 'т' - 14: 3, # 'у' - 39: 0, # 'ф' - 26: 0, # 'х' - 28: 1, # 'ц' - 22: 0, # 'ч' - 25: 1, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 3, # 'ы' - 17: 1, # 'ь' - 30: 0, # 'э' - 27: 1, # 'ю' - 16: 0, # 'я' - }, - 22: { # 'ч' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 3, # 'а' - 21: 1, # 'б' - 10: 1, # 'в' - 19: 0, # 'г' - 13: 0, # 'д' - 2: 3, # 'е' - 24: 1, # 'ж' - 20: 0, # 'з' - 4: 3, # 'и' - 23: 0, # 'й' - 11: 3, # 'к' - 8: 2, # 'л' - 12: 1, # 'м' - 5: 3, # 'н' - 1: 2, # 'о' - 15: 0, # 'п' - 9: 2, # 'р' - 7: 1, # 'с' - 6: 3, # 'т' - 14: 3, # 'у' - 39: 1, # 'ф' - 26: 1, # 'х' - 28: 0, # 'ц' - 22: 1, # 'ч' - 25: 2, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 3, # 'ь' - 30: 0, # 'э' - 27: 0, # 'ю' - 16: 0, # 'я' - }, - 25: { # 'ш' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 3, # 'а' - 21: 1, # 'б' - 10: 2, # 'в' - 19: 1, # 'г' - 13: 0, # 'д' - 2: 3, # 'е' - 24: 0, # 'ж' - 20: 0, # 'з' - 4: 3, # 'и' - 23: 0, # 'й' - 11: 3, # 'к' - 8: 3, # 'л' - 12: 2, # 'м' - 5: 3, # 'н' - 1: 3, # 'о' - 15: 2, # 'п' - 9: 2, # 'р' - 7: 1, # 'с' - 6: 2, # 'т' - 14: 3, # 'у' - 39: 2, # 'ф' - 26: 1, # 'х' - 28: 1, # 'ц' - 22: 1, # 'ч' - 25: 1, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 3, # 'ь' - 30: 1, # 'э' - 27: 1, # 'ю' - 16: 0, # 'я' - }, - 29: { # 'щ' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 3, # 'а' - 21: 0, # 'б' - 10: 1, # 'в' - 19: 0, # 'г' - 13: 0, # 'д' - 2: 3, # 'е' - 24: 0, # 'ж' - 20: 0, # 'з' - 4: 3, # 'и' - 23: 0, # 'й' - 11: 0, # 'к' - 8: 0, # 'л' - 12: 1, # 'м' - 5: 2, # 'н' - 1: 1, # 'о' - 15: 0, # 'п' - 9: 2, # 'р' - 7: 0, # 'с' - 6: 0, # 'т' - 14: 2, # 'у' - 39: 0, # 'ф' - 26: 0, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 0, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 2, # 'ь' - 30: 0, # 'э' - 27: 0, # 'ю' - 16: 0, # 'я' - }, - 54: { # 'ъ' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 
'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 0, # 'а' - 21: 0, # 'б' - 10: 0, # 'в' - 19: 0, # 'г' - 13: 0, # 'д' - 2: 2, # 'е' - 24: 0, # 'ж' - 20: 0, # 'з' - 4: 0, # 'и' - 23: 0, # 'й' - 11: 0, # 'к' - 8: 0, # 'л' - 12: 0, # 'м' - 5: 0, # 'н' - 1: 0, # 'о' - 15: 0, # 'п' - 9: 0, # 'р' - 7: 0, # 'с' - 6: 0, # 'т' - 14: 0, # 'у' - 39: 0, # 'ф' - 26: 0, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 0, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 0, # 'ь' - 30: 0, # 'э' - 27: 1, # 'ю' - 16: 2, # 'я' - }, - 18: { # 'ы' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 0, # 'а' - 21: 3, # 'б' - 10: 3, # 'в' - 19: 2, # 'г' - 13: 2, # 'д' - 2: 3, # 'е' - 24: 2, # 'ж' - 20: 2, # 'з' - 4: 2, # 'и' - 23: 3, # 'й' - 11: 3, # 'к' - 8: 3, # 'л' - 12: 3, # 'м' - 5: 3, # 'н' - 1: 1, # 'о' - 15: 3, # 'п' - 9: 3, # 'р' - 7: 3, # 'с' - 6: 3, # 'т' - 14: 1, # 'у' - 39: 0, # 'ф' - 26: 3, # 'х' - 28: 2, # 'ц' - 22: 3, # 'ч' - 25: 3, # 'ш' - 29: 2, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 0, # 'ь' - 30: 0, # 'э' - 27: 0, # 'ю' - 16: 2, # 'я' - }, - 17: { # 'ь' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 0, # 'а' - 21: 2, # 'б' - 10: 2, # 'в' - 19: 2, # 'г' - 13: 2, # 'д' - 2: 3, # 'е' - 24: 1, # 'ж' - 20: 3, # 'з' - 4: 2, # 'и' - 23: 0, # 'й' - 11: 3, # 'к' - 8: 0, # 'л' - 12: 3, # 'м' - 5: 3, # 'н' - 1: 2, # 'о' - 15: 2, # 'п' - 9: 1, # 'р' - 7: 3, # 'с' - 6: 2, # 'т' - 14: 0, # 'у' - 39: 2, # 'ф' - 26: 1, # 'х' - 28: 2, # 'ц' - 22: 2, # 'ч' - 25: 3, # 'ш' - 29: 2, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 0, # 'ь' - 30: 1, # 'э' - 27: 3, # 'ю' - 16: 3, # 'я' - }, - 30: { # 'э' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 1, # 'М' - 31: 1, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 1, # 'Р' - 32: 1, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 1, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 0, # 'а' - 21: 1, # 'б' - 10: 1, # 'в' - 19: 1, # 'г' - 13: 2, # 'д' - 2: 1, # 'е' - 24: 0, # 'ж' - 20: 1, # 'з' - 4: 0, # 'и' - 23: 2, # 'й' - 11: 2, # 'к' - 8: 2, # 'л' - 12: 2, # 'м' - 5: 2, # 'н' - 1: 0, # 'о' - 15: 2, # 'п' - 9: 2, # 'р' - 7: 2, # 'с' - 6: 3, # 'т' - 14: 1, # 'у' - 39: 2, # 'ф' - 
26: 1, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 1, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 0, # 'ь' - 30: 1, # 'э' - 27: 1, # 'ю' - 16: 1, # 'я' - }, - 27: { # 'ю' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 2, # 'а' - 21: 3, # 'б' - 10: 1, # 'в' - 19: 2, # 'г' - 13: 3, # 'д' - 2: 1, # 'е' - 24: 2, # 'ж' - 20: 2, # 'з' - 4: 1, # 'и' - 23: 1, # 'й' - 11: 2, # 'к' - 8: 2, # 'л' - 12: 2, # 'м' - 5: 2, # 'н' - 1: 1, # 'о' - 15: 2, # 'п' - 9: 2, # 'р' - 7: 3, # 'с' - 6: 3, # 'т' - 14: 0, # 'у' - 39: 1, # 'ф' - 26: 2, # 'х' - 28: 2, # 'ц' - 22: 2, # 'ч' - 25: 2, # 'ш' - 29: 3, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 0, # 'ь' - 30: 1, # 'э' - 27: 2, # 'ю' - 16: 1, # 'я' - }, - 16: { # 'я' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 0, # 'а' - 21: 2, # 'б' - 10: 3, # 'в' - 19: 2, # 'г' - 13: 3, # 'д' - 2: 3, # 'е' - 24: 3, # 'ж' - 20: 3, # 'з' - 4: 2, # 'и' - 23: 2, # 'й' - 11: 3, # 'к' - 8: 3, # 'л' - 12: 3, # 'м' - 5: 3, # 'н' - 1: 0, # 'о' - 15: 2, # 'п' - 9: 2, # 'р' - 7: 3, # 'с' - 6: 3, # 'т' - 14: 1, # 'у' - 39: 1, # 'ф' - 26: 3, # 'х' - 28: 2, # 'ц' - 22: 2, # 'ч' - 25: 2, # 'ш' - 29: 3, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 0, # 'ь' - 30: 0, # 'э' - 27: 2, # 'ю' - 16: 2, # 'я' - }, -} - -# 255: Undefined characters that did not exist in training text -# 254: Carriage/Return -# 253: symbol (punctuation) that does not belong to word -# 252: 0 - 9 -# 251: Control characters - -# Character Mapping Table(s): -IBM866_RUSSIAN_CHAR_TO_ORDER = { - 0: 255, # '\x00' - 1: 255, # '\x01' - 2: 255, # '\x02' - 3: 255, # '\x03' - 4: 255, # '\x04' - 5: 255, # '\x05' - 6: 255, # '\x06' - 7: 255, # '\x07' - 8: 255, # '\x08' - 9: 255, # '\t' - 10: 254, # '\n' - 11: 255, # '\x0b' - 12: 255, # '\x0c' - 13: 254, # '\r' - 14: 255, # '\x0e' - 15: 255, # '\x0f' - 16: 255, # '\x10' - 17: 255, # '\x11' - 18: 255, # '\x12' - 19: 255, # '\x13' - 20: 255, # '\x14' - 21: 255, # '\x15' - 22: 255, # '\x16' - 23: 255, # '\x17' - 24: 255, # '\x18' - 25: 255, # '\x19' - 26: 255, # '\x1a' - 27: 255, # '\x1b' - 28: 255, # '\x1c' - 29: 255, # '\x1d' - 30: 255, # '\x1e' - 31: 255, # '\x1f' - 32: 253, # ' ' - 33: 253, # '!' - 34: 253, # '"' - 35: 253, # '#' - 36: 253, # '$' - 37: 253, # '%' - 38: 253, # '&' - 39: 253, # "'" - 40: 253, # '(' - 41: 253, # ')' - 42: 253, # '*' - 43: 253, # '+' - 44: 253, # ',' - 45: 253, # '-' - 46: 253, # '.' - 47: 253, # '/' - 48: 252, # '0' - 49: 252, # '1' - 50: 252, # '2' - 51: 252, # '3' - 52: 252, # '4' - 53: 252, # '5' - 54: 252, # '6' - 55: 252, # '7' - 56: 252, # '8' - 57: 252, # '9' - 58: 253, # ':' - 59: 253, # ';' - 60: 253, # '<' - 61: 253, # '=' - 62: 253, # '>' - 63: 253, # '?' 
- 64: 253, # '@' - 65: 142, # 'A' - 66: 143, # 'B' - 67: 144, # 'C' - 68: 145, # 'D' - 69: 146, # 'E' - 70: 147, # 'F' - 71: 148, # 'G' - 72: 149, # 'H' - 73: 150, # 'I' - 74: 151, # 'J' - 75: 152, # 'K' - 76: 74, # 'L' - 77: 153, # 'M' - 78: 75, # 'N' - 79: 154, # 'O' - 80: 155, # 'P' - 81: 156, # 'Q' - 82: 157, # 'R' - 83: 158, # 'S' - 84: 159, # 'T' - 85: 160, # 'U' - 86: 161, # 'V' - 87: 162, # 'W' - 88: 163, # 'X' - 89: 164, # 'Y' - 90: 165, # 'Z' - 91: 253, # '[' - 92: 253, # '\\' - 93: 253, # ']' - 94: 253, # '^' - 95: 253, # '_' - 96: 253, # '`' - 97: 71, # 'a' - 98: 172, # 'b' - 99: 66, # 'c' - 100: 173, # 'd' - 101: 65, # 'e' - 102: 174, # 'f' - 103: 76, # 'g' - 104: 175, # 'h' - 105: 64, # 'i' - 106: 176, # 'j' - 107: 177, # 'k' - 108: 77, # 'l' - 109: 72, # 'm' - 110: 178, # 'n' - 111: 69, # 'o' - 112: 67, # 'p' - 113: 179, # 'q' - 114: 78, # 'r' - 115: 73, # 's' - 116: 180, # 't' - 117: 181, # 'u' - 118: 79, # 'v' - 119: 182, # 'w' - 120: 183, # 'x' - 121: 184, # 'y' - 122: 185, # 'z' - 123: 253, # '{' - 124: 253, # '|' - 125: 253, # '}' - 126: 253, # '~' - 127: 253, # '\x7f' - 128: 37, # 'А' - 129: 44, # 'Б' - 130: 33, # 'В' - 131: 46, # 'Г' - 132: 41, # 'Д' - 133: 48, # 'Е' - 134: 56, # 'Ж' - 135: 51, # 'З' - 136: 42, # 'И' - 137: 60, # 'Й' - 138: 36, # 'К' - 139: 49, # 'Л' - 140: 38, # 'М' - 141: 31, # 'Н' - 142: 34, # 'О' - 143: 35, # 'П' - 144: 45, # 'Р' - 145: 32, # 'С' - 146: 40, # 'Т' - 147: 52, # 'У' - 148: 53, # 'Ф' - 149: 55, # 'Х' - 150: 58, # 'Ц' - 151: 50, # 'Ч' - 152: 57, # 'Ш' - 153: 63, # 'Щ' - 154: 70, # 'Ъ' - 155: 62, # 'Ы' - 156: 61, # 'Ь' - 157: 47, # 'Э' - 158: 59, # 'Ю' - 159: 43, # 'Я' - 160: 3, # 'а' - 161: 21, # 'б' - 162: 10, # 'в' - 163: 19, # 'г' - 164: 13, # 'д' - 165: 2, # 'е' - 166: 24, # 'ж' - 167: 20, # 'з' - 168: 4, # 'и' - 169: 23, # 'й' - 170: 11, # 'к' - 171: 8, # 'л' - 172: 12, # 'м' - 173: 5, # 'н' - 174: 1, # 'о' - 175: 15, # 'п' - 176: 191, # '░' - 177: 192, # '▒' - 178: 193, # '▓' - 179: 194, # '│' - 180: 195, # '┤' - 181: 196, # '╡' - 182: 197, # '╢' - 183: 198, # '╖' - 184: 199, # '╕' - 185: 200, # '╣' - 186: 201, # '║' - 187: 202, # '╗' - 188: 203, # '╝' - 189: 204, # '╜' - 190: 205, # '╛' - 191: 206, # '┐' - 192: 207, # '└' - 193: 208, # '┴' - 194: 209, # '┬' - 195: 210, # '├' - 196: 211, # '─' - 197: 212, # '┼' - 198: 213, # '╞' - 199: 214, # '╟' - 200: 215, # '╚' - 201: 216, # '╔' - 202: 217, # '╩' - 203: 218, # '╦' - 204: 219, # '╠' - 205: 220, # '═' - 206: 221, # '╬' - 207: 222, # '╧' - 208: 223, # '╨' - 209: 224, # '╤' - 210: 225, # '╥' - 211: 226, # '╙' - 212: 227, # '╘' - 213: 228, # '╒' - 214: 229, # '╓' - 215: 230, # '╫' - 216: 231, # '╪' - 217: 232, # '┘' - 218: 233, # '┌' - 219: 234, # '█' - 220: 235, # '▄' - 221: 236, # '▌' - 222: 237, # '▐' - 223: 238, # '▀' - 224: 9, # 'р' - 225: 7, # 'с' - 226: 6, # 'т' - 227: 14, # 'у' - 228: 39, # 'ф' - 229: 26, # 'х' - 230: 28, # 'ц' - 231: 22, # 'ч' - 232: 25, # 'ш' - 233: 29, # 'щ' - 234: 54, # 'ъ' - 235: 18, # 'ы' - 236: 17, # 'ь' - 237: 30, # 'э' - 238: 27, # 'ю' - 239: 16, # 'я' - 240: 239, # 'Ё' - 241: 68, # 'ё' - 242: 240, # 'Є' - 243: 241, # 'є' - 244: 242, # 'Ї' - 245: 243, # 'ї' - 246: 244, # 'Ў' - 247: 245, # 'ў' - 248: 246, # '°' - 249: 247, # '∙' - 250: 248, # '·' - 251: 249, # '√' - 252: 250, # '№' - 253: 251, # '¤' - 254: 252, # '■' - 255: 255, # '\xa0' -} - -IBM866_RUSSIAN_MODEL = SingleByteCharSetModel( - charset_name="IBM866", - language="Russian", - char_to_order_map=IBM866_RUSSIAN_CHAR_TO_ORDER, - language_model=RUSSIAN_LANG_MODEL, - 
typical_positive_ratio=0.976601, - keep_ascii_letters=False, - alphabet="ЁАБВГДЕЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЫЬЭЮЯабвгдежзийклмнопрстуфхцчшщъыьэюяё", -) - -WINDOWS_1251_RUSSIAN_CHAR_TO_ORDER = { - 0: 255, # '\x00' - 1: 255, # '\x01' - 2: 255, # '\x02' - 3: 255, # '\x03' - 4: 255, # '\x04' - 5: 255, # '\x05' - 6: 255, # '\x06' - 7: 255, # '\x07' - 8: 255, # '\x08' - 9: 255, # '\t' - 10: 254, # '\n' - 11: 255, # '\x0b' - 12: 255, # '\x0c' - 13: 254, # '\r' - 14: 255, # '\x0e' - 15: 255, # '\x0f' - 16: 255, # '\x10' - 17: 255, # '\x11' - 18: 255, # '\x12' - 19: 255, # '\x13' - 20: 255, # '\x14' - 21: 255, # '\x15' - 22: 255, # '\x16' - 23: 255, # '\x17' - 24: 255, # '\x18' - 25: 255, # '\x19' - 26: 255, # '\x1a' - 27: 255, # '\x1b' - 28: 255, # '\x1c' - 29: 255, # '\x1d' - 30: 255, # '\x1e' - 31: 255, # '\x1f' - 32: 253, # ' ' - 33: 253, # '!' - 34: 253, # '"' - 35: 253, # '#' - 36: 253, # '$' - 37: 253, # '%' - 38: 253, # '&' - 39: 253, # "'" - 40: 253, # '(' - 41: 253, # ')' - 42: 253, # '*' - 43: 253, # '+' - 44: 253, # ',' - 45: 253, # '-' - 46: 253, # '.' - 47: 253, # '/' - 48: 252, # '0' - 49: 252, # '1' - 50: 252, # '2' - 51: 252, # '3' - 52: 252, # '4' - 53: 252, # '5' - 54: 252, # '6' - 55: 252, # '7' - 56: 252, # '8' - 57: 252, # '9' - 58: 253, # ':' - 59: 253, # ';' - 60: 253, # '<' - 61: 253, # '=' - 62: 253, # '>' - 63: 253, # '?' - 64: 253, # '@' - 65: 142, # 'A' - 66: 143, # 'B' - 67: 144, # 'C' - 68: 145, # 'D' - 69: 146, # 'E' - 70: 147, # 'F' - 71: 148, # 'G' - 72: 149, # 'H' - 73: 150, # 'I' - 74: 151, # 'J' - 75: 152, # 'K' - 76: 74, # 'L' - 77: 153, # 'M' - 78: 75, # 'N' - 79: 154, # 'O' - 80: 155, # 'P' - 81: 156, # 'Q' - 82: 157, # 'R' - 83: 158, # 'S' - 84: 159, # 'T' - 85: 160, # 'U' - 86: 161, # 'V' - 87: 162, # 'W' - 88: 163, # 'X' - 89: 164, # 'Y' - 90: 165, # 'Z' - 91: 253, # '[' - 92: 253, # '\\' - 93: 253, # ']' - 94: 253, # '^' - 95: 253, # '_' - 96: 253, # '`' - 97: 71, # 'a' - 98: 172, # 'b' - 99: 66, # 'c' - 100: 173, # 'd' - 101: 65, # 'e' - 102: 174, # 'f' - 103: 76, # 'g' - 104: 175, # 'h' - 105: 64, # 'i' - 106: 176, # 'j' - 107: 177, # 'k' - 108: 77, # 'l' - 109: 72, # 'm' - 110: 178, # 'n' - 111: 69, # 'o' - 112: 67, # 'p' - 113: 179, # 'q' - 114: 78, # 'r' - 115: 73, # 's' - 116: 180, # 't' - 117: 181, # 'u' - 118: 79, # 'v' - 119: 182, # 'w' - 120: 183, # 'x' - 121: 184, # 'y' - 122: 185, # 'z' - 123: 253, # '{' - 124: 253, # '|' - 125: 253, # '}' - 126: 253, # '~' - 127: 253, # '\x7f' - 128: 191, # 'Ђ' - 129: 192, # 'Ѓ' - 130: 193, # '‚' - 131: 194, # 'ѓ' - 132: 195, # '„' - 133: 196, # '…' - 134: 197, # '†' - 135: 198, # '‡' - 136: 199, # '€' - 137: 200, # '‰' - 138: 201, # 'Љ' - 139: 202, # '‹' - 140: 203, # 'Њ' - 141: 204, # 'Ќ' - 142: 205, # 'Ћ' - 143: 206, # 'Џ' - 144: 207, # 'ђ' - 145: 208, # '‘' - 146: 209, # '’' - 147: 210, # '“' - 148: 211, # '”' - 149: 212, # '•' - 150: 213, # '–' - 151: 214, # '—' - 152: 215, # None - 153: 216, # '™' - 154: 217, # 'љ' - 155: 218, # '›' - 156: 219, # 'њ' - 157: 220, # 'ќ' - 158: 221, # 'ћ' - 159: 222, # 'џ' - 160: 223, # '\xa0' - 161: 224, # 'Ў' - 162: 225, # 'ў' - 163: 226, # 'Ј' - 164: 227, # '¤' - 165: 228, # 'Ґ' - 166: 229, # '¦' - 167: 230, # '§' - 168: 231, # 'Ё' - 169: 232, # '©' - 170: 233, # 'Є' - 171: 234, # '«' - 172: 235, # '¬' - 173: 236, # '\xad' - 174: 237, # '®' - 175: 238, # 'Ї' - 176: 239, # '°' - 177: 240, # '±' - 178: 241, # 'І' - 179: 242, # 'і' - 180: 243, # 'ґ' - 181: 244, # 'µ' - 182: 245, # '¶' - 183: 246, # '·' - 184: 68, # 'ё' - 185: 247, # '№' - 186: 248, # 'є' - 187: 249, # '»' - 188: 
250, # 'ј' - 189: 251, # 'Ѕ' - 190: 252, # 'ѕ' - 191: 253, # 'ї' - 192: 37, # 'А' - 193: 44, # 'Б' - 194: 33, # 'В' - 195: 46, # 'Г' - 196: 41, # 'Д' - 197: 48, # 'Е' - 198: 56, # 'Ж' - 199: 51, # 'З' - 200: 42, # 'И' - 201: 60, # 'Й' - 202: 36, # 'К' - 203: 49, # 'Л' - 204: 38, # 'М' - 205: 31, # 'Н' - 206: 34, # 'О' - 207: 35, # 'П' - 208: 45, # 'Р' - 209: 32, # 'С' - 210: 40, # 'Т' - 211: 52, # 'У' - 212: 53, # 'Ф' - 213: 55, # 'Х' - 214: 58, # 'Ц' - 215: 50, # 'Ч' - 216: 57, # 'Ш' - 217: 63, # 'Щ' - 218: 70, # 'Ъ' - 219: 62, # 'Ы' - 220: 61, # 'Ь' - 221: 47, # 'Э' - 222: 59, # 'Ю' - 223: 43, # 'Я' - 224: 3, # 'а' - 225: 21, # 'б' - 226: 10, # 'в' - 227: 19, # 'г' - 228: 13, # 'д' - 229: 2, # 'е' - 230: 24, # 'ж' - 231: 20, # 'з' - 232: 4, # 'и' - 233: 23, # 'й' - 234: 11, # 'к' - 235: 8, # 'л' - 236: 12, # 'м' - 237: 5, # 'н' - 238: 1, # 'о' - 239: 15, # 'п' - 240: 9, # 'р' - 241: 7, # 'с' - 242: 6, # 'т' - 243: 14, # 'у' - 244: 39, # 'ф' - 245: 26, # 'х' - 246: 28, # 'ц' - 247: 22, # 'ч' - 248: 25, # 'ш' - 249: 29, # 'щ' - 250: 54, # 'ъ' - 251: 18, # 'ы' - 252: 17, # 'ь' - 253: 30, # 'э' - 254: 27, # 'ю' - 255: 16, # 'я' -} - -WINDOWS_1251_RUSSIAN_MODEL = SingleByteCharSetModel( - charset_name="windows-1251", - language="Russian", - char_to_order_map=WINDOWS_1251_RUSSIAN_CHAR_TO_ORDER, - language_model=RUSSIAN_LANG_MODEL, - typical_positive_ratio=0.976601, - keep_ascii_letters=False, - alphabet="ЁАБВГДЕЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЫЬЭЮЯабвгдежзийклмнопрстуфхцчшщъыьэюяё", -) - -IBM855_RUSSIAN_CHAR_TO_ORDER = { - 0: 255, # '\x00' - 1: 255, # '\x01' - 2: 255, # '\x02' - 3: 255, # '\x03' - 4: 255, # '\x04' - 5: 255, # '\x05' - 6: 255, # '\x06' - 7: 255, # '\x07' - 8: 255, # '\x08' - 9: 255, # '\t' - 10: 254, # '\n' - 11: 255, # '\x0b' - 12: 255, # '\x0c' - 13: 254, # '\r' - 14: 255, # '\x0e' - 15: 255, # '\x0f' - 16: 255, # '\x10' - 17: 255, # '\x11' - 18: 255, # '\x12' - 19: 255, # '\x13' - 20: 255, # '\x14' - 21: 255, # '\x15' - 22: 255, # '\x16' - 23: 255, # '\x17' - 24: 255, # '\x18' - 25: 255, # '\x19' - 26: 255, # '\x1a' - 27: 255, # '\x1b' - 28: 255, # '\x1c' - 29: 255, # '\x1d' - 30: 255, # '\x1e' - 31: 255, # '\x1f' - 32: 253, # ' ' - 33: 253, # '!' - 34: 253, # '"' - 35: 253, # '#' - 36: 253, # '$' - 37: 253, # '%' - 38: 253, # '&' - 39: 253, # "'" - 40: 253, # '(' - 41: 253, # ')' - 42: 253, # '*' - 43: 253, # '+' - 44: 253, # ',' - 45: 253, # '-' - 46: 253, # '.' - 47: 253, # '/' - 48: 252, # '0' - 49: 252, # '1' - 50: 252, # '2' - 51: 252, # '3' - 52: 252, # '4' - 53: 252, # '5' - 54: 252, # '6' - 55: 252, # '7' - 56: 252, # '8' - 57: 252, # '9' - 58: 253, # ':' - 59: 253, # ';' - 60: 253, # '<' - 61: 253, # '=' - 62: 253, # '>' - 63: 253, # '?' 
- 64: 253, # '@' - 65: 142, # 'A' - 66: 143, # 'B' - 67: 144, # 'C' - 68: 145, # 'D' - 69: 146, # 'E' - 70: 147, # 'F' - 71: 148, # 'G' - 72: 149, # 'H' - 73: 150, # 'I' - 74: 151, # 'J' - 75: 152, # 'K' - 76: 74, # 'L' - 77: 153, # 'M' - 78: 75, # 'N' - 79: 154, # 'O' - 80: 155, # 'P' - 81: 156, # 'Q' - 82: 157, # 'R' - 83: 158, # 'S' - 84: 159, # 'T' - 85: 160, # 'U' - 86: 161, # 'V' - 87: 162, # 'W' - 88: 163, # 'X' - 89: 164, # 'Y' - 90: 165, # 'Z' - 91: 253, # '[' - 92: 253, # '\\' - 93: 253, # ']' - 94: 253, # '^' - 95: 253, # '_' - 96: 253, # '`' - 97: 71, # 'a' - 98: 172, # 'b' - 99: 66, # 'c' - 100: 173, # 'd' - 101: 65, # 'e' - 102: 174, # 'f' - 103: 76, # 'g' - 104: 175, # 'h' - 105: 64, # 'i' - 106: 176, # 'j' - 107: 177, # 'k' - 108: 77, # 'l' - 109: 72, # 'm' - 110: 178, # 'n' - 111: 69, # 'o' - 112: 67, # 'p' - 113: 179, # 'q' - 114: 78, # 'r' - 115: 73, # 's' - 116: 180, # 't' - 117: 181, # 'u' - 118: 79, # 'v' - 119: 182, # 'w' - 120: 183, # 'x' - 121: 184, # 'y' - 122: 185, # 'z' - 123: 253, # '{' - 124: 253, # '|' - 125: 253, # '}' - 126: 253, # '~' - 127: 253, # '\x7f' - 128: 191, # 'ђ' - 129: 192, # 'Ђ' - 130: 193, # 'ѓ' - 131: 194, # 'Ѓ' - 132: 68, # 'ё' - 133: 195, # 'Ё' - 134: 196, # 'є' - 135: 197, # 'Є' - 136: 198, # 'ѕ' - 137: 199, # 'Ѕ' - 138: 200, # 'і' - 139: 201, # 'І' - 140: 202, # 'ї' - 141: 203, # 'Ї' - 142: 204, # 'ј' - 143: 205, # 'Ј' - 144: 206, # 'љ' - 145: 207, # 'Љ' - 146: 208, # 'њ' - 147: 209, # 'Њ' - 148: 210, # 'ћ' - 149: 211, # 'Ћ' - 150: 212, # 'ќ' - 151: 213, # 'Ќ' - 152: 214, # 'ў' - 153: 215, # 'Ў' - 154: 216, # 'џ' - 155: 217, # 'Џ' - 156: 27, # 'ю' - 157: 59, # 'Ю' - 158: 54, # 'ъ' - 159: 70, # 'Ъ' - 160: 3, # 'а' - 161: 37, # 'А' - 162: 21, # 'б' - 163: 44, # 'Б' - 164: 28, # 'ц' - 165: 58, # 'Ц' - 166: 13, # 'д' - 167: 41, # 'Д' - 168: 2, # 'е' - 169: 48, # 'Е' - 170: 39, # 'ф' - 171: 53, # 'Ф' - 172: 19, # 'г' - 173: 46, # 'Г' - 174: 218, # '«' - 175: 219, # '»' - 176: 220, # '░' - 177: 221, # '▒' - 178: 222, # '▓' - 179: 223, # '│' - 180: 224, # '┤' - 181: 26, # 'х' - 182: 55, # 'Х' - 183: 4, # 'и' - 184: 42, # 'И' - 185: 225, # '╣' - 186: 226, # '║' - 187: 227, # '╗' - 188: 228, # '╝' - 189: 23, # 'й' - 190: 60, # 'Й' - 191: 229, # '┐' - 192: 230, # '└' - 193: 231, # '┴' - 194: 232, # '┬' - 195: 233, # '├' - 196: 234, # '─' - 197: 235, # '┼' - 198: 11, # 'к' - 199: 36, # 'К' - 200: 236, # '╚' - 201: 237, # '╔' - 202: 238, # '╩' - 203: 239, # '╦' - 204: 240, # '╠' - 205: 241, # '═' - 206: 242, # '╬' - 207: 243, # '¤' - 208: 8, # 'л' - 209: 49, # 'Л' - 210: 12, # 'м' - 211: 38, # 'М' - 212: 5, # 'н' - 213: 31, # 'Н' - 214: 1, # 'о' - 215: 34, # 'О' - 216: 15, # 'п' - 217: 244, # '┘' - 218: 245, # '┌' - 219: 246, # '█' - 220: 247, # '▄' - 221: 35, # 'П' - 222: 16, # 'я' - 223: 248, # '▀' - 224: 43, # 'Я' - 225: 9, # 'р' - 226: 45, # 'Р' - 227: 7, # 'с' - 228: 32, # 'С' - 229: 6, # 'т' - 230: 40, # 'Т' - 231: 14, # 'у' - 232: 52, # 'У' - 233: 24, # 'ж' - 234: 56, # 'Ж' - 235: 10, # 'в' - 236: 33, # 'В' - 237: 17, # 'ь' - 238: 61, # 'Ь' - 239: 249, # '№' - 240: 250, # '\xad' - 241: 18, # 'ы' - 242: 62, # 'Ы' - 243: 20, # 'з' - 244: 51, # 'З' - 245: 25, # 'ш' - 246: 57, # 'Ш' - 247: 30, # 'э' - 248: 47, # 'Э' - 249: 29, # 'щ' - 250: 63, # 'Щ' - 251: 22, # 'ч' - 252: 50, # 'Ч' - 253: 251, # '§' - 254: 252, # '■' - 255: 255, # '\xa0' -} - -IBM855_RUSSIAN_MODEL = SingleByteCharSetModel( - charset_name="IBM855", - language="Russian", - char_to_order_map=IBM855_RUSSIAN_CHAR_TO_ORDER, - language_model=RUSSIAN_LANG_MODEL, - 
typical_positive_ratio=0.976601, - keep_ascii_letters=False, - alphabet="ЁАБВГДЕЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЫЬЭЮЯабвгдежзийклмнопрстуфхцчшщъыьэюяё", -) - -KOI8_R_RUSSIAN_CHAR_TO_ORDER = { - 0: 255, # '\x00' - 1: 255, # '\x01' - 2: 255, # '\x02' - 3: 255, # '\x03' - 4: 255, # '\x04' - 5: 255, # '\x05' - 6: 255, # '\x06' - 7: 255, # '\x07' - 8: 255, # '\x08' - 9: 255, # '\t' - 10: 254, # '\n' - 11: 255, # '\x0b' - 12: 255, # '\x0c' - 13: 254, # '\r' - 14: 255, # '\x0e' - 15: 255, # '\x0f' - 16: 255, # '\x10' - 17: 255, # '\x11' - 18: 255, # '\x12' - 19: 255, # '\x13' - 20: 255, # '\x14' - 21: 255, # '\x15' - 22: 255, # '\x16' - 23: 255, # '\x17' - 24: 255, # '\x18' - 25: 255, # '\x19' - 26: 255, # '\x1a' - 27: 255, # '\x1b' - 28: 255, # '\x1c' - 29: 255, # '\x1d' - 30: 255, # '\x1e' - 31: 255, # '\x1f' - 32: 253, # ' ' - 33: 253, # '!' - 34: 253, # '"' - 35: 253, # '#' - 36: 253, # '$' - 37: 253, # '%' - 38: 253, # '&' - 39: 253, # "'" - 40: 253, # '(' - 41: 253, # ')' - 42: 253, # '*' - 43: 253, # '+' - 44: 253, # ',' - 45: 253, # '-' - 46: 253, # '.' - 47: 253, # '/' - 48: 252, # '0' - 49: 252, # '1' - 50: 252, # '2' - 51: 252, # '3' - 52: 252, # '4' - 53: 252, # '5' - 54: 252, # '6' - 55: 252, # '7' - 56: 252, # '8' - 57: 252, # '9' - 58: 253, # ':' - 59: 253, # ';' - 60: 253, # '<' - 61: 253, # '=' - 62: 253, # '>' - 63: 253, # '?' - 64: 253, # '@' - 65: 142, # 'A' - 66: 143, # 'B' - 67: 144, # 'C' - 68: 145, # 'D' - 69: 146, # 'E' - 70: 147, # 'F' - 71: 148, # 'G' - 72: 149, # 'H' - 73: 150, # 'I' - 74: 151, # 'J' - 75: 152, # 'K' - 76: 74, # 'L' - 77: 153, # 'M' - 78: 75, # 'N' - 79: 154, # 'O' - 80: 155, # 'P' - 81: 156, # 'Q' - 82: 157, # 'R' - 83: 158, # 'S' - 84: 159, # 'T' - 85: 160, # 'U' - 86: 161, # 'V' - 87: 162, # 'W' - 88: 163, # 'X' - 89: 164, # 'Y' - 90: 165, # 'Z' - 91: 253, # '[' - 92: 253, # '\\' - 93: 253, # ']' - 94: 253, # '^' - 95: 253, # '_' - 96: 253, # '`' - 97: 71, # 'a' - 98: 172, # 'b' - 99: 66, # 'c' - 100: 173, # 'd' - 101: 65, # 'e' - 102: 174, # 'f' - 103: 76, # 'g' - 104: 175, # 'h' - 105: 64, # 'i' - 106: 176, # 'j' - 107: 177, # 'k' - 108: 77, # 'l' - 109: 72, # 'm' - 110: 178, # 'n' - 111: 69, # 'o' - 112: 67, # 'p' - 113: 179, # 'q' - 114: 78, # 'r' - 115: 73, # 's' - 116: 180, # 't' - 117: 181, # 'u' - 118: 79, # 'v' - 119: 182, # 'w' - 120: 183, # 'x' - 121: 184, # 'y' - 122: 185, # 'z' - 123: 253, # '{' - 124: 253, # '|' - 125: 253, # '}' - 126: 253, # '~' - 127: 253, # '\x7f' - 128: 191, # '─' - 129: 192, # '│' - 130: 193, # '┌' - 131: 194, # '┐' - 132: 195, # '└' - 133: 196, # '┘' - 134: 197, # '├' - 135: 198, # '┤' - 136: 199, # '┬' - 137: 200, # '┴' - 138: 201, # '┼' - 139: 202, # '▀' - 140: 203, # '▄' - 141: 204, # '█' - 142: 205, # '▌' - 143: 206, # '▐' - 144: 207, # '░' - 145: 208, # '▒' - 146: 209, # '▓' - 147: 210, # '⌠' - 148: 211, # '■' - 149: 212, # '∙' - 150: 213, # '√' - 151: 214, # '≈' - 152: 215, # '≤' - 153: 216, # '≥' - 154: 217, # '\xa0' - 155: 218, # '⌡' - 156: 219, # '°' - 157: 220, # '²' - 158: 221, # '·' - 159: 222, # '÷' - 160: 223, # '═' - 161: 224, # '║' - 162: 225, # '╒' - 163: 68, # 'ё' - 164: 226, # '╓' - 165: 227, # '╔' - 166: 228, # '╕' - 167: 229, # '╖' - 168: 230, # '╗' - 169: 231, # '╘' - 170: 232, # '╙' - 171: 233, # '╚' - 172: 234, # '╛' - 173: 235, # '╜' - 174: 236, # '╝' - 175: 237, # '╞' - 176: 238, # '╟' - 177: 239, # '╠' - 178: 240, # '╡' - 179: 241, # 'Ё' - 180: 242, # '╢' - 181: 243, # '╣' - 182: 244, # '╤' - 183: 245, # '╥' - 184: 246, # '╦' - 185: 247, # '╧' - 186: 248, # '╨' - 187: 249, # '╩' - 188: 250, # '╪' 
- 189: 251, # '╫' - 190: 252, # '╬' - 191: 253, # '©' - 192: 27, # 'ю' - 193: 3, # 'а' - 194: 21, # 'б' - 195: 28, # 'ц' - 196: 13, # 'д' - 197: 2, # 'е' - 198: 39, # 'ф' - 199: 19, # 'г' - 200: 26, # 'х' - 201: 4, # 'и' - 202: 23, # 'й' - 203: 11, # 'к' - 204: 8, # 'л' - 205: 12, # 'м' - 206: 5, # 'н' - 207: 1, # 'о' - 208: 15, # 'п' - 209: 16, # 'я' - 210: 9, # 'р' - 211: 7, # 'с' - 212: 6, # 'т' - 213: 14, # 'у' - 214: 24, # 'ж' - 215: 10, # 'в' - 216: 17, # 'ь' - 217: 18, # 'ы' - 218: 20, # 'з' - 219: 25, # 'ш' - 220: 30, # 'э' - 221: 29, # 'щ' - 222: 22, # 'ч' - 223: 54, # 'ъ' - 224: 59, # 'Ю' - 225: 37, # 'А' - 226: 44, # 'Б' - 227: 58, # 'Ц' - 228: 41, # 'Д' - 229: 48, # 'Е' - 230: 53, # 'Ф' - 231: 46, # 'Г' - 232: 55, # 'Х' - 233: 42, # 'И' - 234: 60, # 'Й' - 235: 36, # 'К' - 236: 49, # 'Л' - 237: 38, # 'М' - 238: 31, # 'Н' - 239: 34, # 'О' - 240: 35, # 'П' - 241: 43, # 'Я' - 242: 45, # 'Р' - 243: 32, # 'С' - 244: 40, # 'Т' - 245: 52, # 'У' - 246: 56, # 'Ж' - 247: 33, # 'В' - 248: 61, # 'Ь' - 249: 62, # 'Ы' - 250: 51, # 'З' - 251: 57, # 'Ш' - 252: 47, # 'Э' - 253: 63, # 'Щ' - 254: 50, # 'Ч' - 255: 70, # 'Ъ' -} - -KOI8_R_RUSSIAN_MODEL = SingleByteCharSetModel( - charset_name="KOI8-R", - language="Russian", - char_to_order_map=KOI8_R_RUSSIAN_CHAR_TO_ORDER, - language_model=RUSSIAN_LANG_MODEL, - typical_positive_ratio=0.976601, - keep_ascii_letters=False, - alphabet="ЁАБВГДЕЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЫЬЭЮЯабвгдежзийклмнопрстуфхцчшщъыьэюяё", -) - -MACCYRILLIC_RUSSIAN_CHAR_TO_ORDER = { - 0: 255, # '\x00' - 1: 255, # '\x01' - 2: 255, # '\x02' - 3: 255, # '\x03' - 4: 255, # '\x04' - 5: 255, # '\x05' - 6: 255, # '\x06' - 7: 255, # '\x07' - 8: 255, # '\x08' - 9: 255, # '\t' - 10: 254, # '\n' - 11: 255, # '\x0b' - 12: 255, # '\x0c' - 13: 254, # '\r' - 14: 255, # '\x0e' - 15: 255, # '\x0f' - 16: 255, # '\x10' - 17: 255, # '\x11' - 18: 255, # '\x12' - 19: 255, # '\x13' - 20: 255, # '\x14' - 21: 255, # '\x15' - 22: 255, # '\x16' - 23: 255, # '\x17' - 24: 255, # '\x18' - 25: 255, # '\x19' - 26: 255, # '\x1a' - 27: 255, # '\x1b' - 28: 255, # '\x1c' - 29: 255, # '\x1d' - 30: 255, # '\x1e' - 31: 255, # '\x1f' - 32: 253, # ' ' - 33: 253, # '!' - 34: 253, # '"' - 35: 253, # '#' - 36: 253, # '$' - 37: 253, # '%' - 38: 253, # '&' - 39: 253, # "'" - 40: 253, # '(' - 41: 253, # ')' - 42: 253, # '*' - 43: 253, # '+' - 44: 253, # ',' - 45: 253, # '-' - 46: 253, # '.' - 47: 253, # '/' - 48: 252, # '0' - 49: 252, # '1' - 50: 252, # '2' - 51: 252, # '3' - 52: 252, # '4' - 53: 252, # '5' - 54: 252, # '6' - 55: 252, # '7' - 56: 252, # '8' - 57: 252, # '9' - 58: 253, # ':' - 59: 253, # ';' - 60: 253, # '<' - 61: 253, # '=' - 62: 253, # '>' - 63: 253, # '?' 
- 64: 253, # '@' - 65: 142, # 'A' - 66: 143, # 'B' - 67: 144, # 'C' - 68: 145, # 'D' - 69: 146, # 'E' - 70: 147, # 'F' - 71: 148, # 'G' - 72: 149, # 'H' - 73: 150, # 'I' - 74: 151, # 'J' - 75: 152, # 'K' - 76: 74, # 'L' - 77: 153, # 'M' - 78: 75, # 'N' - 79: 154, # 'O' - 80: 155, # 'P' - 81: 156, # 'Q' - 82: 157, # 'R' - 83: 158, # 'S' - 84: 159, # 'T' - 85: 160, # 'U' - 86: 161, # 'V' - 87: 162, # 'W' - 88: 163, # 'X' - 89: 164, # 'Y' - 90: 165, # 'Z' - 91: 253, # '[' - 92: 253, # '\\' - 93: 253, # ']' - 94: 253, # '^' - 95: 253, # '_' - 96: 253, # '`' - 97: 71, # 'a' - 98: 172, # 'b' - 99: 66, # 'c' - 100: 173, # 'd' - 101: 65, # 'e' - 102: 174, # 'f' - 103: 76, # 'g' - 104: 175, # 'h' - 105: 64, # 'i' - 106: 176, # 'j' - 107: 177, # 'k' - 108: 77, # 'l' - 109: 72, # 'm' - 110: 178, # 'n' - 111: 69, # 'o' - 112: 67, # 'p' - 113: 179, # 'q' - 114: 78, # 'r' - 115: 73, # 's' - 116: 180, # 't' - 117: 181, # 'u' - 118: 79, # 'v' - 119: 182, # 'w' - 120: 183, # 'x' - 121: 184, # 'y' - 122: 185, # 'z' - 123: 253, # '{' - 124: 253, # '|' - 125: 253, # '}' - 126: 253, # '~' - 127: 253, # '\x7f' - 128: 37, # 'А' - 129: 44, # 'Б' - 130: 33, # 'В' - 131: 46, # 'Г' - 132: 41, # 'Д' - 133: 48, # 'Е' - 134: 56, # 'Ж' - 135: 51, # 'З' - 136: 42, # 'И' - 137: 60, # 'Й' - 138: 36, # 'К' - 139: 49, # 'Л' - 140: 38, # 'М' - 141: 31, # 'Н' - 142: 34, # 'О' - 143: 35, # 'П' - 144: 45, # 'Р' - 145: 32, # 'С' - 146: 40, # 'Т' - 147: 52, # 'У' - 148: 53, # 'Ф' - 149: 55, # 'Х' - 150: 58, # 'Ц' - 151: 50, # 'Ч' - 152: 57, # 'Ш' - 153: 63, # 'Щ' - 154: 70, # 'Ъ' - 155: 62, # 'Ы' - 156: 61, # 'Ь' - 157: 47, # 'Э' - 158: 59, # 'Ю' - 159: 43, # 'Я' - 160: 191, # '†' - 161: 192, # '°' - 162: 193, # 'Ґ' - 163: 194, # '£' - 164: 195, # '§' - 165: 196, # '•' - 166: 197, # '¶' - 167: 198, # 'І' - 168: 199, # '®' - 169: 200, # '©' - 170: 201, # '™' - 171: 202, # 'Ђ' - 172: 203, # 'ђ' - 173: 204, # '≠' - 174: 205, # 'Ѓ' - 175: 206, # 'ѓ' - 176: 207, # '∞' - 177: 208, # '±' - 178: 209, # '≤' - 179: 210, # '≥' - 180: 211, # 'і' - 181: 212, # 'µ' - 182: 213, # 'ґ' - 183: 214, # 'Ј' - 184: 215, # 'Є' - 185: 216, # 'є' - 186: 217, # 'Ї' - 187: 218, # 'ї' - 188: 219, # 'Љ' - 189: 220, # 'љ' - 190: 221, # 'Њ' - 191: 222, # 'њ' - 192: 223, # 'ј' - 193: 224, # 'Ѕ' - 194: 225, # '¬' - 195: 226, # '√' - 196: 227, # 'ƒ' - 197: 228, # '≈' - 198: 229, # '∆' - 199: 230, # '«' - 200: 231, # '»' - 201: 232, # '…' - 202: 233, # '\xa0' - 203: 234, # 'Ћ' - 204: 235, # 'ћ' - 205: 236, # 'Ќ' - 206: 237, # 'ќ' - 207: 238, # 'ѕ' - 208: 239, # '–' - 209: 240, # '—' - 210: 241, # '“' - 211: 242, # '”' - 212: 243, # '‘' - 213: 244, # '’' - 214: 245, # '÷' - 215: 246, # '„' - 216: 247, # 'Ў' - 217: 248, # 'ў' - 218: 249, # 'Џ' - 219: 250, # 'џ' - 220: 251, # '№' - 221: 252, # 'Ё' - 222: 68, # 'ё' - 223: 16, # 'я' - 224: 3, # 'а' - 225: 21, # 'б' - 226: 10, # 'в' - 227: 19, # 'г' - 228: 13, # 'д' - 229: 2, # 'е' - 230: 24, # 'ж' - 231: 20, # 'з' - 232: 4, # 'и' - 233: 23, # 'й' - 234: 11, # 'к' - 235: 8, # 'л' - 236: 12, # 'м' - 237: 5, # 'н' - 238: 1, # 'о' - 239: 15, # 'п' - 240: 9, # 'р' - 241: 7, # 'с' - 242: 6, # 'т' - 243: 14, # 'у' - 244: 39, # 'ф' - 245: 26, # 'х' - 246: 28, # 'ц' - 247: 22, # 'ч' - 248: 25, # 'ш' - 249: 29, # 'щ' - 250: 54, # 'ъ' - 251: 18, # 'ы' - 252: 17, # 'ь' - 253: 30, # 'э' - 254: 27, # 'ю' - 255: 255, # '€' -} - -MACCYRILLIC_RUSSIAN_MODEL = SingleByteCharSetModel( - charset_name="MacCyrillic", - language="Russian", - char_to_order_map=MACCYRILLIC_RUSSIAN_CHAR_TO_ORDER, - language_model=RUSSIAN_LANG_MODEL, - 
typical_positive_ratio=0.976601, - keep_ascii_letters=False, - alphabet="ЁАБВГДЕЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЫЬЭЮЯабвгдежзийклмнопрстуфхцчшщъыьэюяё", -) - -ISO_8859_5_RUSSIAN_CHAR_TO_ORDER = { - 0: 255, # '\x00' - 1: 255, # '\x01' - 2: 255, # '\x02' - 3: 255, # '\x03' - 4: 255, # '\x04' - 5: 255, # '\x05' - 6: 255, # '\x06' - 7: 255, # '\x07' - 8: 255, # '\x08' - 9: 255, # '\t' - 10: 254, # '\n' - 11: 255, # '\x0b' - 12: 255, # '\x0c' - 13: 254, # '\r' - 14: 255, # '\x0e' - 15: 255, # '\x0f' - 16: 255, # '\x10' - 17: 255, # '\x11' - 18: 255, # '\x12' - 19: 255, # '\x13' - 20: 255, # '\x14' - 21: 255, # '\x15' - 22: 255, # '\x16' - 23: 255, # '\x17' - 24: 255, # '\x18' - 25: 255, # '\x19' - 26: 255, # '\x1a' - 27: 255, # '\x1b' - 28: 255, # '\x1c' - 29: 255, # '\x1d' - 30: 255, # '\x1e' - 31: 255, # '\x1f' - 32: 253, # ' ' - 33: 253, # '!' - 34: 253, # '"' - 35: 253, # '#' - 36: 253, # '$' - 37: 253, # '%' - 38: 253, # '&' - 39: 253, # "'" - 40: 253, # '(' - 41: 253, # ')' - 42: 253, # '*' - 43: 253, # '+' - 44: 253, # ',' - 45: 253, # '-' - 46: 253, # '.' - 47: 253, # '/' - 48: 252, # '0' - 49: 252, # '1' - 50: 252, # '2' - 51: 252, # '3' - 52: 252, # '4' - 53: 252, # '5' - 54: 252, # '6' - 55: 252, # '7' - 56: 252, # '8' - 57: 252, # '9' - 58: 253, # ':' - 59: 253, # ';' - 60: 253, # '<' - 61: 253, # '=' - 62: 253, # '>' - 63: 253, # '?' - 64: 253, # '@' - 65: 142, # 'A' - 66: 143, # 'B' - 67: 144, # 'C' - 68: 145, # 'D' - 69: 146, # 'E' - 70: 147, # 'F' - 71: 148, # 'G' - 72: 149, # 'H' - 73: 150, # 'I' - 74: 151, # 'J' - 75: 152, # 'K' - 76: 74, # 'L' - 77: 153, # 'M' - 78: 75, # 'N' - 79: 154, # 'O' - 80: 155, # 'P' - 81: 156, # 'Q' - 82: 157, # 'R' - 83: 158, # 'S' - 84: 159, # 'T' - 85: 160, # 'U' - 86: 161, # 'V' - 87: 162, # 'W' - 88: 163, # 'X' - 89: 164, # 'Y' - 90: 165, # 'Z' - 91: 253, # '[' - 92: 253, # '\\' - 93: 253, # ']' - 94: 253, # '^' - 95: 253, # '_' - 96: 253, # '`' - 97: 71, # 'a' - 98: 172, # 'b' - 99: 66, # 'c' - 100: 173, # 'd' - 101: 65, # 'e' - 102: 174, # 'f' - 103: 76, # 'g' - 104: 175, # 'h' - 105: 64, # 'i' - 106: 176, # 'j' - 107: 177, # 'k' - 108: 77, # 'l' - 109: 72, # 'm' - 110: 178, # 'n' - 111: 69, # 'o' - 112: 67, # 'p' - 113: 179, # 'q' - 114: 78, # 'r' - 115: 73, # 's' - 116: 180, # 't' - 117: 181, # 'u' - 118: 79, # 'v' - 119: 182, # 'w' - 120: 183, # 'x' - 121: 184, # 'y' - 122: 185, # 'z' - 123: 253, # '{' - 124: 253, # '|' - 125: 253, # '}' - 126: 253, # '~' - 127: 253, # '\x7f' - 128: 191, # '\x80' - 129: 192, # '\x81' - 130: 193, # '\x82' - 131: 194, # '\x83' - 132: 195, # '\x84' - 133: 196, # '\x85' - 134: 197, # '\x86' - 135: 198, # '\x87' - 136: 199, # '\x88' - 137: 200, # '\x89' - 138: 201, # '\x8a' - 139: 202, # '\x8b' - 140: 203, # '\x8c' - 141: 204, # '\x8d' - 142: 205, # '\x8e' - 143: 206, # '\x8f' - 144: 207, # '\x90' - 145: 208, # '\x91' - 146: 209, # '\x92' - 147: 210, # '\x93' - 148: 211, # '\x94' - 149: 212, # '\x95' - 150: 213, # '\x96' - 151: 214, # '\x97' - 152: 215, # '\x98' - 153: 216, # '\x99' - 154: 217, # '\x9a' - 155: 218, # '\x9b' - 156: 219, # '\x9c' - 157: 220, # '\x9d' - 158: 221, # '\x9e' - 159: 222, # '\x9f' - 160: 223, # '\xa0' - 161: 224, # 'Ё' - 162: 225, # 'Ђ' - 163: 226, # 'Ѓ' - 164: 227, # 'Є' - 165: 228, # 'Ѕ' - 166: 229, # 'І' - 167: 230, # 'Ї' - 168: 231, # 'Ј' - 169: 232, # 'Љ' - 170: 233, # 'Њ' - 171: 234, # 'Ћ' - 172: 235, # 'Ќ' - 173: 236, # '\xad' - 174: 237, # 'Ў' - 175: 238, # 'Џ' - 176: 37, # 'А' - 177: 44, # 'Б' - 178: 33, # 'В' - 179: 46, # 'Г' - 180: 41, # 'Д' - 181: 48, # 'Е' - 182: 56, # 'Ж' - 183: 51, 
# 'З' - 184: 42, # 'И' - 185: 60, # 'Й' - 186: 36, # 'К' - 187: 49, # 'Л' - 188: 38, # 'М' - 189: 31, # 'Н' - 190: 34, # 'О' - 191: 35, # 'П' - 192: 45, # 'Р' - 193: 32, # 'С' - 194: 40, # 'Т' - 195: 52, # 'У' - 196: 53, # 'Ф' - 197: 55, # 'Х' - 198: 58, # 'Ц' - 199: 50, # 'Ч' - 200: 57, # 'Ш' - 201: 63, # 'Щ' - 202: 70, # 'Ъ' - 203: 62, # 'Ы' - 204: 61, # 'Ь' - 205: 47, # 'Э' - 206: 59, # 'Ю' - 207: 43, # 'Я' - 208: 3, # 'а' - 209: 21, # 'б' - 210: 10, # 'в' - 211: 19, # 'г' - 212: 13, # 'д' - 213: 2, # 'е' - 214: 24, # 'ж' - 215: 20, # 'з' - 216: 4, # 'и' - 217: 23, # 'й' - 218: 11, # 'к' - 219: 8, # 'л' - 220: 12, # 'м' - 221: 5, # 'н' - 222: 1, # 'о' - 223: 15, # 'п' - 224: 9, # 'р' - 225: 7, # 'с' - 226: 6, # 'т' - 227: 14, # 'у' - 228: 39, # 'ф' - 229: 26, # 'х' - 230: 28, # 'ц' - 231: 22, # 'ч' - 232: 25, # 'ш' - 233: 29, # 'щ' - 234: 54, # 'ъ' - 235: 18, # 'ы' - 236: 17, # 'ь' - 237: 30, # 'э' - 238: 27, # 'ю' - 239: 16, # 'я' - 240: 239, # '№' - 241: 68, # 'ё' - 242: 240, # 'ђ' - 243: 241, # 'ѓ' - 244: 242, # 'є' - 245: 243, # 'ѕ' - 246: 244, # 'і' - 247: 245, # 'ї' - 248: 246, # 'ј' - 249: 247, # 'љ' - 250: 248, # 'њ' - 251: 249, # 'ћ' - 252: 250, # 'ќ' - 253: 251, # '§' - 254: 252, # 'ў' - 255: 255, # 'џ' -} - -ISO_8859_5_RUSSIAN_MODEL = SingleByteCharSetModel( - charset_name="ISO-8859-5", - language="Russian", - char_to_order_map=ISO_8859_5_RUSSIAN_CHAR_TO_ORDER, - language_model=RUSSIAN_LANG_MODEL, - typical_positive_ratio=0.976601, - keep_ascii_letters=False, - alphabet="ЁАБВГДЕЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЫЬЭЮЯабвгдежзийклмнопрстуфхцчшщъыьэюяё", -) diff --git a/spaces/Blockinger/OVAChatGPT/app.py b/spaces/Blockinger/OVAChatGPT/app.py deleted file mode 100644 index c4913405316207ca8d557ba1b3600e86f6c99ab2..0000000000000000000000000000000000000000 --- a/spaces/Blockinger/OVAChatGPT/app.py +++ /dev/null @@ -1,49 +0,0 @@ -import whisper -import gradio as gr -import time -from pyChatGPT import ChatGPT -import warnings - -warnings.filterwarnings("ignore") -secret_token = 
"eyJhbGciOiJkaXIiLCJlbmMiOiJBMjU2R0NNIn0..7mhijdQ19ze0tuj2.sjEQP7Gi4PYli8CUWMGC0GOj4tVDlZOmU8DWlCtRO-uUrkvPLn0hbhHpg0RMERCbBORAmHZ-wawqvZMmiZ2rfxBuaNw0aGp5bZ1IR2N3Na1qyTHuZ29D2TS287MJcsmfiXJaYTb_sxwxi8d9_uY4HEv6aseywDplaGQFmrLnRSKy2kbFXoYXmfXwolIm9miO68JXlYm4tRig2gNf5YsfEvd0rEBf7q7NzdSUud2DVYu7Q-qq6h_AVDCFFgJ7Y3zzhGSJ0c8DPHnynxBzv5KaB94GA05oqWtdqYxZGJ9xLaZOa-KFPGVRK7flzY3jCa8QKxHz8v-bFjdfb8jqU3cGjgyU47_B-P_aLaZ_K2ZFksjImgGzYvOGA1DIk9twImBpDwWtJwZuFifrsIL-d51D33xJJkP4LS_WKjRJJBkS2BPR2A8-NFJ3KaizZLiUdYa1pDlvTHpmd51QwJ7i4cjr41G4VY4O1n-sLgyFcoEbYKsIpuaiFffjyVg1WNJNLl-qOOzkslNjE-xcx5Y6Uo5fEurRfp1vEFlb2PXycaRRUT4r95_39PhSNZciHHFGkNSOoc7Zk90DtrJIcRvFA61YdgSo4_g5H90qSpDL_JvZIuAut58hWbBdxrZDV5c1GIgotX5ZmSyN9DuqRBTwEqdenqo1krMdZc6B1epHV0fON6Ur7PI_CgmIjBWclLc2uxSmnlvBJTvsDp-xOaJty6wFtkyui_QiLXZ0SzJAAOmDgXem91MtjfFF5h0Kc2AviplqyOw6BIFp7i5-oDEBI6FEpDyPNnSHJiMGP8HDV0RIqZs2RK4xUb1FolSqP-DfpZ7gikpcE_Gr-wSM8daHU65g--RxhtgJXi12pR0hz4io5qCfneF__D81Y6tc6x1T79ezPJh9lFRMxYO8G8tJHbUHioHmt4zt-NNy8o0h_fVD3qfRE2w46nju0DGLCw_A4VVL_gTxurVRutvVXj3mgVVEtWNG74jcPSR6jIwJTpmRl7pmj94pQqINJH_-365E2OG55HdZnFhil05_n3uN_0ZZIsaGbznmDvUaGSuwdrhnbnbLSMrAVC9CYs8Euovgh6hQd9aAo4vaboOOECZOeyTbyBmvaOqzHev0GMoXdeUXXCOJAlUQ3wYnFFZ27gr5NNAMqC7uSo0fVEupyop8m3hjjDrO4kVBarCF8IvA-hM2DZ-AUMWuyjcRn4gmJAJMLNxiBM6wPG_Y4HUuYTHZMW_-f5Id0csjCqo9f4TkMDZj6-h3CR6F-Uis80A0HzKSI9jIedrnkS6VfiZw-ZP4T9Ef3AemSqRvhgLN8kcfRwGcB9zotrKDHfrJJQHT4tIEzE5P6JC0KUiObId9nIGAe2IPCed66DbkLF8V_iiQ-ttuQ8ID6gTdSAo7Fl2iV140EbATlNRYCRjrcfFq_Vz2rSNoFJg3r-iE0xh4kmnYTkf7C8NUWY56-r39PP4qRaNbAIEePOBqBa7NKbUem9BSJDa9HZEL3HZ8Xo91yCNOexm-T_MW7_0QgsMbzaZhe3A9KxQjd-Iebn6NIANYqYZ-XubKL_S2OkUCfx_DCCsKFUhsjl1RMF3e-QwX-W2NeTtfpv-7XEe-SgyJsL9eXwYhdZioETLe0McvPNTA3MaZJ068AvWEPsh3NIt5TFIRFhwekGI6wbFIP_amtRSczhGIVmjZblSgvg7XL1V42_vTERz3yHkZmcF7-_T0OV08fOfDQmmU1Q8jUq-v3psPhpORhhzRgNl3gkBeXatDOTKAn6MrGz-EMb44_LVFcqelPABM0Y2H8-h8Yz2d5gdykEyLw6fMF0RCQ8P_OG2AFJMCEqcupzezuf5Q-bzHCZ0yIfksCOX_cMFqVJ2IJTn1SKljgpQ29Bm0IH7zdKWUiu54fVkW9Ie8OpnF6EckCatioCWY3tuA0A0mK2eob4Rn7RnC_KynIOoWjc7SOAjHX42g0rjKqPNk4eQGa7n0QZ5q1tuP2qtvQOhX5ai0QVj_zsGfFYVHtJ8mpA8Mq4fR_BZZ5XdHJF2PpFSXxsvfnZKK_Dz_UMjhwqUtezlxy7aQTr_Bs0JzVMfjfx7Y7sbM_ZymFmcUIq9QzYViMudwpESUjAKTQU_Aw9xIrNxzp-ZZA7EztXIrP5XQSEE0PTVutubJqXDKvTbxSA00c0zs0IF8yVXqi3tsIqsEmM2RFHkDjxwf-U6-ac3rsYtqoTC5nnj6dojw71bgDpwiFBvsrlIDKSgPu2yRM-EXwQFkLOttS4YAWu89yoJDaThRsDOP93wWaAmH1QQc8-kbWfbgOo2pREMUQBNw8pCZEUDMDm1TzNjfpuI8cABMtMu6AjUoesp1sXaa5ZLAeNvuraJ0dDQAvi8.4lEJ1vUP4kQSy_8xka6HOQ" -model = whisper.load_model("base") -model.device - -def transcribe(audio) : - - # load audio and pad/trim it to fit 30 seconds - audio = whisper.load_audio(audio) - audio = whisper.pad_or_trim(audio) - - # make log-Mel spectogram and move to the same device as the model - mel = whisper.log_mel_spectogram(audio).to(model.device) - - # detect the spoken language - _, probs = model.detect_language(mel) - - # decode audio - options = whisper.DecodingOptions() - result = whisper.decode(model, mel, options) - result_text = result.text - - # Pass the generated text to audio - chatgpt_api = ChatGPT(secret_token) - resp = chatgpt_api.send_message(result_text) - out_result = resp['message'] - - return [result_text, out_result] - -output_1 = gr.Textbox(label="Speech to Text") -output_2 = gr.Textbox(label="ChatGPT Output") - -gr.Interface( - title = 'OpenAI Whisper and ChatGPT ASR Gradio Web UI', - fn=transcribe, - inputs=[ - gr.inputs.Audio(source="microphone", type ="filepath") - ], - - outputs= [ - output_1, output_2 - ], - live=True).launch() \ No newline at end of file diff --git 
a/spaces/CALM/Dashboard/streamlit_observable/frontend/src/react-app-env.d.ts b/spaces/CALM/Dashboard/streamlit_observable/frontend/src/react-app-env.d.ts deleted file mode 100644 index 6431bc5fc6b2c932dfe5d0418fc667b86c18b9fc..0000000000000000000000000000000000000000 --- a/spaces/CALM/Dashboard/streamlit_observable/frontend/src/react-app-env.d.ts +++ /dev/null @@ -1 +0,0 @@ -/// diff --git a/spaces/CVPR/LIVE/ptr.h b/spaces/CVPR/LIVE/ptr.h deleted file mode 100644 index f3f8e43e148d6b0b2abec6a1d4b830a81982f50b..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/ptr.h +++ /dev/null @@ -1,23 +0,0 @@ -#pragma once - -#include - -/** - * Python doesn't have a pointer type, therefore we create a pointer wrapper - * see https://stackoverflow.com/questions/48982143/returning-and-passing-around-raw-pod-pointers-arrays-with-python-c-and-pyb?rq=1 - */ -template -class ptr { -public: - ptr() : p(nullptr) {} - ptr(T* p) : p(p) {} - ptr(std::size_t p) : p((T*)p) {} - ptr(const ptr& other) : ptr(other.p) {} - T* operator->() const { return p; } - T* get() const { return p; } - void destroy() { delete p; } - bool is_null() const { return p == nullptr; } - size_t as_size_t() const {return (size_t)p;} -private: - T* p; -}; diff --git a/spaces/CVPR/WALT/mmdet/core/bbox/assigners/assign_result.py b/spaces/CVPR/WALT/mmdet/core/bbox/assigners/assign_result.py deleted file mode 100644 index 4639fbdba0a5b92778e1ab87d61182e54bfb9b6f..0000000000000000000000000000000000000000 --- a/spaces/CVPR/WALT/mmdet/core/bbox/assigners/assign_result.py +++ /dev/null @@ -1,204 +0,0 @@ -import torch - -from mmdet.utils import util_mixins - - -class AssignResult(util_mixins.NiceRepr): - """Stores assignments between predicted and truth boxes. - - Attributes: - num_gts (int): the number of truth boxes considered when computing this - assignment - - gt_inds (LongTensor): for each predicted box indicates the 1-based - index of the assigned truth box. 0 means unassigned and -1 means - ignore. - - max_overlaps (FloatTensor): the iou between the predicted box and its - assigned truth box. - - labels (None | LongTensor): If specified, for each predicted box - indicates the category label of the assigned truth box. - - Example: - >>> # An assign result between 4 predicted boxes and 9 true boxes - >>> # where only two boxes were assigned. 
- >>> num_gts = 9 - >>> max_overlaps = torch.LongTensor([0, .5, .9, 0]) - >>> gt_inds = torch.LongTensor([-1, 1, 2, 0]) - >>> labels = torch.LongTensor([0, 3, 4, 0]) - >>> self = AssignResult(num_gts, gt_inds, max_overlaps, labels) - >>> print(str(self)) # xdoctest: +IGNORE_WANT - - >>> # Force addition of gt labels (when adding gt as proposals) - >>> new_labels = torch.LongTensor([3, 4, 5]) - >>> self.add_gt_(new_labels) - >>> print(str(self)) # xdoctest: +IGNORE_WANT - - """ - - def __init__(self, num_gts, gt_inds, max_overlaps, labels=None): - self.num_gts = num_gts - self.gt_inds = gt_inds - self.max_overlaps = max_overlaps - self.labels = labels - # Interface for possible user-defined properties - self._extra_properties = {} - - @property - def num_preds(self): - """int: the number of predictions in this assignment""" - return len(self.gt_inds) - - def set_extra_property(self, key, value): - """Set user-defined new property.""" - assert key not in self.info - self._extra_properties[key] = value - - def get_extra_property(self, key): - """Get user-defined property.""" - return self._extra_properties.get(key, None) - - @property - def info(self): - """dict: a dictionary of info about the object""" - basic_info = { - 'num_gts': self.num_gts, - 'num_preds': self.num_preds, - 'gt_inds': self.gt_inds, - 'max_overlaps': self.max_overlaps, - 'labels': self.labels, - } - basic_info.update(self._extra_properties) - return basic_info - - def __nice__(self): - """str: a "nice" summary string describing this assign result""" - parts = [] - parts.append(f'num_gts={self.num_gts!r}') - if self.gt_inds is None: - parts.append(f'gt_inds={self.gt_inds!r}') - else: - parts.append(f'gt_inds.shape={tuple(self.gt_inds.shape)!r}') - if self.max_overlaps is None: - parts.append(f'max_overlaps={self.max_overlaps!r}') - else: - parts.append('max_overlaps.shape=' - f'{tuple(self.max_overlaps.shape)!r}') - if self.labels is None: - parts.append(f'labels={self.labels!r}') - else: - parts.append(f'labels.shape={tuple(self.labels.shape)!r}') - return ', '.join(parts) - - @classmethod - def random(cls, **kwargs): - """Create random AssignResult for tests or debugging. - - Args: - num_preds: number of predicted boxes - num_gts: number of true boxes - p_ignore (float): probability of a predicted box assinged to an - ignored truth - p_assigned (float): probability of a predicted box not being - assigned - p_use_label (float | bool): with labels or not - rng (None | int | numpy.random.RandomState): seed or state - - Returns: - :obj:`AssignResult`: Randomly generated assign results. 
- - Example: - >>> from mmdet.core.bbox.assigners.assign_result import * # NOQA - >>> self = AssignResult.random() - >>> print(self.info) - """ - from mmdet.core.bbox import demodata - rng = demodata.ensure_rng(kwargs.get('rng', None)) - - num_gts = kwargs.get('num_gts', None) - num_preds = kwargs.get('num_preds', None) - p_ignore = kwargs.get('p_ignore', 0.3) - p_assigned = kwargs.get('p_assigned', 0.7) - p_use_label = kwargs.get('p_use_label', 0.5) - num_classes = kwargs.get('p_use_label', 3) - - if num_gts is None: - num_gts = rng.randint(0, 8) - if num_preds is None: - num_preds = rng.randint(0, 16) - - if num_gts == 0: - max_overlaps = torch.zeros(num_preds, dtype=torch.float32) - gt_inds = torch.zeros(num_preds, dtype=torch.int64) - if p_use_label is True or p_use_label < rng.rand(): - labels = torch.zeros(num_preds, dtype=torch.int64) - else: - labels = None - else: - import numpy as np - # Create an overlap for each predicted box - max_overlaps = torch.from_numpy(rng.rand(num_preds)) - - # Construct gt_inds for each predicted box - is_assigned = torch.from_numpy(rng.rand(num_preds) < p_assigned) - # maximum number of assignments constraints - n_assigned = min(num_preds, min(num_gts, is_assigned.sum())) - - assigned_idxs = np.where(is_assigned)[0] - rng.shuffle(assigned_idxs) - assigned_idxs = assigned_idxs[0:n_assigned] - assigned_idxs.sort() - - is_assigned[:] = 0 - is_assigned[assigned_idxs] = True - - is_ignore = torch.from_numpy( - rng.rand(num_preds) < p_ignore) & is_assigned - - gt_inds = torch.zeros(num_preds, dtype=torch.int64) - - true_idxs = np.arange(num_gts) - rng.shuffle(true_idxs) - true_idxs = torch.from_numpy(true_idxs) - gt_inds[is_assigned] = true_idxs[:n_assigned] - - gt_inds = torch.from_numpy( - rng.randint(1, num_gts + 1, size=num_preds)) - gt_inds[is_ignore] = -1 - gt_inds[~is_assigned] = 0 - max_overlaps[~is_assigned] = 0 - - if p_use_label is True or p_use_label < rng.rand(): - if num_classes == 0: - labels = torch.zeros(num_preds, dtype=torch.int64) - else: - labels = torch.from_numpy( - # remind that we set FG labels to [0, num_class-1] - # since mmdet v2.0 - # BG cat_id: num_class - rng.randint(0, num_classes, size=num_preds)) - labels[~is_assigned] = 0 - else: - labels = None - - self = cls(num_gts, gt_inds, max_overlaps, labels) - return self - - def add_gt_(self, gt_labels): - """Add ground truth as assigned results. - - Args: - gt_labels (torch.Tensor): Labels of gt boxes - """ - self_inds = torch.arange( - 1, len(gt_labels) + 1, dtype=torch.long, device=gt_labels.device) - self.gt_inds = torch.cat([self_inds, self.gt_inds]) - - self.max_overlaps = torch.cat( - [self.max_overlaps.new_ones(len(gt_labels)), self.max_overlaps]) - - if self.labels is not None: - self.labels = torch.cat([gt_labels, self.labels]) diff --git a/spaces/CVPR/WALT/mmdet/core/bbox/coder/legacy_delta_xywh_bbox_coder.py b/spaces/CVPR/WALT/mmdet/core/bbox/coder/legacy_delta_xywh_bbox_coder.py deleted file mode 100644 index 190309fd42a1b76c12c82fc1acf0511494be5ac3..0000000000000000000000000000000000000000 --- a/spaces/CVPR/WALT/mmdet/core/bbox/coder/legacy_delta_xywh_bbox_coder.py +++ /dev/null @@ -1,215 +0,0 @@ -import mmcv -import numpy as np -import torch - -from ..builder import BBOX_CODERS -from .base_bbox_coder import BaseBBoxCoder - - -@BBOX_CODERS.register_module() -class LegacyDeltaXYWHBBoxCoder(BaseBBoxCoder): - """Legacy Delta XYWH BBox coder used in MMDet V1.x. 
- - Following the practice in R-CNN [1]_, this coder encodes bbox (x1, y1, x2, - y2) into delta (dx, dy, dw, dh) and decodes delta (dx, dy, dw, dh) - back to original bbox (x1, y1, x2, y2). - - Note: - The main difference between :class`LegacyDeltaXYWHBBoxCoder` and - :class:`DeltaXYWHBBoxCoder` is whether ``+ 1`` is used during width and - height calculation. We suggest to only use this coder when testing with - MMDet V1.x models. - - References: - .. [1] https://arxiv.org/abs/1311.2524 - - Args: - target_means (Sequence[float]): denormalizing means of target for - delta coordinates - target_stds (Sequence[float]): denormalizing standard deviation of - target for delta coordinates - """ - - def __init__(self, - target_means=(0., 0., 0., 0.), - target_stds=(1., 1., 1., 1.)): - super(BaseBBoxCoder, self).__init__() - self.means = target_means - self.stds = target_stds - - def encode(self, bboxes, gt_bboxes): - """Get box regression transformation deltas that can be used to - transform the ``bboxes`` into the ``gt_bboxes``. - - Args: - bboxes (torch.Tensor): source boxes, e.g., object proposals. - gt_bboxes (torch.Tensor): target of the transformation, e.g., - ground-truth boxes. - - Returns: - torch.Tensor: Box transformation deltas - """ - assert bboxes.size(0) == gt_bboxes.size(0) - assert bboxes.size(-1) == gt_bboxes.size(-1) == 4 - encoded_bboxes = legacy_bbox2delta(bboxes, gt_bboxes, self.means, - self.stds) - return encoded_bboxes - - def decode(self, - bboxes, - pred_bboxes, - max_shape=None, - wh_ratio_clip=16 / 1000): - """Apply transformation `pred_bboxes` to `boxes`. - - Args: - boxes (torch.Tensor): Basic boxes. - pred_bboxes (torch.Tensor): Encoded boxes with shape - max_shape (tuple[int], optional): Maximum shape of boxes. - Defaults to None. - wh_ratio_clip (float, optional): The allowed ratio between - width and height. - - Returns: - torch.Tensor: Decoded boxes. - """ - assert pred_bboxes.size(0) == bboxes.size(0) - decoded_bboxes = legacy_delta2bbox(bboxes, pred_bboxes, self.means, - self.stds, max_shape, wh_ratio_clip) - - return decoded_bboxes - - -@mmcv.jit(coderize=True) -def legacy_bbox2delta(proposals, - gt, - means=(0., 0., 0., 0.), - stds=(1., 1., 1., 1.)): - """Compute deltas of proposals w.r.t. gt in the MMDet V1.x manner. - - We usually compute the deltas of x, y, w, h of proposals w.r.t ground - truth bboxes to get regression target. - This is the inverse function of `delta2bbox()` - - Args: - proposals (Tensor): Boxes to be transformed, shape (N, ..., 4) - gt (Tensor): Gt bboxes to be used as base, shape (N, ..., 4) - means (Sequence[float]): Denormalizing means for delta coordinates - stds (Sequence[float]): Denormalizing standard deviation for delta - coordinates - - Returns: - Tensor: deltas with shape (N, 4), where columns represent dx, dy, - dw, dh. 
- """ - assert proposals.size() == gt.size() - - proposals = proposals.float() - gt = gt.float() - px = (proposals[..., 0] + proposals[..., 2]) * 0.5 - py = (proposals[..., 1] + proposals[..., 3]) * 0.5 - pw = proposals[..., 2] - proposals[..., 0] + 1.0 - ph = proposals[..., 3] - proposals[..., 1] + 1.0 - - gx = (gt[..., 0] + gt[..., 2]) * 0.5 - gy = (gt[..., 1] + gt[..., 3]) * 0.5 - gw = gt[..., 2] - gt[..., 0] + 1.0 - gh = gt[..., 3] - gt[..., 1] + 1.0 - - dx = (gx - px) / pw - dy = (gy - py) / ph - dw = torch.log(gw / pw) - dh = torch.log(gh / ph) - deltas = torch.stack([dx, dy, dw, dh], dim=-1) - - means = deltas.new_tensor(means).unsqueeze(0) - stds = deltas.new_tensor(stds).unsqueeze(0) - deltas = deltas.sub_(means).div_(stds) - - return deltas - - -@mmcv.jit(coderize=True) -def legacy_delta2bbox(rois, - deltas, - means=(0., 0., 0., 0.), - stds=(1., 1., 1., 1.), - max_shape=None, - wh_ratio_clip=16 / 1000): - """Apply deltas to shift/scale base boxes in the MMDet V1.x manner. - - Typically the rois are anchor or proposed bounding boxes and the deltas are - network outputs used to shift/scale those boxes. - This is the inverse function of `bbox2delta()` - - Args: - rois (Tensor): Boxes to be transformed. Has shape (N, 4) - deltas (Tensor): Encoded offsets with respect to each roi. - Has shape (N, 4 * num_classes). Note N = num_anchors * W * H when - rois is a grid of anchors. Offset encoding follows [1]_. - means (Sequence[float]): Denormalizing means for delta coordinates - stds (Sequence[float]): Denormalizing standard deviation for delta - coordinates - max_shape (tuple[int, int]): Maximum bounds for boxes. specifies (H, W) - wh_ratio_clip (float): Maximum aspect ratio for boxes. - - Returns: - Tensor: Boxes with shape (N, 4), where columns represent - tl_x, tl_y, br_x, br_y. - - References: - .. [1] https://arxiv.org/abs/1311.2524 - - Example: - >>> rois = torch.Tensor([[ 0., 0., 1., 1.], - >>> [ 0., 0., 1., 1.], - >>> [ 0., 0., 1., 1.], - >>> [ 5., 5., 5., 5.]]) - >>> deltas = torch.Tensor([[ 0., 0., 0., 0.], - >>> [ 1., 1., 1., 1.], - >>> [ 0., 0., 2., -1.], - >>> [ 0.7, -1.9, -0.5, 0.3]]) - >>> legacy_delta2bbox(rois, deltas, max_shape=(32, 32)) - tensor([[0.0000, 0.0000, 1.5000, 1.5000], - [0.0000, 0.0000, 5.2183, 5.2183], - [0.0000, 0.1321, 7.8891, 0.8679], - [5.3967, 2.4251, 6.0033, 3.7749]]) - """ - means = deltas.new_tensor(means).repeat(1, deltas.size(1) // 4) - stds = deltas.new_tensor(stds).repeat(1, deltas.size(1) // 4) - denorm_deltas = deltas * stds + means - dx = denorm_deltas[:, 0::4] - dy = denorm_deltas[:, 1::4] - dw = denorm_deltas[:, 2::4] - dh = denorm_deltas[:, 3::4] - max_ratio = np.abs(np.log(wh_ratio_clip)) - dw = dw.clamp(min=-max_ratio, max=max_ratio) - dh = dh.clamp(min=-max_ratio, max=max_ratio) - # Compute center of each roi - px = ((rois[:, 0] + rois[:, 2]) * 0.5).unsqueeze(1).expand_as(dx) - py = ((rois[:, 1] + rois[:, 3]) * 0.5).unsqueeze(1).expand_as(dy) - # Compute width/height of each roi - pw = (rois[:, 2] - rois[:, 0] + 1.0).unsqueeze(1).expand_as(dw) - ph = (rois[:, 3] - rois[:, 1] + 1.0).unsqueeze(1).expand_as(dh) - # Use exp(network energy) to enlarge/shrink each roi - gw = pw * dw.exp() - gh = ph * dh.exp() - # Use network energy to shift the center of each roi - gx = px + pw * dx - gy = py + ph * dy - # Convert center-xy/width/height to top-left, bottom-right - - # The true legacy box coder should +- 0.5 here. 
- # However, current implementation improves the performance when testing - # the models trained in MMDetection 1.X (~0.5 bbox AP, 0.2 mask AP) - x1 = gx - gw * 0.5 - y1 = gy - gh * 0.5 - x2 = gx + gw * 0.5 - y2 = gy + gh * 0.5 - if max_shape is not None: - x1 = x1.clamp(min=0, max=max_shape[1] - 1) - y1 = y1.clamp(min=0, max=max_shape[0] - 1) - x2 = x2.clamp(min=0, max=max_shape[1] - 1) - y2 = y2.clamp(min=0, max=max_shape[0] - 1) - bboxes = torch.stack([x1, y1, x2, y2], dim=-1).view_as(deltas) - return bboxes diff --git a/spaces/CVPR/WALT/mmdet/core/bbox/match_costs/__init__.py b/spaces/CVPR/WALT/mmdet/core/bbox/match_costs/__init__.py deleted file mode 100644 index add5e0d394034d89b2d47c314ff1938294deb6ea..0000000000000000000000000000000000000000 --- a/spaces/CVPR/WALT/mmdet/core/bbox/match_costs/__init__.py +++ /dev/null @@ -1,7 +0,0 @@ -from .builder import build_match_cost -from .match_cost import BBoxL1Cost, ClassificationCost, FocalLossCost, IoUCost - -__all__ = [ - 'build_match_cost', 'ClassificationCost', 'BBoxL1Cost', 'IoUCost', - 'FocalLossCost' -] diff --git a/spaces/CVPR/drawings-to-human/frontend/svelte.config.js b/spaces/CVPR/drawings-to-human/frontend/svelte.config.js deleted file mode 100644 index 84ba69cbc92feabd4162d8d1e46796849651055c..0000000000000000000000000000000000000000 --- a/spaces/CVPR/drawings-to-human/frontend/svelte.config.js +++ /dev/null @@ -1,32 +0,0 @@ -import adapter from '@sveltejs/adapter-static'; -import preprocess from 'svelte-preprocess'; - -const dev = process.env.NODE_ENV === 'development'; - -console.log('dev', dev); -/** @type {import('@sveltejs/kit').Config} */ -const config = { - // Consult https://github.com/sveltejs/svelte-preprocess - // for more information about preprocessors - preprocess: preprocess({ - postcss: true - }), - - kit: { - paths: { - base: '/static' - }, - adapter: adapter({ - pages: 'build', - assets: 'build', - fallback: null, - precompress: false - }), - - prerender: { - default: true - } - } -}; - -export default config; diff --git a/spaces/CVPR/regionclip-demo/detectron2/layers/csrc/README.md b/spaces/CVPR/regionclip-demo/detectron2/layers/csrc/README.md deleted file mode 100644 index 778ed3da0bae89820831bcd8a72ff7b9cad8d4dd..0000000000000000000000000000000000000000 --- a/spaces/CVPR/regionclip-demo/detectron2/layers/csrc/README.md +++ /dev/null @@ -1,7 +0,0 @@ - - -To add a new Op: - -1. Create a new directory -2. Implement new ops there -3. Delcare its Python interface in `vision.cpp`. 
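
The three numbered steps in the README above are easier to follow with a concrete registration in hand. The sketch below is not taken from this repository or from detectron2's actual `vision.cpp`; it is a minimal, hypothetical illustration of the generic PyTorch C++-extension pattern that step 3 refers to. The op name `my_new_op`, its signature, and its trivial inlined body are all placeholders standing in for the real kernel a contributor would implement under its own directory in step 2.

```cpp
// Hypothetical sketch only — illustrates the usual pybind11 registration pattern
// for a PyTorch C++ extension; names and signatures are placeholders.
#include <torch/extension.h>

// Stand-in for step 2: a real op would live in its own directory and dispatch
// to CPU/CUDA kernels. This toy version just scales the input tensor.
torch::Tensor my_new_op(const torch::Tensor& input, double scale) {
  return input * scale;
}

// Step 3: declare the Python interface so the built extension exposes the op.
PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
  m.def("my_new_op", &my_new_op, "Toy op illustrating the registration pattern");
}
```

Once such an extension is built (for example with `torch.utils.cpp_extension`), the op becomes callable from Python as `ext.my_new_op(tensor, 2.0)`; again, these names are illustrative assumptions, not part of the repository above.
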
diff --git a/spaces/CikeyQI/Yunzai/Yunzai/plugins/other/setPubCk.js b/spaces/CikeyQI/Yunzai/Yunzai/plugins/other/setPubCk.js deleted file mode 100644 index 1f8324f08503827e21527a3c6ef702116c773bd3..0000000000000000000000000000000000000000 --- a/spaces/CikeyQI/Yunzai/Yunzai/plugins/other/setPubCk.js +++ /dev/null @@ -1,167 +0,0 @@ -import plugin from '../../lib/plugins/plugin.js' -import GsCfg from '../genshin/model/gsCfg.js' -import fs from 'node:fs' -import lodash from 'lodash' -import fetch from 'node-fetch' -import YAML from 'yaml' -import MysInfo from '../genshin/model/mys/mysInfo.js' -import common from '../../lib/common/common.js' - -export class setPubCk extends plugin { - constructor (e) { - super({ - name: '配置', - dsc: '#配置ck', - event: 'message', - priority: 700, - rule: [ - { - reg: '^#配置(ck|cookie)$|^#*配置公共查询ck$', - fnc: 'setPubCk', - permission: 'master' - }, - { - reg: '^#使用(全部|用户)ck$', - fnc: 'setUserCk', - permission: 'master' - } - ] - }) - - this.file = './plugins/genshin/config/mys.pubCk.yaml' - } - - /** 配置公共ck */ - async setPubCk () { - /** 设置上下文,后续接收到内容会执行doRep方法 */ - this.setContext('pubCk') - /** 回复 */ - await this.reply('请发送米游社cookie......\n配置后该ck将会加入公共查询池') - } - - async pubCk () { - let msg = this.e.msg - - if (!(/(ltoken|ltoken_v2)/.test(this.e.msg) && /(ltuid|ltmid_v2|account_mid_v2)/.test(this.e.msg))) { - this.e.reply('cookie错误,请发送正确的cookie') - return true - } - - this.finish('pubCk') - - let ck = msg.replace(/#|'|"/g, '') - let param = {} - ck.split(';').forEach((v) => { - // cookie_token_v2,ltoken_v2值也可能有= - // let tmp = lodash.trim(v).split('=') - let tmp = lodash.trim(v); - let index = tmp.indexOf("="); - param[tmp.slice(0,index)] = tmp.slice(index+1); - }) - - this.ck = '' - lodash.forEach(param, (v, k) => { - if (['ltoken', 'ltuid', 'cookie_token', 'account_id', 'cookie_token_v2', 'account_mid_v2', 'ltmid_v2', 'ltoken_v2'].includes(k)) { - this.ck += `${k}=${v};` - } - }) - - /** 检查ck是否失效 */ - if (!await this.checkCk()) { - logger.mark(`配置公共cookie错误:${this.checkMsg || 'cookie错误'}`) - await this.e.reply(`配置公共cookie错误:${this.checkMsg || 'cookie错误'}`) - return - } - - this.ltuid = param.ltuid - // 判断是否是v2版ck - if (param.cookie_token_v2 && (param.account_mid_v2 || param.ltoken_v2) && !(/(\d{4,9})/g).test(this.ltuid)) { - // 获取米游社通行证id - let userFullInfo = await this.getUserInfo() - if (userFullInfo?.data?.user_info) { - let userInfo = userFullInfo?.data?.user_info - this.ltuid = userInfo.uid - this.ck = `${this.ck}ltuid=${this.ltuid};` - } else { - logger.mark(`配置公共cookie错误:${userFullInfo.message || 'cookie错误'}`) - await this.e.reply(`配置公共cookie错误:${userFullInfo.message || 'cookie错误'}`) - return - } - } - - let ckArr = GsCfg.getConfig('mys', 'pubCk') || [] - - /** 判断是否重复 */ - for (let ck of ckArr) { - if (ck.includes(this.ltuid)) { - await this.e.reply('配置公共cookie错误:该ck已配置') - return - } - } - - ckArr.push(this.ck) - this.save(ckArr) - GsCfg.change_myspubCk() - - await this.e.reply(`配置公共ck成功:第${ckArr.length}个`) - } - - /** 检查ck是否可用 */ - async checkCk () { - let url = 'https://api-takumi.mihoyo.com/binding/api/getUserGameRolesByCookie?game_biz=hk4e_cn' - let res = await fetch(url, { method: 'get', headers: { Cookie: this.ck } }) - if (!res.ok) return false - res = await res.json() - if (res.retcode != 0) { - this.checkMsg = res.message - return false - } - - return true - } - - // 获取米游社通行证id - async getUserInfo (server = 'mys') { - try { - const that = this - let url = { - mys: 'https://bbs-api.mihoyo.com/user/wapi/getUserFullInfo?gids=2', - hoyolab: '' - } 
- let res = await fetch(url[server], { - method: 'get', - headers: { - Cookie: that.ck, - Accept: 'application/json, text/plain, */*', - Connection: 'keep-alive', - Host: 'bbs-api.mihoyo.com', - Origin: 'https://m.bbs.mihoyo.com', - Referer: ' https://m.bbs.mihoyo.com/' - } - }) - if (!res.ok) return res - res = await res.json() - return res - } catch (e) { - return null - } - } - - save (data) { - data = YAML.stringify(data) - fs.writeFileSync(this.file, data) - } - - async setUserCk () { - let set = './plugins/genshin/config/mys.set.yaml' - - let config = fs.readFileSync(set, 'utf8') - config = config.replace(/allowUseCookie: [0-1]/g, 'allowUseCookie: 1') - fs.writeFileSync(set, config, 'utf8') - - await common.sleep(500) - await MysInfo.initCache(true) - - await this.reply('开启成功,用户ck已加入公共查询ck池') - } -} diff --git a/spaces/Cloudyy/bark-voice-cloning/README.md b/spaces/Cloudyy/bark-voice-cloning/README.md deleted file mode 100644 index 0201ebf6de813acfb8bfd4997583bc5f5c0d036e..0000000000000000000000000000000000000000 --- a/spaces/Cloudyy/bark-voice-cloning/README.md +++ /dev/null @@ -1,16 +0,0 @@ ---- -title: Bark Voice Cloning -emoji: 🐶 -colorFrom: blue -colorTo: green -sdk: gradio -sdk_version: 3.29.0 -python_version: 3.10.11 -app_file: app.py -models: -- facebook/hubert-base-ls960 -- GitMylo/bark-voice-cloning -pinned: false -license: mit -duplicated_from: GitMylo/bark-voice-cloning ---- diff --git a/spaces/CofAI/chat.v1/temp.py b/spaces/CofAI/chat.v1/temp.py deleted file mode 100644 index fab040ff070d12bd78f8bbf2b2e78ac27e6ed65b..0000000000000000000000000000000000000000 --- a/spaces/CofAI/chat.v1/temp.py +++ /dev/null @@ -1,4 +0,0 @@ -import pandas as pd - -pd = pd.DataFrame({'address':[], 'car_num': [], 'lat': [], 'long': [], 'time': [], 'date': []}) -pd.to_csv('data.csv', index=False) \ No newline at end of file diff --git a/spaces/CosmoAI/ChitChat/app.py b/spaces/CosmoAI/ChitChat/app.py deleted file mode 100644 index 757a583190417a7f25c7b1def792bab00e8807f1..0000000000000000000000000000000000000000 --- a/spaces/CosmoAI/ChitChat/app.py +++ /dev/null @@ -1,110 +0,0 @@ -import gradio -from transformers import pipeline - -# Initialize the Hugging Face model -model = pipeline(model='google/flan-t5-base') - - -# Define the chatbot function -def chatbot(input_text): - # Generate a response from the Hugging Face model - response = model(input_text, max_length=250, do_sample=True)[0]['generated_text'].strip() - - # Return the bot response - return response - -# Define the Gradio interface -gradio_interface = gradio.Interface( - fn=chatbot, - inputs='text', - outputs='text', - title='Chatbot', - description='A weird chatbot conversations experience.', - examples=[ - ['Hi, how are you?'] - ] -) - -# Launch the Gradio interface -gradio_interface.launch() - - - - - -# from dotenv import load_dotenv -# from langchain import HuggingFaceHub, LLMChain -# from langchain import PromptTemplates -# import gradio - -# load_dotenv() -# os.getenv('HF_API') - -# hub_llm = HuggingFaceHub(repo_id='facebook/blenderbot-400M-distill') - -# prompt = prompt_templates( -# input_variable = ["question"], -# template = "Answer is: {question}" -# ) - -# hub_chain = LLMChain(prompt=prompt, llm=hub_llm, verbose=True) - - - - - -# Sample code for AI language model interaction -# from transformers import GPT2Tokenizer, GPT2LMHeadModel -# import gradio - - -# def simptok(data): -# # Load pre-trained model and tokenizer (using the transformers library) -# model_name = "gpt2" -# tokenizer = 
GPT2Tokenizer.from_pretrained(model_name) -# model = GPT2LMHeadModel.from_pretrained(model_name) - -# # User input -# user_input = data - -# # Tokenize input -# input_ids = tokenizer.encode(user_input, return_tensors="pt") - -# # Generate response -# output = model.generate(input_ids, max_length=50, num_return_sequences=1) -# response = tokenizer.decode(output[0], skip_special_tokens=True) -# return response - - -# def responsenew(data): -# return simptok(data) - - -# from hugchat import hugchat -# import gradio as gr -# import time - -# # Create a chatbot connection -# chatbot = hugchat.ChatBot(cookie_path="cookies.json") - -# # New a conversation (ignore error) -# id = chatbot.new_conversation() -# chatbot.change_conversation(id) - - -# def get_answer(data): -# return chatbot.chat(data) - -# gradio_interface = gr.Interface( -# fn = get_answer, -# inputs = "text", -# outputs = "text" -# ) -# gradio_interface.launch() - -# gradio_interface = gradio.Interface( -# fn = responsenew, -# inputs = "text", -# outputs = "text" -# ) -# gradio_interface.launch() diff --git a/spaces/CrucibleAI/ControlNetMediaPipeFaceSD21/cldm/ddim_hacked.py b/spaces/CrucibleAI/ControlNetMediaPipeFaceSD21/cldm/ddim_hacked.py deleted file mode 100644 index db9503725811e05c3713a9e1095ee6b507c3c0f3..0000000000000000000000000000000000000000 --- a/spaces/CrucibleAI/ControlNetMediaPipeFaceSD21/cldm/ddim_hacked.py +++ /dev/null @@ -1,317 +0,0 @@ -"""SAMPLING ONLY.""" - -import torch -import numpy as np -from tqdm import tqdm - -from ldm.modules.diffusionmodules.util import make_ddim_sampling_parameters, make_ddim_timesteps, noise_like, extract_into_tensor - - -class DDIMSampler(object): - def __init__(self, model, schedule="linear", **kwargs): - super().__init__() - self.model = model - self.ddpm_num_timesteps = model.num_timesteps - self.schedule = schedule - - def register_buffer(self, name, attr): - # Do not force attr to CUDA device by default. It may not exist. - #if type(attr) == torch.Tensor: - # if attr.device != torch.device("cuda"): - # attr = attr.to(torch.device("cuda")) - setattr(self, name, attr) - - def make_schedule(self, ddim_num_steps, ddim_discretize="uniform", ddim_eta=0., verbose=True): - self.ddim_timesteps = make_ddim_timesteps(ddim_discr_method=ddim_discretize, num_ddim_timesteps=ddim_num_steps, - num_ddpm_timesteps=self.ddpm_num_timesteps,verbose=verbose) - alphas_cumprod = self.model.alphas_cumprod - assert alphas_cumprod.shape[0] == self.ddpm_num_timesteps, 'alphas have to be defined for each timestep' - to_torch = lambda x: x.clone().detach().to(torch.float32).to(self.model.device) - - self.register_buffer('betas', to_torch(self.model.betas)) - self.register_buffer('alphas_cumprod', to_torch(alphas_cumprod)) - self.register_buffer('alphas_cumprod_prev', to_torch(self.model.alphas_cumprod_prev)) - - # calculations for diffusion q(x_t | x_{t-1}) and others - self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod.cpu()))) - self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod.cpu()))) - self.register_buffer('log_one_minus_alphas_cumprod', to_torch(np.log(1. - alphas_cumprod.cpu()))) - self.register_buffer('sqrt_recip_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod.cpu()))) - self.register_buffer('sqrt_recipm1_alphas_cumprod', to_torch(np.sqrt(1. 
/ alphas_cumprod.cpu() - 1))) - - # ddim sampling parameters - ddim_sigmas, ddim_alphas, ddim_alphas_prev = make_ddim_sampling_parameters(alphacums=alphas_cumprod.cpu(), - ddim_timesteps=self.ddim_timesteps, - eta=ddim_eta,verbose=verbose) - self.register_buffer('ddim_sigmas', ddim_sigmas) - self.register_buffer('ddim_alphas', ddim_alphas) - self.register_buffer('ddim_alphas_prev', ddim_alphas_prev) - self.register_buffer('ddim_sqrt_one_minus_alphas', np.sqrt(1. - ddim_alphas)) - sigmas_for_original_sampling_steps = ddim_eta * torch.sqrt( - (1 - self.alphas_cumprod_prev) / (1 - self.alphas_cumprod) * ( - 1 - self.alphas_cumprod / self.alphas_cumprod_prev)) - self.register_buffer('ddim_sigmas_for_original_num_steps', sigmas_for_original_sampling_steps) - - @torch.no_grad() - def sample(self, - S, - batch_size, - shape, - conditioning=None, - callback=None, - normals_sequence=None, - img_callback=None, - quantize_x0=False, - eta=0., - mask=None, - x0=None, - temperature=1., - noise_dropout=0., - score_corrector=None, - corrector_kwargs=None, - verbose=True, - x_T=None, - log_every_t=100, - unconditional_guidance_scale=1., - unconditional_conditioning=None, # this has to come in the same format as the conditioning, # e.g. as encoded tokens, ... - dynamic_threshold=None, - ucg_schedule=None, - **kwargs - ): - if conditioning is not None: - if isinstance(conditioning, dict): - ctmp = conditioning[list(conditioning.keys())[0]] - while isinstance(ctmp, list): ctmp = ctmp[0] - cbs = ctmp.shape[0] - if cbs != batch_size: - print(f"Warning: Got {cbs} conditionings but batch-size is {batch_size}") - - elif isinstance(conditioning, list): - for ctmp in conditioning: - if ctmp.shape[0] != batch_size: - print(f"Warning: Got {cbs} conditionings but batch-size is {batch_size}") - - else: - if conditioning.shape[0] != batch_size: - print(f"Warning: Got {conditioning.shape[0]} conditionings but batch-size is {batch_size}") - - self.make_schedule(ddim_num_steps=S, ddim_eta=eta, verbose=verbose) - # sampling - C, H, W = shape - size = (batch_size, C, H, W) - print(f'Data shape for DDIM sampling is {size}, eta {eta}') - - samples, intermediates = self.ddim_sampling(conditioning, size, - callback=callback, - img_callback=img_callback, - quantize_denoised=quantize_x0, - mask=mask, x0=x0, - ddim_use_original_steps=False, - noise_dropout=noise_dropout, - temperature=temperature, - score_corrector=score_corrector, - corrector_kwargs=corrector_kwargs, - x_T=x_T, - log_every_t=log_every_t, - unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=unconditional_conditioning, - dynamic_threshold=dynamic_threshold, - ucg_schedule=ucg_schedule - ) - return samples, intermediates - - @torch.no_grad() - def ddim_sampling(self, cond, shape, - x_T=None, ddim_use_original_steps=False, - callback=None, timesteps=None, quantize_denoised=False, - mask=None, x0=None, img_callback=None, log_every_t=100, - temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None, - unconditional_guidance_scale=1., unconditional_conditioning=None, dynamic_threshold=None, - ucg_schedule=None): - device = self.model.betas.device - b = shape[0] - if x_T is None: - img = torch.randn(shape, device=device) - else: - img = x_T - - if timesteps is None: - timesteps = self.ddpm_num_timesteps if ddim_use_original_steps else self.ddim_timesteps - elif timesteps is not None and not ddim_use_original_steps: - subset_end = int(min(timesteps / self.ddim_timesteps.shape[0], 1) * self.ddim_timesteps.shape[0]) - 1 
- timesteps = self.ddim_timesteps[:subset_end] - - intermediates = {'x_inter': [img], 'pred_x0': [img]} - time_range = reversed(range(0,timesteps)) if ddim_use_original_steps else np.flip(timesteps) - total_steps = timesteps if ddim_use_original_steps else timesteps.shape[0] - print(f"Running DDIM Sampling with {total_steps} timesteps") - - iterator = tqdm(time_range, desc='DDIM Sampler', total=total_steps) - - for i, step in enumerate(iterator): - index = total_steps - i - 1 - ts = torch.full((b,), step, device=device, dtype=torch.long) - - if mask is not None: - assert x0 is not None - img_orig = self.model.q_sample(x0, ts) # TODO: deterministic forward pass? - img = img_orig * mask + (1. - mask) * img - - if ucg_schedule is not None: - assert len(ucg_schedule) == len(time_range) - unconditional_guidance_scale = ucg_schedule[i] - - outs = self.p_sample_ddim(img, cond, ts, index=index, use_original_steps=ddim_use_original_steps, - quantize_denoised=quantize_denoised, temperature=temperature, - noise_dropout=noise_dropout, score_corrector=score_corrector, - corrector_kwargs=corrector_kwargs, - unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=unconditional_conditioning, - dynamic_threshold=dynamic_threshold) - img, pred_x0 = outs - if callback: callback(i) - if img_callback: img_callback(pred_x0, i) - - if index % log_every_t == 0 or index == total_steps - 1: - intermediates['x_inter'].append(img) - intermediates['pred_x0'].append(pred_x0) - - return img, intermediates - - @torch.no_grad() - def p_sample_ddim(self, x, c, t, index, repeat_noise=False, use_original_steps=False, quantize_denoised=False, - temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None, - unconditional_guidance_scale=1., unconditional_conditioning=None, - dynamic_threshold=None): - b, *_, device = *x.shape, x.device - - if unconditional_conditioning is None or unconditional_guidance_scale == 1.: - model_output = self.model.apply_model(x, t, c) - else: - model_t = self.model.apply_model(x, t, c) - model_uncond = self.model.apply_model(x, t, unconditional_conditioning) - model_output = model_uncond + unconditional_guidance_scale * (model_t - model_uncond) - - if self.model.parameterization == "v": - e_t = self.model.predict_eps_from_z_and_v(x, t, model_output) - else: - e_t = model_output - - if score_corrector is not None: - assert self.model.parameterization == "eps", 'not implemented' - e_t = score_corrector.modify_score(self.model, e_t, x, t, c, **corrector_kwargs) - - alphas = self.model.alphas_cumprod if use_original_steps else self.ddim_alphas - alphas_prev = self.model.alphas_cumprod_prev if use_original_steps else self.ddim_alphas_prev - sqrt_one_minus_alphas = self.model.sqrt_one_minus_alphas_cumprod if use_original_steps else self.ddim_sqrt_one_minus_alphas - sigmas = self.model.ddim_sigmas_for_original_num_steps if use_original_steps else self.ddim_sigmas - # select parameters corresponding to the currently considered timestep - a_t = torch.full((b, 1, 1, 1), alphas[index], device=device) - a_prev = torch.full((b, 1, 1, 1), alphas_prev[index], device=device) - sigma_t = torch.full((b, 1, 1, 1), sigmas[index], device=device) - sqrt_one_minus_at = torch.full((b, 1, 1, 1), sqrt_one_minus_alphas[index],device=device) - - # current prediction for x_0 - if self.model.parameterization != "v": - pred_x0 = (x - sqrt_one_minus_at * e_t) / a_t.sqrt() - else: - pred_x0 = self.model.predict_start_from_z_and_v(x, t, model_output) - - if quantize_denoised: - 
pred_x0, _, *_ = self.model.first_stage_model.quantize(pred_x0) - - if dynamic_threshold is not None: - raise NotImplementedError() - - # direction pointing to x_t - dir_xt = (1. - a_prev - sigma_t**2).sqrt() * e_t - noise = sigma_t * noise_like(x.shape, device, repeat_noise) * temperature - if noise_dropout > 0.: - noise = torch.nn.functional.dropout(noise, p=noise_dropout) - x_prev = a_prev.sqrt() * pred_x0 + dir_xt + noise - return x_prev, pred_x0 - - @torch.no_grad() - def encode(self, x0, c, t_enc, use_original_steps=False, return_intermediates=None, - unconditional_guidance_scale=1.0, unconditional_conditioning=None, callback=None): - num_reference_steps = self.ddpm_num_timesteps if use_original_steps else self.ddim_timesteps.shape[0] - - assert t_enc <= num_reference_steps - num_steps = t_enc - - if use_original_steps: - alphas_next = self.alphas_cumprod[:num_steps] - alphas = self.alphas_cumprod_prev[:num_steps] - else: - alphas_next = self.ddim_alphas[:num_steps] - alphas = torch.tensor(self.ddim_alphas_prev[:num_steps]) - - x_next = x0 - intermediates = [] - inter_steps = [] - for i in tqdm(range(num_steps), desc='Encoding Image'): - t = torch.full((x0.shape[0],), i, device=self.model.device, dtype=torch.long) - if unconditional_guidance_scale == 1.: - noise_pred = self.model.apply_model(x_next, t, c) - else: - assert unconditional_conditioning is not None - e_t_uncond, noise_pred = torch.chunk( - self.model.apply_model(torch.cat((x_next, x_next)), torch.cat((t, t)), - torch.cat((unconditional_conditioning, c))), 2) - noise_pred = e_t_uncond + unconditional_guidance_scale * (noise_pred - e_t_uncond) - - xt_weighted = (alphas_next[i] / alphas[i]).sqrt() * x_next - weighted_noise_pred = alphas_next[i].sqrt() * ( - (1 / alphas_next[i] - 1).sqrt() - (1 / alphas[i] - 1).sqrt()) * noise_pred - x_next = xt_weighted + weighted_noise_pred - if return_intermediates and i % ( - num_steps // return_intermediates) == 0 and i < num_steps - 1: - intermediates.append(x_next) - inter_steps.append(i) - elif return_intermediates and i >= num_steps - 2: - intermediates.append(x_next) - inter_steps.append(i) - if callback: callback(i) - - out = {'x_encoded': x_next, 'intermediate_steps': inter_steps} - if return_intermediates: - out.update({'intermediates': intermediates}) - return x_next, out - - @torch.no_grad() - def stochastic_encode(self, x0, t, use_original_steps=False, noise=None): - # fast, but does not allow for exact reconstruction - # t serves as an index to gather the correct alphas - if use_original_steps: - sqrt_alphas_cumprod = self.sqrt_alphas_cumprod - sqrt_one_minus_alphas_cumprod = self.sqrt_one_minus_alphas_cumprod - else: - sqrt_alphas_cumprod = torch.sqrt(self.ddim_alphas) - sqrt_one_minus_alphas_cumprod = self.ddim_sqrt_one_minus_alphas - - if noise is None: - noise = torch.randn_like(x0) - return (extract_into_tensor(sqrt_alphas_cumprod, t, x0.shape) * x0 + - extract_into_tensor(sqrt_one_minus_alphas_cumprod, t, x0.shape) * noise) - - @torch.no_grad() - def decode(self, x_latent, cond, t_start, unconditional_guidance_scale=1.0, unconditional_conditioning=None, - use_original_steps=False, callback=None): - - timesteps = np.arange(self.ddpm_num_timesteps) if use_original_steps else self.ddim_timesteps - timesteps = timesteps[:t_start] - - time_range = np.flip(timesteps) - total_steps = timesteps.shape[0] - print(f"Running DDIM Sampling with {total_steps} timesteps") - - iterator = tqdm(time_range, desc='Decoding image', total=total_steps) - x_dec = x_latent - for i, step in 
enumerate(iterator): - index = total_steps - i - 1 - ts = torch.full((x_latent.shape[0],), step, device=x_latent.device, dtype=torch.long) - x_dec, _ = self.p_sample_ddim(x_dec, cond, ts, index=index, use_original_steps=use_original_steps, - unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=unconditional_conditioning) - if callback: callback(i) - return x_dec \ No newline at end of file diff --git a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/csrc/cuda/vision.h b/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/csrc/cuda/vision.h deleted file mode 100644 index ff02d612304120f86dfc0940a745250594adb267..0000000000000000000000000000000000000000 --- a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/csrc/cuda/vision.h +++ /dev/null @@ -1,121 +0,0 @@ -// Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. -#pragma once -#include - - -at::Tensor SigmoidFocalLoss_forward_cuda( - const at::Tensor& logits, - const at::Tensor& targets, - const int num_classes, - const float gamma, - const float alpha); - -at::Tensor SigmoidFocalLoss_backward_cuda( - const at::Tensor& logits, - const at::Tensor& targets, - const at::Tensor& d_losses, - const int num_classes, - const float gamma, - const float alpha); - -at::Tensor ROIAlign_forward_cuda(const at::Tensor& input, - const at::Tensor& rois, - const float spatial_scale, - const int pooled_height, - const int pooled_width, - const int sampling_ratio); - -at::Tensor ROIAlign_backward_cuda(const at::Tensor& grad, - const at::Tensor& rois, - const float spatial_scale, - const int pooled_height, - const int pooled_width, - const int batch_size, - const int channels, - const int height, - const int width, - const int sampling_ratio); - - -std::tuple ROIPool_forward_cuda(const at::Tensor& input, - const at::Tensor& rois, - const float spatial_scale, - const int pooled_height, - const int pooled_width); - -at::Tensor ROIPool_backward_cuda(const at::Tensor& grad, - const at::Tensor& input, - const at::Tensor& rois, - const at::Tensor& argmax, - const float spatial_scale, - const int pooled_height, - const int pooled_width, - const int batch_size, - const int channels, - const int height, - const int width); - -at::Tensor nms_cuda(const at::Tensor boxes, float nms_overlap_thresh); - - -at::Tensor compute_flow_cuda(const at::Tensor& boxes, - const int height, - const int width); - -at::Tensor -dcn_v2_cuda_forward(const at::Tensor &input, - const at::Tensor &weight, - const at::Tensor &bias, - const at::Tensor &offset, - const at::Tensor &mask, - const int kernel_h, - const int kernel_w, - const int stride_h, - const int stride_w, - const int pad_h, - const int pad_w, - const int dilation_h, - const int dilation_w, - const int deformable_group); - -std::vector -dcn_v2_cuda_backward(const at::Tensor &input, - const at::Tensor &weight, - const at::Tensor &bias, - const at::Tensor &offset, - const at::Tensor &mask, - const at::Tensor &grad_output, - int kernel_h, int kernel_w, - int stride_h, int stride_w, - int pad_h, int pad_w, - int dilation_h, int dilation_w, - int deformable_group); - - -std::tuple -dcn_v2_psroi_pooling_cuda_forward(const at::Tensor &input, - const at::Tensor &bbox, - const at::Tensor &trans, - const int no_trans, - const float spatial_scale, - const int output_dim, - const int group_size, - const int pooled_size, - const int part_size, - const int sample_per_part, - const float trans_std); - -std::tuple -dcn_v2_psroi_pooling_cuda_backward(const at::Tensor &out_grad, - const at::Tensor 
&input, - const at::Tensor &bbox, - const at::Tensor &trans, - const at::Tensor &top_count, - const int no_trans, - const float spatial_scale, - const int output_dim, - const int group_size, - const int pooled_size, - const int part_size, - const int sample_per_part, - const float trans_std); diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/aiofiles/threadpool/binary.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/aiofiles/threadpool/binary.py deleted file mode 100644 index 52d0cb30a3db51d1b8686001882136d69a1dc0fa..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/aiofiles/threadpool/binary.py +++ /dev/null @@ -1,108 +0,0 @@ -from ..base import AsyncBase, AsyncIndirectBase -from .utils import ( - delegate_to_executor, - proxy_method_directly, - proxy_property_directly, -) - - -@delegate_to_executor( - "close", - "flush", - "isatty", - "read", - "read1", - "readinto", - "readline", - "readlines", - "seek", - "seekable", - "tell", - "truncate", - "writable", - "write", - "writelines", -) -@proxy_method_directly("detach", "fileno", "readable") -@proxy_property_directly("closed", "raw", "name", "mode") -class AsyncBufferedIOBase(AsyncBase): - """The asyncio executor version of io.BufferedWriter and BufferedIOBase.""" - - -@delegate_to_executor("peek") -class AsyncBufferedReader(AsyncBufferedIOBase): - """The asyncio executor version of io.BufferedReader and Random.""" - - -@delegate_to_executor( - "close", - "flush", - "isatty", - "read", - "readall", - "readinto", - "readline", - "readlines", - "seek", - "seekable", - "tell", - "truncate", - "writable", - "write", - "writelines", -) -@proxy_method_directly("fileno", "readable") -@proxy_property_directly("closed", "name", "mode") -class AsyncFileIO(AsyncBase): - """The asyncio executor version of io.FileIO.""" - - -@delegate_to_executor( - "close", - "flush", - "isatty", - "read", - "read1", - "readinto", - "readline", - "readlines", - "seek", - "seekable", - "tell", - "truncate", - "writable", - "write", - "writelines", -) -@proxy_method_directly("detach", "fileno", "readable") -@proxy_property_directly("closed", "raw", "name", "mode") -class AsyncIndirectBufferedIOBase(AsyncIndirectBase): - """The indirect asyncio executor version of io.BufferedWriter and BufferedIOBase.""" - - -@delegate_to_executor("peek") -class AsyncIndirectBufferedReader(AsyncIndirectBufferedIOBase): - """The indirect asyncio executor version of io.BufferedReader and Random.""" - - -@delegate_to_executor( - "close", - "flush", - "isatty", - "read", - "readall", - "readinto", - "readline", - "readlines", - "seek", - "seekable", - "tell", - "truncate", - "writable", - "write", - "writelines", -) -@proxy_method_directly("fileno", "readable") -@proxy_property_directly("closed", "name", "mode") -class AsyncIndirectFileIO(AsyncIndirectBase): - """The indirect asyncio executor version of io.FileIO.""" diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/httpx/_transports/wsgi.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/httpx/_transports/wsgi.py deleted file mode 100644 index 33035ce586312d8722893e288a1bcadb20548a3f..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/httpx/_transports/wsgi.py +++ /dev/null @@ -1,143 +0,0 @@ -import io -import itertools -import sys -import typing - -from .._models import Request, Response -from .._types import SyncByteStream -from .base import BaseTransport - -if 
typing.TYPE_CHECKING: - from _typeshed import OptExcInfo # pragma: no cover - from _typeshed.wsgi import WSGIApplication # pragma: no cover - -_T = typing.TypeVar("_T") - - -def _skip_leading_empty_chunks(body: typing.Iterable[_T]) -> typing.Iterable[_T]: - body = iter(body) - for chunk in body: - if chunk: - return itertools.chain([chunk], body) - return [] - - -class WSGIByteStream(SyncByteStream): - def __init__(self, result: typing.Iterable[bytes]) -> None: - self._close = getattr(result, "close", None) - self._result = _skip_leading_empty_chunks(result) - - def __iter__(self) -> typing.Iterator[bytes]: - for part in self._result: - yield part - - def close(self) -> None: - if self._close is not None: - self._close() - - -class WSGITransport(BaseTransport): - """ - A custom transport that handles sending requests directly to an WSGI app. - The simplest way to use this functionality is to use the `app` argument. - - ``` - client = httpx.Client(app=app) - ``` - - Alternatively, you can setup the transport instance explicitly. - This allows you to include any additional configuration arguments specific - to the WSGITransport class: - - ``` - transport = httpx.WSGITransport( - app=app, - script_name="/submount", - remote_addr="1.2.3.4" - ) - client = httpx.Client(transport=transport) - ``` - - Arguments: - - * `app` - The WSGI application. - * `raise_app_exceptions` - Boolean indicating if exceptions in the application - should be raised. Default to `True`. Can be set to `False` for use cases - such as testing the content of a client 500 response. - * `script_name` - The root path on which the WSGI application should be mounted. - * `remote_addr` - A string indicating the client IP of incoming requests. - ``` - """ - - def __init__( - self, - app: "WSGIApplication", - raise_app_exceptions: bool = True, - script_name: str = "", - remote_addr: str = "127.0.0.1", - wsgi_errors: typing.Optional[typing.TextIO] = None, - ) -> None: - self.app = app - self.raise_app_exceptions = raise_app_exceptions - self.script_name = script_name - self.remote_addr = remote_addr - self.wsgi_errors = wsgi_errors - - def handle_request(self, request: Request) -> Response: - request.read() - wsgi_input = io.BytesIO(request.content) - - port = request.url.port or {"http": 80, "https": 443}[request.url.scheme] - environ = { - "wsgi.version": (1, 0), - "wsgi.url_scheme": request.url.scheme, - "wsgi.input": wsgi_input, - "wsgi.errors": self.wsgi_errors or sys.stderr, - "wsgi.multithread": True, - "wsgi.multiprocess": False, - "wsgi.run_once": False, - "REQUEST_METHOD": request.method, - "SCRIPT_NAME": self.script_name, - "PATH_INFO": request.url.path, - "QUERY_STRING": request.url.query.decode("ascii"), - "SERVER_NAME": request.url.host, - "SERVER_PORT": str(port), - "REMOTE_ADDR": self.remote_addr, - } - for header_key, header_value in request.headers.raw: - key = header_key.decode("ascii").upper().replace("-", "_") - if key not in ("CONTENT_TYPE", "CONTENT_LENGTH"): - key = "HTTP_" + key - environ[key] = header_value.decode("ascii") - - seen_status = None - seen_response_headers = None - seen_exc_info = None - - def start_response( - status: str, - response_headers: typing.List[typing.Tuple[str, str]], - exc_info: typing.Optional["OptExcInfo"] = None, - ) -> typing.Callable[[bytes], typing.Any]: - nonlocal seen_status, seen_response_headers, seen_exc_info - seen_status = status - seen_response_headers = response_headers - seen_exc_info = exc_info - return lambda _: None - - result = self.app(environ, start_response) 
- - stream = WSGIByteStream(result) - - assert seen_status is not None - assert seen_response_headers is not None - if seen_exc_info and seen_exc_info[0] and self.raise_app_exceptions: - raise seen_exc_info[1] - - status_code = int(seen_status.split()[0]) - headers = [ - (key.encode("ascii"), value.encode("ascii")) - for key, value in seen_response_headers - ] - - return Response(status_code, headers=headers, stream=stream) diff --git a/spaces/DarwinAnim8or/NoSleep-Story-Generator/app.py b/spaces/DarwinAnim8or/NoSleep-Story-Generator/app.py deleted file mode 100644 index 58cc9ba91e9f8ecb564966ec5dd56c07db1e0834..0000000000000000000000000000000000000000 --- a/spaces/DarwinAnim8or/NoSleep-Story-Generator/app.py +++ /dev/null @@ -1,45 +0,0 @@ -import gradio as gr - -from happytransformer import HappyGeneration - -happy_gen = HappyGeneration("GPT2", "DarwinAnim8or/GPT-NoSleep-1.5b") - -from happytransformer import GENSettings - -def generate(text, length=100, penalty=1, temperature=0.8): - args_top_k = GENSettings(no_repeat_ngram_size=penalty, do_sample=False, top_k=80, top_p=1, temperature=temperature, max_length=length, early_stopping=False) - print("Prompt: " + text) - - inputText = "[WP] " + text + " [RESPONSE] " - - result = happy_gen.generate_text(inputText, args=args_top_k) - generated_text = result.text #returns generated text only - - generated_text = generated_text.replace('.', '.\n') - - return generated_text - -examples = [ - ["We don't go to the forest anymore."], - ["There used to be a smile on my face. "], - #["Now I know why they're called 'Mans best friend'"], - #["The terror in the night"], - #["I still see them."], - #["The curse of the plague doctor"] -] - -demo = gr.Interface( - fn=generate, - inputs=[ - gr.inputs.Textbox(lines=5, label="Input Text"), - gr.inputs.Slider(25, 500, label='Length', default=250, step=25), - gr.inputs.Slider(1, 10, label='no repeat ngram size', default=1, step=1), - gr.inputs.Slider(0.0, 1.0, label='Temperature - control randomness', default=0.6, step=0.1) - ], - outputs=gr.outputs.Textbox(label="Generated Text"), - examples=examples, - title="NoSleep Horror Story Generator", - description="Using the 1.5b size model. You may need to run it a few times in order to get something good! 
TIP: if you want a writing prompt, use this model in combination: https://huggingface.co/spaces/DarwinAnim8or/NoSleepWritingPromptGenerator" -) - -demo.launch() \ No newline at end of file diff --git a/spaces/DavidLijun/FI/README.md b/spaces/DavidLijun/FI/README.md deleted file mode 100644 index 15af9d7765bf8a495d4df9889be9d44ba6c5946f..0000000000000000000000000000000000000000 --- a/spaces/DavidLijun/FI/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: FI -emoji: 🏃 -colorFrom: purple -colorTo: purple -sdk: streamlit -sdk_version: 1.15.2 -app_file: app.py -pinned: false -license: bsd ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/DragGan/DragGan-Inversion/gradio_utils/utils.py b/spaces/DragGan/DragGan-Inversion/gradio_utils/utils.py deleted file mode 100644 index d4e760e1515f3f69b11d11426ac3e8fa51f1a99c..0000000000000000000000000000000000000000 --- a/spaces/DragGan/DragGan-Inversion/gradio_utils/utils.py +++ /dev/null @@ -1,154 +0,0 @@ -import gradio as gr -import numpy as np -from PIL import Image, ImageDraw - - -class ImageMask(gr.components.Image): - """ - Sets: source="canvas", tool="sketch" - """ - - is_template = True - - def __init__(self, **kwargs): - super().__init__(source="upload", - tool="sketch", - interactive=False, - **kwargs) - - def preprocess(self, x): - if x is None: - return x - if self.tool == "sketch" and self.source in ["upload", "webcam" - ] and type(x) != dict: - decode_image = gr.processing_utils.decode_base64_to_image(x) - width, height = decode_image.size - mask = np.ones((height, width, 4), dtype=np.uint8) - mask[..., -1] = 255 - mask = self.postprocess(mask) - x = {'image': x, 'mask': mask} - return super().preprocess(x) - - -def get_valid_mask(mask: np.ndarray): - """Convert mask from gr.Image(0 to 255, RGBA) to binary mask. 
- """ - if mask.ndim == 3: - mask_pil = Image.fromarray(mask).convert('L') - mask = np.array(mask_pil) - if mask.max() == 255: - mask = mask / 255 - return mask - - -def draw_points_on_image(image, - points, - curr_point=None, - highlight_all=True, - radius_scale=0.01): - overlay_rgba = Image.new("RGBA", image.size, 0) - overlay_draw = ImageDraw.Draw(overlay_rgba) - for point_key, point in points.items(): - if ((curr_point is not None and curr_point == point_key) - or highlight_all): - p_color = (255, 0, 0) - t_color = (0, 0, 255) - - else: - p_color = (255, 0, 0, 35) - t_color = (0, 0, 255, 35) - - rad_draw = int(image.size[0] * radius_scale) - - p_start = point.get("start_temp", point["start"]) - p_target = point["target"] - - if p_start is not None and p_target is not None: - p_draw = int(p_start[0]), int(p_start[1]) - t_draw = int(p_target[0]), int(p_target[1]) - - overlay_draw.line( - (p_draw[0], p_draw[1], t_draw[0], t_draw[1]), - fill=(255, 255, 0), - width=2, - ) - - if p_start is not None: - p_draw = int(p_start[0]), int(p_start[1]) - overlay_draw.ellipse( - ( - p_draw[0] - rad_draw, - p_draw[1] - rad_draw, - p_draw[0] + rad_draw, - p_draw[1] + rad_draw, - ), - fill=p_color, - ) - - if curr_point is not None and curr_point == point_key: - # overlay_draw.text(p_draw, "p", font=font, align="center", fill=(0, 0, 0)) - overlay_draw.text(p_draw, "p", align="center", fill=(0, 0, 0)) - - if p_target is not None: - t_draw = int(p_target[0]), int(p_target[1]) - overlay_draw.ellipse( - ( - t_draw[0] - rad_draw, - t_draw[1] - rad_draw, - t_draw[0] + rad_draw, - t_draw[1] + rad_draw, - ), - fill=t_color, - ) - - if curr_point is not None and curr_point == point_key: - # overlay_draw.text(t_draw, "t", font=font, align="center", fill=(0, 0, 0)) - overlay_draw.text(t_draw, "t", align="center", fill=(0, 0, 0)) - - return Image.alpha_composite(image.convert("RGBA"), - overlay_rgba).convert("RGB") - - -def draw_mask_on_image(image, mask): - im_mask = np.uint8(mask * 255) - im_mask_rgba = np.concatenate( - ( - np.tile(im_mask[..., None], [1, 1, 3]), - 45 * np.ones( - (im_mask.shape[0], im_mask.shape[1], 1), dtype=np.uint8), - ), - axis=-1, - ) - im_mask_rgba = Image.fromarray(im_mask_rgba).convert("RGBA") - - return Image.alpha_composite(image.convert("RGBA"), - im_mask_rgba).convert("RGB") - - -def on_change_single_global_state(keys, - value, - global_state, - map_transform=None): - if map_transform is not None: - value = map_transform(value) - - curr_state = global_state - if isinstance(keys, str): - last_key = keys - - else: - for k in keys[:-1]: - curr_state = curr_state[k] - - last_key = keys[-1] - - curr_state[last_key] = value - return global_state - - -def get_latest_points_pair(points_dict): - if not points_dict: - return None - point_idx = list(points_dict.keys()) - latest_point_idx = max(point_idx) - return latest_point_idx diff --git a/spaces/DragGan/DragGan-Inversion/legacy.py b/spaces/DragGan/DragGan-Inversion/legacy.py deleted file mode 100644 index a874c38c2c943e632badb8e12f5a4297071827df..0000000000000000000000000000000000000000 --- a/spaces/DragGan/DragGan-Inversion/legacy.py +++ /dev/null @@ -1,369 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. 
Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""Converting legacy network pickle into the new format.""" - -import click -import pickle -import re -import copy -import numpy as np -import torch -import dnnlib -from torch_utils import misc - -# ---------------------------------------------------------------------------- - - -def load_network_pkl(f, force_fp16=False): - data = _LegacyUnpickler(f).load() - - # Legacy TensorFlow pickle => convert. - if isinstance(data, tuple) and len(data) == 3 and all(isinstance(net, _TFNetworkStub) for net in data): - tf_G, tf_D, tf_Gs = data - G = convert_tf_generator(tf_G) - D = convert_tf_discriminator(tf_D) - G_ema = convert_tf_generator(tf_Gs) - data = dict(G=G, D=D, G_ema=G_ema) - - # Add missing fields. - if 'training_set_kwargs' not in data: - data['training_set_kwargs'] = None - if 'augment_pipe' not in data: - data['augment_pipe'] = None - - # Validate contents. - assert isinstance(data['G'], torch.nn.Module) - assert isinstance(data['D'], torch.nn.Module) - assert isinstance(data['G_ema'], torch.nn.Module) - assert isinstance(data['training_set_kwargs'], (dict, type(None))) - assert isinstance(data['augment_pipe'], (torch.nn.Module, type(None))) - - # Force FP16. - if force_fp16: - for key in ['G', 'D', 'G_ema']: - old = data[key] - kwargs = copy.deepcopy(old.init_kwargs) - fp16_kwargs = kwargs.get('synthesis_kwargs', kwargs) - fp16_kwargs.num_fp16_res = 4 - fp16_kwargs.conv_clamp = 256 - if kwargs != old.init_kwargs: - new = type(old)(**kwargs).eval().requires_grad_(False) - misc.copy_params_and_buffers(old, new, require_all=True) - data[key] = new - return data - -# ---------------------------------------------------------------------------- - - -class _TFNetworkStub(dnnlib.EasyDict): - pass - - -class _LegacyUnpickler(pickle.Unpickler): - def find_class(self, module, name): - if module == 'dnnlib.tflib.network' and name == 'Network': - return _TFNetworkStub - return super().find_class(module, name) - -# ---------------------------------------------------------------------------- - - -def _collect_tf_params(tf_net): - # pylint: disable=protected-access - tf_params = dict() - - def recurse(prefix, tf_net): - for name, value in tf_net.variables: - tf_params[prefix + name] = value - for name, comp in tf_net.components.items(): - recurse(prefix + name + '/', comp) - recurse('', tf_net) - return tf_params - -# ---------------------------------------------------------------------------- - - -def _populate_module_params(module, *patterns): - for name, tensor in misc.named_params_and_buffers(module): - found = False - value = None - for pattern, value_fn in zip(patterns[0::2], patterns[1::2]): - match = re.fullmatch(pattern, name) - if match: - found = True - if value_fn is not None: - value = value_fn(*match.groups()) - break - try: - assert found - if value is not None: - tensor.copy_(torch.from_numpy(np.array(value))) - except: - print(name, list(tensor.shape)) - raise - -# ---------------------------------------------------------------------------- - - -def convert_tf_generator(tf_G): - if tf_G.version < 4: - raise ValueError('TensorFlow pickle version too low') - - # Collect kwargs. 
- tf_kwargs = tf_G.static_kwargs - known_kwargs = set() - - def kwarg(tf_name, default=None, none=None): - known_kwargs.add(tf_name) - val = tf_kwargs.get(tf_name, default) - return val if val is not None else none - - # Convert kwargs. - from training import networks_stylegan2 - network_class = networks_stylegan2.Generator - kwargs = dnnlib.EasyDict( - z_dim=kwarg('latent_size', 512), - c_dim=kwarg('label_size', 0), - w_dim=kwarg('dlatent_size', 512), - img_resolution=kwarg('resolution', 1024), - img_channels=kwarg('num_channels', 3), - channel_base=kwarg('fmap_base', 16384) * 2, - channel_max=kwarg('fmap_max', 512), - num_fp16_res=kwarg('num_fp16_res', 0), - conv_clamp=kwarg('conv_clamp', None), - architecture=kwarg('architecture', 'skip'), - resample_filter=kwarg('resample_kernel', [1, 3, 3, 1]), - use_noise=kwarg('use_noise', True), - activation=kwarg('nonlinearity', 'lrelu'), - mapping_kwargs=dnnlib.EasyDict( - num_layers=kwarg('mapping_layers', 8), - embed_features=kwarg('label_fmaps', None), - layer_features=kwarg('mapping_fmaps', None), - activation=kwarg('mapping_nonlinearity', 'lrelu'), - lr_multiplier=kwarg('mapping_lrmul', 0.01), - w_avg_beta=kwarg('w_avg_beta', 0.995, none=1), - ), - ) - - # Check for unknown kwargs. - kwarg('truncation_psi') - kwarg('truncation_cutoff') - kwarg('style_mixing_prob') - kwarg('structure') - kwarg('conditioning') - kwarg('fused_modconv') - unknown_kwargs = list(set(tf_kwargs.keys()) - known_kwargs) - if len(unknown_kwargs) > 0: - raise ValueError('Unknown TensorFlow kwarg', unknown_kwargs[0]) - - # Collect params. - tf_params = _collect_tf_params(tf_G) - for name, value in list(tf_params.items()): - match = re.fullmatch(r'ToRGB_lod(\d+)/(.*)', name) - if match: - r = kwargs.img_resolution // (2 ** int(match.group(1))) - tf_params[f'{r}x{r}/ToRGB/{match.group(2)}'] = value - kwargs.synthesis.kwargs.architecture = 'orig' - # for name, value in tf_params.items(): print(f'{name:<50s}{list(value.shape)}') - - # Convert params. 
- G = network_class(**kwargs).eval().requires_grad_(False) - # pylint: disable=unnecessary-lambda - # pylint: disable=f-string-without-interpolation - _populate_module_params(G, - r'mapping\.w_avg', lambda: tf_params[f'dlatent_avg'], - r'mapping\.embed\.weight', lambda: tf_params[f'mapping/LabelEmbed/weight'].transpose( - ), - r'mapping\.embed\.bias', lambda: tf_params[f'mapping/LabelEmbed/bias'], - r'mapping\.fc(\d+)\.weight', lambda i: tf_params[f'mapping/Dense{i}/weight'].transpose( - ), - r'mapping\.fc(\d+)\.bias', lambda i: tf_params[f'mapping/Dense{i}/bias'], - r'synthesis\.b4\.const', lambda: tf_params[f'synthesis/4x4/Const/const'][0], - r'synthesis\.b4\.conv1\.weight', lambda: tf_params[f'synthesis/4x4/Conv/weight'].transpose( - 3, 2, 0, 1), - r'synthesis\.b4\.conv1\.bias', lambda: tf_params[ - f'synthesis/4x4/Conv/bias'], - r'synthesis\.b4\.conv1\.noise_const', lambda: tf_params[ - f'synthesis/noise0'][0, 0], - r'synthesis\.b4\.conv1\.noise_strength', lambda: tf_params[ - f'synthesis/4x4/Conv/noise_strength'], - r'synthesis\.b4\.conv1\.affine\.weight', lambda: tf_params[ - f'synthesis/4x4/Conv/mod_weight'].transpose(), - r'synthesis\.b4\.conv1\.affine\.bias', lambda: tf_params[ - f'synthesis/4x4/Conv/mod_bias'] + 1, - r'synthesis\.b(\d+)\.conv0\.weight', lambda r: tf_params[f'synthesis/{r}x{r}/Conv0_up/weight'][::-1, ::-1].transpose( - 3, 2, 0, 1), - r'synthesis\.b(\d+)\.conv0\.bias', lambda r: tf_params[ - f'synthesis/{r}x{r}/Conv0_up/bias'], - r'synthesis\.b(\d+)\.conv0\.noise_const', lambda r: tf_params[ - f'synthesis/noise{int(np.log2(int(r)))*2-5}'][0, 0], - r'synthesis\.b(\d+)\.conv0\.noise_strength', lambda r: tf_params[ - f'synthesis/{r}x{r}/Conv0_up/noise_strength'], - r'synthesis\.b(\d+)\.conv0\.affine\.weight', lambda r: tf_params[f'synthesis/{r}x{r}/Conv0_up/mod_weight'].transpose( - ), - r'synthesis\.b(\d+)\.conv0\.affine\.bias', lambda r: tf_params[ - f'synthesis/{r}x{r}/Conv0_up/mod_bias'] + 1, - r'synthesis\.b(\d+)\.conv1\.weight', lambda r: tf_params[f'synthesis/{r}x{r}/Conv1/weight'].transpose( - 3, 2, 0, 1), - r'synthesis\.b(\d+)\.conv1\.bias', lambda r: tf_params[ - f'synthesis/{r}x{r}/Conv1/bias'], - r'synthesis\.b(\d+)\.conv1\.noise_const', lambda r: tf_params[ - f'synthesis/noise{int(np.log2(int(r)))*2-4}'][0, 0], - r'synthesis\.b(\d+)\.conv1\.noise_strength', lambda r: tf_params[ - f'synthesis/{r}x{r}/Conv1/noise_strength'], - r'synthesis\.b(\d+)\.conv1\.affine\.weight', lambda r: tf_params[f'synthesis/{r}x{r}/Conv1/mod_weight'].transpose( - ), - r'synthesis\.b(\d+)\.conv1\.affine\.bias', lambda r: tf_params[ - f'synthesis/{r}x{r}/Conv1/mod_bias'] + 1, - r'synthesis\.b(\d+)\.torgb\.weight', lambda r: tf_params[f'synthesis/{r}x{r}/ToRGB/weight'].transpose( - 3, 2, 0, 1), - r'synthesis\.b(\d+)\.torgb\.bias', lambda r: tf_params[ - f'synthesis/{r}x{r}/ToRGB/bias'], - r'synthesis\.b(\d+)\.torgb\.affine\.weight', lambda r: tf_params[f'synthesis/{r}x{r}/ToRGB/mod_weight'].transpose( - ), - r'synthesis\.b(\d+)\.torgb\.affine\.bias', lambda r: tf_params[ - f'synthesis/{r}x{r}/ToRGB/mod_bias'] + 1, - r'synthesis\.b(\d+)\.skip\.weight', lambda r: tf_params[f'synthesis/{r}x{r}/Skip/weight'][::-1, ::-1].transpose( - 3, 2, 0, 1), - r'.*\.resample_filter', None, - r'.*\.act_filter', None, - ) - return G - -# ---------------------------------------------------------------------------- - - -def convert_tf_discriminator(tf_D): - if tf_D.version < 4: - raise ValueError('TensorFlow pickle version too low') - - # Collect kwargs. 
- tf_kwargs = tf_D.static_kwargs - known_kwargs = set() - - def kwarg(tf_name, default=None): - known_kwargs.add(tf_name) - return tf_kwargs.get(tf_name, default) - - # Convert kwargs. - kwargs = dnnlib.EasyDict( - c_dim=kwarg('label_size', 0), - img_resolution=kwarg('resolution', 1024), - img_channels=kwarg('num_channels', 3), - architecture=kwarg('architecture', 'resnet'), - channel_base=kwarg('fmap_base', 16384) * 2, - channel_max=kwarg('fmap_max', 512), - num_fp16_res=kwarg('num_fp16_res', 0), - conv_clamp=kwarg('conv_clamp', None), - cmap_dim=kwarg('mapping_fmaps', None), - block_kwargs=dnnlib.EasyDict( - activation=kwarg('nonlinearity', 'lrelu'), - resample_filter=kwarg('resample_kernel', [1, 3, 3, 1]), - freeze_layers=kwarg('freeze_layers', 0), - ), - mapping_kwargs=dnnlib.EasyDict( - num_layers=kwarg('mapping_layers', 0), - embed_features=kwarg('mapping_fmaps', None), - layer_features=kwarg('mapping_fmaps', None), - activation=kwarg('nonlinearity', 'lrelu'), - lr_multiplier=kwarg('mapping_lrmul', 0.1), - ), - epilogue_kwargs=dnnlib.EasyDict( - mbstd_group_size=kwarg('mbstd_group_size', None), - mbstd_num_channels=kwarg('mbstd_num_features', 1), - activation=kwarg('nonlinearity', 'lrelu'), - ), - ) - - # Check for unknown kwargs. - kwarg('structure') - kwarg('conditioning') - unknown_kwargs = list(set(tf_kwargs.keys()) - known_kwargs) - if len(unknown_kwargs) > 0: - raise ValueError('Unknown TensorFlow kwarg', unknown_kwargs[0]) - - # Collect params. - tf_params = _collect_tf_params(tf_D) - for name, value in list(tf_params.items()): - match = re.fullmatch(r'FromRGB_lod(\d+)/(.*)', name) - if match: - r = kwargs.img_resolution // (2 ** int(match.group(1))) - tf_params[f'{r}x{r}/FromRGB/{match.group(2)}'] = value - kwargs.architecture = 'orig' - # for name, value in tf_params.items(): print(f'{name:<50s}{list(value.shape)}') - - # Convert params. 
- from training import networks_stylegan2 - D = networks_stylegan2.Discriminator(**kwargs).eval().requires_grad_(False) - # pylint: disable=unnecessary-lambda - # pylint: disable=f-string-without-interpolation - _populate_module_params(D, - r'b(\d+)\.fromrgb\.weight', lambda r: tf_params[f'{r}x{r}/FromRGB/weight'].transpose( - 3, 2, 0, 1), - r'b(\d+)\.fromrgb\.bias', lambda r: tf_params[f'{r}x{r}/FromRGB/bias'], - r'b(\d+)\.conv(\d+)\.weight', lambda r, i: tf_params[f'{r}x{r}/Conv{i}{["","_down"][int(i)]}/weight'].transpose( - 3, 2, 0, 1), - r'b(\d+)\.conv(\d+)\.bias', lambda r, i: tf_params[ - f'{r}x{r}/Conv{i}{["","_down"][int(i)]}/bias'], - r'b(\d+)\.skip\.weight', lambda r: tf_params[f'{r}x{r}/Skip/weight'].transpose( - 3, 2, 0, 1), - r'mapping\.embed\.weight', lambda: tf_params[f'LabelEmbed/weight'].transpose( - ), - r'mapping\.embed\.bias', lambda: tf_params[f'LabelEmbed/bias'], - r'mapping\.fc(\d+)\.weight', lambda i: tf_params[f'Mapping{i}/weight'].transpose( - ), - r'mapping\.fc(\d+)\.bias', lambda i: tf_params[f'Mapping{i}/bias'], - r'b4\.conv\.weight', lambda: tf_params[f'4x4/Conv/weight'].transpose( - 3, 2, 0, 1), - r'b4\.conv\.bias', lambda: tf_params[f'4x4/Conv/bias'], - r'b4\.fc\.weight', lambda: tf_params[f'4x4/Dense0/weight'].transpose( - ), - r'b4\.fc\.bias', lambda: tf_params[f'4x4/Dense0/bias'], - r'b4\.out\.weight', lambda: tf_params[f'Output/weight'].transpose( - ), - r'b4\.out\.bias', lambda: tf_params[f'Output/bias'], - r'.*\.resample_filter', None, - ) - return D - -# ---------------------------------------------------------------------------- - - -@click.command() -@click.option('--source', help='Input pickle', required=True, metavar='PATH') -@click.option('--dest', help='Output pickle', required=True, metavar='PATH') -@click.option('--force-fp16', help='Force the networks to use FP16', type=bool, default=False, metavar='BOOL', show_default=True) -def convert_network_pickle(source, dest, force_fp16): - """Convert legacy network pickle into the native PyTorch format. - - The tool is able to load the main network configurations exported using the TensorFlow version of StyleGAN2 or StyleGAN2-ADA. - It does not support e.g. StyleGAN2-ADA comparison methods, StyleGAN2 configs A-D, or StyleGAN1 networks. 
- - Example: - - \b - python legacy.py \\ - --source=https://nvlabs-fi-cdn.nvidia.com/stylegan2/networks/stylegan2-cat-config-f.pkl \\ - --dest=stylegan2-cat-config-f.pkl - """ - print(f'Loading "{source}"...') - with dnnlib.util.open_url(source) as f: - data = load_network_pkl(f, force_fp16=force_fp16) - print(f'Saving "{dest}"...') - with open(dest, 'wb') as f: - pickle.dump(data, f) - print('Done.') - -# ---------------------------------------------------------------------------- - - -if __name__ == "__main__": - convert_network_pickle() # pylint: disable=no-value-for-parameter - -# ---------------------------------------------------------------------------- diff --git a/spaces/Duskfallcrew/duskfallai_webui/app.py b/spaces/Duskfallcrew/duskfallai_webui/app.py deleted file mode 100644 index bdfef423752d9a6b76bdec815e19645c6618b65b..0000000000000000000000000000000000000000 --- a/spaces/Duskfallcrew/duskfallai_webui/app.py +++ /dev/null @@ -1,141 +0,0 @@ -from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline, DPMSolverMultistepScheduler -import gradio as gr -import torch -from PIL import Image - -model_id = 'Duskfallcrew/duskfallai' -prefix = 'lisdusk1, lisdusk2' - -scheduler = DPMSolverMultistepScheduler.from_pretrained(model_id, subfolder="scheduler") - -pipe = StableDiffusionPipeline.from_pretrained( - model_id, - torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32, - scheduler=scheduler) - -pipe_i2i = StableDiffusionImg2ImgPipeline.from_pretrained( - model_id, - torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32, - scheduler=scheduler) - -if torch.cuda.is_available(): - pipe = pipe.to("cuda") - pipe_i2i = pipe_i2i.to("cuda") - -def error_str(error, title="Error"): - return f"""#### {title} - {error}""" if error else "" - -def inference(prompt, guidance, steps, width=512, height=512, seed=0, img=None, strength=0.5, neg_prompt="", auto_prefix=False): - - generator = torch.Generator('cuda').manual_seed(seed) if seed != 0 else None - prompt = f"{prefix} {prompt}" if auto_prefix else prompt - - try: - if img is not None: - return img_to_img(prompt, neg_prompt, img, strength, guidance, steps, width, height, generator), None - else: - return txt_to_img(prompt, neg_prompt, guidance, steps, width, height, generator), None - except Exception as e: - return None, error_str(e) - -def txt_to_img(prompt, neg_prompt, guidance, steps, width, height, generator): - - result = pipe( - prompt, - negative_prompt = neg_prompt, - num_inference_steps = int(steps), - guidance_scale = guidance, - width = width, - height = height, - generator = generator) - - return result.images[0] - -def img_to_img(prompt, neg_prompt, img, strength, guidance, steps, width, height, generator): - - ratio = min(height / img.height, width / img.width) - img = img.resize((int(img.width * ratio), int(img.height * ratio)), Image.LANCZOS) - result = pipe_i2i( - prompt, - negative_prompt = neg_prompt, - init_image = img, - num_inference_steps = int(steps), - strength = strength, - guidance_scale = guidance, - width = width, - height = height, - generator = generator) - - return result.images[0] - -css = """.main-div div{display:inline-flex;align-items:center;gap:.8rem;font-size:1.75rem}.main-div div h1{font-weight:900;margin-bottom:7px}.main-div p{margin-bottom:10px;font-size:94%}a{text-decoration:underline}.tabs{margin-top:0;margin-bottom:0}#gallery{min-height:20rem} -""" -with gr.Blocks(css=css) as demo: - gr.HTML( - f""" -
    - Duskfall Ai SD Space Web UI
    - Demo for the Duskfallai Stable Diffusion model. - -This is trained largely on a small data set of our own art, with a focus on the fact that our art, and any Stable Diffusion or Midjourney outputs we included in it, are related to our Dissociative Identity Disorder. We may retrain on a larger data set later on. Trained using the MultiModel Dreambooth App, sitting around on a Friday afternoon doing absolutely squat. PLEASE DO upload any images you create or generate in the discussions! -
    - {"Add the following tokens to your prompts for the model to work properly: prefix" if prefix else ""} -

    - Running on {"GPU 🔥" if torch.cuda.is_available() else f"CPU 🥶. For faster inference it is recommended to upgrade to GPU in Settings"} after duplicating the space

    - Duplicate Space -
    - """ - ) - with gr.Row(): - - with gr.Column(scale=55): - with gr.Group(): - with gr.Row(): - prompt = gr.Textbox(label="Prompt", show_label=False, max_lines=2,placeholder=f"{prefix} [your prompt]").style(container=False) - generate = gr.Button(value="Generate").style(rounded=(False, True, True, False)) - - image_out = gr.Image(height=512) - error_output = gr.Markdown() - - with gr.Column(scale=45): - with gr.Tab("Options"): - with gr.Group(): - neg_prompt = gr.Textbox(label="Negative prompt", placeholder="What to exclude from the image") - auto_prefix = gr.Checkbox(label="Prefix styling tokens automatically (lisdusk1, lisdusk2)", value=prefix, visible=prefix) - - with gr.Row(): - guidance = gr.Slider(label="Guidance scale", value=7.5, maximum=15) - steps = gr.Slider(label="Steps", value=25, minimum=2, maximum=75, step=1) - - with gr.Row(): - width = gr.Slider(label="Width", value=512, minimum=64, maximum=1024, step=8) - height = gr.Slider(label="Height", value=512, minimum=64, maximum=1024, step=8) - - seed = gr.Slider(0, 2147483647, label='Seed (0 = random)', value=0, step=1) - - with gr.Tab("Image to image"): - with gr.Group(): - image = gr.Image(label="Image", height=256, tool="editor", type="pil") - strength = gr.Slider(label="Transformation strength", minimum=0, maximum=1, step=0.01, value=0.5) - - auto_prefix.change(lambda x: gr.update(placeholder=f"{prefix} [your prompt]" if x else "[Your prompt]"), inputs=auto_prefix, outputs=prompt, queue=False) - - inputs = [prompt, guidance, steps, width, height, seed, image, strength, neg_prompt, auto_prefix] - outputs = [image_out, error_output] - prompt.submit(inference, inputs=inputs, outputs=outputs) - generate.click(inference, inputs=inputs, outputs=outputs) - - gr.HTML(""" -
    - This space was created using SD Space Creator.
    - """) - -demo.queue(concurrency_count=1) -demo.launch() diff --git a/spaces/EXPOSUREEE/Ai-Image-Enhancer/setup.py b/spaces/EXPOSUREEE/Ai-Image-Enhancer/setup.py deleted file mode 100644 index c2b92e31d2db1aba50767f4f844540cfd53c609d..0000000000000000000000000000000000000000 --- a/spaces/EXPOSUREEE/Ai-Image-Enhancer/setup.py +++ /dev/null @@ -1,107 +0,0 @@ -#!/usr/bin/env python - -from setuptools import find_packages, setup - -import os -import subprocess -import time - -version_file = 'realesrgan/version.py' - - -def readme(): - with open('README.md', encoding='utf-8') as f: - content = f.read() - return content - - -def get_git_hash(): - - def _minimal_ext_cmd(cmd): - # construct minimal environment - env = {} - for k in ['SYSTEMROOT', 'PATH', 'HOME']: - v = os.environ.get(k) - if v is not None: - env[k] = v - # LANGUAGE is used on win32 - env['LANGUAGE'] = 'C' - env['LANG'] = 'C' - env['LC_ALL'] = 'C' - out = subprocess.Popen(cmd, stdout=subprocess.PIPE, env=env).communicate()[0] - return out - - try: - out = _minimal_ext_cmd(['git', 'rev-parse', 'HEAD']) - sha = out.strip().decode('ascii') - except OSError: - sha = 'unknown' - - return sha - - -def get_hash(): - if os.path.exists('.git'): - sha = get_git_hash()[:7] - else: - sha = 'unknown' - - return sha - - -def write_version_py(): - content = """# GENERATED VERSION FILE -# TIME: {} -__version__ = '{}' -__gitsha__ = '{}' -version_info = ({}) -""" - sha = get_hash() - with open('VERSION', 'r') as f: - SHORT_VERSION = f.read().strip() - VERSION_INFO = ', '.join([x if x.isdigit() else f'"{x}"' for x in SHORT_VERSION.split('.')]) - - version_file_str = content.format(time.asctime(), SHORT_VERSION, sha, VERSION_INFO) - with open(version_file, 'w') as f: - f.write(version_file_str) - - -def get_version(): - with open(version_file, 'r') as f: - exec(compile(f.read(), version_file, 'exec')) - return locals()['__version__'] - - -def get_requirements(filename='requirements.txt'): - here = os.path.dirname(os.path.realpath(__file__)) - with open(os.path.join(here, filename), 'r') as f: - requires = [line.replace('\n', '') for line in f.readlines()] - return requires - - -if __name__ == '__main__': - write_version_py() - setup( - name='realesrgan', - version=get_version(), - description='Real-ESRGAN aims at developing Practical Algorithms for General Image Restoration', - long_description=readme(), - long_description_content_type='text/markdown', - author='Xintao Wang', - author_email='xintao.wang@outlook.com', - keywords='computer vision, pytorch, image restoration, super-resolution, esrgan, real-esrgan', - url='https://github.com/xinntao/Real-ESRGAN', - include_package_data=True, - packages=find_packages(exclude=('options', 'datasets', 'experiments', 'results', 'tb_logger', 'wandb')), - classifiers=[ - 'Development Status :: 4 - Beta', - 'License :: OSI Approved :: Apache Software License', - 'Operating System :: OS Independent', - 'Programming Language :: Python :: 3', - 'Programming Language :: Python :: 3.7', - 'Programming Language :: Python :: 3.8', - ], - license='BSD-3-Clause License', - setup_requires=['cython', 'numpy'], - install_requires=get_requirements(), - zip_safe=False) diff --git a/spaces/EuroPython2022/Warehouse_Apparel_Detection/metadata/predictor_yolo_detector/utils/google_app_engine/Dockerfile b/spaces/EuroPython2022/Warehouse_Apparel_Detection/metadata/predictor_yolo_detector/utils/google_app_engine/Dockerfile deleted file mode 100644 index 
0155618f475104e9858b81470339558156c94e13..0000000000000000000000000000000000000000 --- a/spaces/EuroPython2022/Warehouse_Apparel_Detection/metadata/predictor_yolo_detector/utils/google_app_engine/Dockerfile +++ /dev/null @@ -1,25 +0,0 @@ -FROM gcr.io/google-appengine/python - -# Create a virtualenv for dependencies. This isolates these packages from -# system-level packages. -# Use -p python3 or -p python3.7 to select python version. Default is version 2. -RUN virtualenv /env -p python3 - -# Setting these environment variables are the same as running -# source /env/bin/activate. -ENV VIRTUAL_ENV /env -ENV PATH /env/bin:$PATH - -RUN apt-get update && apt-get install -y python-opencv - -# Copy the application's requirements.txt and run pip to install all -# dependencies into the virtualenv. -ADD requirements.txt /app/requirements.txt -RUN pip install -r /app/requirements.txt - -# Add the application source code. -ADD . /app - -# Run a WSGI server to serve the application. gunicorn must be declared as -# a dependency in requirements.txt. -CMD gunicorn -b :$PORT main:app diff --git a/spaces/EuroPython2022/mmocr-demo/configs/_base_/recog_datasets/ST_charbox_train.py b/spaces/EuroPython2022/mmocr-demo/configs/_base_/recog_datasets/ST_charbox_train.py deleted file mode 100644 index 45d50d0d151fca5c4e9118d1f6b1f094f8a51324..0000000000000000000000000000000000000000 --- a/spaces/EuroPython2022/mmocr-demo/configs/_base_/recog_datasets/ST_charbox_train.py +++ /dev/null @@ -1,23 +0,0 @@ -# Text Recognition Training set, including: -# Synthetic Datasets: SynthText (with character level boxes) - -train_img_root = 'data/mixture' - -train_img_prefix = f'{train_img_root}/SynthText' - -train_ann_file = f'{train_img_root}/SynthText/instances_train.txt' - -train = dict( - type='OCRSegDataset', - img_prefix=train_img_prefix, - ann_file=train_ann_file, - loader=dict( - type='AnnFileLoader', - repeat=1, - file_format='txt', - parser=dict( - type='LineJsonParser', keys=['file_name', 'annotations', 'text'])), - pipeline=None, - test_mode=False) - -train_list = [train] diff --git a/spaces/FrankZxShen/so-vits-svc-models-ba/vencoder/whisper/__init__.py b/spaces/FrankZxShen/so-vits-svc-models-ba/vencoder/whisper/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/GastonMazzei/escher-inpaint-project/glide_text2im/text2im_model.py b/spaces/GastonMazzei/escher-inpaint-project/glide_text2im/text2im_model.py deleted file mode 100644 index c74394090a1bd61054f9aeabf15075e701d81601..0000000000000000000000000000000000000000 --- a/spaces/GastonMazzei/escher-inpaint-project/glide_text2im/text2im_model.py +++ /dev/null @@ -1,233 +0,0 @@ -import torch as th -import torch.nn as nn -import torch.nn.functional as F - -from .nn import timestep_embedding -from .unet import UNetModel -from .xf import LayerNorm, Transformer, convert_module_to_f16 - - -class Text2ImUNet(UNetModel): - """ - A UNetModel that conditions on text with an encoding transformer. - - Expects an extra kwarg `tokens` of text. - - :param text_ctx: number of text tokens to expect. - :param xf_width: width of the transformer. - :param xf_layers: depth of the transformer. - :param xf_heads: heads in the transformer. - :param xf_final_ln: use a LayerNorm after the output layer. - :param tokenizer: the text tokenizer for sampling/vocab size. 
- """ - - def __init__( - self, - text_ctx, - xf_width, - xf_layers, - xf_heads, - xf_final_ln, - tokenizer, - *args, - cache_text_emb=False, - xf_ar=0.0, - xf_padding=False, - share_unemb=False, - **kwargs, - ): - self.text_ctx = text_ctx - self.xf_width = xf_width - self.xf_ar = xf_ar - self.xf_padding = xf_padding - self.tokenizer = tokenizer - - if not xf_width: - super().__init__(*args, **kwargs, encoder_channels=None) - else: - super().__init__(*args, **kwargs, encoder_channels=xf_width) - if self.xf_width: - self.transformer = Transformer( - text_ctx, - xf_width, - xf_layers, - xf_heads, - ) - if xf_final_ln: - self.final_ln = LayerNorm(xf_width) - else: - self.final_ln = None - - self.token_embedding = nn.Embedding(self.tokenizer.n_vocab, xf_width) - self.positional_embedding = nn.Parameter(th.empty(text_ctx, xf_width, dtype=th.float32)) - self.transformer_proj = nn.Linear(xf_width, self.model_channels * 4) - - if self.xf_padding: - self.padding_embedding = nn.Parameter( - th.empty(text_ctx, xf_width, dtype=th.float32) - ) - if self.xf_ar: - self.unemb = nn.Linear(xf_width, self.tokenizer.n_vocab) - if share_unemb: - self.unemb.weight = self.token_embedding.weight - - self.cache_text_emb = cache_text_emb - self.cache = None - - def convert_to_fp16(self): - super().convert_to_fp16() - if self.xf_width: - self.transformer.apply(convert_module_to_f16) - self.transformer_proj.to(th.float16) - self.token_embedding.to(th.float16) - self.positional_embedding.to(th.float16) - if self.xf_padding: - self.padding_embedding.to(th.float16) - if self.xf_ar: - self.unemb.to(th.float16) - - def get_text_emb(self, tokens, mask): - assert tokens is not None - - if self.cache_text_emb and self.cache is not None: - assert ( - tokens == self.cache["tokens"] - ).all(), f"Tokens {tokens.cpu().numpy().tolist()} do not match cache {self.cache['tokens'].cpu().numpy().tolist()}" - return self.cache - - xf_in = self.token_embedding(tokens.long()) - xf_in = xf_in + self.positional_embedding[None] - if self.xf_padding: - assert mask is not None - xf_in = th.where(mask[..., None], xf_in, self.padding_embedding[None]) - xf_out = self.transformer(xf_in.to(self.dtype)) - if self.final_ln is not None: - xf_out = self.final_ln(xf_out) - xf_proj = self.transformer_proj(xf_out[:, -1]) - xf_out = xf_out.permute(0, 2, 1) # NLC -> NCL - - outputs = dict(xf_proj=xf_proj, xf_out=xf_out) - - if self.cache_text_emb: - self.cache = dict( - tokens=tokens, - xf_proj=xf_proj.detach(), - xf_out=xf_out.detach() if xf_out is not None else None, - ) - - return outputs - - def del_cache(self): - self.cache = None - - def forward(self, x, timesteps, tokens=None, mask=None): - hs = [] - emb = self.time_embed(timestep_embedding(timesteps, self.model_channels)) - if self.xf_width: - text_outputs = self.get_text_emb(tokens, mask) - xf_proj, xf_out = text_outputs["xf_proj"], text_outputs["xf_out"] - emb = emb + xf_proj.to(emb) - else: - xf_out = None - h = x.type(self.dtype) - for module in self.input_blocks: - h = module(h, emb, xf_out) - hs.append(h) - h = self.middle_block(h, emb, xf_out) - for module in self.output_blocks: - h = th.cat([h, hs.pop()], dim=1) - h = module(h, emb, xf_out) - h = h.type(x.dtype) - h = self.out(h) - return h - - -class SuperResText2ImUNet(Text2ImUNet): - """ - A text2im model that performs super-resolution. - Expects an extra kwarg `low_res` to condition on a low-resolution image. 
- """ - - def __init__(self, *args, **kwargs): - if "in_channels" in kwargs: - kwargs = dict(kwargs) - kwargs["in_channels"] = kwargs["in_channels"] * 2 - else: - # Curse you, Python. Or really, just curse positional arguments :|. - args = list(args) - args[1] = args[1] * 2 - super().__init__(*args, **kwargs) - - def forward(self, x, timesteps, low_res=None, **kwargs): - _, _, new_height, new_width = x.shape - upsampled = F.interpolate( - low_res, (new_height, new_width), mode="bilinear", align_corners=False - ) - x = th.cat([x, upsampled], dim=1) - return super().forward(x, timesteps, **kwargs) - - -class InpaintText2ImUNet(Text2ImUNet): - """ - A text2im model which can perform inpainting. - """ - - def __init__(self, *args, **kwargs): - if "in_channels" in kwargs: - kwargs = dict(kwargs) - kwargs["in_channels"] = kwargs["in_channels"] * 2 + 1 - else: - # Curse you, Python. Or really, just curse positional arguments :|. - args = list(args) - args[1] = args[1] * 2 + 1 - super().__init__(*args, **kwargs) - - def forward(self, x, timesteps, inpaint_image=None, inpaint_mask=None, **kwargs): - if inpaint_image is None: - inpaint_image = th.zeros_like(x) - if inpaint_mask is None: - inpaint_mask = th.zeros_like(x[:, :1]) - return super().forward( - th.cat([x, inpaint_image * inpaint_mask, inpaint_mask], dim=1), - timesteps, - **kwargs, - ) - - -class SuperResInpaintText2ImUnet(Text2ImUNet): - """ - A text2im model which can perform both upsampling and inpainting. - """ - - def __init__(self, *args, **kwargs): - if "in_channels" in kwargs: - kwargs = dict(kwargs) - kwargs["in_channels"] = kwargs["in_channels"] * 3 + 1 - else: - # Curse you, Python. Or really, just curse positional arguments :|. - args = list(args) - args[1] = args[1] * 3 + 1 - super().__init__(*args, **kwargs) - - def forward( - self, - x, - timesteps, - inpaint_image=None, - inpaint_mask=None, - low_res=None, - **kwargs, - ): - if inpaint_image is None: - inpaint_image = th.zeros_like(x) - if inpaint_mask is None: - inpaint_mask = th.zeros_like(x[:, :1]) - _, _, new_height, new_width = x.shape - upsampled = F.interpolate( - low_res, (new_height, new_width), mode="bilinear", align_corners=False - ) - return super().forward( - th.cat([x, inpaint_image * inpaint_mask, inpaint_mask, upsampled], dim=1), - timesteps, - **kwargs, - ) diff --git a/spaces/GeorgeOrville/bingo/src/components/chat-message.tsx b/spaces/GeorgeOrville/bingo/src/components/chat-message.tsx deleted file mode 100644 index bf272d8d7005cfd06c53bd213e09ea217e803549..0000000000000000000000000000000000000000 --- a/spaces/GeorgeOrville/bingo/src/components/chat-message.tsx +++ /dev/null @@ -1,93 +0,0 @@ -import remarkGfm from 'remark-gfm' -import remarkMath from 'remark-math' -import supersub from 'remark-supersub' -import remarkBreaks from 'remark-breaks' -import { cn } from '@/lib/utils' -import { CodeBlock } from '@/components/ui/codeblock' -import { MemoizedReactMarkdown } from '@/components/markdown' -import { LearnMore } from './learn-more' -import { ChatMessageModel } from '@/lib/bots/bing/types' -import { useEffect } from 'react' -import { TurnCounter } from './turn-counter' - -export interface ChatMessageProps { - message: ChatMessageModel -} - -export function ChatMessage({ message, ...props }: ChatMessageProps) { - useEffect(() => { - if (document.body.scrollHeight - window.innerHeight - window.scrollY - 200 < 0) { - window.scrollBy(0, 200) - } - }, [message.text]) - - return message.text ? ( -
    -
    - {obj.alt} - } - } catch (e) { - } - return {obj.alt} - }, - p({ children }) { - return

    {children}

    - }, - code({ node, inline, className, children, ...props }) { - if (children.length) { - if (children[0] == '▍') { - return ( - - ) - } - - children[0] = (children[0] as string).replace('`▍`', '▍') - } - - const match = /language-(\w+)/.exec(className || '') - - if (inline) { - return ( - - {children} - - ) - } - - return ( - - ) - } - }} - > - {message.text} -
    -
    -
    - {message.author === 'bot' && } - {message.author === 'bot' && } -
    -
    - ) : null -} diff --git a/spaces/GipAdonimus/Real-Time-Voice-Cloning/synthesizer/__init__.py b/spaces/GipAdonimus/Real-Time-Voice-Cloning/synthesizer/__init__.py deleted file mode 100644 index 4287ca8617970fa8fc025b75cb319c7032706910..0000000000000000000000000000000000000000 --- a/spaces/GipAdonimus/Real-Time-Voice-Cloning/synthesizer/__init__.py +++ /dev/null @@ -1 +0,0 @@ -# \ No newline at end of file diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/cornernet/README.md b/spaces/Gradio-Blocks/uniformer_image_detection/configs/cornernet/README.md deleted file mode 100644 index 51e5e7a5b815e6c08ea4f9fa46800b18eebf42c3..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/cornernet/README.md +++ /dev/null @@ -1,33 +0,0 @@ -# CornerNet - -## Introduction - -[ALGORITHM] - -```latex -@inproceedings{law2018cornernet, - title={Cornernet: Detecting objects as paired keypoints}, - author={Law, Hei and Deng, Jia}, - booktitle={15th European Conference on Computer Vision, ECCV 2018}, - pages={765--781}, - year={2018}, - organization={Springer Verlag} -} -``` - -## Results and models - -| Backbone | Batch Size | Step/Total Epochs | Mem (GB) | Inf time (fps) | box AP | Config | Download | -| :-------------: | :--------: |:----------------: | :------: | :------------: | :----: | :------: | :--------: | -| HourglassNet-104 | [10 x 5](./cornernet_hourglass104_mstest_10x5_210e_coco.py) | 180/210 | 13.9 | 4.2 | 41.2 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/cornernet/cornernet_hourglass104_mstest_10x5_210e_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/cornernet/cornernet_hourglass104_mstest_10x5_210e_coco/cornernet_hourglass104_mstest_10x5_210e_coco_20200824_185720-5fefbf1c.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/cornernet/cornernet_hourglass104_mstest_10x5_210e_coco/cornernet_hourglass104_mstest_10x5_210e_coco_20200824_185720.log.json) | -| HourglassNet-104 | [8 x 6](./cornernet_hourglass104_mstest_8x6_210e_coco.py) | 180/210 | 15.9 | 4.2 | 41.2 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/cornernet/cornernet_hourglass104_mstest_8x6_210e_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/cornernet/cornernet_hourglass104_mstest_8x6_210e_coco/cornernet_hourglass104_mstest_8x6_210e_coco_20200825_150618-79b44c30.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/cornernet/cornernet_hourglass104_mstest_8x6_210e_coco/cornernet_hourglass104_mstest_8x6_210e_coco_20200825_150618.log.json) | -| HourglassNet-104 | [32 x 3](./cornernet_hourglass104_mstest_32x3_210e_coco.py) | 180/210 | 9.5 | 3.9 | 40.4 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/cornernet/cornernet_hourglass104_mstest_32x3_210e_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/cornernet/cornernet_hourglass104_mstest_32x3_210e_coco/cornernet_hourglass104_mstest_32x3_210e_coco_20200819_203110-1efaea91.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/cornernet/cornernet_hourglass104_mstest_32x3_210e_coco/cornernet_hourglass104_mstest_32x3_210e_coco_20200819_203110.log.json) | - -Note: - -- TTA setting is single-scale and `flip=True`. -- Experiments with `images_per_gpu=6` are conducted on Tesla V100-SXM2-32GB, `images_per_gpu=3` are conducted on GeForce GTX 1080 Ti. -- Here are the descriptions of each experiment setting: - - 10 x 5: 10 GPUs with 5 images per gpu. 
This is the same setting as that reported in the original paper. - - 8 x 6: 8 GPUs with 6 images per gpu. The total batchsize is similar to paper and only need 1 node to train. - - 32 x 3: 32 GPUs with 3 images per gpu. The default setting for 1080TI and need 4 nodes to train. diff --git a/spaces/GroveStreet/GTA_SOVITS/diffusion/wavenet.py b/spaces/GroveStreet/GTA_SOVITS/diffusion/wavenet.py deleted file mode 100644 index 3d48c7eaaa0e8191b27a5d1890eb657cbcc0d143..0000000000000000000000000000000000000000 --- a/spaces/GroveStreet/GTA_SOVITS/diffusion/wavenet.py +++ /dev/null @@ -1,108 +0,0 @@ -import math -from math import sqrt - -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch.nn import Mish - - -class Conv1d(torch.nn.Conv1d): - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - nn.init.kaiming_normal_(self.weight) - - -class SinusoidalPosEmb(nn.Module): - def __init__(self, dim): - super().__init__() - self.dim = dim - - def forward(self, x): - device = x.device - half_dim = self.dim // 2 - emb = math.log(10000) / (half_dim - 1) - emb = torch.exp(torch.arange(half_dim, device=device) * -emb) - emb = x[:, None] * emb[None, :] - emb = torch.cat((emb.sin(), emb.cos()), dim=-1) - return emb - - -class ResidualBlock(nn.Module): - def __init__(self, encoder_hidden, residual_channels, dilation): - super().__init__() - self.residual_channels = residual_channels - self.dilated_conv = nn.Conv1d( - residual_channels, - 2 * residual_channels, - kernel_size=3, - padding=dilation, - dilation=dilation - ) - self.diffusion_projection = nn.Linear(residual_channels, residual_channels) - self.conditioner_projection = nn.Conv1d(encoder_hidden, 2 * residual_channels, 1) - self.output_projection = nn.Conv1d(residual_channels, 2 * residual_channels, 1) - - def forward(self, x, conditioner, diffusion_step): - diffusion_step = self.diffusion_projection(diffusion_step).unsqueeze(-1) - conditioner = self.conditioner_projection(conditioner) - y = x + diffusion_step - - y = self.dilated_conv(y) + conditioner - - # Using torch.split instead of torch.chunk to avoid using onnx::Slice - gate, filter = torch.split(y, [self.residual_channels, self.residual_channels], dim=1) - y = torch.sigmoid(gate) * torch.tanh(filter) - - y = self.output_projection(y) - - # Using torch.split instead of torch.chunk to avoid using onnx::Slice - residual, skip = torch.split(y, [self.residual_channels, self.residual_channels], dim=1) - return (x + residual) / math.sqrt(2.0), skip - - -class WaveNet(nn.Module): - def __init__(self, in_dims=128, n_layers=20, n_chans=384, n_hidden=256): - super().__init__() - self.input_projection = Conv1d(in_dims, n_chans, 1) - self.diffusion_embedding = SinusoidalPosEmb(n_chans) - self.mlp = nn.Sequential( - nn.Linear(n_chans, n_chans * 4), - Mish(), - nn.Linear(n_chans * 4, n_chans) - ) - self.residual_layers = nn.ModuleList([ - ResidualBlock( - encoder_hidden=n_hidden, - residual_channels=n_chans, - dilation=1 - ) - for i in range(n_layers) - ]) - self.skip_projection = Conv1d(n_chans, n_chans, 1) - self.output_projection = Conv1d(n_chans, in_dims, 1) - nn.init.zeros_(self.output_projection.weight) - - def forward(self, spec, diffusion_step, cond): - """ - :param spec: [B, 1, M, T] - :param diffusion_step: [B, 1] - :param cond: [B, M, T] - :return: - """ - x = spec.squeeze(1) - x = self.input_projection(x) # [B, residual_channel, T] - - x = F.relu(x) - diffusion_step = self.diffusion_embedding(diffusion_step) - diffusion_step = self.mlp(diffusion_step) 
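        # Descriptive comment: the loop below threads x through every ResidualBlock
        # together with the conditioning `cond` and the embedded `diffusion_step`;
        # each block returns an updated x plus a skip tensor. The skips are stacked,
        # summed, and scaled by 1 / sqrt(len(self.residual_layers)) before the final
        # skip and output projections, a WaveNet/DiffWave-style normalization that
        # keeps the aggregated skip signal at a scale independent of network depth.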
- skip = [] - for layer in self.residual_layers: - x, skip_connection = layer(x, cond, diffusion_step) - skip.append(skip_connection) - - x = torch.sum(torch.stack(skip), dim=0) / sqrt(len(self.residual_layers)) - x = self.skip_projection(x) - x = F.relu(x) - x = self.output_projection(x) # [B, mel_bins, T] - return x[:, None, :, :] diff --git a/spaces/GroveStreet/GTA_SOVITS/vencoder/CNHubertLarge.py b/spaces/GroveStreet/GTA_SOVITS/vencoder/CNHubertLarge.py deleted file mode 100644 index 9db93781c36884c4096fa6fa5a12a95d385e80b8..0000000000000000000000000000000000000000 --- a/spaces/GroveStreet/GTA_SOVITS/vencoder/CNHubertLarge.py +++ /dev/null @@ -1,33 +0,0 @@ -from vencoder.encoder import SpeechEncoder -import torch -from fairseq import checkpoint_utils - -class CNHubertLarge(SpeechEncoder): - def __init__(self,vec_path = "pretrain/chinese-hubert-large-fairseq-ckpt.pt",device=None): - print("load model(s) from {}".format(vec_path)) - self.hidden_dim = 1024 - models, saved_cfg, task = checkpoint_utils.load_model_ensemble_and_task( - [vec_path], - suffix="", - ) - if device is None: - self.dev = torch.device("cuda" if torch.cuda.is_available() else "cpu") - else: - self.dev = torch.device(device) - self.model = models[0].to(self.dev) - self.model.eval() - - def encoder(self, wav): - feats = wav - if feats.dim() == 2: # double channels - feats = feats.mean(-1) - assert feats.dim() == 1, feats.dim() - feats = feats.view(1, -1) - padding_mask = torch.BoolTensor(feats.shape).fill_(False) - inputs = { - "source": feats.to(wav.device), - "padding_mask": padding_mask.to(wav.device) - } - with torch.no_grad(): - logits = self.model.extract_features(**inputs) - return logits[0].transpose(1, 2) \ No newline at end of file diff --git a/spaces/HaloMaster/chinesesummary/fengshen/examples/clue_sim/README.md b/spaces/HaloMaster/chinesesummary/fengshen/examples/clue_sim/README.md deleted file mode 100644 index 41b5b72129491139fa6f21e7cc2ea07d027a60c3..0000000000000000000000000000000000000000 --- a/spaces/HaloMaster/chinesesummary/fengshen/examples/clue_sim/README.md +++ /dev/null @@ -1,90 +0,0 @@ -# 二郎神打CLUE语义匹配榜 - - [比赛介绍](#比赛介绍) - - [clue语义匹配榜打榜思路](#clue语义匹配榜-打榜思路) - - [数据集介绍](#数据集介绍) - - [环境](#环境) - - [用法](#用法) - - [提交](#提交) - -## 比赛介绍 -- clue的语义匹配榜 (https://www.cluebenchmarks.com/sim.html) -- clue sim官方实例 (https://github.com/CLUEbenchmark/QBQTC) - -## clue语义匹配榜 打榜思路 - -- 直接使用fengshenbang的二郎神模型,就打到了前三。 -- 为了解决标签平衡问题,设计了一个交叉熵平滑滤波loss,就达到了第一。 - -详细的思路讲解在知乎: 链接 - -## 数据集介绍 - -QQ浏览器搜索相关性数据集(QBQTC,QQ Browser Query Title Corpus),是QQ浏览器搜索引擎目前针对大搜场景构建的一个融合了相关性、权威性、内容质量、 -时效性等维度标注的学习排序(LTR)数据集,广泛应用在搜索引擎业务场景中。 - -相关性的含义:0,相关程度差;1,有一定相关性;2,非常相关。数字越大相关性越高。 - -**数据量统计** - -| 训练集(train) | 验证集(dev) | 公开测试集(test_public) | 私有测试集(test) | -| :----: | :----: | :----: | :----: | -| 180,000| 20,000| 5,000 | >=10,0000| - -**评测指标** - -f1_score来自于sklearn.metrics,计算公式如下: -`F1 = 2 * (precision * recall) / (precision + recall)` - -## 环境 -* Python >= 3.6 -* torch == 1.8.0+cu111 -* transforms == 4.6.0 -* pytorch-lightning == 1.3.2 -* 一张GPU: A100 40G - -## 用法 - -fengshenbang的二郎神模型的使用是非常简单的。 - -该example下的代码和思想继承自fengshen/examples/classification/finetune_classification.py - -如果需要直接使用该python脚本,把官方的数据集处理成如下形式: - -```json -{"sentence1": "应届生实习", "sentence2": "实习生招聘-应届生求职网", "label": "1", "id": 0} -``` - -然后修改其中的fengshen/examples/classification/finetune_classification.sh的参数即可。 - -下面介绍该example的用法: - -### 创建文件夹 - -- dataset 文件夹,下载官方数据集后放进来就行 -- weights 文件夹,用以存放二郎神模型 -- submissions 文件夹,用以存放需要评测的json文件 - -### Train -```bash -python main.py \ - 
--mode 'Train' \ - --model_path './weights/Erlangshen-MegatronBert-1.3B-Similarity' \ - --model_name 'IDEA-CCNL/Erlangshen-MegatronBert-1.3B-Similarity' -``` - -加载最优的模型用以Test set的预测。 - -### Test -```bash -python main.py \ - --mode 'Test' \ - --predict_model_path 'your_model_path' \ - --model_path './weights/Erlangshen-MegatronBert-1.3B-Similarity' \ - --model_name 'IDEA-CCNL/Erlangshen-MegatronBert-1.3B-Similarity' -``` - -## 提交 - -在路径 ./submissions 下,找到 qbqtc_predict.json 并且提交到测评系统 - -注意:名字必须为qbqtc_predict.json diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/roberta/commonsense_qa/README.md b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/roberta/commonsense_qa/README.md deleted file mode 100644 index 7f386decd87d93bf701e2e313c7fea39d982224f..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/roberta/commonsense_qa/README.md +++ /dev/null @@ -1,99 +0,0 @@ -# Finetuning RoBERTa on Commonsense QA - -We follow a similar approach to [finetuning RACE](../README.race.md). Specifically -for each question we construct five inputs, one for each of the five candidate -answer choices. Each input is constructed by concatenating the question and -candidate answer. We then encode each input and pass the resulting "[CLS]" -representations through a fully-connected layer to predict the correct answer. -We train with a standard cross-entropy loss. - -We also found it helpful to prepend a prefix of `Q:` to the question and `A:` to -the answer. The complete input format is: -``` - Q: Where would I not want a fox? A: hen house -``` - -Our final submission is based on a hyperparameter search over the learning rate -(1e-5, 2e-5, 3e-5), batch size (8, 16), number of training steps (2000, 3000, -4000) and random seed. We selected the model with the best performance on the -development set after 100 trials. - -### 1) Download data from the Commonsense QA website (https://www.tau-nlp.org/commonsenseqa) -```bash -bash examples/roberta/commonsense_qa/download_cqa_data.sh -``` - -### 2) Finetune - -```bash -MAX_UPDATES=3000 # Number of training steps. -WARMUP_UPDATES=150 # Linearly increase LR over this many steps. -LR=1e-05 # Peak LR for polynomial LR scheduler. -MAX_SENTENCES=16 # Batch size. -SEED=1 # Random seed. -ROBERTA_PATH=/path/to/roberta/model.pt -DATA_DIR=data/CommonsenseQA - -# we use the --user-dir option to load the task from -# the examples/roberta/commonsense_qa directory: -FAIRSEQ_PATH=/path/to/fairseq -FAIRSEQ_USER_DIR=${FAIRSEQ_PATH}/examples/roberta/commonsense_qa - -CUDA_VISIBLE_DEVICES=0 fairseq-train --fp16 --ddp-backend=legacy_ddp \ - $DATA_DIR \ - --user-dir $FAIRSEQ_USER_DIR \ - --restore-file $ROBERTA_PATH \ - --reset-optimizer --reset-dataloader --reset-meters \ - --no-epoch-checkpoints --no-last-checkpoints --no-save-optimizer-state \ - --best-checkpoint-metric accuracy --maximize-best-checkpoint-metric \ - --task commonsense_qa --init-token 0 --bpe gpt2 \ - --arch roberta_large --max-positions 512 \ - --dropout 0.1 --attention-dropout 0.1 --weight-decay 0.01 \ - --criterion sentence_ranking --num-classes 5 \ - --optimizer adam --adam-betas '(0.9, 0.98)' --adam-eps 1e-06 --clip-norm 0.0 \ - --lr-scheduler polynomial_decay --lr $LR \ - --warmup-updates $WARMUP_UPDATES --total-num-update $MAX_UPDATES \ - --batch-size $MAX_SENTENCES \ - --max-update $MAX_UPDATES \ - --log-format simple --log-interval 25 \ - --seed $SEED -``` - -The above command assumes training on 1 GPU with 32GB of RAM. 
For GPUs with -less memory, decrease `--batch-size` and increase `--update-freq` -accordingly to compensate. - -### 3) Evaluate -```python -import json -import torch -from fairseq.models.roberta import RobertaModel -from examples.roberta import commonsense_qa # load the Commonsense QA task -roberta = RobertaModel.from_pretrained('checkpoints', 'checkpoint_best.pt', 'data/CommonsenseQA') -roberta.eval() # disable dropout -roberta.cuda() # use the GPU (optional) -nsamples, ncorrect = 0, 0 -with open('data/CommonsenseQA/valid.jsonl') as h: - for line in h: - example = json.loads(line) - scores = [] - for choice in example['question']['choices']: - input = roberta.encode( - 'Q: ' + example['question']['stem'], - 'A: ' + choice['text'], - no_separator=True - ) - score = roberta.predict('sentence_classification_head', input, return_logits=True) - scores.append(score) - pred = torch.cat(scores).argmax() - answer = ord(example['answerKey']) - ord('A') - nsamples += 1 - if pred == answer: - ncorrect += 1 - -print('Accuracy: ' + str(ncorrect / float(nsamples))) -# Accuracy: 0.7846027846027847 -``` - -The above snippet is not batched, which makes it quite slow. See [instructions -for batched prediction with RoBERTa](https://github.com/pytorch/fairseq/tree/main/examples/roberta#batched-prediction). diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/encoders/sentencepiece_bpe.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/encoders/sentencepiece_bpe.py deleted file mode 100644 index a76d46a2014e81eff72b19f6c13084a855fcd477..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/encoders/sentencepiece_bpe.py +++ /dev/null @@ -1,48 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from dataclasses import dataclass, field - -from fairseq import file_utils -from fairseq.data.encoders import register_bpe -from fairseq.dataclass import FairseqDataclass - - -@dataclass -class SentencepieceConfig(FairseqDataclass): - sentencepiece_model: str = field( - default="???", metadata={"help": "path to sentencepiece model"} - ) - - -@register_bpe("sentencepiece", dataclass=SentencepieceConfig) -class SentencepieceBPE(object): - def __init__(self, cfg): - sentencepiece_model = file_utils.cached_path(cfg.sentencepiece_model) - try: - import sentencepiece as spm - - self.sp = spm.SentencePieceProcessor() - self.sp.Load(sentencepiece_model) - except ImportError: - raise ImportError( - "Please install sentencepiece with: pip install sentencepiece" - ) - - def encode(self, x: str) -> str: - return " ".join(self.sp.EncodeAsPieces(x)) - - def decode(self, x: str) -> str: - return x.replace(" ", "").replace("\u2581", " ").strip() - - def is_beginning_of_word(self, x: str) -> bool: - if x in ["", "", "", ""]: - # special elements are always considered beginnings - # HACK: this logic is already present in fairseq/tasks/masked_lm.py - # but these special tokens are also contained in the sentencepiece - # vocabulary which causes duplicate special tokens. This hack makes - # sure that they are all taken into account. 
- return True - return x.startswith("\u2581") diff --git a/spaces/Harveenchadha/en_to_indic_translation/indic_nlp_library/indicnlp/normalize/indic_normalize.py b/spaces/Harveenchadha/en_to_indic_translation/indic_nlp_library/indicnlp/normalize/indic_normalize.py deleted file mode 100644 index fcd2f4cddc17e5967a4992afb3ec56488c489e1d..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/en_to_indic_translation/indic_nlp_library/indicnlp/normalize/indic_normalize.py +++ /dev/null @@ -1,984 +0,0 @@ -# -*- coding: utf-8 -*- - -# -# Copyright (c) 2013-present, Anoop Kunchukuttan -# All rights reserved. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -# - -#Program for normalization of text written in Unicode. This is mainly geared towards Indic scripts -# -# @author Anoop Kunchukuttan -# - -import sys, codecs, string, itertools, re -from indicnlp import langinfo - - -class NormalizerI(object): - """ - The normalizer classes do the following: - - * Some characters have multiple Unicode codepoints. The normalizer chooses a single standard representation - * Some control characters are deleted - * While typing using the Latin keyboard, certain typical mistakes occur which are corrected by the module - - Base class for normalizer. Performs some common normalization, which includes: - - * Byte order mark, word joiner, etc. removal - * ZERO_WIDTH_NON_JOINER and ZERO_WIDTH_JOINER removal - * ZERO_WIDTH_SPACE and NO_BREAK_SPACE replaced by spaces - - Script specific normalizers should derive from this class and override the normalize() method. - They can call the super class 'normalize() method to avail of the common normalization - - """ - - BYTE_ORDER_MARK='\uFEFF' - BYTE_ORDER_MARK_2='\uFFFE' - WORD_JOINER='\u2060' - SOFT_HYPHEN='\u00AD' - - ZERO_WIDTH_SPACE='\u200B' - NO_BREAK_SPACE='\u00A0' - - ZERO_WIDTH_NON_JOINER='\u200C' - ZERO_WIDTH_JOINER='\u200D' - - def _normalize_punctuations(self, text): - """ - Normalize punctuations. 
- Applied many of the punctuation normalizations that are part of MosesNormalizer - from sacremoses - """ - text=text.replace(NormalizerI.BYTE_ORDER_MARK,'') - text=text.replace('„', r'"') - text=text.replace('“', r'"') - text=text.replace('”', r'"') - text=text.replace('–', r'-') - text=text.replace('—', r' - ') - text=text.replace('´', r"'") - text=text.replace('‘', r"'") - text=text.replace('‚', r"'") - text=text.replace('’', r"'") - text=text.replace("''", r'"') - text=text.replace('´´', r'"') - text=text.replace('…', r'...') - - return text - - def normalize(self,text): - pass - - -class BaseNormalizer(NormalizerI): - - def __init__(self,lang, - remove_nuktas=False, - nasals_mode='do_nothing', - do_normalize_chandras=False, - do_normalize_vowel_ending=False): - - self.lang=lang - self.remove_nuktas=remove_nuktas - self.nasals_mode=nasals_mode - self.do_normalize_chandras=do_normalize_chandras - self.do_normalize_vowel_ending=do_normalize_vowel_ending - - self._init_normalize_chandras() - self._init_normalize_nasals() - self._init_normalize_vowel_ending() - #self._init_visarga_correction() - - def _init_normalize_vowel_ending(self): - - if self.lang in langinfo.IE_LANGUAGES: - self.fn_vowel_ending=self._normalize_word_vowel_ending_ie - elif self.lang in langinfo.DRAVIDIAN_LANGUAGES: - self.fn_vowel_ending=self._normalize_word_vowel_ending_dravidian - else: - self.fn_vowel_ending=lambda x: x - - def _init_normalize_chandras(self): - - substitution_offsets =\ - [ - [0x0d , 0x0f], # chandra e, independent - [0x11 , 0x13], # chandra o, independent - [0x45 , 0x47], # chandra e , 0xde],pendent - [0x49 , 0x4b], # chandra o , 0xde],pendent - # [0x72 , 0x0f], # mr: chandra e, independent - - [0x00 , 0x02], # chandrabindu - [0x01 , 0x02], # chandrabindu - ] - - self.chandra_substitutions = [ - (langinfo.offset_to_char(x[0],self.lang), langinfo.offset_to_char(x[1],self.lang)) - for x in substitution_offsets ] - - def _normalize_chandras(self,text): - for match, repl in self.chandra_substitutions: - text=text.replace(match,repl) - return text - - def _init_to_anusvaara_strict(self): - """ - `r1_nasal=re.compile(r'\\u0919\\u094D([\\u0915-\\u0918])')` - """ - - pat_signatures=\ - [ - [0x19,0x15,0x18], - [0x1e,0x1a,0x1d], - [0x23,0x1f,0x22], - [0x28,0x24,0x27], - [0x29,0x24,0x27], - [0x2e,0x2a,0x2d], - ] - - halant_offset=0x4d - anusvaara_offset=0x02 - - pats=[] - - for pat_signature in pat_signatures: - pat=re.compile(r'{nasal}{halant}([{start_r}-{end_r}])'.format( - nasal=langinfo.offset_to_char(pat_signature[0],self.lang), - halant=langinfo.offset_to_char(halant_offset,self.lang), - start_r=langinfo.offset_to_char(pat_signature[1],self.lang), - end_r=langinfo.offset_to_char(pat_signature[2],self.lang), - )) - pats.append(pat) - - repl_string='{anusvaara}\\1'.format(anusvaara=langinfo.offset_to_char(anusvaara_offset,self.lang)) - - self.pats_repls=(pats,repl_string) - - def _to_anusvaara_strict(self,text): - - pats, repl_string = self.pats_repls - for pat in pats: - text=pat.sub(repl_string,text) - - return text - - def _init_to_anusvaara_relaxed(self): - """ - `r1_nasal=re.compile(r'\\u0919\\u094D([\\u0915-\\u0918])')` - """ - - nasals_list=[0x19,0x1e,0x23,0x28,0x29,0x2e] - nasals_list_str=','.join([langinfo.offset_to_char(x,self.lang) for x in nasals_list]) - - halant_offset=0x4d - anusvaara_offset=0x02 - - pat=re.compile(r'[{nasals_list_str}]{halant}'.format( - nasals_list_str=nasals_list_str, - halant=langinfo.offset_to_char(halant_offset,self.lang), - )) - - 
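        # Descriptive comment: the relaxed variant replaces any of the nasal consonants
        # listed above, when followed by a halant, with the anusvaara regardless of what
        # comes next (the strict variant defined just above only does so when the
        # following consonant belongs to the nasal's own varga). For Devanagari this
        # maps '\u0928\u094d' ("na" + halant) to '\u0902', e.g. हिन्दी becomes हिंदी.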
repl_string='{anusvaara}'.format(anusvaara=langinfo.offset_to_char(anusvaara_offset,self.lang)) - - self.pats_repls = (pat,repl_string) - - def _to_anusvaara_relaxed(self,text): - pat, repl_string = self.pats_repls - return pat.sub(repl_string,text) - - - def _init_to_nasal_consonants(self): - """ - `r1_nasal=re.compile(r'\\u0919\\u094D([\\u0915-\\u0918])')` - """ - - pat_signatures=\ - [ - [0x19,0x15,0x18], - [0x1e,0x1a,0x1d], - [0x23,0x1f,0x22], - [0x28,0x24,0x27], - [0x29,0x24,0x27], - [0x2e,0x2a,0x2d], - ] - - halant_offset=0x4d - anusvaara_offset=0x02 - - pats=[] - repl_strings=[] - - for pat_signature in pat_signatures: - pat=re.compile(r'{anusvaara}([{start_r}-{end_r}])'.format( - anusvaara=langinfo.offset_to_char(anusvaara_offset,self.lang), - start_r=langinfo.offset_to_char(pat_signature[1],self.lang), - end_r=langinfo.offset_to_char(pat_signature[2],self.lang), - )) - pats.append(pat) - repl_string='{nasal}{halant}\\1'.format( - nasal=langinfo.offset_to_char(pat_signature[0],self.lang), - halant=langinfo.offset_to_char(halant_offset,self.lang), - ) - repl_strings.append(repl_string) - - self.pats_repls=list(zip(pats,repl_strings)) - - def _to_nasal_consonants(self,text): - - for pat, repl in self.pats_repls: - text=pat.sub(repl,text) - - return text - - def _init_normalize_nasals(self): - - if self.nasals_mode == 'to_anusvaara_strict': - self._init_to_anusvaara_strict() - elif self.nasals_mode == 'to_anusvaara_relaxed': - self._init_to_anusvaara_relaxed() - elif self.nasals_mode == 'to_nasal_consonants': - self._init_to_nasal_consonants() - - def _normalize_nasals(self,text): - if self.nasals_mode == 'to_anusvaara_strict': - return self._to_anusvaara_strict(text) - elif self.nasals_mode == 'to_anusvaara_relaxed': - return self._to_anusvaara_relaxed(text) - elif self.nasals_mode == 'to_nasal_consonants': - return self._to_nasal_consonants(text) - else: - return text - - - def _normalize_word_vowel_ending_dravidian(self,word): - """ - for Dravidian - - consonant ending: add 'a' ki maatra - - halant ending: no change - - 'a' ki maatra: no change - """ - if len(word)>0 and langinfo.is_consonant(word[-1],self.lang): - return word+langinfo.offset_to_char(0x3e,self.lang) - else: - return word - - def _normalize_word_vowel_ending_ie(self,word): - """ - for IE - - consonant ending: add halant - - halant ending: no change - - 'a' ki maatra: no change - """ - if len(word)>0 and langinfo.is_consonant(word[-1],self.lang): - return word+langinfo.offset_to_char(langinfo.HALANTA_OFFSET,self.lang) - else: - return word - - def _normalize_vowel_ending(self,text): - return ' '.join([ self.fn_vowel_ending(w) for w in text.split(' ') ]) - - def normalize(self,text): - """ - Method to be implemented for normalization for each script - """ - text=text.replace(NormalizerI.BYTE_ORDER_MARK,'') - text=text.replace(NormalizerI.BYTE_ORDER_MARK_2,'') - text=text.replace(NormalizerI.WORD_JOINER,'') - text=text.replace(NormalizerI.SOFT_HYPHEN,'') - - text=text.replace(NormalizerI.ZERO_WIDTH_SPACE,' ') # ?? 
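        # Descriptive comment: ZERO_WIDTH_SPACE and NO_BREAK_SPACE are mapped to a plain
        # space because they typically mark word boundaries, whereas ZERO_WIDTH_NON_JOINER
        # and ZERO_WIDTH_JOINER a few lines below are deleted outright, a common
        # normalization choice since in these scripts they mainly control how conjuncts
        # and ligatures are rendered rather than adding textual content.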
- text=text.replace(NormalizerI.NO_BREAK_SPACE,' ') - - text=text.replace(NormalizerI.ZERO_WIDTH_NON_JOINER, '') - text=text.replace(NormalizerI.ZERO_WIDTH_JOINER,'') - - text=self._normalize_punctuations(text) - - if self.do_normalize_chandras: - text=self._normalize_chandras(text) - text=self._normalize_nasals(text) - if self.do_normalize_vowel_ending: - text=self._normalize_vowel_ending(text) - - return text - - - def get_char_stats(self,text): - print(len(re.findall(NormalizerI.BYTE_ORDER_MARK,text))) - print(len(re.findall(NormalizerI.BYTE_ORDER_MARK_2,text))) - print(len(re.findall(NormalizerI.WORD_JOINER,text))) - print(len(re.findall(NormalizerI.SOFT_HYPHEN,text))) - - print(len(re.findall(NormalizerI.ZERO_WIDTH_SPACE,text) )) - print(len(re.findall(NormalizerI.NO_BREAK_SPACE,text))) - - print(len(re.findall(NormalizerI.ZERO_WIDTH_NON_JOINER,text))) - print(len(re.findall(NormalizerI.ZERO_WIDTH_JOINER,text))) - - #for mobj in re.finditer(NormalizerI.ZERO_WIDTH_NON_JOINER,text): - # print text[mobj.start()-10:mobj.end()+10].replace('\n', ' ').replace(NormalizerI.ZERO_WIDTH_NON_JOINER,'').encode('utf-8') - #print hex(ord(text[mobj.end():mobj.end()+1])) - - def correct_visarga(self,text,visarga_char,char_range): - text=re.sub(r'([\u0900-\u097f]):','\\1\u0903',text) - - - -class DevanagariNormalizer(BaseNormalizer): - """ - Normalizer for the Devanagari script. In addition to basic normalization by the super class, - - * Replaces the composite characters containing nuktas by their decomposed form - * replace pipe character '|' by poorna virama character - * replace colon ':' by visarga if the colon follows a charcter in this script - - """ - - NUKTA='\u093C' - - def __init__(self,lang='hi',remove_nuktas=False,nasals_mode='do_nothing', - do_normalize_chandras=False,do_normalize_vowel_ending=False): - super(DevanagariNormalizer,self).__init__(lang,remove_nuktas,nasals_mode,do_normalize_chandras,do_normalize_vowel_ending) - - def normalize(self,text): - - # common normalization for Indic scripts - text=super(DevanagariNormalizer,self).normalize(text) - - # chandra a replacement for Marathi - text=text.replace('\u0972','\u090f') - - # decomposing Nukta based composite characters - text=text.replace('\u0929','\u0928'+DevanagariNormalizer.NUKTA) - text=text.replace('\u0931','\u0930'+DevanagariNormalizer.NUKTA) - text=text.replace('\u0934','\u0933'+DevanagariNormalizer.NUKTA) - text=text.replace('\u0958','\u0915'+DevanagariNormalizer.NUKTA) - text=text.replace('\u0959','\u0916'+DevanagariNormalizer.NUKTA) - text=text.replace('\u095A','\u0917'+DevanagariNormalizer.NUKTA) - text=text.replace('\u095B','\u091C'+DevanagariNormalizer.NUKTA) - text=text.replace('\u095C','\u0921'+DevanagariNormalizer.NUKTA) - text=text.replace('\u095D','\u0922'+DevanagariNormalizer.NUKTA) - text=text.replace('\u095E','\u092B'+DevanagariNormalizer.NUKTA) - text=text.replace('\u095F','\u092F'+DevanagariNormalizer.NUKTA) - - if self.remove_nuktas: - text=text.replace(DevanagariNormalizer.NUKTA,'') - - # replace pipe character for poorna virama - text=text.replace('\u007c','\u0964') - - # correct visarga - text=re.sub(r'([\u0900-\u097f]):','\\1\u0903',text) - - return text - - def get_char_stats(self,text): - super(DevanagariNormalizer,self).get_char_stats(text) - - print((len(re.findall('\u0929',text)))) - print((len(re.findall('\u0931',text)))) - print((len(re.findall('\u0934',text)))) - print((len(re.findall('\u0958',text)))) - print((len(re.findall('\u0959',text)))) - print((len(re.findall('\u095A',text)))) - 
print((len(re.findall('\u095B',text)))) - print((len(re.findall('\u095C',text)))) - print((len(re.findall('\u095D',text)))) - print((len(re.findall('\u095E',text)))) - print((len(re.findall('\u095F',text)))) - - #print(len(re.findall(u'\u0928'+DevanagariNormalizer.NUKTA,text))) - #print(len(re.findall(u'\u0930'+DevanagariNormalizer.NUKTA,text))) - #print(len(re.findall(u'\u0933'+DevanagariNormalizer.NUKTA,text))) - #print(len(re.findall(u'\u0915'+DevanagariNormalizer.NUKTA,text))) - #print(len(re.findall(u'\u0916'+DevanagariNormalizer.NUKTA,text))) - #print(len(re.findall(u'\u0917'+DevanagariNormalizer.NUKTA,text))) - #print(len(re.findall(u'\u091C'+DevanagariNormalizer.NUKTA,text))) - #print(len(re.findall(u'\u0921'+DevanagariNormalizer.NUKTA,text))) - #print(len(re.findall(u'\u0922'+DevanagariNormalizer.NUKTA,text))) - #print(len(re.findall(u'\u092B'+DevanagariNormalizer.NUKTA,text))) - #print(len(re.findall(u'\u092F'+DevanagariNormalizer.NUKTA,text))) - -class GurmukhiNormalizer(BaseNormalizer): - """ - Normalizer for the Gurmukhi script. In addition to basic normalization by the super class, - - * Replaces the composite characters containing nuktas by their decomposed form - * Replace the reserved character for poorna virama (if used) with the recommended generic Indic scripts poorna virama - * replace pipe character '|' by poorna virama character - * replace colon ':' by visarga if the colon follows a charcter in this script - """ - - NUKTA='\u0A3C' - - VOWEL_NORM_MAPS={ - ## http://www.unicode.org/versions/Unicode12.1.0/ch12.pdf - ## Table 12-16 - '\u0a05\u0a3e': '\u0a06', - '\u0a72\u0a3f': '\u0a07', - '\u0a72\u0a40': '\u0a08', - '\u0a73\u0a41': '\u0a09', - '\u0a73\u0a42': '\u0a0a', - '\u0a72\u0a47': '\u0a0f', - '\u0a05\u0a48': '\u0a10', - '\u0a73\u0a4b': '\u0a13', - '\u0a05\u0a4c': '\u0a14', - } - - def __init__(self,lang='pa',remove_nuktas=False,nasals_mode='do_nothing',do_normalize_chandras=False, - do_normalize_vowel_ending=False, - do_canonicalize_addak=False, - do_canonicalize_tippi=False, - do_replace_vowel_bases=False): - super(GurmukhiNormalizer,self).__init__(lang,remove_nuktas,nasals_mode,do_normalize_chandras,do_normalize_vowel_ending) - self.do_canonicalize_addak=do_canonicalize_addak - self.do_canonicalize_tippi=do_canonicalize_tippi - self.do_replace_vowel_bases=do_replace_vowel_bases - - - def _normalize_vowels(self,text): - """ - - """ - - ## standard vowel replacements as per suggestions in - ## http://www.unicode.org/versions/Unicode12.1.0/ch12.pdf - ## Table 12-16 - - for k,v in GurmukhiNormalizer.VOWEL_NORM_MAPS.items(): - text=text.replace(k,v) - - ## the above mappings should account for majority of the variantions, - ## Rest are handled via this generic rule which looks at the diacritic - ## following the 2 special characters - ## TBD: don't see evidence for this in Wikipedia corpus - - ## If these special characters occur without any diacritic, replace them with closet - ## equivalent vowels - if self.do_replace_vowel_bases: - text=text.replace('\u0a72','\u0a07') - text=text.replace('\u0a73','\u0a09') - - return text - - - def normalize(self,text): - - # Addak - if self.do_canonicalize_addak: - ## replace addak+consonant with consonat+halant+consonant - text=re.sub(r'\u0a71(.)','\\1\u0a4d\\1',text) - - # Tippi - if self.do_canonicalize_tippi: - text=text.replace('\u0a70','\u0a02') - - # Vowels: Gurumuki has multiple ways of representing independent vowels due - # to the characters 'iri' and 'ura'. 
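        # Descriptive comment: _normalize_vowels() canonicalizes such two-codepoint
        # spellings into the single independent vowel letter, following VOWEL_NORM_MAPS
        # above, e.g. '\u0a05\u0a3e' (letter A + vowel sign AA) -> '\u0a06' (letter AA)
        # and '\u0a72\u0a3f' (iri + vowel sign I) -> '\u0a07' (letter I).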
- text=self._normalize_vowels(text) - - # common normalization for Indic scripts - text=super(GurmukhiNormalizer,self).normalize(text) - - # decomposing Nukta based composite characters - text=text.replace('\u0a33','\u0a32'+GurmukhiNormalizer.NUKTA) - text=text.replace('\u0a36','\u0a38'+GurmukhiNormalizer.NUKTA) - text=text.replace('\u0a59','\u0a16'+GurmukhiNormalizer.NUKTA) - text=text.replace('\u0a5a','\u0a17'+GurmukhiNormalizer.NUKTA) - text=text.replace('\u0a5b','\u0a1c'+GurmukhiNormalizer.NUKTA) - text=text.replace('\u0a5e','\u0a2b'+GurmukhiNormalizer.NUKTA) - - if self.remove_nuktas: - text=text.replace(GurmukhiNormalizer.NUKTA,'') - - # replace the poorna virama codes specific to script - # with generic Indic script codes - text=text.replace('\u0a64','\u0964') - text=text.replace('\u0a65','\u0965') - - ## replace pipe character for poorna virama - text=text.replace('\u007c','\u0964') - - # correct visarge - text=re.sub(r'([\u0a00-\u0a7f]):','\\1\u0a03',text) - - return text - - -class GujaratiNormalizer(BaseNormalizer): - """ - Normalizer for the Gujarati script. In addition to basic normalization by the super class, - - * Replace the reserved character for poorna virama (if used) with the recommended generic Indic scripts poorna virama - * replace colon ':' by visarga if the colon follows a charcter in this script - """ - - NUKTA='\u0ABC' - - def __init__(self,lang='gu',remove_nuktas=False,nasals_mode='do_nothing',do_normalize_chandras=False, - do_normalize_vowel_ending=False): - super(GujaratiNormalizer,self).__init__(lang,remove_nuktas,nasals_mode,do_normalize_chandras,do_normalize_vowel_ending) - - def normalize(self,text): - - # common normalization for Indic scripts - text=super(GujaratiNormalizer,self).normalize(text) - - # decomposing Nukta based composite characters - if self.remove_nuktas: - text=text.replace(GujaratiNormalizer.NUKTA,'') - - - # replace the poorna virama codes specific to script - # with generic Indic script codes - text=text.replace('\u0ae4','\u0964') - text=text.replace('\u0ae5','\u0965') - - # correct visarge - text=re.sub(r'([\u0a80-\u0aff]):','\\1\u0a83',text) - - return text - - -class OriyaNormalizer(BaseNormalizer): - """ - Normalizer for the Oriya script. 
In addition to basic normalization by the super class, - - * Replaces the composite characters containing nuktas by their decomposed form - * Replace the reserved character for poorna virama (if used) with the recommended generic Indic scripts poorna virama - * Canonicalize two part dependent vowels - * Replace 'va' with 'ba' - * replace pipe character '|' by poorna virama character - * replace colon ':' by visarga if the colon follows a charcter in this script - """ - - NUKTA='\u0B3C' - - VOWEL_NORM_MAPS={ - ## See Table 12-22 in http://www.unicode.org/versions/Unicode12.1.0/ch12.pdf - '\u0b05\u0b3e': '\u0b06', - '\u0b0f\u0b57': '\u0b10', - '\u0b13\u0b57': '\u0b14', - } - - - def __init__(self,lang='or',remove_nuktas=False,nasals_mode='do_nothing',do_normalize_chandras=False, - do_normalize_vowel_ending=False, - do_remap_wa=False): - super(OriyaNormalizer,self).__init__(lang,remove_nuktas,nasals_mode,do_normalize_chandras,do_normalize_vowel_ending) - self.do_remap_wa=do_remap_wa - - def normalize(self,text): - - # common normalization for Indic scripts - text=super(OriyaNormalizer,self).normalize(text) - - ## standard vowel replacements as per suggestions in Unicode documents - for k,v in OriyaNormalizer.VOWEL_NORM_MAPS.items(): - text=text.replace(k,v) - - # decomposing Nukta based composite characters - text=text.replace('\u0b5c','\u0b21'+OriyaNormalizer.NUKTA) - text=text.replace('\u0b5d','\u0b22'+OriyaNormalizer.NUKTA) - - if self.remove_nuktas: - text=text.replace(OriyaNormalizer.NUKTA,'') - - # replace the poorna virama codes specific to script - # with generic Indic script codes - text=text.replace('\u0b64','\u0964') - text=text.replace('\u0b65','\u0965') - - # replace pipe character for poorna virama - text=text.replace('\u0b7c','\u0964') - - # replace wa with ba - if self.do_remap_wa: - text=text.replace('\u0b71','\u0b2c') - - # replace va with ba - # NOTE: documentation (chapter on Indic scripts) and codepoint chart seem contradictory - # (this applied to wa to ba rule also above) - text=text.replace('\u0b35','\u0b2c') - - # AI dependent vowel sign - text=text.replace('\u0b47\u0b56','\u0b58') - - # two part dependent vowels - text=text.replace('\u0b47\u0b3e','\u0b4b') - text=text.replace('\u0b47\u0b57','\u0b4c') - - - # additional consonant - not clear how to handle this - # ignore - - # correct visarge - text=re.sub(r'([\u0b00-\u0b7f]):','\\1\u0b03',text) - - return text - - -class BengaliNormalizer(BaseNormalizer): - """ - Normalizer for the Bengali script. 
In addition to basic normalization by the super class, - - * Replaces the composite characters containing nuktas by their decomposed form - * Replace the reserved character for poorna virama (if used) with the recommended generic Indic scripts poorna virama - * Canonicalize two part dependent vowels - * replace pipe character '|' by poorna virama character - * replace colon ':' by visarga if the colon follows a charcter in this script - - """ - - NUKTA='\u09BC' - - def __init__(self,lang='bn',remove_nuktas=False,nasals_mode='do_nothing',do_normalize_chandras=False, - do_normalize_vowel_ending=False, - do_remap_assamese_chars=False): - super(BengaliNormalizer,self).__init__(lang,remove_nuktas,nasals_mode,do_normalize_chandras,do_normalize_vowel_ending) - self.do_remap_assamese_chars=do_remap_assamese_chars - - def normalize(self,text): - - # common normalization for Indic scripts - text=super(BengaliNormalizer,self).normalize(text) - - # decomposing Nukta based composite characters - text=text.replace('\u09dc','\u09a1'+BengaliNormalizer.NUKTA) - text=text.replace('\u09dd','\u09a2'+BengaliNormalizer.NUKTA) - text=text.replace('\u09df','\u09af'+BengaliNormalizer.NUKTA) - - if self.remove_nuktas: - text=text.replace(BengaliNormalizer.NUKTA,'') - - if self.do_remap_assamese_chars and self.lang=='as': - text=text.replace('\u09f0','\u09b0') # 'ra' character - text=text.replace('\u09f1','\u09ac') # 'va' character - - # replace the poorna virama codes specific to script - # with generic Indic script codes - text=text.replace('\u09e4','\u0964') - text=text.replace('\u09e5','\u0965') - - # replace pipe character for poorna virama - text=text.replace('\u007c','\u0964') - # replace bengali currency numerator four for poorna virama (it looks similar and is used as a substitute) - text=text.replace('\u09f7','\u0964') - - # two part dependent vowels - text=text.replace('\u09c7\u09be','\u09cb') - text=text.replace('\u09c7\u09d7','\u09cc') - - # correct visarge - text=re.sub(r'([\u0980-\u09ff]):','\\1\u0983',text) - - return text - - -class TamilNormalizer(BaseNormalizer): - """ - Normalizer for the Tamil script. In addition to basic normalization by the super class, - - * Replace the reserved character for poorna virama (if used) with the recommended generic Indic scripts poorna virama - * canonicalize two-part dependent vowel signs - * replace colon ':' by visarga if the colon follows a charcter in this script - """ - - def __init__(self,lang='ta',remove_nuktas=False,nasals_mode='do_nothing', - do_normalize_chandras=False,do_normalize_vowel_ending=False): - super(TamilNormalizer,self).__init__(lang,remove_nuktas,nasals_mode,do_normalize_chandras,do_normalize_vowel_ending) - - def normalize(self,text): - - # common normalization for Indic scripts - text=super(TamilNormalizer,self).normalize(text) - - # replace the poorna virama codes specific to script - # with generic Indic script codes - text=text.replace('\u0be4','\u0964') - text=text.replace('\u0be5','\u0965') - - # two part dependent vowels - text=text.replace('\u0b92\u0bd7','\u0b94') - text=text.replace('\u0bc6\u0bbe','\u0bca') - text=text.replace('\u0bc7\u0bbe','\u0bcb') - text=text.replace('\u0bc6\u0bd7','\u0bcc') - - # correct visarge - text=re.sub(r'([\u0b80-\u0bff]):','\\1\u0b83',text) - - return text - - -class TeluguNormalizer(BaseNormalizer): - """ - Normalizer for the Teluguscript. 
In addition to basic normalization by the super class, - - * Replace the reserved character for poorna virama (if used) with the recommended generic Indic scripts poorna virama - * canonicalize two-part dependent vowel signs - * replace colon ':' by visarga if the colon follows a charcter in this script - """ - - def __init__(self,lang='te',remove_nuktas=False,nasals_mode='do_nothing', - do_normalize_chandras=False,do_normalize_vowel_ending=False): - super(TeluguNormalizer,self).__init__(lang,remove_nuktas,nasals_mode,do_normalize_chandras,do_normalize_vowel_ending) - - def normalize(self,text): - - # common normalization for Indic scripts - text=super(TeluguNormalizer,self).normalize(text) - - # replace the poorna virama codes specific to script - # with generic Indic script codes - text=text.replace('\u0c64','\u0964') - text=text.replace('\u0c65','\u0965') - - # dependent vowels - text=text.replace('\u0c46\u0c56','\u0c48') - - # correct visarge - text=re.sub(r'([\u0c00-\u0c7f]):','\\1\u0c03',text) - - return text - - def get_char_stats(self,text): - pass - -class KannadaNormalizer(BaseNormalizer): - """ - Normalizer for the Kannada script. In addition to basic normalization by the super class, - - * Replace the reserved character for poorna virama (if used) with the recommended generic Indic scripts poorna virama - * canonicalize two-part dependent vowel signs - * replace colon ':' by visarga if the colon follows a charcter in this script - """ - - def __init__(self,lang='kn',remove_nuktas=False,nasals_mode='do_nothing', - do_normalize_chandras=False,do_normalize_vowel_ending=False): - super(KannadaNormalizer,self).__init__(lang,remove_nuktas,nasals_mode,do_normalize_chandras,do_normalize_vowel_ending) - - - def normalize(self,text): - - # common normalization for Indic scripts - text=super(KannadaNormalizer,self).normalize(text) - - # replace the poorna virama codes specific to script - # with generic Indic script codes - text=text.replace('\u0ce4','\u0964') - text=text.replace('\u0ce5','\u0965') - - # dependent vowels - text=text.replace('\u0cbf\u0cd5','\u0cc0') - text=text.replace('\u0cc6\u0cd5','\u0cc7') - text=text.replace('\u0cc6\u0cd6','\u0cc8') - text=text.replace('\u0cc6\u0cc2','\u0cca') - text=text.replace('\u0cca\u0cd5','\u0ccb') - - # correct visarge - text=re.sub(r'([\u0c80-\u0cff]):','\\1\u0c83',text) - - return text - - -class MalayalamNormalizer(BaseNormalizer): - """ - Normalizer for the Malayalam script. 
In addition to basic normalization by the super class, - - * Replace the reserved character for poorna virama (if used) with the recommended generic Indic scripts poorna virama - * canonicalize two-part dependent vowel signs - * Change from old encoding of chillus (till Unicode 5.0) to new encoding - * replace colon ':' by visarga if the colon follows a charcter in this script - """ - - CHILLU_CHAR_MAP= { - '\u0d7a': '\u0d23', - '\u0d7b': '\u0d28', - '\u0d7c': '\u0d30', - '\u0d7d': '\u0d32', - '\u0d7e': '\u0d33', - '\u0d7f': '\u0d15', - } - - def _canonicalize_chillus(self,text): - for chillu, char in MalayalamNormalizer.CHILLU_CHAR_MAP.items(): - text=text.replace(chillu,'{}\u0d4d'.format(char)) - return text - - def _correct_geminated_T(self,text): - return text.replace('\u0d31\u0d4d\u0d31','\u0d1f\u0d4d\u0d1f') - - def __init__(self,lang='ml',remove_nuktas=False,nasals_mode='do_nothing',do_normalize_chandras=False, - do_normalize_vowel_ending=False, - do_canonicalize_chillus=False, do_correct_geminated_T=False): - super(MalayalamNormalizer,self).__init__(lang,remove_nuktas,nasals_mode,do_normalize_chandras,do_normalize_vowel_ending) - self.do_canonicalize_chillus=do_canonicalize_chillus - self.do_correct_geminated_T=do_correct_geminated_T - - def normalize(self,text): - - # Change from old encoding of chillus (till Unicode 5.0) to new encoding - text=text.replace('\u0d23\u0d4d\u200d','\u0d7a') - text=text.replace('\u0d28\u0d4d\u200d','\u0d7b') - text=text.replace('\u0d30\u0d4d\u200d','\u0d7c') - text=text.replace('\u0d32\u0d4d\u200d','\u0d7d') - text=text.replace('\u0d33\u0d4d\u200d','\u0d7e') - text=text.replace('\u0d15\u0d4d\u200d','\u0d7f') - - # Normalize chillus - if self.do_canonicalize_chillus: - text=self._canonicalize_chillus(text) - - # common normalization for Indic scripts - text=super(MalayalamNormalizer,self).normalize(text) - - # replace the poorna virama codes specific to script - # with generic Indic script codes - text=text.replace('\u0d64','\u0964') - text=text.replace('\u0d65','\u0965') - - # dependent vowels - text=text.replace('\u0d46\u0d3e','\u0d4a') - text=text.replace('\u0d47\u0d3e','\u0d4b') - - # au forms - text=text.replace('\u0d46\u0d57','\u0d4c') - text=text.replace('\u0d57','\u0d4c') - - # correct geminated T - if self.do_correct_geminated_T: - text=self._correct_geminated_T(text) - - # correct visarga - text=re.sub(r'([\u0d00-\u0d7f]):','\\1\u0d03',text) - - return text - -class UrduNormalizer(NormalizerI): - '''Uses UrduHack library. 
- https://docs.urduhack.com/en/stable/_modules/urduhack/normalization/character.html#normalize - ''' - - def __init__(self, lang, remove_nuktas=True): - self.lang = lang - self.remove_nuktas = remove_nuktas - - from urduhack.normalization import ( - remove_diacritics, - normalize_characters, - normalize_combine_characters - ) # TODO: Use only required normalizers - from urduhack.preprocessing import ( - normalize_whitespace, - digits_space, - all_punctuations_space, - english_characters_space - ) - - def normalize(self, text): - text = self._normalize_punctuations(text) - text = UrduNormalizer.normalize_whitespace(text) - if self.remove_nuktas: - text = UrduNormalizer.remove_diacritics(text) - text = UrduNormalizer.normalize_characters(text) - text = UrduNormalizer.normalize_combine_characters(text) - text = UrduNormalizer.digits_space(text) - text = UrduNormalizer.all_punctuations_space(text) - text = UrduNormalizer.english_characters_space(text) - return text - - -class IndicNormalizerFactory(object): - """ - Factory class to create language specific normalizers. - - """ - - def get_normalizer(self,language,**kwargs): - """ - Call the get_normalizer function to get the language specific normalizer - - Paramters: - |language: language code - |remove_nuktas: boolean, should the normalizer remove nukta characters - """ - normalizer=None - if language in ['hi','mr','sa','kK','ne','sd']: - normalizer=DevanagariNormalizer(lang=language, **kwargs) - elif language in ['ur']: - normalizer = UrduNormalizer(lang=language, **kwargs) - elif language in ['pa']: - normalizer=GurmukhiNormalizer(lang=language, **kwargs) - elif language in ['gu']: - normalizer=GujaratiNormalizer(lang=language, **kwargs) - elif language in ['bn']: - normalizer=BengaliNormalizer(lang=language, **kwargs) - elif language in ['as']: - normalizer=BengaliNormalizer(lang=language, **kwargs) - elif language in ['or']: - normalizer=OriyaNormalizer(lang=language, **kwargs) - elif language in ['ml']: - normalizer=MalayalamNormalizer(lang=language, **kwargs) - elif language in ['kn']: - normalizer=KannadaNormalizer(lang=language, **kwargs) - elif language in ['ta']: - normalizer=TamilNormalizer(lang=language, **kwargs) - elif language in ['te']: - normalizer=TeluguNormalizer(lang=language, **kwargs) - else: - normalizer=BaseNormalizer(lang=language, **kwargs) - - return normalizer - - def is_language_supported(self,language): - """ - Is the language supported? 
- """ - if language in ['hi','mr','sa','kK','ne','sd', - 'ur', - 'pa', - 'gu', - 'bn','as', - 'or', - 'ml', - 'kn', - 'ta', - 'te']: - return True - else: - return False - - -if __name__ == '__main__': - - if len(sys.argv)<4: - print("Usage: python normalize.py [] []") - sys.exit(1) - - language=sys.argv[3] - remove_nuktas=False - normalize_nasals='do_nothing' - if len(sys.argv)>=5: - remove_nuktas=bool(sys.argv[4]) - if len(sys.argv)>=6: - normalize_nasals=sys.argv[5] - - # create normalizer - factory=IndicNormalizerFactory() - normalizer=factory.get_normalizer(language,remove_nuktas=remove_nuktas,nasals_mode=normalize_nasals) - - # DO normalization - with codecs.open(sys.argv[1],'r','utf-8') as ifile: - with codecs.open(sys.argv[2],'w','utf-8') as ofile: - for line in ifile.readlines(): - normalized_line=normalizer.normalize(line) - ofile.write(normalized_line) - - ## gather status about normalization - #with codecs.open(sys.argv[1],'r','utf-8') as ifile: - # normalizer=DevanagariNormalizer() - # text=string.join(ifile.readlines(),sep='') - # normalizer.get_char_stats(text) diff --git a/spaces/Harveenchadha/oiTrans/indic_nlp_library/contrib/correct_moses_tokenizer.py b/spaces/Harveenchadha/oiTrans/indic_nlp_library/contrib/correct_moses_tokenizer.py deleted file mode 100644 index 9c656d4d69fd16638dbfa4a4435920bea50a6fe5..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/oiTrans/indic_nlp_library/contrib/correct_moses_tokenizer.py +++ /dev/null @@ -1,29 +0,0 @@ -import sys -from indicnlp import langinfo -from indicnlp import loader - -if __name__ == '__main__': - """ - This script corrects the incorrect tokenization done by Moses tokenizer. - The Moses tokenizer splits on nukta and halant characters - Usage: python correct_moses_tokenizer.py - """ - - loader.load() - - infname=sys.argv[1] - outfname=sys.argv[2] - lang=sys.argv[3] - - halant_char=langinfo.offset_to_char(langinfo.HALANTA_OFFSET,lang) - nukta_char=langinfo.offset_to_char(langinfo.NUKTA_OFFSET,lang) - - with open(infname,'r',encoding='utf-8') as infile, \ - open(outfname,'w',encoding='utf-8') as outfile: - for line in infile: - outfile.write( - line.replace( - ' {} '.format(halant_char), halant_char).replace( - ' {} '.format(nukta_char), nukta_char).replace( - ' {}{}'.format(nukta_char,halant_char),'{}{}'.format(nukta_char,halant_char)) - ) diff --git a/spaces/ICML2022/OFA/fairseq/examples/backtranslation/prepare-wmt18en2de.sh b/spaces/ICML2022/OFA/fairseq/examples/backtranslation/prepare-wmt18en2de.sh deleted file mode 100644 index f6fd275307db50ca84c299440ae02dce49064030..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/examples/backtranslation/prepare-wmt18en2de.sh +++ /dev/null @@ -1,135 +0,0 @@ -#!/bin/bash -# Adapted from https://github.com/facebookresearch/MIXER/blob/master/prepareData.sh - -echo 'Cloning Moses github repository (for tokenization scripts)...' -git clone https://github.com/moses-smt/mosesdecoder.git - -echo 'Cloning Subword NMT repository (for BPE pre-processing)...' 
-git clone https://github.com/rsennrich/subword-nmt.git - -SCRIPTS=mosesdecoder/scripts -TOKENIZER=$SCRIPTS/tokenizer/tokenizer.perl -CLEAN=$SCRIPTS/training/clean-corpus-n.perl -NORM_PUNC=$SCRIPTS/tokenizer/normalize-punctuation.perl -REM_NON_PRINT_CHAR=$SCRIPTS/tokenizer/remove-non-printing-char.perl -BPEROOT=subword-nmt/subword_nmt -BPE_TOKENS=32000 - -URLS=( - "http://statmt.org/wmt13/training-parallel-europarl-v7.tgz" - "http://statmt.org/wmt13/training-parallel-commoncrawl.tgz" - "http://data.statmt.org/wmt18/translation-task/training-parallel-nc-v13.tgz" - "http://data.statmt.org/wmt18/translation-task/rapid2016.tgz" - "http://data.statmt.org/wmt17/translation-task/dev.tgz" - "http://statmt.org/wmt14/test-full.tgz" -) -FILES=( - "training-parallel-europarl-v7.tgz" - "training-parallel-commoncrawl.tgz" - "training-parallel-nc-v13.tgz" - "rapid2016.tgz" - "dev.tgz" - "test-full.tgz" -) -CORPORA=( - "training/europarl-v7.de-en" - "commoncrawl.de-en" - "training-parallel-nc-v13/news-commentary-v13.de-en" - "rapid2016.de-en" -) - -if [ ! -d "$SCRIPTS" ]; then - echo "Please set SCRIPTS variable correctly to point to Moses scripts." - exit 1 -fi - -OUTDIR=wmt18_en_de - -src=en -tgt=de -lang=en-de -prep=$OUTDIR -tmp=$prep/tmp -orig=orig - -mkdir -p $orig $tmp $prep - -cd $orig - -for ((i=0;i<${#URLS[@]};++i)); do - file=${FILES[i]} - if [ -f $file ]; then - echo "$file already exists, skipping download" - else - url=${URLS[i]} - wget "$url" - if [ -f $file ]; then - echo "$url successfully downloaded." - else - echo "$url not successfully downloaded." - exit 1 - fi - if [ ${file: -4} == ".tgz" ]; then - tar zxvf $file - elif [ ${file: -4} == ".tar" ]; then - tar xvf $file - fi - fi -done -cd .. - -echo "pre-processing train data..." -for l in $src $tgt; do - rm $tmp/train.tags.$lang.tok.$l - for f in "${CORPORA[@]}"; do - cat $orig/$f.$l | \ - perl $NORM_PUNC $l | \ - perl $REM_NON_PRINT_CHAR | \ - perl $TOKENIZER -threads 8 -a -l $l >> $tmp/train.tags.$lang.tok.$l - done -done - -echo "pre-processing test data..." -for l in $src $tgt; do - if [ "$l" == "$src" ]; then - t="src" - else - t="ref" - fi - grep '\s*//g' | \ - sed -e 's/\s*<\/seg>\s*//g' | \ - sed -e "s/\’/\'/g" | \ - perl $TOKENIZER -threads 8 -a -l $l > $tmp/test.$l - echo "" -done - -echo "splitting train and valid..." -for l in $src $tgt; do - awk '{if (NR%100 == 0) print $0; }' $tmp/train.tags.$lang.tok.$l > $tmp/valid.$l - awk '{if (NR%100 != 0) print $0; }' $tmp/train.tags.$lang.tok.$l > $tmp/train.$l -done - -TRAIN=$tmp/train.de-en -BPE_CODE=$prep/code -rm -f $TRAIN -for l in $src $tgt; do - cat $tmp/train.$l >> $TRAIN -done - -echo "learn_bpe.py on ${TRAIN}..." -python $BPEROOT/learn_bpe.py -s $BPE_TOKENS < $TRAIN > $BPE_CODE - -for L in $src $tgt; do - for f in train.$L valid.$L test.$L; do - echo "apply_bpe.py to ${f}..." 
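        # Descriptive comment: apply the merge operations learned into $BPE_CODE above
        # to every split (train/valid/test) for both languages, writing bpe.* files
        # into $tmp for the cleaning and copy steps that follow.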
- python $BPEROOT/apply_bpe.py -c $BPE_CODE < $tmp/$f > $tmp/bpe.$f - done -done - -perl $CLEAN -ratio 1.5 $tmp/bpe.train $src $tgt $prep/train 1 250 -perl $CLEAN -ratio 1.5 $tmp/bpe.valid $src $tgt $prep/valid 1 250 - -for L in $src $tgt; do - cp $tmp/bpe.test.$L $prep/test.$L -done diff --git a/spaces/ICML2022/resefa/third_party/stylegan2_official_ops/upfirdn2d.cpp b/spaces/ICML2022/resefa/third_party/stylegan2_official_ops/upfirdn2d.cpp deleted file mode 100644 index 2d7177fc60040751d20e9a8da0301fa3ab64968a..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/resefa/third_party/stylegan2_official_ops/upfirdn2d.cpp +++ /dev/null @@ -1,103 +0,0 @@ -// Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -// -// NVIDIA CORPORATION and its licensors retain all intellectual property -// and proprietary rights in and to this software, related documentation -// and any modifications thereto. Any use, reproduction, disclosure or -// distribution of this software and related documentation without an express -// license agreement from NVIDIA CORPORATION is strictly prohibited. - -#include -#include -#include -#include "upfirdn2d.h" - -//------------------------------------------------------------------------ - -static torch::Tensor upfirdn2d(torch::Tensor x, torch::Tensor f, int upx, int upy, int downx, int downy, int padx0, int padx1, int pady0, int pady1, bool flip, float gain) -{ - // Validate arguments. - TORCH_CHECK(x.is_cuda(), "x must reside on CUDA device"); - TORCH_CHECK(f.device() == x.device(), "f must reside on the same device as x"); - TORCH_CHECK(f.dtype() == torch::kFloat, "f must be float32"); - TORCH_CHECK(x.numel() <= INT_MAX, "x is too large"); - TORCH_CHECK(f.numel() <= INT_MAX, "f is too large"); - TORCH_CHECK(x.dim() == 4, "x must be rank 4"); - TORCH_CHECK(f.dim() == 2, "f must be rank 2"); - TORCH_CHECK(f.size(0) >= 1 && f.size(1) >= 1, "f must be at least 1x1"); - TORCH_CHECK(upx >= 1 && upy >= 1, "upsampling factor must be at least 1"); - TORCH_CHECK(downx >= 1 && downy >= 1, "downsampling factor must be at least 1"); - - // Create output tensor. - const at::cuda::OptionalCUDAGuard device_guard(device_of(x)); - int outW = ((int)x.size(3) * upx + padx0 + padx1 - (int)f.size(1) + downx) / downx; - int outH = ((int)x.size(2) * upy + pady0 + pady1 - (int)f.size(0) + downy) / downy; - TORCH_CHECK(outW >= 1 && outH >= 1, "output must be at least 1x1"); - torch::Tensor y = torch::empty({x.size(0), x.size(1), outH, outW}, x.options(), x.suggest_memory_format()); - TORCH_CHECK(y.numel() <= INT_MAX, "output is too large"); - - // Initialize CUDA kernel parameters. - upfirdn2d_kernel_params p; - p.x = x.data_ptr(); - p.f = f.data_ptr(); - p.y = y.data_ptr(); - p.up = make_int2(upx, upy); - p.down = make_int2(downx, downy); - p.pad0 = make_int2(padx0, pady0); - p.flip = (flip) ? 1 : 0; - p.gain = gain; - p.inSize = make_int4((int)x.size(3), (int)x.size(2), (int)x.size(1), (int)x.size(0)); - p.inStride = make_int4((int)x.stride(3), (int)x.stride(2), (int)x.stride(1), (int)x.stride(0)); - p.filterSize = make_int2((int)f.size(1), (int)f.size(0)); - p.filterStride = make_int2((int)f.stride(1), (int)f.stride(0)); - p.outSize = make_int4((int)y.size(3), (int)y.size(2), (int)y.size(1), (int)y.size(0)); - p.outStride = make_int4((int)y.stride(3), (int)y.stride(2), (int)y.stride(1), (int)y.stride(0)); - p.sizeMajor = (p.inStride.z == 1) ? p.inSize.w : p.inSize.w * p.inSize.z; - p.sizeMinor = (p.inStride.z == 1) ? p.inSize.z : 1; - - // Choose CUDA kernel. 
- upfirdn2d_kernel_spec spec; - AT_DISPATCH_FLOATING_TYPES_AND_HALF(x.scalar_type(), "upfirdn2d_cuda", [&] - { - spec = choose_upfirdn2d_kernel(p); - }); - - // Set looping options. - p.loopMajor = (p.sizeMajor - 1) / 16384 + 1; - p.loopMinor = spec.loopMinor; - p.loopX = spec.loopX; - p.launchMinor = (p.sizeMinor - 1) / p.loopMinor + 1; - p.launchMajor = (p.sizeMajor - 1) / p.loopMajor + 1; - - // Compute grid size. - dim3 blockSize, gridSize; - if (spec.tileOutW < 0) // large - { - blockSize = dim3(4, 32, 1); - gridSize = dim3( - ((p.outSize.y - 1) / blockSize.x + 1) * p.launchMinor, - (p.outSize.x - 1) / (blockSize.y * p.loopX) + 1, - p.launchMajor); - } - else // small - { - blockSize = dim3(256, 1, 1); - gridSize = dim3( - ((p.outSize.y - 1) / spec.tileOutH + 1) * p.launchMinor, - (p.outSize.x - 1) / (spec.tileOutW * p.loopX) + 1, - p.launchMajor); - } - - // Launch CUDA kernel. - void* args[] = {&p}; - AT_CUDA_CHECK(cudaLaunchKernel(spec.kernel, gridSize, blockSize, args, 0, at::cuda::getCurrentCUDAStream())); - return y; -} - -//------------------------------------------------------------------------ - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) -{ - m.def("upfirdn2d", &upfirdn2d); -} - -//------------------------------------------------------------------------ diff --git a/spaces/IDEA-Research/Grounded-SAM/GroundingDINO/groundingdino/models/GroundingDINO/transformer_vanilla.py b/spaces/IDEA-Research/Grounded-SAM/GroundingDINO/groundingdino/models/GroundingDINO/transformer_vanilla.py deleted file mode 100644 index 10c0920c1a217af5bb3e1b13077568035ab3b7b5..0000000000000000000000000000000000000000 --- a/spaces/IDEA-Research/Grounded-SAM/GroundingDINO/groundingdino/models/GroundingDINO/transformer_vanilla.py +++ /dev/null @@ -1,123 +0,0 @@ -# ------------------------------------------------------------------------ -# Grounding DINO -# url: https://github.com/IDEA-Research/GroundingDINO -# Copyright (c) 2023 IDEA. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------ -# Copyright (c) Aishwarya Kamath & Nicolas Carion. Licensed under the Apache License 2.0. All Rights Reserved -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -""" -DETR Transformer class. 
- -Copy-paste from torch.nn.Transformer with modifications: - * positional encodings are passed in MHattention - * extra LN at the end of encoder is removed - * decoder returns a stack of activations from all decoding layers -""" -from typing import Optional - -import torch -import torch.nn.functional as F -from torch import Tensor, nn - -from .utils import ( - MLP, - _get_activation_fn, - _get_clones, - gen_encoder_output_proposals, - gen_sineembed_for_position, - sigmoid_focal_loss, -) - - -class TextTransformer(nn.Module): - def __init__(self, num_layers, d_model=256, nheads=8, dim_feedforward=2048, dropout=0.1): - super().__init__() - self.num_layers = num_layers - self.d_model = d_model - self.nheads = nheads - self.dim_feedforward = dim_feedforward - self.norm = None - - single_encoder_layer = TransformerEncoderLayer( - d_model=d_model, nhead=nheads, dim_feedforward=dim_feedforward, dropout=dropout - ) - self.layers = _get_clones(single_encoder_layer, num_layers) - - def forward(self, memory_text: torch.Tensor, text_attention_mask: torch.Tensor): - """ - - Args: - text_attention_mask: bs, num_token - memory_text: bs, num_token, d_model - - Raises: - RuntimeError: _description_ - - Returns: - output: bs, num_token, d_model - """ - - output = memory_text.transpose(0, 1) - - for layer in self.layers: - output = layer(output, src_key_padding_mask=text_attention_mask) - - if self.norm is not None: - output = self.norm(output) - - return output.transpose(0, 1) - - -class TransformerEncoderLayer(nn.Module): - def __init__( - self, - d_model, - nhead, - dim_feedforward=2048, - dropout=0.1, - activation="relu", - normalize_before=False, - ): - super().__init__() - self.self_attn = nn.MultiheadAttention(d_model, nhead, dropout=dropout) - # Implementation of Feedforward model - self.linear1 = nn.Linear(d_model, dim_feedforward) - self.dropout = nn.Dropout(dropout) - self.linear2 = nn.Linear(dim_feedforward, d_model) - - self.norm1 = nn.LayerNorm(d_model) - self.norm2 = nn.LayerNorm(d_model) - self.dropout1 = nn.Dropout(dropout) - self.dropout2 = nn.Dropout(dropout) - - self.activation = _get_activation_fn(activation) - self.normalize_before = normalize_before - self.nhead = nhead - - def with_pos_embed(self, tensor, pos: Optional[Tensor]): - return tensor if pos is None else tensor + pos - - def forward( - self, - src, - src_mask: Optional[Tensor] = None, - src_key_padding_mask: Optional[Tensor] = None, - pos: Optional[Tensor] = None, - ): - # repeat attn mask - if src_mask.dim() == 3 and src_mask.shape[0] == src.shape[1]: - # bs, num_q, num_k - src_mask = src_mask.repeat(self.nhead, 1, 1) - - q = k = self.with_pos_embed(src, pos) - - src2 = self.self_attn(q, k, value=src, attn_mask=src_mask)[0] - - # src2 = self.self_attn(q, k, value=src, attn_mask=src_mask, key_padding_mask=src_key_padding_mask)[0] - src = src + self.dropout1(src2) - src = self.norm1(src) - src2 = self.linear2(self.dropout(self.activation(self.linear1(src)))) - src = src + self.dropout2(src2) - src = self.norm2(src) - return src diff --git a/spaces/Illumotion/Koboldcpp/otherarch/tools/gptj_quantize.cpp b/spaces/Illumotion/Koboldcpp/otherarch/tools/gptj_quantize.cpp deleted file mode 100644 index 5e1c695aa0e31e30bcede9847910e5bdd5649a83..0000000000000000000000000000000000000000 --- a/spaces/Illumotion/Koboldcpp/otherarch/tools/gptj_quantize.cpp +++ /dev/null @@ -1,183 +0,0 @@ -#include "ggml.h" - -#include "utils.h" -#include "common-ggml.h" - -#include -#include -#include -#include -#include -#include -#include -#include 
-#include - -// default hparams (GPT-J 6B) -struct gptj_hparams { - int32_t n_vocab = 50400; - int32_t n_ctx = 2048; - int32_t n_embd = 4096; - int32_t n_head = 16; - int32_t n_layer = 28; - int32_t n_rot = 64; - int32_t ftype = 1; -}; - -// quantize a model -bool gptj_model_quantize(const std::string & fname_inp, const std::string & fname_out, ggml_ftype ftype) { - gpt_vocab vocab; - - printf("%s: loading model from '%s'\n", __func__, fname_inp.c_str()); - - auto finp = std::ifstream(fname_inp, std::ios::binary); - if (!finp) { - fprintf(stderr, "%s: failed to open '%s' for reading\n", __func__, fname_inp.c_str()); - return false; - } - - auto fout = std::ofstream(fname_out, std::ios::binary); - if (!fout) { - fprintf(stderr, "%s: failed to open '%s' for writing\n", __func__, fname_out.c_str()); - return false; - } - - // verify magic - { - uint32_t magic; - finp.read((char *) &magic, sizeof(magic)); - if (magic != 0x67676d6c) { - fprintf(stderr, "%s: invalid model file '%s' (bad magic)\n", __func__, fname_inp.c_str()); - return false; - } - - fout.write((char *) &magic, sizeof(magic)); - } - - gptj_hparams hparams; - - // load hparams - { - finp.read((char *) &hparams.n_vocab, sizeof(hparams.n_vocab)); - finp.read((char *) &hparams.n_ctx, sizeof(hparams.n_ctx)); - finp.read((char *) &hparams.n_embd, sizeof(hparams.n_embd)); - finp.read((char *) &hparams.n_head, sizeof(hparams.n_head)); - finp.read((char *) &hparams.n_layer, sizeof(hparams.n_layer)); - finp.read((char *) &hparams.n_rot, sizeof(hparams.n_rot)); - finp.read((char *) &hparams.ftype, sizeof(hparams.ftype)); - - const int32_t qntvr_src = hparams.ftype / GGML_QNT_VERSION_FACTOR; - const int32_t ftype_dst = GGML_QNT_VERSION * GGML_QNT_VERSION_FACTOR + ftype; - - printf("%s: n_vocab = %d\n", __func__, hparams.n_vocab); - printf("%s: n_ctx = %d\n", __func__, hparams.n_ctx); - printf("%s: n_embd = %d\n", __func__, hparams.n_embd); - printf("%s: n_head = %d\n", __func__, hparams.n_head); - printf("%s: n_layer = %d\n", __func__, hparams.n_layer); - printf("%s: ftype (src) = %d\n", __func__, hparams.ftype); - printf("%s: qntvr (src) = %d\n", __func__, qntvr_src); - printf("%s: ftype (dst) = %d\n", __func__, ftype_dst); - printf("%s: qntvr (dst) = %d\n", __func__, GGML_QNT_VERSION); - - fout.write((char *) &hparams.n_vocab, sizeof(hparams.n_vocab)); - fout.write((char *) &hparams.n_ctx, sizeof(hparams.n_ctx)); - fout.write((char *) &hparams.n_embd, sizeof(hparams.n_embd)); - fout.write((char *) &hparams.n_head, sizeof(hparams.n_head)); - fout.write((char *) &hparams.n_layer, sizeof(hparams.n_layer)); - fout.write((char *) &hparams.n_rot, sizeof(hparams.n_rot)); - fout.write((char *) &ftype_dst, sizeof(ftype_dst)); - } - - // load vocab - { - int32_t n_vocab = 0; - finp.read ((char *) &n_vocab, sizeof(n_vocab)); - fout.write((char *) &n_vocab, sizeof(n_vocab)); - - if (n_vocab != hparams.n_vocab) { - fprintf(stderr, "%s: invalid model file '%s' (bad vocab size %d != %d)\n", - __func__, fname_inp.c_str(), n_vocab, hparams.n_vocab); - return false; - } - - std::string word; - for (int i = 0; i < n_vocab; i++) { - uint32_t len; - finp.read ((char *) &len, sizeof(len)); - fout.write((char *) &len, sizeof(len)); - - word.resize(len); - finp.read ((char *) word.data(), len); - fout.write((char *) word.data(), len); - - vocab.token_to_id[word] = i; - vocab.id_to_token[i] = word; - } - } - - // regexes of tensor names to be quantized - const std::vector to_quant = { - ".*weight", - }; - - if (!ggml_common_quantize_0(finp, fout, ftype, to_quant, 
{})) { - fprintf(stderr, "%s: failed to quantize model '%s'\n", __func__, fname_inp.c_str()); - return false; - } - - finp.close(); - fout.close(); - - return true; -} - -// usage: -// ./gpt-2-quantize models/gpt-2-117M/ggml-model.bin models/gpt-2-117M/ggml-model-quant.bin type -// -int main(int argc, char ** argv) { - ggml_time_init(); - if (argc != 4) { - fprintf(stderr, "usage: %s model-f32.bin model-quant.bin type\n", argv[0]); - ggml_print_ftypes(stderr); - return 1; - } - - // needed to initialize f16 tables - { - struct ggml_init_params params = { 0, NULL, false }; - struct ggml_context * ctx = ggml_init(params); - ggml_free(ctx); - } - - const std::string fname_inp = argv[1]; - const std::string fname_out = argv[2]; - - const ggml_ftype ftype = ggml_parse_ftype(argv[3]); - - const int64_t t_main_start_us = ggml_time_us(); - - int64_t t_quantize_us = 0; - - // load the model - { - const int64_t t_start_us = ggml_time_us(); - - if (!gptj_model_quantize(fname_inp, fname_out, ggml_ftype(ftype))) { - fprintf(stderr, "%s: failed to quantize model from '%s'\n", __func__, fname_inp.c_str()); - return 1; - } - - t_quantize_us = ggml_time_us() - t_start_us; - } - - // report timing - { - const int64_t t_main_end_us = ggml_time_us(); - - printf("\n"); - printf("%s: quantize time = %8.2f ms\n", __func__, t_quantize_us/1000.0f); - printf("%s: total time = %8.2f ms\n", __func__, (t_main_end_us - t_main_start_us)/1000.0f); - } - - return 0; -} \ No newline at end of file diff --git a/spaces/Intae/deepfake/training/datasets/validation_set.py b/spaces/Intae/deepfake/training/datasets/validation_set.py deleted file mode 100644 index fa28f0889acb86d37fc4f0b8d9a9373ab0cc6ba4..0000000000000000000000000000000000000000 --- a/spaces/Intae/deepfake/training/datasets/validation_set.py +++ /dev/null @@ -1,60 +0,0 @@ - - -PUBLIC_SET = {'tjuihawuqm', 'prwsfljdjo', 'scrbqgpvzz', 'ziipxxchai', 'uubgqnvfdl', 'wclvkepakb', 'xjvxtuakyd', - 'qlvsqdroqo', 'bcbqxhziqz', 'yzuestxcbq', 'hxwtsaydal', 'kqlvggiqee', 'vtunvalyji', 'mohiqoogpb', - 'siebfpwuhu', 'cekwtyxdoo', 'hszwwswewp', 'orekjthsef', 'huvlwkxoxm', 'fmhiujydwo', 'lhvjzhjxdp', - 'ibxfxggtqh', 'bofrwgeyjo', 'rmufsuogzn', 'zbgssotnjm', 'dpevefkefv', 'sufvvwmbha', 'ncoeewrdlo', - 'qhsehzgxqj', 'yxadevzohx', 'aomqqjipcp', 'pcyswtgick', 'wfzjxzhdkj', 'rcjfxxhcal', 'lnjkpdviqb', - 'xmkwsnuzyq', 'ouaowjmigq', 'bkuzquigyt', 'vwxednhlwz', 'mszblrdprw', 'blnmxntbey', 'gccnvdoknm', - 'mkzaekkvej', 'hclsparpth', 'eryjktdexi', 'hfsvqabzfq', 'acazlolrpz', 'yoyhmxtrys', 'rerpivllud', - 'elackxuccp', 'zgbhzkditd', 'vjljdfopjg', 'famlupsgqm', 'nymodlmxni', 'qcbkztamqc', 'qclpbcbgeq', - 'lpkgabskbw', 'mnowxangqx', 'czfqlbcfpa', 'qyyhuvqmyf', 'toinozytsp', 'ztyvglkcsf', 'nplviymzlg', - 'opvqdabdap', 'uxuvkrjhws', 'mxahsihabr', 'cqxxumarvp', 'ptbfnkajyi', 'njzshtfmcw', 'dcqodpzomd', - 'ajiyrjfyzp', 'ywauoonmlr', 'gochxzemmq', 'lpgxwdgnio', 'hnfwagcxdf', 'gfcycflhbo', 'gunamloolc', - 'yhjlnisfel', 'srfefmyjvt', 'evysmtpnrf', 'aktnlyqpah', 'gpsxfxrjrr', 'zfobicuigx', 'mnzabbkpmt', - 'rfjuhbnlro', 'zuwwbbusgl', 'csnkohqxdv', 'bzvzpwrabw', 'yietrwuncf', 'wynotylpnm', 'ekboxwrwuv', - 'rcecrgeotc', 'rklawjhbpv', 'ilqwcbprqa', 'jsysgmycsx', 'sqixhnilfm', 'wnlubukrki', 'nikynwcvuh', - 'sjkfxrlxxs', 'btdxnajogv', 'wjhpisoeaj', 'dyjklprkoc', 'qlqhjcshpk', 'jyfvaequfg', 'dozjwhnedd', - 'owaogcehvc', 'oyqgwjdwaj', 'vvfszaosiv', 'kmcdjxmnoa', 'jiswxuqzyz', 'ddtbarpcgo', 'wqysrieiqu', - 'xcruhaccxc', 'honxqdilvv', 'nxgzmgzkfv', 'cxsvvnxpyz', 'demuhxssgl', 'hzoiotcykp', 'fwykevubzy', - 
'tejfudfgpq', 'kvmpmhdxly', 'oojxonbgow', 'vurjckblge', 'oysopgovhu', 'khpipxnsvx', 'pqthmvwonf', - 'fddmkqjwsh', 'pcoxcmtroa', 'cnxccbjlct', 'ggzjfrirjh', 'jquevmhdvc', 'ecumyiowzs', 'esmqxszybs', - 'mllzkpgatp', 'ryxaqpfubf', 'hbufmvbium', 'vdtsbqidjb', 'sjwywglgym', 'qxyrtwozyw', 'upmgtackuf', - 'ucthmsajay', 'zgjosltkie', 'snlyjbnpgw', 'nswtvttxre', 'iznnzjvaxc', 'jhczqfefgw', 'htzbnroagi', - 'pdswwyyntw', 'uvrzaczrbx', 'vbcgoyxsvn', 'hzssdinxec', 'novarhxpbj', 'vizerpsvbz', 'jawgcggquk', - 'iorbtaarte', 'yarpxfqejd', 'vhbbwdflyh', 'rrrfjhugvb', 'fneqiqpqvs', 'jytrvwlewz', 'bfjsthfhbd', - 'rxdoimqble', 'ekelfsnqof', 'uqvxjfpwdo', 'cjkctqqakb', 'tynfsthodx', 'yllztsrwjw', 'bktkwbcawi', - 'wcqvzujamg', 'bcvheslzrq', 'aqrsylrzgi', 'sktpeppbkc', 'mkmgcxaztt', 'etdliwticv', 'hqzwudvhih', - 'swsaoktwgi', 'temjefwaas', 'papagllumt', 'xrtvqhdibb', 'oelqpetgwj', 'ggdpclfcgk', 'imdmhwkkni', - 'lebzjtusnr', 'xhtppuyqdr', 'nxzgekegsp', 'waucvvmtkq', 'rnfcjxynfa', 'adohdulfwb', 'tjywwgftmv', - 'fjrueenjyp', 'oaguiggjyv', 'ytopzxrswu', 'yxvmusxvcz', 'rukyxomwcx', 'qdqdsaiitt', 'mxlipjhmqk', - 'voawxrmqyl', 'kezwvsxxzj', 'oocincvedt', 'qooxnxqqjb', 'mwwploizlj', 'yaxgpxhavq', 'uhakqelqri', - 'bvpeerislp', 'bkcyglmfci', 'jyoxdvxpza', 'gkutjglghz', 'knxltsvzyu', 'ybbrkacebd', 'apvzjkvnwn', - 'ahjnxtiamx', 'hsbljbsgxr', 'fnxgqcvlsd', 'xphdfgmfmz', 'scbdenmaed', 'ywxpquomgt', 'yljecirelf', - 'wcvsqnplsk', 'vmxfwxgdei', 'icbsahlivv', 'yhylappzid', 'irqzdokcws', 'petmyhjclt', 'rmlzgerevr', - 'qarqtkvgby', 'nkhzxomani', 'viteugozpv', 'qhkzlnzruj', 'eisofhptvk', 'gqnaxievjx', 'heiyoojifp', - 'zcxcmneefk', 'wvgviwnwob', 'gcdtglsoqj', 'yqhouqakbx', 'fopjiyxiqd', 'hierggamuo', 'ypbtpunjvm', - 'sjinmmbipg', 'kmqkiihrmj', 'wmoqzxddkb', 'lnhkjhyhvw', 'wixbuuzygv', 'fsdrwikhge', 'sfsayjgzrh', - 'pqdeutauqc', 'frqfsucgao', 'pdufsewrec', 'bfdopzvxbi', 'shnsajrsow', 'rvvpazsffd', 'pxcfrszlgi', - 'itfsvvmslp', 'ayipraspbn', 'prhmixykhr', 'doniqevxeg', 'dvtpwatuja', 'jiavqbrkyk', 'ipkpxvwroe', - 'syxobtuucp', 'syuxttuyhm', 'nwvsbmyndn', 'eqslzbqfea', 'ytddugrwph', 'vokrpfjpeb', 'bdshuoldwx', - 'fmvvmcbdrw', 'bnuwxhfahw', 'gbnzicjyhz', 'txnmkabufs', 'gfdjzwnpyp', 'hweshqpfwe', 'dxgnpnowgk', - 'xugmhbetrw', 'rktrpsdlci', 'nthpnwylxo', 'ihglzxzroo', 'ocgdbrgmtq', 'ruhtnngrqv', 'xljemofssi', - 'zxacihctqp', 'ghnpsltzyn', 'lbigytrrtr', 'ndikguxzek', 'mdfndlljvt', 'lyoslorecs', 'oefukgnvel', - 'zmxeiipnqb', 'cosghhimnd', 'alrtntfxtd', 'eywdmustbb', 'ooafcxxfrs', 'fqgypsunzr', 'hevcclcklc', - 'uhrqlmlclw', 'ipvwtgdlre', 'wcssbghcpc', 'didzujjhtg', 'fjxovgmwnm', 'dmmvuaikkv', 'hitfycdavv', - 'zyufpqvpyu', 'coujjnypba', 'temeqbmzxu', 'apedduehoy', 'iksxzpqxzi', 'kwfdyqofzw', 'aassnaulhq', - 'eyguqfmgzh', 'yiykshcbaz', 'sngjsueuhs', 'okgelildpc', 'ztyuiqrhdk', 'tvhjcfnqtg', 'gfgcwxkbjd', - 'lbfqksftuo', 'kowiwvrjht', 'dkuqbduxev', 'mwnibuujwz', 'sodvtfqbpf', 'hsbwhlolsn', 'qsjiypnjwi', - 'blszgmxkvu', 'ystdtnetgj', 'rfwxcinshk', 'vnlzxqwthl', 'ljouzjaqqe', 'gahgyuwzbu', 'xxzefxwyku', - 'xitgdpzbxv', 'sylnrepacf', 'igpvrfjdzc', 'nxnmkytwze', 'psesikjaxx', 'dvwpvqdflx', 'bjyaxvggle', - 'dpmgoiwhuf', 'wadvzjhwtw', 'kcjvhgvhpt', 'eppyqpgewp', 'tyjpjpglgx', 'cekarydqba', 'dvkdfhrpph', - 'cnpanmywno', 'ljauauuyka', 'hicjuubiau', 'cqhwesrciw', 'dnmowthjcj', 'lujvyveojc', 'wndursivcx', - 'espkiocpxq', 'jsbpkpxwew', 'dsnxgrfdmd', 'hyjqolupxn', 'xdezcezszc', 'axfhbpkdlc', 'qqnlrngaft', - 'coqwgzpbhx', 'ncmpqwmnzb', 'sznkemeqro', 'omphqltjdd', 'uoccaiathd', 'jzmzdispyo', 'pxjkzvqomp', - 'udxqbhgvvx', 'dzkyxbbqkr', 'dtozwcapoa', 
'qswlzfgcgj', 'tgawasvbbr', 'lmdyicksrv', 'fzvpbrzssi', - 'dxfdovivlw', 'zzmgnglanj', 'vssmlqoiti', 'vajkicalux', 'ekvwecwltj', 'ylxwcwhjjd', 'keioymnobc', - 'usqqvxcjmg', 'phjvutxpoi', 'nycmyuzpml', 'bwdmzwhdnw', 'fxuxxtryjn', 'orixbcfvdz', 'hefisnapds', - 'fpevfidstw', 'halvwiltfs', 'dzojiwfvba', 'ojsxxkalat', 'esjdyghhog', 'ptbnewtvon', 'hcanfkwivl', - 'yronlutbgm', 'llplvmcvbl', 'yxirnfyijn', 'nwvloufjty', 'rtpbawlmxr', 'aayfryxljh', 'zfrrixsimm', - 'txmnoyiyte'} diff --git a/spaces/Irnkvezz/SIC98-GPT2-python-code-generator/app.py b/spaces/Irnkvezz/SIC98-GPT2-python-code-generator/app.py deleted file mode 100644 index 477af2ecff11d098fa3ba93b5b49ee27ee94d12d..0000000000000000000000000000000000000000 --- a/spaces/Irnkvezz/SIC98-GPT2-python-code-generator/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/SIC98/GPT2-python-code-generator").launch() \ No newline at end of file diff --git a/spaces/Jarvis2301/Aku/utils.py b/spaces/Jarvis2301/Aku/utils.py deleted file mode 100644 index ee4b01ddfbe8173965371b29f770f3e87615fe71..0000000000000000000000000000000000000000 --- a/spaces/Jarvis2301/Aku/utils.py +++ /dev/null @@ -1,225 +0,0 @@ -import os -import sys -import argparse -import logging -import json -import subprocess -import numpy as np -import librosa -import torch - -MATPLOTLIB_FLAG = False - -logging.basicConfig(stream=sys.stdout, level=logging.DEBUG) -logger = logging - - -def load_checkpoint(checkpoint_path, model, optimizer=None): - assert os.path.isfile(checkpoint_path) - checkpoint_dict = torch.load(checkpoint_path, map_location='cpu') - iteration = checkpoint_dict['iteration'] - learning_rate = checkpoint_dict['learning_rate'] - if optimizer is not None: - optimizer.load_state_dict(checkpoint_dict['optimizer']) - saved_state_dict = checkpoint_dict['model'] - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - new_state_dict= {} - for k, v in state_dict.items(): - try: - new_state_dict[k] = saved_state_dict[k] - except: - logger.info("%s is not in the checkpoint" % k) - new_state_dict[k] = v - if hasattr(model, 'module'): - model.module.load_state_dict(new_state_dict) - else: - model.load_state_dict(new_state_dict) - logger.info("Loaded checkpoint '{}' (iteration {})" .format( - checkpoint_path, iteration)) - return model, optimizer, learning_rate, iteration - - -def plot_spectrogram_to_numpy(spectrogram): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(10,2)) - im = ax.imshow(spectrogram, aspect="auto", origin="lower", - interpolation='none') - plt.colorbar(im, ax=ax) - plt.xlabel("Frames") - plt.ylabel("Channels") - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def plot_alignment_to_numpy(alignment, info=None): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(6, 4)) - im = ax.imshow(alignment.transpose(), aspect='auto', 
origin='lower', - interpolation='none') - fig.colorbar(im, ax=ax) - xlabel = 'Decoder timestep' - if info is not None: - xlabel += '\n\n' + info - plt.xlabel(xlabel) - plt.ylabel('Encoder timestep') - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def load_audio_to_torch(full_path, target_sampling_rate): - audio, sampling_rate = librosa.load(full_path, sr=target_sampling_rate, mono=True) - return torch.FloatTensor(audio.astype(np.float32)) - - -def load_filepaths_and_text(filename, split="|"): - with open(filename, encoding='utf-8') as f: - filepaths_and_text = [line.strip().split(split) for line in f] - return filepaths_and_text - - -def get_hparams(init=True): - parser = argparse.ArgumentParser() - parser.add_argument('-c', '--config', type=str, default="./configs/base.json", - help='JSON file for configuration') - parser.add_argument('-m', '--model', type=str, required=True, - help='Model name') - - args = parser.parse_args() - model_dir = os.path.join("./logs", args.model) - - if not os.path.exists(model_dir): - os.makedirs(model_dir) - - config_path = args.config - config_save_path = os.path.join(model_dir, "config.json") - if init: - with open(config_path, "r") as f: - data = f.read() - with open(config_save_path, "w") as f: - f.write(data) - else: - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_dir(model_dir): - config_save_path = os.path.join(model_dir, "config.json") - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams =HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_file(config_path): - with open(config_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams =HParams(**config) - return hparams - - -def check_git_hash(model_dir): - source_dir = os.path.dirname(os.path.realpath(__file__)) - if not os.path.exists(os.path.join(source_dir, ".git")): - logger.warn("{} is not a git repository, therefore hash value comparison will be ignored.".format( - source_dir - )) - return - - cur_hash = subprocess.getoutput("git rev-parse HEAD") - - path = os.path.join(model_dir, "githash") - if os.path.exists(path): - saved_hash = open(path).read() - if saved_hash != cur_hash: - logger.warn("git hash values are different. 
{}(saved) != {}(current)".format( - saved_hash[:8], cur_hash[:8])) - else: - open(path, "w").write(cur_hash) - - -def get_logger(model_dir, filename="train.log"): - global logger - logger = logging.getLogger(os.path.basename(model_dir)) - logger.setLevel(logging.DEBUG) - - formatter = logging.Formatter("%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s") - if not os.path.exists(model_dir): - os.makedirs(model_dir) - h = logging.FileHandler(os.path.join(model_dir, filename)) - h.setLevel(logging.DEBUG) - h.setFormatter(formatter) - logger.addHandler(h) - return logger - - -class HParams(): - def __init__(self, **kwargs): - for k, v in kwargs.items(): - if type(v) == dict: - v = HParams(**v) - self[k] = v - - def keys(self): - return self.__dict__.keys() - - def items(self): - return self.__dict__.items() - - def values(self): - return self.__dict__.values() - - def __len__(self): - return len(self.__dict__) - - def __getitem__(self, key): - return getattr(self, key) - - def __setitem__(self, key, value): - return setattr(self, key, value) - - def __contains__(self, key): - return key in self.__dict__ - - def __repr__(self): - return self.__dict__.__repr__() diff --git a/spaces/JeffJing/ZookChatBot/tls_client/exceptions.py b/spaces/JeffJing/ZookChatBot/tls_client/exceptions.py deleted file mode 100644 index 974f639a3f0cdc969d70f174364a3526fb46d21e..0000000000000000000000000000000000000000 --- a/spaces/JeffJing/ZookChatBot/tls_client/exceptions.py +++ /dev/null @@ -1,3 +0,0 @@ - -class TLSClientExeption(IOError): - """General error with the TLS client""" \ No newline at end of file diff --git a/spaces/JustinLin610/ImageBind_zeroshot_demo/models/transformer.py b/spaces/JustinLin610/ImageBind_zeroshot_demo/models/transformer.py deleted file mode 100644 index 98902ac8f08868c486a7c74781e952bee444c2e6..0000000000000000000000000000000000000000 --- a/spaces/JustinLin610/ImageBind_zeroshot_demo/models/transformer.py +++ /dev/null @@ -1,284 +0,0 @@ -#!/usr/bin/env python3 -# Portions Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
- -# Code modified from -# https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/vision_transformer.py ; -# https://github.com/facebookresearch/deit/blob/main/models.py -# and https://github.com/facebookresearch/vissl/blob/main/vissl/models/trunks/vision_transformer.py - - -import copy -import fnmatch -import logging -from functools import partial -from typing import Callable, List - -import torch -import torch.nn as nn -import torch.utils.checkpoint as checkpoint - -from timm.models.layers import DropPath, trunc_normal_ - - -class Attention(nn.Module): - def __init__( - self, - dim, - num_heads=8, - qkv_bias=False, - qk_scale=None, - attn_drop=0.0, - proj_drop=0.0, - ): - super().__init__() - self.num_heads = num_heads - head_dim = dim // num_heads - # NOTE scale factor was wrong in my original version, - # can set manually to be compat with prev weights - self.scale = qk_scale or head_dim**-0.5 - - self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias) - self.attn_drop = nn.Dropout(attn_drop) - self.proj = nn.Linear(dim, dim) - self.proj_drop = nn.Dropout(proj_drop) - - def forward(self, x): - B, N, C = x.shape - qkv = ( - self.qkv(x) - .reshape(B, N, 3, self.num_heads, C // self.num_heads) - .permute(2, 0, 3, 1, 4) - ) - q, k, v = ( - qkv[0], - qkv[1], - qkv[2], - ) # make torchscript happy (cannot use tensor as tuple) - - attn = (q @ k.transpose(-2, -1)) * self.scale - attn = attn.softmax(dim=-1) - attn = self.attn_drop(attn) - - x = (attn @ v).transpose(1, 2).reshape(B, N, C) - x = self.proj(x) - x = self.proj_drop(x) - return x - - -class Mlp(nn.Module): - def __init__( - self, - in_features, - hidden_features=None, - out_features=None, - act_layer=nn.GELU, - drop=0.0, - ): - super().__init__() - out_features = out_features or in_features - hidden_features = hidden_features or in_features - self.fc1 = nn.Linear(in_features, hidden_features) - self.act = act_layer() - self.fc2 = nn.Linear(hidden_features, out_features) - self.drop = nn.Dropout(drop) - - def forward(self, x): - x = self.fc1(x) - x = self.act(x) - x = self.drop(x) - x = self.fc2(x) - x = self.drop(x) - return x - - -class MultiheadAttention(nn.MultiheadAttention): - def forward(self, x: torch.Tensor, attn_mask: torch.Tensor): - return super().forward(x, x, x, need_weights=False, attn_mask=attn_mask)[0] - - -class ViTAttention(Attention): - def forward(self, x: torch.Tensor, attn_mask: torch.Tensor): - assert attn_mask is None - return super().forward(x) - - -class BlockWithMasking(nn.Module): - def __init__( - self, - dim: int, - attn_target: Callable, - mlp_ratio: int = 4, - act_layer: Callable = nn.GELU, - norm_layer: Callable = nn.LayerNorm, - ffn_dropout_rate: float = 0.0, - drop_path: float = 0.0, - layer_scale_type: str = None, - layer_scale_init_value: float = 1e-4, - ): - super().__init__() - - assert not isinstance( - attn_target, nn.Module - ), "attn_target should be a Callable. Otherwise attn_target is shared across blocks!" 
- self.attn = attn_target() - if drop_path > 0.0: - self.drop_path = DropPath(drop_path) - else: - self.drop_path = nn.Identity() - self.norm_1 = norm_layer(dim) - mlp_hidden_dim = int(mlp_ratio * dim) - self.mlp = Mlp( - in_features=dim, - hidden_features=mlp_hidden_dim, - act_layer=act_layer, - drop=ffn_dropout_rate, - ) - self.norm_2 = norm_layer(dim) - self.layer_scale_type = layer_scale_type - if self.layer_scale_type is not None: - assert self.layer_scale_type in [ - "per_channel", - "scalar", - ], f"Found Layer scale type {self.layer_scale_type}" - if self.layer_scale_type == "per_channel": - # one gamma value per channel - gamma_shape = [1, 1, dim] - elif self.layer_scale_type == "scalar": - # single gamma value for all channels - gamma_shape = [1, 1, 1] - # two gammas: for each part of the fwd in the encoder - self.layer_scale_gamma1 = nn.Parameter( - torch.ones(size=gamma_shape) * layer_scale_init_value, - requires_grad=True, - ) - self.layer_scale_gamma2 = nn.Parameter( - torch.ones(size=gamma_shape) * layer_scale_init_value, - requires_grad=True, - ) - - def forward(self, x: torch.Tensor, attn_mask: torch.Tensor): - if self.layer_scale_type is None: - x = x + self.drop_path(self.attn(self.norm_1(x), attn_mask)) - x = x + self.drop_path(self.mlp(self.norm_2(x))) - else: - x = ( - x - + self.drop_path(self.attn(self.norm_1(x), attn_mask)) - * self.layer_scale_gamma1 - ) - x = x + self.drop_path(self.mlp(self.norm_2(x))) * self.layer_scale_gamma2 - return x - - -_LAYER_NORM = partial(nn.LayerNorm, eps=1e-6) - - -class SimpleTransformer(nn.Module): - def __init__( - self, - attn_target: Callable, - embed_dim: int, - num_blocks: int, - block: Callable = BlockWithMasking, - pre_transformer_layer: Callable = None, - post_transformer_layer: Callable = None, - drop_path_rate: float = 0.0, - drop_path_type: str = "progressive", - norm_layer: Callable = _LAYER_NORM, - mlp_ratio: int = 4, - ffn_dropout_rate: float = 0.0, - layer_scale_type: str = None, # from cait; possible values are None, "per_channel", "scalar" - layer_scale_init_value: float = 1e-4, # from cait; float - weight_init_style: str = "jax", # possible values jax or pytorch - ): - """ - Simple Transformer with the following features - 1. Supports masked attention - 2. Supports DropPath - 3. Supports LayerScale - 4. Supports Dropout in Attention and FFN - 5. 
Makes few assumptions about the input except that it is a Tensor - """ - super().__init__() - self.pre_transformer_layer = pre_transformer_layer - if drop_path_type == "progressive": - dpr = [x.item() for x in torch.linspace(0, drop_path_rate, num_blocks)] - elif drop_path_type == "uniform": - dpr = [drop_path_rate for i in range(num_blocks)] - else: - raise ValueError(f"Unknown drop_path_type: {drop_path_type}") - - self.blocks = nn.Sequential( - *[ - block( - dim=embed_dim, - attn_target=attn_target, - mlp_ratio=mlp_ratio, - ffn_dropout_rate=ffn_dropout_rate, - drop_path=dpr[i], - norm_layer=norm_layer, - layer_scale_type=layer_scale_type, - layer_scale_init_value=layer_scale_init_value, - ) - for i in range(num_blocks) - ] - ) - self.post_transformer_layer = post_transformer_layer - self.weight_init_style = weight_init_style - self.apply(self._init_weights) - - def _init_weights(self, m): - if isinstance(m, nn.Linear): - if self.weight_init_style == "jax": - # Based on MAE and official Jax ViT implementation - torch.nn.init.xavier_uniform_(m.weight) - elif self.weight_init_style == "pytorch": - # PyTorch ViT uses trunc_normal_ - trunc_normal_(m.weight, std=0.02) - - if m.bias is not None: - nn.init.constant_(m.bias, 0) - elif isinstance(m, (nn.LayerNorm)): - nn.init.constant_(m.bias, 0) - nn.init.constant_(m.weight, 1.0) - - def forward( - self, - tokens: torch.Tensor, - attn_mask: torch.Tensor = None, - use_checkpoint: bool = False, - checkpoint_every_n: int = 1, - checkpoint_blk_ids: List[int] = None, - ): - """ - Inputs - - tokens: data of shape N x L x D (or L x N x D depending on the attention implementation) - - attn: mask of shape L x L - - Output - - x: data of shape N x L x D (or L x N x D depending on the attention implementation) - """ - if self.pre_transformer_layer: - tokens = self.pre_transformer_layer(tokens) - if use_checkpoint and checkpoint_blk_ids is None: - checkpoint_blk_ids = [ - blk_id - for blk_id in range(len(self.blocks)) - if blk_id % checkpoint_every_n == 0 - ] - if checkpoint_blk_ids: - checkpoint_blk_ids = set(checkpoint_blk_ids) - for blk_id, blk in enumerate(self.blocks): - if use_checkpoint and blk_id in checkpoint_blk_ids: - tokens = checkpoint.checkpoint( - blk, tokens, attn_mask, use_reentrant=False - ) - else: - tokens = blk(tokens, attn_mask=attn_mask) - if self.post_transformer_layer: - tokens = self.post_transformer_layer(tokens) - return tokens diff --git a/spaces/KAIST-Geometric-AI-Lab/salad-demo/salad/spaghetti/utils/mesh_utils.py b/spaces/KAIST-Geometric-AI-Lab/salad-demo/salad/spaghetti/utils/mesh_utils.py deleted file mode 100644 index f50f7a9842a1fa1f296607f195141455a1099b6a..0000000000000000000000000000000000000000 --- a/spaces/KAIST-Geometric-AI-Lab/salad-demo/salad/spaghetti/utils/mesh_utils.py +++ /dev/null @@ -1,673 +0,0 @@ -# from chamferdist import ChamferDistance -from ..custom_types import * -from ..constants import EPSILON -from functools import reduce -import igl -# import trimesh -from ..custom_types import T_Mesh, TS - - -def scale_all(*values: T): - # mean_std = [(val.mean(), val.std()) for val in values] - # values = [val.clamp(scales[0] - scales[1] * 3, scales[0] + scales[1] * 3) for val,scales in zip(values, mean_std)] - max_val = max([val.max().item() for val in values]) - min_val = min([val.min().item() for val in values]) - scale = max_val - min_val - values = [(val - min_val) / scale for val in values] - if len(values) == 1: - return values[0] - return values - - -def get_faces_normals(mesh: Union[T_Mesh, T]) -> T: - if 
type(mesh) is not T: - vs, faces = mesh - vs_faces = vs[faces] - else: - vs_faces = mesh - if vs_faces.shape[-1] == 2: - vs_faces = torch.cat( - (vs_faces, torch.zeros(*vs_faces.shape[:2], 1, dtype=vs_faces.dtype, device=vs_faces.device)), dim=2) - face_normals = torch.cross(vs_faces[:, 1, :] - vs_faces[:, 0, :], vs_faces[:, 2, :] - vs_faces[:, 1, :]) - return face_normals - - -def compute_face_areas(mesh: Union[T_Mesh, T]) -> TS: - face_normals = get_faces_normals(mesh) - face_areas = torch.norm(face_normals, p=2, dim=1) - face_areas_ = face_areas.clone() - face_areas_[torch.eq(face_areas_, 0)] = 1 - face_normals = face_normals / face_areas_[:, None] - face_areas = 0.5 * face_areas - return face_areas, face_normals - - -def check_sign_area(*meshes: T_Mesh) -> bool: - for mesh in meshes: - face_normals = get_faces_normals(mesh) - if not face_normals[:, 2].gt(0).all(): - return False - return True - - -def to_numpy(*tensors: T) -> ARRAYS: - params = [param.detach().cpu().numpy() if type(param) is T else param for param in tensors] - return params - - -def create_mapper(mask: T) -> T: - mapper = torch.zeros(mask.shape[0], dtype=torch.int64, device=mask.device) - 1 - mapper[mask] = torch.arange(mask.sum().item(), device=mask.device) - return mapper - - -def mesh_center(mesh: T_Mesh): - return mesh[0].mean(0) - - -def get_center(vs) -> T: - max_vals = vs.max(0)[0] - min_vals = vs.min(0)[0] - center = (max_vals + min_vals) / 2 - return center - - -def to_center(vs): - vs -= get_center(vs)[None, :] - return vs - - -def scale_by_ref(mesh, ref_mesh, in_place=True, scale=1.): - vs, _ = ref_mesh - if not in_place: - vs = vs.clone() - center = get_center(vs) - vs -= center[None, :] - scale = scale / vs.norm(2, dim=1).max() - vs = (mesh[0] - center[None, :]) * scale - return vs, mesh[1] - - -def to_unit_sphere(mesh: T_Mesh, in_place: bool = True, scale=1.) 
-> T_Mesh: - vs, faces = mesh - if not in_place: - vs = vs.clone() - vs = to_center(vs) - norm = vs.norm(2, dim=1).max() - vs *= scale * norm ** -1 - return vs, faces - - -def scale_from_ref(mesh: T_Mesh, center: T, scale: float, in_place: bool = True) -> T_Mesh: - vs, faces = mesh - if not in_place: - vs = vs.clone() - vs -= center[None, :] - vs *= scale - return vs, faces - - -def to_unit_cube(*meshes: T_Mesh_T, scale=1, in_place: bool = True) -> Tuple[Union[T_Mesh_T, Tuple[T_Mesh_T, ...]], Tuple[T, float]]: - remove_me = 0 - meshes = [(mesh, remove_me) if type(mesh) is T else mesh for mesh in meshes] - vs, faces = meshes[0] - max_vals = vs.max(0)[0] - min_vals = vs.min(0)[0] - max_range = (max_vals - min_vals).max() / 2 - center = (max_vals + min_vals) / 2 - meshes_ = [] - scale = float(scale / max_range) - for mesh in meshes: - vs_, faces_ = scale_from_ref(mesh, center, scale) - meshes_.append(vs_ if faces_ is remove_me else (vs_, faces_)) - if len(meshes_) == 1: - meshes_ = meshes_[0] - return meshes_, (center, scale) -# # in place -# def to_unit_edge(*meshes: T_Mesh) -> Tuple[Union[T_Mesh, Tuple[T_Mesh, ...]], Tuple[T, float]]: -# ref = meshes[0] -# center = ref[0].mean(0) -# ratio = edge_lengths(ref).mean().item() -# for mesh in meshes: -# vs, _ = mesh -# vs -= center[None, :].to(vs.device) -# vs /= ratio -# if len(meshes) == 1: -# meshes = meshes[0] -# return meshes, (center, ratio) - - -def get_edges_ind(mesh: T_Mesh) -> T: - vs, faces = mesh - raw_edges = torch.cat([faces[:, [i, (i + 1) % 3]] for i in range(3)]).sort() - raw_edges = raw_edges[0].cpu().numpy() - edges = {(int(edge[0]), int(edge[1])) for edge in raw_edges} - edges = torch.tensor(list(edges), dtype=torch.int64, device=faces.device) - return edges - - -def edge_lengths(mesh: T_Mesh, edges_ind: TN = None) -> T: - vs, faces = mesh - if edges_ind is None: - edges_ind = get_edges_ind(mesh) - edges = vs[edges_ind] - return torch.norm(edges[:, 0] - edges[:, 1], 2, dim=1) - - -# in place -def to_unit_edge(*meshes: T_Mesh) -> Tuple[Union[T_Mesh, Tuple[T_Mesh, ...]], Tuple[T, float]]: - ref = meshes[0] - center = ref[0].mean(0) - ratio = edge_lengths(ref).mean().item() - for mesh in meshes: - vs, _ = mesh - vs -= center[None, :].to(vs.device) - vs /= ratio - if len(meshes) == 1: - meshes = meshes[0] - return meshes, (center, ratio) - - -def to(tensors, device: D) -> Union[T_Mesh, TS, T]: - out = [] - for tensor in tensors: - if type(tensor) is T: - out.append(tensor.to(device, )) - elif type(tensor) is tuple or type(tensors) is List: - out.append(to(list(tensor), device)) - else: - out.append(tensor) - if len(tensors) == 1: - return out[0] - else: - return tuple(out) - - -def clone(*tensors: Union[T, TS]) -> Union[TS, T_Mesh]: - out = [] - for t in tensors: - if type(t) is T: - out.append(t.clone()) - else: - out.append(clone(*t)) - return out - - -def get_box(w: float, h: float, d: float) -> T_Mesh: - vs = [[0, 0, 0], [w, 0, 0], [0, d, 0], [w, d, 0], - [0, 0, h], [w, 0, h], [0, d, h], [w, d, h]] - faces = [[0, 2, 1], [1, 2, 3], [4, 5, 6], [5, 7, 6], - [0, 1, 5], [0, 5, 4], [2, 6, 7], [3, 2, 7], - [1, 3, 5], [3, 7, 5], [0, 4, 2], [2, 4, 6]] - return torch.tensor(vs, dtype=torch.float32), torch.tensor(faces, dtype=torch.int64) - - -def normalize(t: T): - t = t / t.norm(2, dim=1)[:, None] - return t - - -def interpolate_vs(mesh: T_Mesh, faces_inds: T, weights: T) -> T: - vs = mesh[0][mesh[1][faces_inds]] - vs = vs * weights[:, :, None] - return vs.sum(1) - - -def sample_uvw(shape, device: D): - u, v = torch.rand(*shape, 
device=device), torch.rand(*shape, device=device) - mask = (u + v).gt(1) - u[mask], v[mask] = -u[mask] + 1, -v[mask] + 1 - w = -u - v + 1 - uvw = torch.stack([u, v, w], dim=len(shape)) - return uvw - - -def get_sampled_fe(fe: T, mesh: T_Mesh, face_ids: T, uvw: TN) -> T: - # to_squeeze = - if fe.dim() == 1: - fe = fe.unsqueeze(1) - if uvw is None: - fe_iner = fe[face_ids] - else: - vs_ids = mesh[1][face_ids] - fe_unrolled = fe[vs_ids] - fe_iner = torch.einsum('sad,sa->sd', fe_unrolled, uvw) - # if to_squeeze: - # fe_iner = fe_iner.squeeze_(1) - return fe_iner - - -def sample_on_faces(mesh: T_Mesh, num_samples: int) -> TS: - vs, faces = mesh - uvw = sample_uvw([faces.shape[0], num_samples], vs.device) - samples = torch.einsum('fad,fna->fnd', vs[faces], uvw) - return samples, uvw - - -class SampleBy(Enum): - AREAS = 0 - FACES = 1 - HYB = 2 - - -def sample_on_mesh(mesh: T_Mesh, num_samples: int, face_areas: TN = None, - sample_s: SampleBy = SampleBy.HYB) -> TNS: - vs, faces = mesh - if faces is None: # sample from pc - uvw = None - if vs.shape[0] < num_samples: - chosen_faces_inds = torch.arange(vs.shape[0]) - else: - chosen_faces_inds = torch.argsort(torch.rand(vs.shape[0]))[:num_samples] - samples = vs[chosen_faces_inds] - else: - weighted_p = [] - if sample_s == SampleBy.AREAS or sample_s == SampleBy.HYB: - if face_areas is None: - face_areas, _ = compute_face_areas(mesh) - face_areas[torch.isnan(face_areas)] = 0 - weighted_p.append(face_areas / face_areas.sum()) - if sample_s == SampleBy.FACES or sample_s == SampleBy.HYB: - weighted_p.append(torch.ones(mesh[1].shape[0], device=mesh[0].device)) - chosen_faces_inds = [torch.multinomial(weights, num_samples // len(weighted_p), replacement=True) for weights in weighted_p] - if sample_s == SampleBy.HYB: - chosen_faces_inds = torch.cat(chosen_faces_inds, dim=0) - chosen_faces = faces[chosen_faces_inds] - uvw = sample_uvw([num_samples], vs.device) - samples = torch.einsum('sf,sfd->sd', uvw, vs[chosen_faces]) - return samples, chosen_faces_inds, uvw - - -def get_samples(mesh: T_Mesh, num_samples: int, sample_s: SampleBy, *features: T) -> Union[T, TS]: - samples, face_ids, uvw = sample_on_mesh(mesh, num_samples, sample_s=sample_s) - if len(features) > 0: - samples = [samples] + [get_sampled_fe(fe, mesh, face_ids, uvw) for fe in features] - return samples, face_ids, uvw - - -def find_barycentric(vs: T, triangles: T) -> T: - - def compute_barycentric(ind): - triangles[:, ind] = vs - alpha = compute_face_areas(triangles)[0] / areas - triangles[:, ind] = recover[:, ind] - return alpha - - device, dtype = vs.device, vs.dtype - vs = vs.to(device, dtype=torch.float64) - triangles = triangles.to(device, dtype=torch.float64) - areas, _ = compute_face_areas(triangles) - recover = triangles.clone() - barycentric = [compute_barycentric(i) for i in range(3)] - barycentric = torch.stack(barycentric, dim=1) - # assert barycentric.sum(1).max().item() <= 1 + EPSILON - return barycentric.to(device, dtype=dtype) - - -def from_barycentric(mesh: Union[T_Mesh, T], face_ids: T, weights: T) -> T: - if type(mesh) is not T: - triangles: T = mesh[0][mesh[1]] - else: - triangles: T = mesh - to_squeeze = weights.dim() == 1 - if to_squeeze: - weights = weights.unsqueeze(0) - face_ids = face_ids.unsqueeze(0) - vs = torch.einsum('nad,na->nd', triangles[face_ids], weights) - if to_squeeze: - vs = vs.squeeze(0) - return vs - - -def check_circle_angles(mesh: T_Mesh, center_ind: int, select: T) -> bool: - vs, _ = mesh - all_vecs = vs[select] - vs[center_ind][None, :] - all_vecs = 
all_vecs / all_vecs.norm(2, 1)[:, None] - all_vecs = torch.cat([all_vecs, all_vecs[:1]], dim=0) - all_cos = torch.einsum('nd,nd->n', all_vecs[1:], all_vecs[:-1]) - all_angles = torch.acos_(all_cos) - all_angles = all_angles.sum() - return (all_angles - 2 * np.pi).abs() < EPSILON - - -def vs_over_triangle(vs_mid: T, triangle: T, normals=None) -> T: - if vs_mid.dim() == 1: - vs_mid = vs_mid.unsqueeze(0) - triangle = triangle.unsqueeze(0) - if normals is None: - _, normals = compute_face_areas(triangle) - select = torch.arange(3) - d_vs = vs_mid[:, None, :] - triangle - d_f = triangle[:, select] - triangle[:, (select + 1) % 3] - all_cross = torch.cross(d_vs, d_f, dim=2) - all_dots = torch.einsum('nd,nad->na', normals, all_cross) - is_over = all_dots.ge(0).long().sum(1).eq(3) - return is_over - - -def f2v(num_faces: int, genus: int = 0) -> int: # assuming there are not boundaries - return num_faces // 2 + (1 - genus) * 2 - - -def v2f(num_vs: int, genus: int = 0) -> int: # assuming there are not boundaries - return 2 * num_vs - 4 + 4 * genus - - -def get_dist_mat(a: T, b: T, batch_size: int = 1000, sqrt: bool = False) -> T: - """ - :param a: - :param b: - :param batch_size: Limit batches per distance calculation to avoid out-of-mem - :return: - """ - iters = a.shape[0] // batch_size - dist_list = [((a[i * batch_size: (i + 1) * batch_size, None, :] - b[None, :, :]) ** 2).sum(-1) - for i in range(iters + 1)] - all_dist: T = torch.cat(dist_list, dim=0) - if sqrt: - all_dist = all_dist.sqrt_() - return all_dist - - -def naive_knn(k: int, dist_mat: T, is_biknn=True): - """ - :param k: - :param dist_mat: - :param is_biknn: When false, calcluates only closest element in a per element of b. - When true, calcluates only closest element in a <--> b both ways. - :param batch_size: Limit batches per distance calculation to avoid out-of-mem - :return: - """ - _, close_to_b = dist_mat.topk(k, 0, largest=False) - if is_biknn: - _, close_to_a = dist_mat.topk(k, 1, largest=False) - return close_to_a, close_to_b.t() - return close_to_b.t() - - -def chamfer_igl(): - igl.cha - - -def simple_chamfer(a: T, b: T, normals_a=None, normals_b=None, dist_mat: Optional[T] = None) -> Union[T, TS]: - - def one_direction(fixed: T, search: T, n_f, n_s, closest_id) -> TS: - min_dist = (fixed - search[closest_id]).norm(2, 1).mean(0) - if n_f is not None: - normals_dist = -torch.einsum('nd,nd->n', n_f, n_s[closest_id]).mean(0) - else: - normals_dist = 0 - return min_dist, normals_dist - - if dist_mat is None: - dist_mat = get_dist_mat(a, b) - close_to_a, close_to_b = naive_knn(1, dist_mat) - dist_a, dist_a_n = one_direction(a, b, normals_a, normals_b, close_to_a.flatten()) - dist_b, dist_b_n = one_direction(b, a, normals_b, normals_a, close_to_b.flatten()) - if normals_a is None: - return dist_a + dist_b - return dist_a + dist_b, dist_a_n + dist_b_n - - -def is_quad(mesh: Union[T_Mesh, Tuple[T, List[List[int]]]]) -> bool: - if type(mesh) is T: - return False - if type(mesh[1]) is T: - return False - else: - faces: List[List[int]] = mesh[1] - for f in faces: - if len(f) == 4: - return True - return False - - -def align_mesh(mesh: T_Mesh, ref_vs: T) -> T_Mesh: - vs, faces = mesh - dist_mat = get_dist_mat(vs, ref_vs) - dist, mapping_id = dist_mat.min(1) - vs_select = dist_mat.min(0)[1] - if mapping_id.unique().shape[0] != vs.shape[0]: - print('\n\033[91mWarning, alignment is not bijective\033[0m') - vs_aligned = vs[vs_select] - faces_aligned = mapping_id[faces] - return vs_aligned, faces_aligned - - -# def triangulate_mesh(mesh: 
Union[T_Mesh, Tuple[T, List[List[int_b]]]]) -> Tuple[T_Mesh, Optional[T]]: -# -# def check_triangle(triangle: List[int_b]) -> bool: -# e_1: T = vs[triangle[1]] - vs[triangle[0]] -# e_2: T = vs[triangle[2]] - vs[triangle[0]] -# angle = (e_1 * e_2).sum() / (e_1.norm(2) * e_2.norm(2)) -# return angle.abs().item() < 1 - 1e-6 -# -# def add_triangle(face_: List[int_b]): -# triangle = None -# for i in range(len(face_)): -# triangle = [face_[i], face_[(i + 1) % len(face_)], face_[(i + 2) % len(face_)]] -# if check_triangle(triangle): -# face_ = [f for j, f in enumerate(face_) if j != (i + 1) % len(face_)] -# break -# assert triangle is not None -# faces_.append(triangle) -# face_twin.append(-1) -# return face_ -# -# if not is_quad(mesh): -# return mesh, None -# -# vs, faces = mesh -# faces_ = [] -# face_twin = [] -# for face in faces: -# if len(face) == 3: -# faces_.append(face) -# face_twin.append(-1) -# else: -# while len(face) > 4: -# face = add_triangle(face) -# new_faces = [[face[0], face[1], face[2]], [face[0], face[2], face[3]]] -# if not check_triangle(new_faces[0]) or not check_triangle(new_faces[1]): -# new_faces = [[face[0], face[1], face[3]], [face[1], face[2], face[3]]] -# assert check_triangle(new_faces[0]) and check_triangle(new_faces[1]) -# faces_.extend(new_faces) -# face_twin.extend([len(faces_) - 1, len(faces_) - 2]) -# # else: -# # raise ValueError(f'mesh with {len(face)} edges polygons is not supported') -# faces_ = torch.tensor(faces_, device=vs.device, dtype=torch.int64) -# face_twin = torch.tensor(face_twin, device=vs.device, dtype=torch.int64) -# return (vs, faces_), face_twin - - -def triangulate_mesh(mesh: Union[T_Mesh, Tuple[T, List[List[int]]]]) -> Tuple[T_Mesh, Optional[T]]: - - def get_skinny(faces_) -> T: - vs_faces = vs[faces_] - areas = compute_face_areas(vs_faces)[0] - edges = reduce( - lambda a, b: a + b, - map( - lambda i: ((vs_faces[:, i] - vs_faces[:, (i + 1) % 3]) ** 2).sum(1), - range(3) - ) - ) - skinny_value = np.sqrt(48) * areas / edges - return skinny_value - - - if not is_quad(mesh): - return mesh, None - - vs, faces = mesh - device = vs.device - faces_keep = torch.tensor([face for face in faces if len(face) == 3], dtype=torch.int64, device=device) - faces_quads = torch.tensor([face for face in faces if len(face) != 3], dtype=torch.int64, device=device) - faces_tris_a, faces_tris_b = faces_quads[:, :3], faces_quads[:, torch.tensor([0, 2, 3], dtype=torch.int64)] - faces_tris_c, faces_tris_d = faces_quads[:, 1:], faces_quads[:, torch.tensor([0, 1, 3], dtype=torch.int64)] - skinny = [get_skinny(f) for f in (faces_tris_a, faces_tris_b, faces_tris_c, faces_tris_d)] - skinny_ab, skinny_cd = torch.stack((skinny[0], skinny[1]), 1), torch.stack((skinny[2], skinny[3]), 1) - to_flip = skinny_ab.min(1)[0].lt(skinny_cd.min(1)[0]) - faces_tris_a[to_flip], faces_tris_b[to_flip] = faces_tris_c[to_flip], faces_tris_d[to_flip] - faces_tris = torch.cat((faces_tris_a, faces_tris_b, faces_keep), dim=0) - face_twin = torch.arange(faces_tris_a.shape[0], device=device) - face_twin = torch.cat((face_twin + faces_tris_a.shape[0], face_twin, - -torch.ones(faces_keep.shape[0], device=device, dtype=torch.int64))) - return (vs, faces_tris), face_twin - - -def igl_prepare(*dtypes): - - def decoder(func): - - def wrapper(*args, **kwargs): - mesh = args[0] - device, dtype = mesh[0].device, mesh[0].dtype - vs, faces = to_numpy(*mesh) - result = func((vs, faces), *args[1:], **kwargs) - return to_torch(result, device) - - if len(dtypes) == 0: - to_torch = to_torch_empty - elif 
len(dtypes) == 1: - to_torch = to_torch_multi - else: - to_torch = to_torch_singe - return wrapper - - def to_torch_singe(result, device): - return torch.from_numpy(result).to(device, dtype=dtypes[0]) - - def to_torch_multi(result, device): - return [torch.from_numpy(r).to(device, dtype=dtype) for r, dtype in zip(result, dtypes)] - - def to_torch_empty(result, device): - return result - - return decoder - - -@igl_prepare(torch.float32, torch.int64) -def decimate_igl(mesh, num_faces: int): - if mesh[1].shape[0] <= num_faces: - return mesh - vs, faces, _ = igl.remove_duplicates(*mesh, 1e-8) - return igl.decimate(vs, faces, num_faces)[1:3] - - -@igl_prepare(torch.float32) -def gaussian_curvature(mesh: T_Mesh) -> T: - gc = igl.gaussian_curvature(*mesh) - return gc - - -@igl_prepare(torch.float32) -def per_vertex_normals_igl(mesh: T_Mesh, weighting: int = 0) -> T: - normals = igl.per_vertex_normals(*mesh, weighting) - return normals - - -@igl_prepare(torch.float32, torch.int64) -def remove_duplicate_vertices(mesh: T_Mesh, epsilon=1e-7) -> T_Mesh: - vs, _, _, faces = igl.remove_duplicate_vertices(*mesh, epsilon) - return vs, faces - - -@igl_prepare(torch.float32) -def winding_number_igl(mesh: T_Mesh, query: T) -> T: - query = query.cpu().numpy() - return igl.fast_winding_number_for_meshes(*mesh, query) - - -@igl_prepare(torch.float32, torch.float32, torch.float32, torch.float32) -def principal_curvature(mesh: T_Mesh) -> TS: - out = igl.principal_curvature(*mesh) - min_dir, max_dir, min_val, max_val = out - return min_dir, max_dir, min_val, max_val - - -# def get_inside_outside(points: T, mesh: T_Mesh) -> T: -# device = points.device -# points = points.numpy() -# vs, faces = mesh[0].numpy(), mesh[1].numpy() -# winding_numbers = igl.fast_winding_number_for_meshes(vs, faces, points) -# winding_numbers = torch.from_numpy(winding_numbers) -# inside_outside = winding_numbers.lt(.5).float() * 2 - 1 -# return inside_outside.to(device) - - -@igl_prepare() -def get_inside_outside(mesh: T_Mesh, points: ARRAY) -> ARRAY: - batch_size = 1000000 - labels = [] - num_batch = points.shape[0] // batch_size + 1 - for i in range(points.shape[0] // batch_size + 1): - if i == num_batch - 1: - pts_in = points[batch_size * i:] - else: - pts_in = points[batch_size * i: batch_size * (i + 1)] - w = igl.winding_number(*mesh, pts_in) - w = np.less_equal(w, .9) - labels.append(w) - return np.concatenate(labels, axis=0) - - -@igl_prepare() -def get_fast_inside_outside(mesh: T_Mesh, points: ARRAY): - batch_size = 1000000 - labels = [] - num_batch = points.shape[0] // batch_size + 1 - for i in range(points.shape[0] // batch_size + 1): - if i == num_batch - 1: - pts_in = points[batch_size * i:] - else: - pts_in = points[batch_size * i: batch_size * (i + 1)] - w = igl.fast_winding_number_for_meshes(*mesh, pts_in) - w = np.less_equal(w, .9) - labels.append(w) - return np.concatenate(labels, axis=0) - -# def get_inside_outside_trimes(mesh: T_Mesh, points: T) -> Optional[ARRAY]: -# mesh = mesh_utils.to(mesh, points.device) -# mesh = make_data.trimmesh(mesh) -# batch_size = 1000000 -# num_batch = points.shape[0] // batch_size + 1 -# labels = [] -# # try: -# for i in range(points.shape[0] // batch_size + 1): -# if i == num_batch - 1: -# pts_in = points[batch_size * i:] -# else: -# pts_in = points[batch_size * i: batch_size * (i + 1)] -# label = make_data.sdfmeshfun(pts_in, mesh).lt(0) -# label = label.cpu() -# labels.append(label.numpy()) -# # except RuntimeError: -# # return None -# return np.concatenate(labels, axis=0) - 
-@igl_prepare(torch.float32, torch.int64) -def trimesh_smooth(mesh, lamb=0.5, iterations=10): - mesh = trimesh.Trimesh(vertices=mesh[0], faces=mesh[1]) - # trimesh.smoothing.filter_mut_dif_laplacian(mesh, lamb=lamb, iterations=iterations, volume_constraint=True, - # laplacian_operator=None) - trimesh.smoothing.filter_humphrey(mesh, alpha=0.1, beta=lamb, iterations=iterations, laplacian_operator=None) - return V(mesh.vertices), V(mesh.faces) - - -def split_by_seg(mesh: T_Mesh, seg: TS) -> TS: - # faces_split, vs_split = {}, {} - labels_all = [] - vs, faces = mesh - vs_mid_faces = vs[faces].mean(1) - for vs_ in (vs, vs_mid_faces): - chamfer_distance_a, chamfer_distance_a_nn = ChamferDistance()(vs_.unsqueeze(0), seg[0].unsqueeze(0), bidirectional=False) - # nn_sanity = slow_nn(vs_mid_faces, seg[0]) - labels_all.append(seg[1][chamfer_distance_a_nn.flatten()]) - # for i in range(seg[1].min(), seg[1].max() + 1): - # mask = labels.eq(i) - # if mask.any(): - # split[i] = faces[mask] - # else: - # faces_split[i] = None - return labels_all diff --git a/spaces/KOFTRFU204/AICoverGen/src/vc_infer_pipeline.py b/spaces/KOFTRFU204/AICoverGen/src/vc_infer_pipeline.py deleted file mode 100644 index 25f873e1e210879e085afd073306d796bf5114ea..0000000000000000000000000000000000000000 --- a/spaces/KOFTRFU204/AICoverGen/src/vc_infer_pipeline.py +++ /dev/null @@ -1,653 +0,0 @@ -from functools import lru_cache -from time import time as ttime - -import faiss -import librosa -import numpy as np -import os -import parselmouth -import pyworld -import sys -import torch -import torch.nn.functional as F -import torchcrepe -import traceback -from scipy import signal -from torch import Tensor - -BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) -now_dir = os.path.join(BASE_DIR, 'src') -sys.path.append(now_dir) - -bh, ah = signal.butter(N=5, Wn=48, btype="high", fs=16000) - -input_audio_path2wav = {} - - -@lru_cache -def cache_harvest_f0(input_audio_path, fs, f0max, f0min, frame_period): - audio = input_audio_path2wav[input_audio_path] - f0, t = pyworld.harvest( - audio, - fs=fs, - f0_ceil=f0max, - f0_floor=f0min, - frame_period=frame_period, - ) - f0 = pyworld.stonemask(audio, f0, t, fs) - return f0 - - -def change_rms(data1, sr1, data2, sr2, rate): # 1是输入音频,2是输出音频,rate是2的占比 - # print(data1.max(),data2.max()) - rms1 = librosa.feature.rms( - y=data1, frame_length=sr1 // 2 * 2, hop_length=sr1 // 2 - ) # 每半秒一个点 - rms2 = librosa.feature.rms(y=data2, frame_length=sr2 // 2 * 2, hop_length=sr2 // 2) - rms1 = torch.from_numpy(rms1) - rms1 = F.interpolate( - rms1.unsqueeze(0), size=data2.shape[0], mode="linear" - ).squeeze() - rms2 = torch.from_numpy(rms2) - rms2 = F.interpolate( - rms2.unsqueeze(0), size=data2.shape[0], mode="linear" - ).squeeze() - rms2 = torch.max(rms2, torch.zeros_like(rms2) + 1e-6) - data2 *= ( - torch.pow(rms1, torch.tensor(1 - rate)) - * torch.pow(rms2, torch.tensor(rate - 1)) - ).numpy() - return data2 - - -class VC(object): - def __init__(self, tgt_sr, config): - self.x_pad, self.x_query, self.x_center, self.x_max, self.is_half = ( - config.x_pad, - config.x_query, - config.x_center, - config.x_max, - config.is_half, - ) - self.sr = 16000 # hubert输入采样率 - self.window = 160 # 每帧点数 - self.t_pad = self.sr * self.x_pad # 每条前后pad时间 - self.t_pad_tgt = tgt_sr * self.x_pad - self.t_pad2 = self.t_pad * 2 - self.t_query = self.sr * self.x_query # 查询切点前后查询时间 - self.t_center = self.sr * self.x_center # 查询切点位置 - self.t_max = self.sr * self.x_max # 免查询时长阈值 - self.device = config.device - - # Fork 
Feature: Get the best torch device to use for f0 algorithms that require a torch device. Will return the type (torch.device) - def get_optimal_torch_device(self, index: int = 0) -> torch.device: - # Get cuda device - if torch.cuda.is_available(): - return torch.device( - f"cuda:{index % torch.cuda.device_count()}" - ) # Very fast - elif torch.backends.mps.is_available(): - return torch.device("mps") - # Insert an else here to grab "xla" devices if available. TO DO later. Requires the torch_xla.core.xla_model library - # Else wise return the "cpu" as a torch device, - return torch.device("cpu") - - # Fork Feature: Compute f0 with the crepe method - def get_f0_crepe_computation( - self, - x, - f0_min, - f0_max, - p_len, - hop_length=160, # 512 before. Hop length changes the speed that the voice jumps to a different dramatic pitch. Lower hop lengths means more pitch accuracy but longer inference time. - model="full", # Either use crepe-tiny "tiny" or crepe "full". Default is full - ): - x = x.astype( - np.float32 - ) # fixes the F.conv2D exception. We needed to convert double to float. - x /= np.quantile(np.abs(x), 0.999) - torch_device = self.get_optimal_torch_device() - audio = torch.from_numpy(x).to(torch_device, copy=True) - audio = torch.unsqueeze(audio, dim=0) - if audio.ndim == 2 and audio.shape[0] > 1: - audio = torch.mean(audio, dim=0, keepdim=True).detach() - audio = audio.detach() - print("Initiating prediction with a crepe_hop_length of: " + str(hop_length)) - pitch: Tensor = torchcrepe.predict( - audio, - self.sr, - hop_length, - f0_min, - f0_max, - model, - batch_size=hop_length * 2, - device=torch_device, - pad=True, - ) - p_len = p_len or x.shape[0] // hop_length - # Resize the pitch for final f0 - source = np.array(pitch.squeeze(0).cpu().float().numpy()) - source[source < 0.001] = np.nan - target = np.interp( - np.arange(0, len(source) * p_len, len(source)) / p_len, - np.arange(0, len(source)), - source, - ) - f0 = np.nan_to_num(target) - return f0 # Resized f0 - - def get_f0_official_crepe_computation( - self, - x, - f0_min, - f0_max, - model="full", - ): - # Pick a batch size that doesn't cause memory errors on your gpu - batch_size = 512 - # Compute pitch using first gpu - audio = torch.tensor(np.copy(x))[None].float() - f0, pd = torchcrepe.predict( - audio, - self.sr, - self.window, - f0_min, - f0_max, - model, - batch_size=batch_size, - device=self.device, - return_periodicity=True, - ) - pd = torchcrepe.filter.median(pd, 3) - f0 = torchcrepe.filter.mean(f0, 3) - f0[pd < 0.1] = 0 - f0 = f0[0].cpu().numpy() - return f0 - - # Fork Feature: Compute pYIN f0 method - def get_f0_pyin_computation(self, x, f0_min, f0_max): - y, sr = librosa.load("saudio/Sidney.wav", self.sr, mono=True) - f0, _, _ = librosa.pyin(y, sr=self.sr, fmin=f0_min, fmax=f0_max) - f0 = f0[1:] # Get rid of extra first frame - return f0 - - # Fork Feature: Acquire median hybrid f0 estimation calculation - def get_f0_hybrid_computation( - self, - methods_str, - input_audio_path, - x, - f0_min, - f0_max, - p_len, - filter_radius, - crepe_hop_length, - time_step, - ): - # Get various f0 methods from input to use in the computation stack - s = methods_str - s = s.split("hybrid")[1] - s = s.replace("[", "").replace("]", "") - methods = s.split("+") - f0_computation_stack = [] - - print("Calculating f0 pitch estimations for methods: %s" % str(methods)) - x = x.astype(np.float32) - x /= np.quantile(np.abs(x), 0.999) - # Get f0 calculations for all methods specified - for method in methods: - f0 = None - if method 
== "pm": - f0 = ( - parselmouth.Sound(x, self.sr) - .to_pitch_ac( - time_step=time_step / 1000, - voicing_threshold=0.6, - pitch_floor=f0_min, - pitch_ceiling=f0_max, - ) - .selected_array["frequency"] - ) - pad_size = (p_len - len(f0) + 1) // 2 - if pad_size > 0 or p_len - len(f0) - pad_size > 0: - f0 = np.pad( - f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant" - ) - elif method == "crepe": - f0 = self.get_f0_official_crepe_computation(x, f0_min, f0_max) - f0 = f0[1:] # Get rid of extra first frame - elif method == "crepe-tiny": - f0 = self.get_f0_official_crepe_computation(x, f0_min, f0_max, "tiny") - f0 = f0[1:] # Get rid of extra first frame - elif method == "mangio-crepe": - f0 = self.get_f0_crepe_computation( - x, f0_min, f0_max, p_len, crepe_hop_length - ) - elif method == "mangio-crepe-tiny": - f0 = self.get_f0_crepe_computation( - x, f0_min, f0_max, p_len, crepe_hop_length, "tiny" - ) - elif method == "harvest": - f0 = cache_harvest_f0(input_audio_path, self.sr, f0_max, f0_min, 10) - if filter_radius > 2: - f0 = signal.medfilt(f0, 3) - f0 = f0[1:] # Get rid of first frame. - elif method == "dio": # Potentially buggy? - f0, t = pyworld.dio( - x.astype(np.double), - fs=self.sr, - f0_ceil=f0_max, - f0_floor=f0_min, - frame_period=10, - ) - f0 = pyworld.stonemask(x.astype(np.double), f0, t, self.sr) - f0 = signal.medfilt(f0, 3) - f0 = f0[1:] - # elif method == "pyin": Not Working just yet - # f0 = self.get_f0_pyin_computation(x, f0_min, f0_max) - # Push method to the stack - f0_computation_stack.append(f0) - - for fc in f0_computation_stack: - print(len(fc)) - - print("Calculating hybrid median f0 from the stack of: %s" % str(methods)) - f0_median_hybrid = None - if len(f0_computation_stack) == 1: - f0_median_hybrid = f0_computation_stack[0] - else: - f0_median_hybrid = np.nanmedian(f0_computation_stack, axis=0) - return f0_median_hybrid - - def get_f0( - self, - input_audio_path, - x, - p_len, - f0_up_key, - f0_method, - filter_radius, - crepe_hop_length, - inp_f0=None, - ): - global input_audio_path2wav - time_step = self.window / self.sr * 1000 - f0_min = 50 - f0_max = 1100 - f0_mel_min = 1127 * np.log(1 + f0_min / 700) - f0_mel_max = 1127 * np.log(1 + f0_max / 700) - if f0_method == "pm": - f0 = ( - parselmouth.Sound(x, self.sr) - .to_pitch_ac( - time_step=time_step / 1000, - voicing_threshold=0.6, - pitch_floor=f0_min, - pitch_ceiling=f0_max, - ) - .selected_array["frequency"] - ) - pad_size = (p_len - len(f0) + 1) // 2 - if pad_size > 0 or p_len - len(f0) - pad_size > 0: - f0 = np.pad( - f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant" - ) - elif f0_method == "harvest": - input_audio_path2wav[input_audio_path] = x.astype(np.double) - f0 = cache_harvest_f0(input_audio_path, self.sr, f0_max, f0_min, 10) - if filter_radius > 2: - f0 = signal.medfilt(f0, 3) - elif f0_method == "dio": # Potentially Buggy? 
- f0, t = pyworld.dio( - x.astype(np.double), - fs=self.sr, - f0_ceil=f0_max, - f0_floor=f0_min, - frame_period=10, - ) - f0 = pyworld.stonemask(x.astype(np.double), f0, t, self.sr) - f0 = signal.medfilt(f0, 3) - elif f0_method == "crepe": - f0 = self.get_f0_official_crepe_computation(x, f0_min, f0_max) - elif f0_method == "crepe-tiny": - f0 = self.get_f0_official_crepe_computation(x, f0_min, f0_max, "tiny") - elif f0_method == "mangio-crepe": - f0 = self.get_f0_crepe_computation( - x, f0_min, f0_max, p_len, crepe_hop_length - ) - elif f0_method == "mangio-crepe-tiny": - f0 = self.get_f0_crepe_computation( - x, f0_min, f0_max, p_len, crepe_hop_length, "tiny" - ) - elif f0_method == "rmvpe": - if hasattr(self, "model_rmvpe") == False: - from rmvpe import RMVPE - - self.model_rmvpe = RMVPE( - os.path.join(BASE_DIR, 'rvc_models', 'rmvpe.pt'), is_half=self.is_half, device=self.device - ) - f0 = self.model_rmvpe.infer_from_audio(x, thred=0.03) - - elif "hybrid" in f0_method: - # Perform hybrid median pitch estimation - input_audio_path2wav[input_audio_path] = x.astype(np.double) - f0 = self.get_f0_hybrid_computation( - f0_method, - input_audio_path, - x, - f0_min, - f0_max, - p_len, - filter_radius, - crepe_hop_length, - time_step, - ) - - f0 *= pow(2, f0_up_key / 12) - # with open("test.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()])) - tf0 = self.sr // self.window # 每秒f0点数 - if inp_f0 is not None: - delta_t = np.round( - (inp_f0[:, 0].max() - inp_f0[:, 0].min()) * tf0 + 1 - ).astype("int16") - replace_f0 = np.interp( - list(range(delta_t)), inp_f0[:, 0] * 100, inp_f0[:, 1] - ) - shape = f0[self.x_pad * tf0 : self.x_pad * tf0 + len(replace_f0)].shape[0] - f0[self.x_pad * tf0 : self.x_pad * tf0 + len(replace_f0)] = replace_f0[ - :shape - ] - # with open("test_opt.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()])) - f0bak = f0.copy() - f0_mel = 1127 * np.log(1 + f0 / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / ( - f0_mel_max - f0_mel_min - ) + 1 - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > 255] = 255 - f0_coarse = np.rint(f0_mel).astype(np.int) - - return f0_coarse, f0bak # 1-0 - - def vc( - self, - model, - net_g, - sid, - audio0, - pitch, - pitchf, - times, - index, - big_npy, - index_rate, - version, - protect, - ): # ,file_index,file_big_npy - feats = torch.from_numpy(audio0) - if self.is_half: - feats = feats.half() - else: - feats = feats.float() - if feats.dim() == 2: # double channels - feats = feats.mean(-1) - assert feats.dim() == 1, feats.dim() - feats = feats.view(1, -1) - padding_mask = torch.BoolTensor(feats.shape).to(self.device).fill_(False) - - inputs = { - "source": feats.to(self.device), - "padding_mask": padding_mask, - "output_layer": 9 if version == "v1" else 12, - } - t0 = ttime() - with torch.no_grad(): - logits = model.extract_features(**inputs) - feats = model.final_proj(logits[0]) if version == "v1" else logits[0] - if protect < 0.5 and pitch != None and pitchf != None: - feats0 = feats.clone() - if ( - isinstance(index, type(None)) == False - and isinstance(big_npy, type(None)) == False - and index_rate != 0 - ): - npy = feats[0].cpu().numpy() - if self.is_half: - npy = npy.astype("float32") - - # _, I = index.search(npy, 1) - # npy = big_npy[I.squeeze()] - - score, ix = index.search(npy, k=8) - weight = np.square(1 / score) - weight /= weight.sum(axis=1, keepdims=True) - npy = np.sum(big_npy[ix] * np.expand_dims(weight, axis=2), axis=1) - - if self.is_half: - npy = npy.astype("float16") - feats = ( - 
torch.from_numpy(npy).unsqueeze(0).to(self.device) * index_rate - + (1 - index_rate) * feats - ) - - feats = F.interpolate(feats.permute(0, 2, 1), scale_factor=2).permute(0, 2, 1) - if protect < 0.5 and pitch != None and pitchf != None: - feats0 = F.interpolate(feats0.permute(0, 2, 1), scale_factor=2).permute( - 0, 2, 1 - ) - t1 = ttime() - p_len = audio0.shape[0] // self.window - if feats.shape[1] < p_len: - p_len = feats.shape[1] - if pitch != None and pitchf != None: - pitch = pitch[:, :p_len] - pitchf = pitchf[:, :p_len] - - if protect < 0.5 and pitch != None and pitchf != None: - pitchff = pitchf.clone() - pitchff[pitchf > 0] = 1 - pitchff[pitchf < 1] = protect - pitchff = pitchff.unsqueeze(-1) - feats = feats * pitchff + feats0 * (1 - pitchff) - feats = feats.to(feats0.dtype) - p_len = torch.tensor([p_len], device=self.device).long() - with torch.no_grad(): - if pitch != None and pitchf != None: - audio1 = ( - (net_g.infer(feats, p_len, pitch, pitchf, sid)[0][0, 0]) - .data.cpu() - .float() - .numpy() - ) - else: - audio1 = ( - (net_g.infer(feats, p_len, sid)[0][0, 0]).data.cpu().float().numpy() - ) - del feats, p_len, padding_mask - if torch.cuda.is_available(): - torch.cuda.empty_cache() - t2 = ttime() - times[0] += t1 - t0 - times[2] += t2 - t1 - return audio1 - - def pipeline( - self, - model, - net_g, - sid, - audio, - input_audio_path, - times, - f0_up_key, - f0_method, - file_index, - # file_big_npy, - index_rate, - if_f0, - filter_radius, - tgt_sr, - resample_sr, - rms_mix_rate, - version, - protect, - crepe_hop_length, - f0_file=None, - ): - if ( - file_index != "" - # and file_big_npy != "" - # and os.path.exists(file_big_npy) == True - and os.path.exists(file_index) == True - and index_rate != 0 - ): - try: - index = faiss.read_index(file_index) - # big_npy = np.load(file_big_npy) - big_npy = index.reconstruct_n(0, index.ntotal) - except: - traceback.print_exc() - index = big_npy = None - else: - index = big_npy = None - audio = signal.filtfilt(bh, ah, audio) - audio_pad = np.pad(audio, (self.window // 2, self.window // 2), mode="reflect") - opt_ts = [] - if audio_pad.shape[0] > self.t_max: - audio_sum = np.zeros_like(audio) - for i in range(self.window): - audio_sum += audio_pad[i : i - self.window] - for t in range(self.t_center, audio.shape[0], self.t_center): - opt_ts.append( - t - - self.t_query - + np.where( - np.abs(audio_sum[t - self.t_query : t + self.t_query]) - == np.abs(audio_sum[t - self.t_query : t + self.t_query]).min() - )[0][0] - ) - s = 0 - audio_opt = [] - t = None - t1 = ttime() - audio_pad = np.pad(audio, (self.t_pad, self.t_pad), mode="reflect") - p_len = audio_pad.shape[0] // self.window - inp_f0 = None - if hasattr(f0_file, "name") == True: - try: - with open(f0_file.name, "r") as f: - lines = f.read().strip("\n").split("\n") - inp_f0 = [] - for line in lines: - inp_f0.append([float(i) for i in line.split(",")]) - inp_f0 = np.array(inp_f0, dtype="float32") - except: - traceback.print_exc() - sid = torch.tensor(sid, device=self.device).unsqueeze(0).long() - pitch, pitchf = None, None - if if_f0 == 1: - pitch, pitchf = self.get_f0( - input_audio_path, - audio_pad, - p_len, - f0_up_key, - f0_method, - filter_radius, - crepe_hop_length, - inp_f0, - ) - pitch = pitch[:p_len] - pitchf = pitchf[:p_len] - if self.device == "mps": - pitchf = pitchf.astype(np.float32) - pitch = torch.tensor(pitch, device=self.device).unsqueeze(0).long() - pitchf = torch.tensor(pitchf, device=self.device).unsqueeze(0).float() - t2 = ttime() - times[1] += t2 - t1 - for t in 
opt_ts: - t = t // self.window * self.window - if if_f0 == 1: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[s : t + self.t_pad2 + self.window], - pitch[:, s // self.window : (t + self.t_pad2) // self.window], - pitchf[:, s // self.window : (t + self.t_pad2) // self.window], - times, - index, - big_npy, - index_rate, - version, - protect, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - else: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[s : t + self.t_pad2 + self.window], - None, - None, - times, - index, - big_npy, - index_rate, - version, - protect, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - s = t - if if_f0 == 1: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[t:], - pitch[:, t // self.window :] if t is not None else pitch, - pitchf[:, t // self.window :] if t is not None else pitchf, - times, - index, - big_npy, - index_rate, - version, - protect, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - else: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[t:], - None, - None, - times, - index, - big_npy, - index_rate, - version, - protect, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - audio_opt = np.concatenate(audio_opt) - if rms_mix_rate != 1: - audio_opt = change_rms(audio, 16000, audio_opt, tgt_sr, rms_mix_rate) - if resample_sr >= 16000 and tgt_sr != resample_sr: - audio_opt = librosa.resample( - audio_opt, orig_sr=tgt_sr, target_sr=resample_sr - ) - audio_max = np.abs(audio_opt).max() / 0.99 - max_int16 = 32768 - if audio_max > 1: - max_int16 /= audio_max - audio_opt = (audio_opt * max_int16).astype(np.int16) - del pitch, pitchf, sid - if torch.cuda.is_available(): - torch.cuda.empty_cache() - return audio_opt diff --git a/spaces/Kangarroar/ApplioRVC-Inference/infer/lib/infer_pack/transforms.py b/spaces/Kangarroar/ApplioRVC-Inference/infer/lib/infer_pack/transforms.py deleted file mode 100644 index 6f30b7177d17fc61a4173c21b4233172a890be58..0000000000000000000000000000000000000000 --- a/spaces/Kangarroar/ApplioRVC-Inference/infer/lib/infer_pack/transforms.py +++ /dev/null @@ -1,207 +0,0 @@ -import numpy as np -import torch -from torch.nn import functional as F - -DEFAULT_MIN_BIN_WIDTH = 1e-3 -DEFAULT_MIN_BIN_HEIGHT = 1e-3 -DEFAULT_MIN_DERIVATIVE = 1e-3 - - -def piecewise_rational_quadratic_transform( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails=None, - tail_bound=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - if tails is None: - spline_fn = rational_quadratic_spline - spline_kwargs = {} - else: - spline_fn = unconstrained_rational_quadratic_spline - spline_kwargs = {"tails": tails, "tail_bound": tail_bound} - - outputs, logabsdet = spline_fn( - inputs=inputs, - unnormalized_widths=unnormalized_widths, - unnormalized_heights=unnormalized_heights, - unnormalized_derivatives=unnormalized_derivatives, - inverse=inverse, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - **spline_kwargs - ) - return outputs, logabsdet - - -def searchsorted(bin_locations, inputs, eps=1e-6): - bin_locations[..., -1] += eps - return torch.sum(inputs[..., None] >= bin_locations, dim=-1) - 1 - - -def unconstrained_rational_quadratic_spline( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails="linear", - tail_bound=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - 
min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound) - outside_interval_mask = ~inside_interval_mask - - outputs = torch.zeros_like(inputs) - logabsdet = torch.zeros_like(inputs) - - if tails == "linear": - unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1)) - constant = np.log(np.exp(1 - min_derivative) - 1) - unnormalized_derivatives[..., 0] = constant - unnormalized_derivatives[..., -1] = constant - - outputs[outside_interval_mask] = inputs[outside_interval_mask] - logabsdet[outside_interval_mask] = 0 - else: - raise RuntimeError("{} tails are not implemented.".format(tails)) - - ( - outputs[inside_interval_mask], - logabsdet[inside_interval_mask], - ) = rational_quadratic_spline( - inputs=inputs[inside_interval_mask], - unnormalized_widths=unnormalized_widths[inside_interval_mask, :], - unnormalized_heights=unnormalized_heights[inside_interval_mask, :], - unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :], - inverse=inverse, - left=-tail_bound, - right=tail_bound, - bottom=-tail_bound, - top=tail_bound, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - ) - - return outputs, logabsdet - - -def rational_quadratic_spline( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - left=0.0, - right=1.0, - bottom=0.0, - top=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - if torch.min(inputs) < left or torch.max(inputs) > right: - raise ValueError("Input to a transform is not within its domain") - - num_bins = unnormalized_widths.shape[-1] - - if min_bin_width * num_bins > 1.0: - raise ValueError("Minimal bin width too large for the number of bins") - if min_bin_height * num_bins > 1.0: - raise ValueError("Minimal bin height too large for the number of bins") - - widths = F.softmax(unnormalized_widths, dim=-1) - widths = min_bin_width + (1 - min_bin_width * num_bins) * widths - cumwidths = torch.cumsum(widths, dim=-1) - cumwidths = F.pad(cumwidths, pad=(1, 0), mode="constant", value=0.0) - cumwidths = (right - left) * cumwidths + left - cumwidths[..., 0] = left - cumwidths[..., -1] = right - widths = cumwidths[..., 1:] - cumwidths[..., :-1] - - derivatives = min_derivative + F.softplus(unnormalized_derivatives) - - heights = F.softmax(unnormalized_heights, dim=-1) - heights = min_bin_height + (1 - min_bin_height * num_bins) * heights - cumheights = torch.cumsum(heights, dim=-1) - cumheights = F.pad(cumheights, pad=(1, 0), mode="constant", value=0.0) - cumheights = (top - bottom) * cumheights + bottom - cumheights[..., 0] = bottom - cumheights[..., -1] = top - heights = cumheights[..., 1:] - cumheights[..., :-1] - - if inverse: - bin_idx = searchsorted(cumheights, inputs)[..., None] - else: - bin_idx = searchsorted(cumwidths, inputs)[..., None] - - input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0] - input_bin_widths = widths.gather(-1, bin_idx)[..., 0] - - input_cumheights = cumheights.gather(-1, bin_idx)[..., 0] - delta = heights / widths - input_delta = delta.gather(-1, bin_idx)[..., 0] - - input_derivatives = derivatives.gather(-1, bin_idx)[..., 0] - input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0] - - input_heights = heights.gather(-1, bin_idx)[..., 0] - - if inverse: - a = (inputs - input_cumheights) * ( - input_derivatives + 
input_derivatives_plus_one - 2 * input_delta - ) + input_heights * (input_delta - input_derivatives) - b = input_heights * input_derivatives - (inputs - input_cumheights) * ( - input_derivatives + input_derivatives_plus_one - 2 * input_delta - ) - c = -input_delta * (inputs - input_cumheights) - - discriminant = b.pow(2) - 4 * a * c - assert (discriminant >= 0).all() - - root = (2 * c) / (-b - torch.sqrt(discriminant)) - outputs = root * input_bin_widths + input_cumwidths - - theta_one_minus_theta = root * (1 - root) - denominator = input_delta + ( - (input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta - ) - derivative_numerator = input_delta.pow(2) * ( - input_derivatives_plus_one * root.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - root).pow(2) - ) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, -logabsdet - else: - theta = (inputs - input_cumwidths) / input_bin_widths - theta_one_minus_theta = theta * (1 - theta) - - numerator = input_heights * ( - input_delta * theta.pow(2) + input_derivatives * theta_one_minus_theta - ) - denominator = input_delta + ( - (input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta - ) - outputs = input_cumheights + numerator / denominator - - derivative_numerator = input_delta.pow(2) * ( - input_derivatives_plus_one * theta.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - theta).pow(2) - ) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, logabsdet diff --git a/spaces/Kevin676/AutoGPT/autogpt/json_utils/json_fix_llm.py b/spaces/Kevin676/AutoGPT/autogpt/json_utils/json_fix_llm.py deleted file mode 100644 index 869aed125cfb8cd7a69ed02eeb389cc72a3e296b..0000000000000000000000000000000000000000 --- a/spaces/Kevin676/AutoGPT/autogpt/json_utils/json_fix_llm.py +++ /dev/null @@ -1,220 +0,0 @@ -"""This module contains functions to fix JSON strings generated by LLM models, such as ChatGPT, using the assistance -of the ChatGPT API or LLM models.""" -from __future__ import annotations - -import contextlib -import json -from typing import Any, Dict - -from colorama import Fore -from regex import regex - -from autogpt.config import Config -from autogpt.json_utils.json_fix_general import correct_json -from autogpt.llm_utils import call_ai_function -from autogpt.logs import logger -from autogpt.speech import say_text - -JSON_SCHEMA = """ -{ - "command": { - "name": "command name", - "args": { - "arg name": "value" - } - }, - "thoughts": - { - "text": "thought", - "reasoning": "reasoning", - "plan": "- short bulleted\n- list that conveys\n- long-term plan", - "criticism": "constructive self-criticism", - "speak": "thoughts summary to say to user" - } -} -""" - -CFG = Config() - - -def auto_fix_json(json_string: str, schema: str) -> str: - """Fix the given JSON string to make it parseable and fully compliant with - the provided schema using GPT-3. - - Args: - json_string (str): The JSON string to fix. - schema (str): The schema to use to fix the JSON. - Returns: - str: The fixed JSON string. - """ - # Try to fix the JSON using GPT: - function_string = "def fix_json(json_string: str, schema:str=None) -> str:" - args = [f"'''{json_string}'''", f"'''{schema}'''"] - description_string = ( - "This function takes a JSON string and ensures that it" - " is parseable and fully compliant with the provided schema. 
If an object" - " or field specified in the schema isn't contained within the correct JSON," - " it is omitted. The function also escapes any double quotes within JSON" - " string values to ensure that they are valid. If the JSON string contains" - " any None or NaN values, they are replaced with null before being parsed." - ) - - # If it doesn't already start with a "`", add one: - if not json_string.startswith("`"): - json_string = "```json\n" + json_string + "\n```" - result_string = call_ai_function( - function_string, args, description_string, model=CFG.fast_llm_model - ) - logger.debug("------------ JSON FIX ATTEMPT ---------------") - logger.debug(f"Original JSON: {json_string}") - logger.debug("-----------") - logger.debug(f"Fixed JSON: {result_string}") - logger.debug("----------- END OF FIX ATTEMPT ----------------") - - try: - json.loads(result_string) # just check the validity - return result_string - except json.JSONDecodeError: # noqa: E722 - # Get the call stack: - # import traceback - # call_stack = traceback.format_exc() - # print(f"Failed to fix JSON: '{json_string}' "+call_stack) - return "failed" - - -def fix_json_using_multiple_techniques(assistant_reply: str) -> Dict[Any, Any]: - """Fix the given JSON string to make it parseable and fully compliant with two techniques. - - Args: - json_string (str): The JSON string to fix. - - Returns: - str: The fixed JSON string. - """ - - # Parse and print Assistant response - assistant_reply_json = fix_and_parse_json(assistant_reply) - if assistant_reply_json == {}: - assistant_reply_json = attempt_to_fix_json_by_finding_outermost_brackets( - assistant_reply - ) - - if assistant_reply_json != {}: - return assistant_reply_json - - logger.error( - "Error: The following AI output couldn't be converted to a JSON:\n", - assistant_reply, - ) - if CFG.speak_mode: - say_text("I have received an invalid JSON response from the OpenAI API.") - - return {} - - -def fix_and_parse_json( - json_to_load: str, try_to_fix_with_gpt: bool = True -) -> Dict[Any, Any]: - """Fix and parse JSON string - - Args: - json_to_load (str): The JSON string. - try_to_fix_with_gpt (bool, optional): Try to fix the JSON with GPT. - Defaults to True. - - Returns: - str or dict[Any, Any]: The parsed JSON. - """ - - with contextlib.suppress(json.JSONDecodeError): - json_to_load = json_to_load.replace("\t", "") - return json.loads(json_to_load) - - with contextlib.suppress(json.JSONDecodeError): - json_to_load = correct_json(json_to_load) - return json.loads(json_to_load) - # Let's do something manually: - # sometimes GPT responds with something BEFORE the braces: - # "I'm sorry, I don't understand. Please try again." - # {"text": "I'm sorry, I don't understand. Please try again.", - # "confidence": 0.0} - # So let's try to find the first brace and then parse the rest - # of the string - try: - brace_index = json_to_load.index("{") - maybe_fixed_json = json_to_load[brace_index:] - last_brace_index = maybe_fixed_json.rindex("}") - maybe_fixed_json = maybe_fixed_json[: last_brace_index + 1] - return json.loads(maybe_fixed_json) - except (json.JSONDecodeError, ValueError) as e: - return try_ai_fix(try_to_fix_with_gpt, e, json_to_load) - - -def try_ai_fix( - try_to_fix_with_gpt: bool, exception: Exception, json_to_load: str -) -> Dict[Any, Any]: - """Try to fix the JSON with the AI - - Args: - try_to_fix_with_gpt (bool): Whether to try to fix the JSON with the AI. - exception (Exception): The exception that was raised. - json_to_load (str): The JSON string to load. 
- - Raises: - exception: If try_to_fix_with_gpt is False. - - Returns: - str or dict[Any, Any]: The JSON string or dictionary. - """ - if not try_to_fix_with_gpt: - raise exception - if CFG.debug_mode: - logger.warn( - "Warning: Failed to parse AI output, attempting to fix." - "\n If you see this warning frequently, it's likely that" - " your prompt is confusing the AI. Try changing it up" - " slightly." - ) - # Now try to fix this up using the ai_functions - ai_fixed_json = auto_fix_json(json_to_load, JSON_SCHEMA) - - if ai_fixed_json != "failed": - return json.loads(ai_fixed_json) - # This allows the AI to react to the error message, - # which usually results in it correcting its ways. - # logger.error("Failed to fix AI output, telling the AI.") - return {} - - -def attempt_to_fix_json_by_finding_outermost_brackets(json_string: str): - if CFG.speak_mode and CFG.debug_mode: - say_text( - "I have received an invalid JSON response from the OpenAI API. " - "Trying to fix it now." - ) - logger.error("Attempting to fix JSON by finding outermost brackets\n") - - try: - json_pattern = regex.compile(r"\{(?:[^{}]|(?R))*\}") - json_match = json_pattern.search(json_string) - - if json_match: - # Extract the valid JSON object from the string - json_string = json_match.group(0) - logger.typewriter_log( - title="Apparently json was fixed.", title_color=Fore.GREEN - ) - if CFG.speak_mode and CFG.debug_mode: - say_text("Apparently json was fixed.") - else: - return {} - - except (json.JSONDecodeError, ValueError): - if CFG.debug_mode: - logger.error(f"Error: Invalid JSON: {json_string}\n") - if CFG.speak_mode: - say_text("Didn't work. I will have to ignore this response then.") - logger.error("Error: Invalid JSON, setting it to empty JSON now.\n") - json_string = {} - - return fix_and_parse_json(json_string) diff --git a/spaces/KonradSzafer/HF-QA-Demo/tests/qa_engine/__init__.py b/spaces/KonradSzafer/HF-QA-Demo/tests/qa_engine/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/KyanChen/RSPrompter/mmdet/evaluation/metrics/crowdhuman_metric.py b/spaces/KyanChen/RSPrompter/mmdet/evaluation/metrics/crowdhuman_metric.py deleted file mode 100644 index de2a54edc2b97738a76c8f9cc6c01716f33acdac..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmdet/evaluation/metrics/crowdhuman_metric.py +++ /dev/null @@ -1,824 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy -import json -import os.path as osp -import tempfile -from collections import OrderedDict -from multiprocessing import Process, Queue -from typing import Dict, List, Optional, Sequence, Union - -import numpy as np -from mmengine.evaluator import BaseMetric -from mmengine.fileio import dump, get_text, load -from mmengine.logging import MMLogger -from scipy.sparse import csr_matrix -from scipy.sparse.csgraph import maximum_bipartite_matching - -from mmdet.evaluation.functional.bbox_overlaps import bbox_overlaps -from mmdet.registry import METRICS - -PERSON_CLASSES = ['background', 'person'] - - -@METRICS.register_module() -class CrowdHumanMetric(BaseMetric): - """CrowdHuman evaluation metric. - - Evaluate Average Precision (AP), Miss Rate (MR) and Jaccard Index (JI) - for detection tasks. - - Args: - ann_file (str): Path to the annotation file. - metric (str | List[str]): Metrics to be evaluated. Valid metrics - include 'AP', 'MR' and 'JI'. Defaults to 'AP'. 
- format_only (bool): Format the output results without perform - evaluation. It is useful when you want to format the result - to a specific format and submit it to the test server. - Defaults to False. - outfile_prefix (str, optional): The prefix of json files. It includes - the file path and the prefix of filename, e.g., "a/b/prefix". - If not specified, a temp file will be created. Defaults to None. - file_client_args (dict, optional): Arguments to instantiate the - corresponding backend in mmdet <= 3.0.0rc6. Defaults to None. - backend_args (dict, optional): Arguments to instantiate the - corresponding backend. Defaults to None. - collect_device (str): Device name used for collecting results from - different ranks during distributed training. Must be 'cpu' or - 'gpu'. Defaults to 'cpu'. - prefix (str, optional): The prefix that will be added in the metric - names to disambiguate homonymous metrics of different evaluators. - If prefix is not provided in the argument, self.default_prefix - will be used instead. Defaults to None. - eval_mode (int): Select the mode of evaluate. Valid mode include - 0(just body box), 1(just head box) and 2(both of them). - Defaults to 0. - iou_thres (float): IoU threshold. Defaults to 0.5. - compare_matching_method (str, optional): Matching method to compare - the detection results with the ground_truth when compute 'AP' - and 'MR'.Valid method include VOC and None(CALTECH). Default to - None. - mr_ref (str): Different parameter selection to calculate MR. Valid - ref include CALTECH_-2 and CALTECH_-4. Defaults to CALTECH_-2. - num_ji_process (int): The number of processes to evaluation JI. - Defaults to 10. - """ - default_prefix: Optional[str] = 'crowd_human' - - def __init__(self, - ann_file: str, - metric: Union[str, List[str]] = ['AP', 'MR', 'JI'], - format_only: bool = False, - outfile_prefix: Optional[str] = None, - file_client_args: dict = None, - backend_args: dict = None, - collect_device: str = 'cpu', - prefix: Optional[str] = None, - eval_mode: int = 0, - iou_thres: float = 0.5, - compare_matching_method: Optional[str] = None, - mr_ref: str = 'CALTECH_-2', - num_ji_process: int = 10) -> None: - super().__init__(collect_device=collect_device, prefix=prefix) - - self.ann_file = ann_file - # crowdhuman evaluation metrics - self.metrics = metric if isinstance(metric, list) else [metric] - allowed_metrics = ['MR', 'AP', 'JI'] - for metric in self.metrics: - if metric not in allowed_metrics: - raise KeyError(f"metric should be one of 'MR', 'AP', 'JI'," - f'but got {metric}.') - - self.format_only = format_only - if self.format_only: - assert outfile_prefix is not None, 'outfile_prefix must be not' - 'None when format_only is True, otherwise the result files will' - 'be saved to a temp directory which will be cleaned up at the end.' - self.outfile_prefix = outfile_prefix - self.backend_args = backend_args - if file_client_args is not None: - raise RuntimeError( - 'The `file_client_args` is deprecated, ' - 'please use `backend_args` instead, please refer to' - 'https://github.com/open-mmlab/mmdetection/blob/main/configs/_base_/datasets/coco_detection.py' # noqa: E501 - ) - - assert eval_mode in [0, 1, 2], \ - "Unknown eval mode. mr_ref should be one of '0', '1', '2'." - assert compare_matching_method is None or \ - compare_matching_method == 'VOC', \ - 'The alternative compare_matching_method is VOC.' 
\ - 'This parameter defaults to CALTECH(None)' - assert mr_ref == 'CALTECH_-2' or mr_ref == 'CALTECH_-4', \ - "mr_ref should be one of 'CALTECH_-2', 'CALTECH_-4'." - self.eval_mode = eval_mode - self.iou_thres = iou_thres - self.compare_matching_method = compare_matching_method - self.mr_ref = mr_ref - self.num_ji_process = num_ji_process - - @staticmethod - def results2json(results: Sequence[dict], outfile_prefix: str) -> str: - """Dump the detection results to a json file.""" - result_file_path = f'{outfile_prefix}.json' - bbox_json_results = [] - for i, result in enumerate(results): - ann, pred = result - dump_dict = dict() - dump_dict['ID'] = ann['ID'] - dump_dict['width'] = ann['width'] - dump_dict['height'] = ann['height'] - dtboxes = [] - bboxes = pred.tolist() - for _, single_bbox in enumerate(bboxes): - temp_dict = dict() - x1, y1, x2, y2, score = single_bbox - temp_dict['box'] = [x1, y1, x2 - x1, y2 - y1] - temp_dict['score'] = score - temp_dict['tag'] = 1 - dtboxes.append(temp_dict) - dump_dict['dtboxes'] = dtboxes - bbox_json_results.append(dump_dict) - dump(bbox_json_results, result_file_path) - return result_file_path - - def process(self, data_batch: Sequence[dict], - data_samples: Sequence[dict]) -> None: - """Process one batch of data samples and predictions. The processed - results should be stored in ``self.results``, which will be used to - compute the metrics when all batches have been processed. - - Args: - data_batch (dict): A batch of data from the dataloader. - data_samples (Sequence[dict]): A batch of data samples that - contain annotations and predictions. - """ - for data_sample in data_samples: - ann = dict() - ann['ID'] = data_sample['img_id'] - ann['width'] = data_sample['ori_shape'][1] - ann['height'] = data_sample['ori_shape'][0] - pred_bboxes = data_sample['pred_instances']['bboxes'].cpu().numpy() - pred_scores = data_sample['pred_instances']['scores'].cpu().numpy() - - pred_bbox_scores = np.hstack( - [pred_bboxes, pred_scores.reshape((-1, 1))]) - - self.results.append((ann, pred_bbox_scores)) - - def compute_metrics(self, results: list) -> Dict[str, float]: - """Compute the metrics from processed results. - - Args: - results (list): The processed results of each batch. - - Returns: - eval_results(Dict[str, float]): The computed metrics. - The keys are the names of the metrics, and the values - are corresponding results. 
- """ - logger: MMLogger = MMLogger.get_current_instance() - - tmp_dir = None - if self.outfile_prefix is None: - tmp_dir = tempfile.TemporaryDirectory() - outfile_prefix = osp.join(tmp_dir.name, 'result') - else: - outfile_prefix = self.outfile_prefix - - # convert predictions to coco format and dump to json file - result_file = self.results2json(results, outfile_prefix) - eval_results = OrderedDict() - if self.format_only: - logger.info(f'results are saved in {osp.dirname(outfile_prefix)}') - return eval_results - - # load evaluation samples - eval_samples = self.load_eval_samples(result_file) - - if 'AP' in self.metrics or 'MR' in self.metrics: - score_list = self.compare(eval_samples) - gt_num = sum([eval_samples[i].gt_num for i in eval_samples]) - ign_num = sum([eval_samples[i].ign_num for i in eval_samples]) - gt_num = gt_num - ign_num - img_num = len(eval_samples) - - for metric in self.metrics: - logger.info(f'Evaluating {metric}...') - if metric == 'AP': - AP = self.eval_ap(score_list, gt_num, img_num) - eval_results['mAP'] = float(f'{round(AP, 4)}') - if metric == 'MR': - MR = self.eval_mr(score_list, gt_num, img_num) - eval_results['mMR'] = float(f'{round(MR, 4)}') - if metric == 'JI': - JI = self.eval_ji(eval_samples) - eval_results['JI'] = float(f'{round(JI, 4)}') - if tmp_dir is not None: - tmp_dir.cleanup() - - return eval_results - - def load_eval_samples(self, result_file): - """Load data from annotations file and detection results. - - Args: - result_file (str): The file path of the saved detection results. - - Returns: - Dict[Image]: The detection result packaged by Image - """ - gt_str = get_text( - self.ann_file, backend_args=self.backend_args).strip().split('\n') - gt_records = [json.loads(line) for line in gt_str] - - pred_records = load(result_file, backend_args=self.backend_args) - eval_samples = dict() - for gt_record, pred_record in zip(gt_records, pred_records): - assert gt_record['ID'] == pred_record['ID'], \ - 'please set val_dataloader.sampler.shuffle=False and try again' - eval_samples[pred_record['ID']] = Image(self.eval_mode) - eval_samples[pred_record['ID']].load(gt_record, 'box', None, - PERSON_CLASSES, True) - eval_samples[pred_record['ID']].load(pred_record, 'box', None, - PERSON_CLASSES, False) - eval_samples[pred_record['ID']].clip_all_boader() - return eval_samples - - def compare(self, samples): - """Match the detection results with the ground_truth. - - Args: - samples (dict[Image]): The detection result packaged by Image. - - Returns: - score_list(list[tuple[ndarray, int, str]]): Matching result. - a list of tuples (dtbox, label, imgID) in the descending - sort of dtbox.score. - """ - score_list = list() - for id in samples: - if self.compare_matching_method == 'VOC': - result = samples[id].compare_voc(self.iou_thres) - else: - result = samples[id].compare_caltech(self.iou_thres) - score_list.extend(result) - # In the descending sort of dtbox score. - score_list.sort(key=lambda x: x[0][-1], reverse=True) - return score_list - - @staticmethod - def eval_ap(score_list, gt_num, img_num): - """Evaluate by average precision. - - Args: - score_list(list[tuple[ndarray, int, str]]): Matching result. - a list of tuples (dtbox, label, imgID) in the descending - sort of dtbox.score. - gt_num(int): The number of gt boxes in the entire dataset. - img_num(int): The number of images in the entire dataset. - - Returns: - ap(float): result of average precision. 
- """ - - # calculate general ap score - def _calculate_map(_recall, _precision): - assert len(_recall) == len(_precision) - area = 0 - for k in range(1, len(_recall)): - delta_h = (_precision[k - 1] + _precision[k]) / 2 - delta_w = _recall[k] - _recall[k - 1] - area += delta_w * delta_h - return area - - tp, fp = 0.0, 0.0 - rpX, rpY = list(), list() - - fpn = [] - recalln = [] - thr = [] - fppi = [] - for i, item in enumerate(score_list): - if item[1] == 1: - tp += 1.0 - elif item[1] == 0: - fp += 1.0 - fn = gt_num - tp - recall = tp / (tp + fn) - precision = tp / (tp + fp) - rpX.append(recall) - rpY.append(precision) - fpn.append(fp) - recalln.append(tp) - thr.append(item[0][-1]) - fppi.append(fp / img_num) - - ap = _calculate_map(rpX, rpY) - return ap - - def eval_mr(self, score_list, gt_num, img_num): - """Evaluate by Caltech-style log-average miss rate. - - Args: - score_list(list[tuple[ndarray, int, str]]): Matching result. - a list of tuples (dtbox, label, imgID) in the descending - sort of dtbox.score. - gt_num(int): The number of gt boxes in the entire dataset. - img_num(int): The number of image in the entire dataset. - - Returns: - mr(float): result of miss rate. - """ - - # find greater_than - def _find_gt(lst, target): - for idx, _item in enumerate(lst): - if _item >= target: - return idx - return len(lst) - 1 - - if self.mr_ref == 'CALTECH_-2': - # CALTECH_MRREF_2: anchor points (from 10^-2 to 1) as in - # P.Dollar's paper - ref = [ - 0.0100, 0.0178, 0.03160, 0.0562, 0.1000, 0.1778, 0.3162, - 0.5623, 1.000 - ] - else: - # CALTECH_MRREF_4: anchor points (from 10^-4 to 1) as in - # S.Zhang's paper - ref = [ - 0.0001, 0.0003, 0.00100, 0.0032, 0.0100, 0.0316, 0.1000, - 0.3162, 1.000 - ] - - tp, fp = 0.0, 0.0 - fppiX, fppiY = list(), list() - for i, item in enumerate(score_list): - if item[1] == 1: - tp += 1.0 - elif item[1] == 0: - fp += 1.0 - - fn = gt_num - tp - recall = tp / (tp + fn) - missrate = 1.0 - recall - fppi = fp / img_num - fppiX.append(fppi) - fppiY.append(missrate) - - score = list() - for pos in ref: - argmin = _find_gt(fppiX, pos) - if argmin >= 0: - score.append(fppiY[argmin]) - score = np.array(score) - mr = np.exp(np.log(score).mean()) - return mr - - def eval_ji(self, samples): - """Evaluate by JI using multi_process. - - Args: - samples(Dict[str, Image]): The detection result packaged by Image. - - Returns: - ji(float): result of jaccard index. - """ - import math - res_line = [] - res_ji = [] - for i in range(10): - score_thr = 1e-1 * i - total = len(samples) - stride = math.ceil(total / self.num_ji_process) - result_queue = Queue(10000) - results, procs = [], [] - records = list(samples.items()) - for i in range(self.num_ji_process): - start = i * stride - end = np.min([start + stride, total]) - sample_data = dict(records[start:end]) - p = Process( - target=self.compute_ji_with_ignore, - args=(result_queue, sample_data, score_thr)) - p.start() - procs.append(p) - for i in range(total): - t = result_queue.get() - results.append(t) - for p in procs: - p.join() - line, mean_ratio = self.gather(results) - line = 'score_thr:{:.1f}, {}'.format(score_thr, line) - res_line.append(line) - res_ji.append(mean_ratio) - return max(res_ji) - - def compute_ji_with_ignore(self, result_queue, dt_result, score_thr): - """Compute JI with ignore. - - Args: - result_queue(Queue): The Queue for save compute result when - multi_process. - dt_result(dict[Image]): Detection result packaged by Image. - score_thr(float): The threshold of detection score. 
- Returns: - dict: compute result. - """ - for ID, record in dt_result.items(): - gt_boxes = record.gt_boxes - dt_boxes = record.dt_boxes - keep = dt_boxes[:, -1] > score_thr - dt_boxes = dt_boxes[keep][:, :-1] - - gt_tag = np.array(gt_boxes[:, -1] != -1) - matches = self.compute_ji_matching(dt_boxes, gt_boxes[gt_tag, :4]) - # get the unmatched_indices - matched_indices = np.array([j for (j, _) in matches]) - unmatched_indices = list( - set(np.arange(dt_boxes.shape[0])) - set(matched_indices)) - num_ignore_dt = self.get_ignores(dt_boxes[unmatched_indices], - gt_boxes[~gt_tag, :4]) - matched_indices = np.array([j for (_, j) in matches]) - unmatched_indices = list( - set(np.arange(gt_boxes[gt_tag].shape[0])) - - set(matched_indices)) - num_ignore_gt = self.get_ignores( - gt_boxes[gt_tag][unmatched_indices], gt_boxes[~gt_tag, :4]) - # compute results - eps = 1e-6 - k = len(matches) - m = gt_tag.sum() - num_ignore_gt - n = dt_boxes.shape[0] - num_ignore_dt - ratio = k / (m + n - k + eps) - recall = k / (m + eps) - cover = k / (n + eps) - noise = 1 - cover - result_dict = dict( - ratio=ratio, - recall=recall, - cover=cover, - noise=noise, - k=k, - m=m, - n=n) - result_queue.put_nowait(result_dict) - - @staticmethod - def gather(results): - """Integrate test results.""" - assert len(results) - img_num = 0 - for result in results: - if result['n'] != 0 or result['m'] != 0: - img_num += 1 - mean_ratio = np.sum([rb['ratio'] for rb in results]) / img_num - valids = np.sum([rb['k'] for rb in results]) - total = np.sum([rb['n'] for rb in results]) - gtn = np.sum([rb['m'] for rb in results]) - line = 'mean_ratio:{:.4f}, valids:{}, total:{}, gtn:{}'\ - .format(mean_ratio, valids, total, gtn) - return line, mean_ratio - - def compute_ji_matching(self, dt_boxes, gt_boxes): - """Match the annotation box for each detection box. - - Args: - dt_boxes(ndarray): Detection boxes. - gt_boxes(ndarray): Ground_truth boxes. - - Returns: - matches_(list[tuple[int, int]]): Match result. - """ - assert dt_boxes.shape[-1] > 3 and gt_boxes.shape[-1] > 3 - if dt_boxes.shape[0] < 1 or gt_boxes.shape[0] < 1: - return list() - - ious = bbox_overlaps(dt_boxes, gt_boxes, mode='iou') - input_ = copy.deepcopy(ious) - input_[input_ < self.iou_thres] = 0 - match_scipy = maximum_bipartite_matching( - csr_matrix(input_), perm_type='column') - matches_ = [] - for i in range(len(match_scipy)): - if match_scipy[i] != -1: - matches_.append((i, int(match_scipy[i]))) - return matches_ - - def get_ignores(self, dt_boxes, gt_boxes): - """Get the number of ignore bboxes.""" - if gt_boxes.size: - ioas = bbox_overlaps(dt_boxes, gt_boxes, mode='iof') - ioas = np.max(ioas, axis=1) - rows = np.where(ioas > self.iou_thres)[0] - return len(rows) - else: - return 0 - - -class Image(object): - """Data structure for evaluation of CrowdHuman. - - Note: - This implementation is modified from https://github.com/Purkialo/ - CrowdDet/blob/master/lib/evaluate/APMRToolkits/image.py - - Args: - mode (int): Select the mode of evaluate. Valid mode include - 0(just body box), 1(just head box) and 2(both of them). - Defaults to 0. - """ - - def __init__(self, mode): - self.ID = None - self.width = None - self.height = None - self.dt_boxes = None - self.gt_boxes = None - self.eval_mode = mode - - self.ign_num = None - self.gt_num = None - self.dt_num = None - - def load(self, record, body_key, head_key, class_names, gt_flag): - """Loading information for evaluation. - - Args: - record (dict): Label information or test results. 
- The format might look something like this: - { - 'ID': '273271,c9db000d5146c15', - 'gtboxes': [ - {'fbox': [72, 202, 163, 503], 'tag': 'person', ...}, - {'fbox': [199, 180, 144, 499], 'tag': 'person', ...}, - ... - ] - } - or: - { - 'ID': '273271,c9db000d5146c15', - 'width': 800, - 'height': 1067, - 'dtboxes': [ - { - 'box': [306.22, 205.95, 164.05, 394.04], - 'score': 0.99, - 'tag': 1 - }, - { - 'box': [403.60, 178.66, 157.15, 421.33], - 'score': 0.99, - 'tag': 1 - }, - ... - ] - } - body_key (str, None): key of detection body box. - Valid when loading detection results and self.eval_mode!=1. - head_key (str, None): key of detection head box. - Valid when loading detection results and self.eval_mode!=0. - class_names (list[str]):class names of data set. - Defaults to ['background', 'person']. - gt_flag (bool): Indicate whether record is ground truth - or predicting the outcome. - """ - if 'ID' in record and self.ID is None: - self.ID = record['ID'] - if 'width' in record and self.width is None: - self.width = record['width'] - if 'height' in record and self.height is None: - self.height = record['height'] - if gt_flag: - self.gt_num = len(record['gtboxes']) - body_bbox, head_bbox = self.load_gt_boxes(record, 'gtboxes', - class_names) - if self.eval_mode == 0: - self.gt_boxes = body_bbox - self.ign_num = (body_bbox[:, -1] == -1).sum() - elif self.eval_mode == 1: - self.gt_boxes = head_bbox - self.ign_num = (head_bbox[:, -1] == -1).sum() - else: - gt_tag = np.array([ - body_bbox[i, -1] != -1 and head_bbox[i, -1] != -1 - for i in range(len(body_bbox)) - ]) - self.ign_num = (gt_tag == 0).sum() - self.gt_boxes = np.hstack( - (body_bbox[:, :-1], head_bbox[:, :-1], - gt_tag.reshape(-1, 1))) - - if not gt_flag: - self.dt_num = len(record['dtboxes']) - if self.eval_mode == 0: - self.dt_boxes = self.load_det_boxes(record, 'dtboxes', - body_key, 'score') - elif self.eval_mode == 1: - self.dt_boxes = self.load_det_boxes(record, 'dtboxes', - head_key, 'score') - else: - body_dtboxes = self.load_det_boxes(record, 'dtboxes', body_key, - 'score') - head_dtboxes = self.load_det_boxes(record, 'dtboxes', head_key, - 'score') - self.dt_boxes = np.hstack((body_dtboxes, head_dtboxes)) - - @staticmethod - def load_gt_boxes(dict_input, key_name, class_names): - """load ground_truth and transform [x, y, w, h] to [x1, y1, x2, y2]""" - assert key_name in dict_input - if len(dict_input[key_name]) < 1: - return np.empty([0, 5]) - head_bbox = [] - body_bbox = [] - for rb in dict_input[key_name]: - if rb['tag'] in class_names: - body_tag = class_names.index(rb['tag']) - head_tag = copy.deepcopy(body_tag) - else: - body_tag = -1 - head_tag = -1 - if 'extra' in rb: - if 'ignore' in rb['extra']: - if rb['extra']['ignore'] != 0: - body_tag = -1 - head_tag = -1 - if 'head_attr' in rb: - if 'ignore' in rb['head_attr']: - if rb['head_attr']['ignore'] != 0: - head_tag = -1 - head_bbox.append(np.hstack((rb['hbox'], head_tag))) - body_bbox.append(np.hstack((rb['fbox'], body_tag))) - head_bbox = np.array(head_bbox) - head_bbox[:, 2:4] += head_bbox[:, :2] - body_bbox = np.array(body_bbox) - body_bbox[:, 2:4] += body_bbox[:, :2] - return body_bbox, head_bbox - - @staticmethod - def load_det_boxes(dict_input, key_name, key_box, key_score, key_tag=None): - """load detection boxes.""" - assert key_name in dict_input - if len(dict_input[key_name]) < 1: - return np.empty([0, 5]) - else: - assert key_box in dict_input[key_name][0] - if key_score: - assert key_score in dict_input[key_name][0] - if key_tag: - assert key_tag in 
dict_input[key_name][0] - if key_score: - if key_tag: - bboxes = np.vstack([ - np.hstack((rb[key_box], rb[key_score], rb[key_tag])) - for rb in dict_input[key_name] - ]) - else: - bboxes = np.vstack([ - np.hstack((rb[key_box], rb[key_score])) - for rb in dict_input[key_name] - ]) - else: - if key_tag: - bboxes = np.vstack([ - np.hstack((rb[key_box], rb[key_tag])) - for rb in dict_input[key_name] - ]) - else: - bboxes = np.vstack( - [rb[key_box] for rb in dict_input[key_name]]) - bboxes[:, 2:4] += bboxes[:, :2] - return bboxes - - def clip_all_boader(self): - """Make sure boxes are within the image range.""" - - def _clip_boundary(boxes, height, width): - assert boxes.shape[-1] >= 4 - boxes[:, 0] = np.minimum(np.maximum(boxes[:, 0], 0), width - 1) - boxes[:, 1] = np.minimum(np.maximum(boxes[:, 1], 0), height - 1) - boxes[:, 2] = np.maximum(np.minimum(boxes[:, 2], width), 0) - boxes[:, 3] = np.maximum(np.minimum(boxes[:, 3], height), 0) - return boxes - - assert self.dt_boxes.shape[-1] >= 4 - assert self.gt_boxes.shape[-1] >= 4 - assert self.width is not None and self.height is not None - if self.eval_mode == 2: - self.dt_boxes[:, :4] = _clip_boundary(self.dt_boxes[:, :4], - self.height, self.width) - self.gt_boxes[:, :4] = _clip_boundary(self.gt_boxes[:, :4], - self.height, self.width) - self.dt_boxes[:, 4:8] = _clip_boundary(self.dt_boxes[:, 4:8], - self.height, self.width) - self.gt_boxes[:, 4:8] = _clip_boundary(self.gt_boxes[:, 4:8], - self.height, self.width) - else: - self.dt_boxes = _clip_boundary(self.dt_boxes, self.height, - self.width) - self.gt_boxes = _clip_boundary(self.gt_boxes, self.height, - self.width) - - def compare_voc(self, thres): - """Match the detection results with the ground_truth by VOC. - - Args: - thres (float): IOU threshold. - - Returns: - score_list(list[tuple[ndarray, int, str]]): Matching result. - a list of tuples (dtbox, label, imgID) in the descending - sort of dtbox.score. - """ - if self.dt_boxes is None: - return list() - dtboxes = self.dt_boxes - gtboxes = self.gt_boxes if self.gt_boxes is not None else list() - dtboxes.sort(key=lambda x: x.score, reverse=True) - gtboxes.sort(key=lambda x: x.ign) - - score_list = list() - for i, dt in enumerate(dtboxes): - maxpos = -1 - maxiou = thres - - for j, gt in enumerate(gtboxes): - overlap = dt.iou(gt) - if overlap > maxiou: - maxiou = overlap - maxpos = j - - if maxpos >= 0: - if gtboxes[maxpos].ign == 0: - gtboxes[maxpos].matched = 1 - dtboxes[i].matched = 1 - score_list.append((dt, self.ID)) - else: - dtboxes[i].matched = -1 - else: - dtboxes[i].matched = 0 - score_list.append((dt, self.ID)) - return score_list - - def compare_caltech(self, thres): - """Match the detection results with the ground_truth by Caltech - matching strategy. - - Args: - thres (float): IOU threshold. - - Returns: - score_list(list[tuple[ndarray, int, str]]): Matching result. - a list of tuples (dtbox, label, imgID) in the descending - sort of dtbox.score. 
- """ - if self.dt_boxes is None or self.gt_boxes is None: - return list() - - dtboxes = self.dt_boxes if self.dt_boxes is not None else list() - gtboxes = self.gt_boxes if self.gt_boxes is not None else list() - dt_matched = np.zeros(dtboxes.shape[0]) - gt_matched = np.zeros(gtboxes.shape[0]) - - dtboxes = np.array(sorted(dtboxes, key=lambda x: x[-1], reverse=True)) - gtboxes = np.array(sorted(gtboxes, key=lambda x: x[-1], reverse=True)) - if len(dtboxes): - overlap_iou = bbox_overlaps(dtboxes, gtboxes, mode='iou') - overlap_ioa = bbox_overlaps(dtboxes, gtboxes, mode='iof') - else: - return list() - - score_list = list() - for i, dt in enumerate(dtboxes): - maxpos = -1 - maxiou = thres - for j, gt in enumerate(gtboxes): - if gt_matched[j] == 1: - continue - if gt[-1] > 0: - overlap = overlap_iou[i][j] - if overlap > maxiou: - maxiou = overlap - maxpos = j - else: - if maxpos >= 0: - break - else: - overlap = overlap_ioa[i][j] - if overlap > thres: - maxiou = overlap - maxpos = j - if maxpos >= 0: - if gtboxes[maxpos, -1] > 0: - gt_matched[maxpos] = 1 - dt_matched[i] = 1 - score_list.append((dt, 1, self.ID)) - else: - dt_matched[i] = -1 - else: - dt_matched[i] = 0 - score_list.append((dt, 0, self.ID)) - return score_list diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/utils/misc.py b/spaces/KyanChen/RSPrompter/mmdet/models/utils/misc.py deleted file mode 100644 index 823d73c0ac3470f90f7e8780c827f37e8e0ce889..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmdet/models/utils/misc.py +++ /dev/null @@ -1,652 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from functools import partial -from typing import List, Sequence, Tuple, Union - -import numpy as np -import torch -from mmengine.structures import InstanceData -from mmengine.utils import digit_version -from six.moves import map, zip -from torch import Tensor -from torch.autograd import Function -from torch.nn import functional as F - -from mmdet.structures import SampleList -from mmdet.structures.bbox import BaseBoxes, get_box_type, stack_boxes -from mmdet.structures.mask import BitmapMasks, PolygonMasks -from mmdet.utils import OptInstanceList - - -class SigmoidGeometricMean(Function): - """Forward and backward function of geometric mean of two sigmoid - functions. - - This implementation with analytical gradient function substitutes - the autograd function of (x.sigmoid() * y.sigmoid()).sqrt(). The - original implementation incurs none during gradient backprapagation - if both x and y are very small values. - """ - - @staticmethod - def forward(ctx, x, y): - x_sigmoid = x.sigmoid() - y_sigmoid = y.sigmoid() - z = (x_sigmoid * y_sigmoid).sqrt() - ctx.save_for_backward(x_sigmoid, y_sigmoid, z) - return z - - @staticmethod - def backward(ctx, grad_output): - x_sigmoid, y_sigmoid, z = ctx.saved_tensors - grad_x = grad_output * z * (1 - x_sigmoid) / 2 - grad_y = grad_output * z * (1 - y_sigmoid) / 2 - return grad_x, grad_y - - -sigmoid_geometric_mean = SigmoidGeometricMean.apply - - -def interpolate_as(source, target, mode='bilinear', align_corners=False): - """Interpolate the `source` to the shape of the `target`. - - The `source` must be a Tensor, but the `target` can be a Tensor or a - np.ndarray with the shape (..., target_h, target_w). - - Args: - source (Tensor): A 3D/4D Tensor with the shape (N, H, W) or - (N, C, H, W). - target (Tensor | np.ndarray): The interpolation target with the shape - (..., target_h, target_w). - mode (str): Algorithm used for interpolation. 
The options are the - same as those in F.interpolate(). Default: ``'bilinear'``. - align_corners (bool): The same as the argument in F.interpolate(). - - Returns: - Tensor: The interpolated source Tensor. - """ - assert len(target.shape) >= 2 - - def _interpolate_as(source, target, mode='bilinear', align_corners=False): - """Interpolate the `source` (4D) to the shape of the `target`.""" - target_h, target_w = target.shape[-2:] - source_h, source_w = source.shape[-2:] - if target_h != source_h or target_w != source_w: - source = F.interpolate( - source, - size=(target_h, target_w), - mode=mode, - align_corners=align_corners) - return source - - if len(source.shape) == 3: - source = source[:, None, :, :] - source = _interpolate_as(source, target, mode, align_corners) - return source[:, 0, :, :] - else: - return _interpolate_as(source, target, mode, align_corners) - - -def unpack_gt_instances(batch_data_samples: SampleList) -> tuple: - """Unpack ``gt_instances``, ``gt_instances_ignore`` and ``img_metas`` based - on ``batch_data_samples`` - - Args: - batch_data_samples (List[:obj:`DetDataSample`]): The Data - Samples. It usually includes information such as - `gt_instance`, `gt_panoptic_seg` and `gt_sem_seg`. - - Returns: - tuple: - - - batch_gt_instances (list[:obj:`InstanceData`]): Batch of - gt_instance. It usually includes ``bboxes`` and ``labels`` - attributes. - - batch_gt_instances_ignore (list[:obj:`InstanceData`]): - Batch of gt_instances_ignore. It includes ``bboxes`` attribute - data that is ignored during training and testing. - Defaults to None. - - batch_img_metas (list[dict]): Meta information of each image, - e.g., image size, scaling factor, etc. - """ - batch_gt_instances = [] - batch_gt_instances_ignore = [] - batch_img_metas = [] - for data_sample in batch_data_samples: - batch_img_metas.append(data_sample.metainfo) - batch_gt_instances.append(data_sample.gt_instances) - if 'ignored_instances' in data_sample: - batch_gt_instances_ignore.append(data_sample.ignored_instances) - else: - batch_gt_instances_ignore.append(None) - - return batch_gt_instances, batch_gt_instances_ignore, batch_img_metas - - -def empty_instances(batch_img_metas: List[dict], - device: torch.device, - task_type: str, - instance_results: OptInstanceList = None, - mask_thr_binary: Union[int, float] = 0, - box_type: Union[str, type] = 'hbox', - use_box_type: bool = False, - num_classes: int = 80, - score_per_cls: bool = False) -> List[InstanceData]: - """Handle predicted instances when RoI is empty. - - Note: If ``instance_results`` is not None, it will be modified - in place internally, and then return ``instance_results`` - - Args: - batch_img_metas (list[dict]): List of image information. - device (torch.device): Device of tensor. - task_type (str): Expected returned task type. it currently - supports bbox and mask. - instance_results (list[:obj:`InstanceData`]): List of instance - results. - mask_thr_binary (int, float): mask binarization threshold. - Defaults to 0. - box_type (str or type): The empty box type. Defaults to `hbox`. - use_box_type (bool): Whether to warp boxes with the box type. - Defaults to False. - num_classes (int): num_classes of bbox_head. Defaults to 80. - score_per_cls (bool): Whether to generate classwise score for - the empty instance. ``score_per_cls`` will be True when the model - needs to produce raw results without nms. Defaults to False. 
- - Returns: - list[:obj:`InstanceData`]: Detection results of each image - """ - assert task_type in ('bbox', 'mask'), 'Only support bbox and mask,' \ - f' but got {task_type}' - - if instance_results is not None: - assert len(instance_results) == len(batch_img_metas) - - results_list = [] - for img_id in range(len(batch_img_metas)): - if instance_results is not None: - results = instance_results[img_id] - assert isinstance(results, InstanceData) - else: - results = InstanceData() - - if task_type == 'bbox': - _, box_type = get_box_type(box_type) - bboxes = torch.zeros(0, box_type.box_dim, device=device) - if use_box_type: - bboxes = box_type(bboxes, clone=False) - results.bboxes = bboxes - score_shape = (0, num_classes + 1) if score_per_cls else (0, ) - results.scores = torch.zeros(score_shape, device=device) - results.labels = torch.zeros((0, ), - device=device, - dtype=torch.long) - else: - # TODO: Handle the case where rescale is false - img_h, img_w = batch_img_metas[img_id]['ori_shape'][:2] - # the type of `im_mask` will be torch.bool or torch.uint8, - # where uint8 if for visualization and debugging. - im_mask = torch.zeros( - 0, - img_h, - img_w, - device=device, - dtype=torch.bool if mask_thr_binary >= 0 else torch.uint8) - results.masks = im_mask - results_list.append(results) - return results_list - - -def multi_apply(func, *args, **kwargs): - """Apply function to a list of arguments. - - Note: - This function applies the ``func`` to multiple inputs and - map the multiple outputs of the ``func`` into different - list. Each list contains the same type of outputs corresponding - to different inputs. - - Args: - func (Function): A function that will be applied to a list of - arguments - - Returns: - tuple(list): A tuple containing multiple list, each list contains \ - a kind of returned results by the function - """ - pfunc = partial(func, **kwargs) if kwargs else func - map_results = map(pfunc, *args) - return tuple(map(list, zip(*map_results))) - - -def unmap(data, count, inds, fill=0): - """Unmap a subset of item (data) back to the original set of items (of size - count)""" - if data.dim() == 1: - ret = data.new_full((count, ), fill) - ret[inds.type(torch.bool)] = data - else: - new_size = (count, ) + data.size()[1:] - ret = data.new_full(new_size, fill) - ret[inds.type(torch.bool), :] = data - return ret - - -def mask2ndarray(mask): - """Convert Mask to ndarray.. - - Args: - mask (:obj:`BitmapMasks` or :obj:`PolygonMasks` or - torch.Tensor or np.ndarray): The mask to be converted. - - Returns: - np.ndarray: Ndarray mask of shape (n, h, w) that has been converted - """ - if isinstance(mask, (BitmapMasks, PolygonMasks)): - mask = mask.to_ndarray() - elif isinstance(mask, torch.Tensor): - mask = mask.detach().cpu().numpy() - elif not isinstance(mask, np.ndarray): - raise TypeError(f'Unsupported {type(mask)} data type') - return mask - - -def flip_tensor(src_tensor, flip_direction): - """flip tensor base on flip_direction. - - Args: - src_tensor (Tensor): input feature map, shape (B, C, H, W). - flip_direction (str): The flipping direction. Options are - 'horizontal', 'vertical', 'diagonal'. - - Returns: - out_tensor (Tensor): Flipped tensor. 
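`multi_apply` runs a per-level or per-image function and regroups its tuple outputs; the following self-contained sketch (toy function and numbers, not from the original code) shows how the per-call tuples get transposed into separate lists.

```python
from functools import partial


def multi_apply(func, *args, **kwargs):
    # Same behaviour as the utility above: map func over the inputs and
    # transpose the per-call tuples into per-output lists.
    pfunc = partial(func, **kwargs) if kwargs else func
    return tuple(map(list, zip(*map(pfunc, *args))))


def _area_and_perimeter(w, h):
    return w * h, 2 * (w + h)


areas, perimeters = multi_apply(_area_and_perimeter, [2, 3, 4], [5, 6, 7])
assert areas == [10, 18, 28]
assert perimeters == [14, 18, 22]
```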
- """ - assert src_tensor.ndim == 4 - valid_directions = ['horizontal', 'vertical', 'diagonal'] - assert flip_direction in valid_directions - if flip_direction == 'horizontal': - out_tensor = torch.flip(src_tensor, [3]) - elif flip_direction == 'vertical': - out_tensor = torch.flip(src_tensor, [2]) - else: - out_tensor = torch.flip(src_tensor, [2, 3]) - return out_tensor - - -def select_single_mlvl(mlvl_tensors, batch_id, detach=True): - """Extract a multi-scale single image tensor from a multi-scale batch - tensor based on batch index. - - Note: The default value of detach is True, because the proposal gradient - needs to be detached during the training of the two-stage model. E.g - Cascade Mask R-CNN. - - Args: - mlvl_tensors (list[Tensor]): Batch tensor for all scale levels, - each is a 4D-tensor. - batch_id (int): Batch index. - detach (bool): Whether detach gradient. Default True. - - Returns: - list[Tensor]: Multi-scale single image tensor. - """ - assert isinstance(mlvl_tensors, (list, tuple)) - num_levels = len(mlvl_tensors) - - if detach: - mlvl_tensor_list = [ - mlvl_tensors[i][batch_id].detach() for i in range(num_levels) - ] - else: - mlvl_tensor_list = [ - mlvl_tensors[i][batch_id] for i in range(num_levels) - ] - return mlvl_tensor_list - - -def filter_scores_and_topk(scores, score_thr, topk, results=None): - """Filter results using score threshold and topk candidates. - - Args: - scores (Tensor): The scores, shape (num_bboxes, K). - score_thr (float): The score filter threshold. - topk (int): The number of topk candidates. - results (dict or list or Tensor, Optional): The results to - which the filtering rule is to be applied. The shape - of each item is (num_bboxes, N). - - Returns: - tuple: Filtered results - - - scores (Tensor): The scores after being filtered, \ - shape (num_bboxes_filtered, ). - - labels (Tensor): The class labels, shape \ - (num_bboxes_filtered, ). - - anchor_idxs (Tensor): The anchor indexes, shape \ - (num_bboxes_filtered, ). - - filtered_results (dict or list or Tensor, Optional): \ - The filtered results. The shape of each item is \ - (num_bboxes_filtered, N). - """ - valid_mask = scores > score_thr - scores = scores[valid_mask] - valid_idxs = torch.nonzero(valid_mask) - - num_topk = min(topk, valid_idxs.size(0)) - # torch.sort is actually faster than .topk (at least on GPUs) - scores, idxs = scores.sort(descending=True) - scores = scores[:num_topk] - topk_idxs = valid_idxs[idxs[:num_topk]] - keep_idxs, labels = topk_idxs.unbind(dim=1) - - filtered_results = None - if results is not None: - if isinstance(results, dict): - filtered_results = {k: v[keep_idxs] for k, v in results.items()} - elif isinstance(results, list): - filtered_results = [result[keep_idxs] for result in results] - elif isinstance(results, torch.Tensor): - filtered_results = results[keep_idxs] - else: - raise NotImplementedError(f'Only supports dict or list or Tensor, ' - f'but get {type(results)}.') - return scores, labels, keep_idxs, filtered_results - - -def center_of_mass(mask, esp=1e-6): - """Calculate the centroid coordinates of the mask. - - Args: - mask (Tensor): The mask to be calculated, shape (h, w). - esp (float): Avoid dividing by zero. Default: 1e-6. - - Returns: - tuple[Tensor]: the coordinates of the center point of the mask. - - - center_h (Tensor): the center point of the height. - - center_w (Tensor): the center point of the width. 
- """ - h, w = mask.shape - grid_h = torch.arange(h, device=mask.device)[:, None] - grid_w = torch.arange(w, device=mask.device) - normalizer = mask.sum().float().clamp(min=esp) - center_h = (mask * grid_h).sum() / normalizer - center_w = (mask * grid_w).sum() / normalizer - return center_h, center_w - - -def generate_coordinate(featmap_sizes, device='cuda'): - """Generate the coordinate. - - Args: - featmap_sizes (tuple): The feature to be calculated, - of shape (N, C, W, H). - device (str): The device where the feature will be put on. - Returns: - coord_feat (Tensor): The coordinate feature, of shape (N, 2, W, H). - """ - - x_range = torch.linspace(-1, 1, featmap_sizes[-1], device=device) - y_range = torch.linspace(-1, 1, featmap_sizes[-2], device=device) - y, x = torch.meshgrid(y_range, x_range) - y = y.expand([featmap_sizes[0], 1, -1, -1]) - x = x.expand([featmap_sizes[0], 1, -1, -1]) - coord_feat = torch.cat([x, y], 1) - - return coord_feat - - -def levels_to_images(mlvl_tensor: List[torch.Tensor]) -> List[torch.Tensor]: - """Concat multi-level feature maps by image. - - [feature_level0, feature_level1...] -> [feature_image0, feature_image1...] - Convert the shape of each element in mlvl_tensor from (N, C, H, W) to - (N, H*W , C), then split the element to N elements with shape (H*W, C), and - concat elements in same image of all level along first dimension. - - Args: - mlvl_tensor (list[Tensor]): list of Tensor which collect from - corresponding level. Each element is of shape (N, C, H, W) - - Returns: - list[Tensor]: A list that contains N tensors and each tensor is - of shape (num_elements, C) - """ - batch_size = mlvl_tensor[0].size(0) - batch_list = [[] for _ in range(batch_size)] - channels = mlvl_tensor[0].size(1) - for t in mlvl_tensor: - t = t.permute(0, 2, 3, 1) - t = t.view(batch_size, -1, channels).contiguous() - for img in range(batch_size): - batch_list[img].append(t[img]) - return [torch.cat(item, 0) for item in batch_list] - - -def images_to_levels(target, num_levels): - """Convert targets by image to targets by feature level. - - [target_img0, target_img1] -> [target_level0, target_level1, ...] 
- """ - target = stack_boxes(target, 0) - level_targets = [] - start = 0 - for n in num_levels: - end = start + n - # level_targets.append(target[:, start:end].squeeze(0)) - level_targets.append(target[:, start:end]) - start = end - return level_targets - - -def samplelist_boxtype2tensor(batch_data_samples: SampleList) -> SampleList: - for data_samples in batch_data_samples: - if 'gt_instances' in data_samples: - bboxes = data_samples.gt_instances.get('bboxes', None) - if isinstance(bboxes, BaseBoxes): - data_samples.gt_instances.bboxes = bboxes.tensor - if 'pred_instances' in data_samples: - bboxes = data_samples.pred_instances.get('bboxes', None) - if isinstance(bboxes, BaseBoxes): - data_samples.pred_instances.bboxes = bboxes.tensor - if 'ignored_instances' in data_samples: - bboxes = data_samples.ignored_instances.get('bboxes', None) - if isinstance(bboxes, BaseBoxes): - data_samples.ignored_instances.bboxes = bboxes.tensor - - -_torch_version_div_indexing = ( - 'parrots' not in torch.__version__ - and digit_version(torch.__version__) >= digit_version('1.8')) - - -def floordiv(dividend, divisor, rounding_mode='trunc'): - if _torch_version_div_indexing: - return torch.div(dividend, divisor, rounding_mode=rounding_mode) - else: - return dividend // divisor - - -def _filter_gt_instances_by_score(batch_data_samples: SampleList, - score_thr: float) -> SampleList: - """Filter ground truth (GT) instances by score. - - Args: - batch_data_samples (SampleList): The Data - Samples. It usually includes information such as - `gt_instance`, `gt_panoptic_seg` and `gt_sem_seg`. - score_thr (float): The score filter threshold. - - Returns: - SampleList: The Data Samples filtered by score. - """ - for data_samples in batch_data_samples: - assert 'scores' in data_samples.gt_instances, \ - 'there does not exit scores in instances' - if data_samples.gt_instances.bboxes.shape[0] > 0: - data_samples.gt_instances = data_samples.gt_instances[ - data_samples.gt_instances.scores > score_thr] - return batch_data_samples - - -def _filter_gt_instances_by_size(batch_data_samples: SampleList, - wh_thr: tuple) -> SampleList: - """Filter ground truth (GT) instances by size. - - Args: - batch_data_samples (SampleList): The Data - Samples. It usually includes information such as - `gt_instance`, `gt_panoptic_seg` and `gt_sem_seg`. - wh_thr (tuple): Minimum width and height of bbox. - - Returns: - SampleList: The Data Samples filtered by score. - """ - for data_samples in batch_data_samples: - bboxes = data_samples.gt_instances.bboxes - if bboxes.shape[0] > 0: - w = bboxes[:, 2] - bboxes[:, 0] - h = bboxes[:, 3] - bboxes[:, 1] - data_samples.gt_instances = data_samples.gt_instances[ - (w > wh_thr[0]) & (h > wh_thr[1])] - return batch_data_samples - - -def filter_gt_instances(batch_data_samples: SampleList, - score_thr: float = None, - wh_thr: tuple = None): - """Filter ground truth (GT) instances by score and/or size. - - Args: - batch_data_samples (SampleList): The Data - Samples. It usually includes information such as - `gt_instance`, `gt_panoptic_seg` and `gt_sem_seg`. - score_thr (float): The score filter threshold. - wh_thr (tuple): Minimum width and height of bbox. - - Returns: - SampleList: The Data Samples filtered by score and/or size. 
- """ - - if score_thr is not None: - batch_data_samples = _filter_gt_instances_by_score( - batch_data_samples, score_thr) - if wh_thr is not None: - batch_data_samples = _filter_gt_instances_by_size( - batch_data_samples, wh_thr) - return batch_data_samples - - -def rename_loss_dict(prefix: str, losses: dict) -> dict: - """Rename the key names in loss dict by adding a prefix. - - Args: - prefix (str): The prefix for loss components. - losses (dict): A dictionary of loss components. - - Returns: - dict: A dictionary of loss components with prefix. - """ - return {prefix + k: v for k, v in losses.items()} - - -def reweight_loss_dict(losses: dict, weight: float) -> dict: - """Reweight losses in the dict by weight. - - Args: - losses (dict): A dictionary of loss components. - weight (float): Weight for loss components. - - Returns: - dict: A dictionary of weighted loss components. - """ - for name, loss in losses.items(): - if 'loss' in name: - if isinstance(loss, Sequence): - losses[name] = [item * weight for item in loss] - else: - losses[name] = loss * weight - return losses - - -def relative_coordinate_maps( - locations: Tensor, - centers: Tensor, - strides: Tensor, - size_of_interest: int, - feat_sizes: Tuple[int], -) -> Tensor: - """Generate the relative coordinate maps with feat_stride. - - Args: - locations (Tensor): The prior location of mask feature map. - It has shape (num_priors, 2). - centers (Tensor): The prior points of a object in - all feature pyramid. It has shape (num_pos, 2) - strides (Tensor): The prior strides of a object in - all feature pyramid. It has shape (num_pos, 1) - size_of_interest (int): The size of the region used in rel coord. - feat_sizes (Tuple[int]): The feature size H and W, which has 2 dims. - Returns: - rel_coord_feat (Tensor): The coordinate feature - of shape (num_pos, 2, H, W). 
- """ - - H, W = feat_sizes - rel_coordinates = centers.reshape(-1, 1, 2) - locations.reshape(1, -1, 2) - rel_coordinates = rel_coordinates.permute(0, 2, 1).float() - rel_coordinates = rel_coordinates / ( - strides[:, None, None] * size_of_interest) - return rel_coordinates.reshape(-1, 2, H, W) - - -def aligned_bilinear(tensor: Tensor, factor: int) -> Tensor: - """aligned bilinear, used in original implement in CondInst: - - https://github.com/aim-uofa/AdelaiDet/blob/\ - c0b2092ce72442b0f40972f7c6dda8bb52c46d16/adet/utils/comm.py#L23 - """ - - assert tensor.dim() == 4 - assert factor >= 1 - assert int(factor) == factor - - if factor == 1: - return tensor - - h, w = tensor.size()[2:] - tensor = F.pad(tensor, pad=(0, 1, 0, 1), mode='replicate') - oh = factor * h + 1 - ow = factor * w + 1 - tensor = F.interpolate( - tensor, size=(oh, ow), mode='bilinear', align_corners=True) - tensor = F.pad( - tensor, pad=(factor // 2, 0, factor // 2, 0), mode='replicate') - - return tensor[:, :, :oh - 1, :ow - 1] - - -def unfold_wo_center(x, kernel_size: int, dilation: int) -> Tensor: - """unfold_wo_center, used in original implement in BoxInst: - - https://github.com/aim-uofa/AdelaiDet/blob/\ - 4a3a1f7372c35b48ebf5f6adc59f135a0fa28d60/\ - adet/modeling/condinst/condinst.py#L53 - """ - assert x.dim() == 4 - assert kernel_size % 2 == 1 - - # using SAME padding - padding = (kernel_size + (dilation - 1) * (kernel_size - 1)) // 2 - unfolded_x = F.unfold( - x, kernel_size=kernel_size, padding=padding, dilation=dilation) - unfolded_x = unfolded_x.reshape( - x.size(0), x.size(1), -1, x.size(2), x.size(3)) - # remove the center pixels - size = kernel_size**2 - unfolded_x = torch.cat( - (unfolded_x[:, :, :size // 2], unfolded_x[:, :, size // 2 + 1:]), - dim=2) - - return unfolded_x diff --git a/spaces/Kyo-Kai/Fsg-pp/sites/danbooru.py b/spaces/Kyo-Kai/Fsg-pp/sites/danbooru.py deleted file mode 100644 index 529070d307f4d248e313216edd905fdce9829a08..0000000000000000000000000000000000000000 --- a/spaces/Kyo-Kai/Fsg-pp/sites/danbooru.py +++ /dev/null @@ -1,213 +0,0 @@ -import time -import urllib.request -import os -from random import randint -from selenium.webdriver.common.by import By -from selenium.webdriver.common.keys import Keys -from selenium.webdriver.support.ui import WebDriverWait -from selenium.webdriver.support import expected_conditions as EC -from commands.driver_instance import create_url_headers, tab_handler -from commands.exec_path import imgList -from commands.universal import searchQuery, save_Search, continue_Search, contains_works -from ai.classifying_ai import img_classifier - -def getOrderedDanbooruImages(driver, exec_path, user_search, num_pics, num_pages, filters, bl_tags, inc_tags, imageControl): - global image_locations, bl_tags_list, inc_tags_list, image_names, ai_mode,rating_filters - image_names = imgList(mode=0) - image_locations = [] - link = "https://danbooru.donmai.us/" - - if 0 in imageControl: - continue_Search(driver, link, mode=1) - else: - driver.get(link) - - # Rating Filter Creation - rating_filters = ["e"] if 2 in filters else [] - rating_filters = ["s","e"] if 3 in filters else [] - rating_filters = ["q","s","e"] if 4 in filters else [] - - # Tag list creation - score = 1 if 0 in filters else 0 - match_type = 1 if 1 in filters else 0 - r_18 = pg_lenient() if 2 in filters else [] - r_18 = pg_strict() if 3 in filters else r_18 - ai_mode = 1 if 5 in filters else 0 - - continue_search = 1 if imageControl else 0 - - # Replace spaces to make spaces feasible by the user - user_search = 
user_search.replace(" ", "_") - score = filter_score(score) - - bl_tags_list = create_filter_tag_list(bl_tags, r_18) - inc_tags_list = create_tag_list(inc_tags, match_type) if inc_tags else [] - - if 0 not in imageControl: - searchQuery(user_search, driver, '//*[@name="tags"]', mode=1, score=score) - - if not contains_works(driver, '//*[@class="posts-container gap-2"]'): - print("No works found...") - return [] - - if ai_mode: - WebDriverWait(driver, timeout=11).until(EC.presence_of_element_located((By.XPATH, '//*[@class="popup-menu-content"]'))) - driver.get(driver.find_element(By.XPATH, '(//*[@class="popup-menu-content"]//li)[6]//a').get_attribute("href")) - - curr_page = driver.current_url - while len(image_locations) < num_pics*num_pages: - pages_to_search(driver, num_pages, num_pics, exec_path) - if curr_page == driver.current_url and len(image_locations) < num_pics*num_pages: - print("Reached end of search results") - break - curr_page = driver.current_url - driver.close() - - return image_locations - -def filter_score(score): - if score: - return " order:score" - return "" - -def pages_to_search(driver, num_pages, num_pics, exec_path): - for i in range(num_pages): - WebDriverWait(driver, timeout=11).until(EC.presence_of_element_located((By.XPATH, '//*[@class="posts-container gap-2"]'))) - # Selects the picture grids - images = driver.find_element( - By.XPATH, '//*[@class="posts-container gap-2"]' - ).find_elements(By.CLASS_NAME, "post-preview-link") - grid_search(driver, num_pics, images, exec_path, num_pages) - save_Search(driver, mode=1) - if not valid_page(driver) or len(image_locations) >= num_pics*num_pages: - break - -def grid_search(driver, num_pics, images, exec_path, num_pages): - temp_img_len = len(image_locations) - for n_iter, image in enumerate(images): - if len(image_locations) >= num_pics*num_pages or len(image_locations) - temp_img_len >= num_pics: - break - - try: - if image.find_element(By.XPATH, ".//img").get_attribute('src').split("/")[-1].split(".")[0].encode("ascii", "ignore").decode("ascii") in image_names: - print("\nImage already exists, moving to another image...") - continue - - # Has to be checked this way otherwise tags are not visible in headless mode - img_tags = driver.find_elements(By.CLASS_NAME, "post-preview")[n_iter].get_attribute('data-tags') - img_rating = driver.find_elements(By.CLASS_NAME, "post-preview")[n_iter].get_attribute('data-rating') - - if filter_ratings(img_rating,rating_filters) and filter_tags(bl_tags_list, inc_tags_list, img_tags): - - - if ai_mode: - checker = 0 - image_loc = download_image(exec_path=exec_path, driver=driver, image=image) - if img_classifier(image_loc): - print("AI Mode: I approve this image") - else: - print("AI Mode: Skipping this image") - checker = 1 - os.remove(image_loc) - if checker: - continue - - driver, tempImg = tab_handler(driver=driver,image=image) - WebDriverWait(driver, timeout=15).until(EC.presence_of_element_located((By.XPATH, '//*[@id="post-option-download"]/a'))) - download_image(exec_path=exec_path, driver=driver) - driver = tab_handler(driver=driver) - - else: - print("\nFilters did not match/Not an image, moving to another image...") - - except: - print("\nI ran into an error, closing the tab and moving on...") - if driver.window_handles[-1] != driver.window_handles[0]: - driver = tab_handler(driver=driver) - time.sleep(randint(0,2) + randint(0,9)/10) - -def filter_ratings(img_rating,rating_filters): - if img_rating not in rating_filters: - return True - return False - -def 
filter_tags(bl_tags_list, inc_tags_list, img_tags): - # Hashmap of picture's tags for O(1) time searching - img_hash = {} - for img_tag in img_tags.split(" "): - img_hash[img_tag] = 1 - - # Included tags (exact match or not exact) - if inc_tags_list and inc_tags_list[-1] == 1: - inc_tags_list.pop() - for tag in inc_tags_list: - if not img_hash.get(tag, 0): - return False - elif inc_tags_list: - cond = False - for tag in inc_tags_list: - if img_hash.get(tag, 0): - cond = True - break - if not cond: - return False - - # Note that bl_tags_list is never empty since it filters videos - for tag in bl_tags_list: - if img_hash.get(tag,0): - return False - return True - -def create_tag_list(inc_tags, match_type): - temp_tags = [tag.lstrip().replace(" ","_") for tag in inc_tags.split(",")] - if match_type: - temp_tags.append(1) - return temp_tags - -def create_filter_tag_list(bl_tags, r_18): - temp_tags = ["animated", "video", "sound"] - if bl_tags: - temp_tags += [tag.lstrip().replace(" ","_") for tag in bl_tags.split(",")] - if r_18: - temp_tags += r_18 - return temp_tags - -# Find the next page and ensure it isn't the last page -def valid_page(driver): - cur_url = driver.current_url - driver.find_element(By.CLASS_NAME, "paginator-next").click() - if cur_url == driver.current_url: - return 0 - return 1 - -def pg_lenient(): - return ["sex","penis","vaginal","completely_nude","nude","exposed_boobs","ahegao","cum","no_panties","no_bra", - "nipple_piercing", "anal_fluid","uncensored", "see-through", "pussy", "cunnilingus", "oral", "ass_focus", - "anal", "sex_from_behind", "cum_on_clothes", "cum_on_face", "nipple","nipples", "missionary" - "fellatio", "rape", "breasts_out","cum_in_pussy", "condom", "dildo", "sex_toy", "cum_in_mouth", "heavy_breathing", "cum_on_tongue" - "panties", "panty_pull", "nude_cover", "underwear_only","grabbing_own_breast","ass_grab","censored","areola_slip","areolae","torn_pantyhose","micro_bikini","steaming_body"] - -def pg_strict(): - return pg_lenient() + ["piercings", "cleavage","boobs","thongs","fellatio_gesture", "mosaic_censoring", "ass", "mosaic_censoring", - "covered_nipples", "thigh_focus", "thighs", "bikini", "swimsuit", "grabbing_another's_breast", "huge_breasts", - "foot_focus", "licking_foot", "foot_worship", "shirt_lift","clothes_lift", "underwear", "panties_under_pantyhose"] - -def download_image(exec_path, driver, image=0): - if not image: - tempDL = driver.find_element(By.XPATH, '//*[@id="post-option-download"]/a') - tempDLAttr = tempDL.get_attribute("href") - tempDLName = tempDL.get_attribute("download").encode('ascii', 'ignore').decode('ascii') - else: - tempDLAttr = image.find_element(By.XPATH, ".//img").get_attribute('src') - tempDLName = tempDLAttr.split("/")[-1].encode("ascii", "ignore").decode("ascii") - print(f"\n{tempDLAttr.split('?')[0]}") - - img_loc = f"./{exec_path.folder_path}/{tempDLName}" - urllib.request.install_opener(create_url_headers(driver.current_url)) - urllib.request.urlretrieve( - tempDLAttr, f"./{exec_path.folder_path}/{tempDLName}" - ) - if not image: - image_locations.append(f"./{exec_path.folder_path}/{tempDLName}") - image_names.append(f"{tempDLName.split('.')[0]}") - return img_loc \ No newline at end of file diff --git a/spaces/LDY/Text-To-Image/README.md b/spaces/LDY/Text-To-Image/README.md deleted file mode 100644 index f76ecc749181e8e3e2e4875dbd7372e97d919c4b..0000000000000000000000000000000000000000 --- a/spaces/LDY/Text-To-Image/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Text To Image -emoji: 💻 -colorFrom: blue 
-colorTo: blue -sdk: gradio -sdk_version: 3.1.0 -app_file: app.py -pinned: false -license: afl-3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Lamai/LAMAIGPT/tests/test_image_gen.py b/spaces/Lamai/LAMAIGPT/tests/test_image_gen.py deleted file mode 100644 index 19c57e427d5c1b84aa7f72925733d0056ddf5268..0000000000000000000000000000000000000000 --- a/spaces/Lamai/LAMAIGPT/tests/test_image_gen.py +++ /dev/null @@ -1,102 +0,0 @@ -import hashlib -import os -import unittest - -from PIL import Image - -from autogpt.commands.image_gen import generate_image, generate_image_with_sd_webui -from autogpt.config import Config -from autogpt.workspace import path_in_workspace - - -def lst(txt): - return txt.split(":")[1].strip() - - -@unittest.skipIf(os.getenv("CI"), "Skipping image generation tests") -class TestImageGen(unittest.TestCase): - def setUp(self): - self.config = Config() - - def test_dalle(self): - self.config.image_provider = "dalle" - - # Test using size 256 - result = lst(generate_image("astronaut riding a horse", 256)) - image_path = path_in_workspace(result) - self.assertTrue(image_path.exists()) - with Image.open(image_path) as img: - self.assertEqual(img.size, (256, 256)) - image_path.unlink() - - # Test using size 512 - result = lst(generate_image("astronaut riding a horse", 512)) - image_path = path_in_workspace(result) - with Image.open(image_path) as img: - self.assertEqual(img.size, (512, 512)) - image_path.unlink() - - def test_huggingface(self): - self.config.image_provider = "huggingface" - - # Test usin SD 1.4 model and size 512 - self.config.huggingface_image_model = "CompVis/stable-diffusion-v1-4" - result = lst(generate_image("astronaut riding a horse", 512)) - image_path = path_in_workspace(result) - self.assertTrue(image_path.exists()) - with Image.open(image_path) as img: - self.assertEqual(img.size, (512, 512)) - image_path.unlink() - - # Test using SD 2.1 768 model and size 768 - self.config.huggingface_image_model = "stabilityai/stable-diffusion-2-1" - result = lst(generate_image("astronaut riding a horse", 768)) - image_path = path_in_workspace(result) - with Image.open(image_path) as img: - self.assertEqual(img.size, (768, 768)) - image_path.unlink() - - def test_sd_webui(self): - self.config.image_provider = "sd_webui" - return - - # Test using size 128 - result = lst(generate_image_with_sd_webui("astronaut riding a horse", 128)) - image_path = path_in_workspace(result) - self.assertTrue(image_path.exists()) - with Image.open(image_path) as img: - self.assertEqual(img.size, (128, 128)) - image_path.unlink() - - # Test using size 64 and negative prompt - result = lst( - generate_image_with_sd_webui( - "astronaut riding a horse", - negative_prompt="horse", - size=64, - extra={"seed": 123}, - ) - ) - image_path = path_in_workspace(result) - with Image.open(image_path) as img: - self.assertEqual(img.size, (64, 64)) - neg_image_hash = hashlib.md5(img.tobytes()).hexdigest() - image_path.unlink() - - # Same test as above but without the negative prompt - result = lst( - generate_image_with_sd_webui( - "astronaut riding a horse", image_size=64, size=1, extra={"seed": 123} - ) - ) - image_path = path_in_workspace(result) - with Image.open(image_path) as img: - self.assertEqual(img.size, (64, 64)) - image_hash = hashlib.md5(img.tobytes()).hexdigest() - image_path.unlink() - - self.assertNotEqual(image_hash, neg_image_hash) - - -if __name__ == "__main__": - unittest.main() diff --git 
a/spaces/LandonBurlingham/07-Seq2Seq/README.md b/spaces/LandonBurlingham/07-Seq2Seq/README.md deleted file mode 100644 index 4b7cc8b8b1bb2d5fe158af5e57a2f34e12fe7a07..0000000000000000000000000000000000000000 --- a/spaces/LandonBurlingham/07-Seq2Seq/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: 07 Seq2Seq -emoji: 👁 -colorFrom: indigo -colorTo: purple -sdk: gradio -sdk_version: 3.6 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/LanguageBind/LanguageBind/languagebind/image/modeling_image.py b/spaces/LanguageBind/LanguageBind/languagebind/image/modeling_image.py deleted file mode 100644 index 7228f5daed51a2f2b0c94d9fd68076eff1a39ae1..0000000000000000000000000000000000000000 --- a/spaces/LanguageBind/LanguageBind/languagebind/image/modeling_image.py +++ /dev/null @@ -1,1030 +0,0 @@ -import math -from typing import Optional, Tuple, Union - -import torch -from einops import rearrange -from peft import LoraConfig, get_peft_model -from torch import nn -from torch.nn import functional as F -from transformers import PreTrainedModel, add_start_docstrings -from transformers.modeling_outputs import BaseModelOutput, BaseModelOutputWithPooling -from transformers.models.clip.modeling_clip import CLIPMLP, CLIPAttention, CLIPTextEmbeddings, CLIPVisionEmbeddings, \ - CLIPVisionModelWithProjection, CLIPTextModelWithProjection, _expand_mask, CLIPOutput, clip_loss -from transformers.utils import add_start_docstrings_to_model_forward, replace_return_docstrings - -from .configuration_image import LanguageBindImageConfig, CLIPVisionConfig, CLIPTextConfig - - - -class PatchDropout(nn.Module): - """ - https://arxiv.org/abs/2212.00794 - """ - - def __init__(self, prob, exclude_first_token=True): - super().__init__() - assert 0 <= prob < 1. 
- self.prob = prob - self.exclude_first_token = exclude_first_token # exclude CLS token - - def forward(self, x, B, T): - if not self.training or self.prob == 0.: - return x - - if self.exclude_first_token: - cls_tokens, x = x[:, :1], x[:, 1:] - else: - cls_tokens = torch.jit.annotate(torch.Tensor, x[:, :1]) - - batch = x.size()[0] - num_tokens = x.size()[1] - - batch_indices = torch.arange(batch) - batch_indices = batch_indices[..., None] - - keep_prob = 1 - self.prob - num_patches_keep = max(1, int(num_tokens * keep_prob)) - - if T == 1: - rand = torch.randn(batch, num_tokens) - patch_indices_keep = rand.topk(num_patches_keep, dim=-1).indices - else: - rand = torch.randn(B, num_tokens) - patch_indices_keep = rand.topk(num_patches_keep, dim=-1).indices - patch_indices_keep = patch_indices_keep.unsqueeze(1).repeat(1, T, 1) - patch_indices_keep = rearrange(patch_indices_keep, 'b t n -> (b t) n') - - - x = x[batch_indices, patch_indices_keep] - - if self.exclude_first_token: - x = torch.cat((cls_tokens, x), dim=1) - - return x - -class CLIPEncoderLayer(nn.Module): - def __init__(self, config: LanguageBindImageConfig): - super().__init__() - self.embed_dim = config.hidden_size - self.self_attn = CLIPAttention(config) - self.layer_norm1 = nn.LayerNorm(self.embed_dim, eps=config.layer_norm_eps) - self.mlp = CLIPMLP(config) - self.layer_norm2 = nn.LayerNorm(self.embed_dim, eps=config.layer_norm_eps) - - self.add_time_attn = config.add_time_attn - if self.add_time_attn: - self.t = config.num_frames - self.temporal_embedding = nn.Parameter(torch.zeros(1, config.num_frames, config.hidden_size)) - nn.init.normal_(self.temporal_embedding, std=config.hidden_size ** -0.5) - - self.embed_dim = config.hidden_size - self.temporal_attn = CLIPAttention(config) - self.temporal_layer_norm1 = nn.LayerNorm(self.embed_dim, eps=config.layer_norm_eps) - self.temporal_mlp = CLIPMLP(config) - self.temporal_layer_norm2 = nn.LayerNorm(self.embed_dim, eps=config.layer_norm_eps) - - def forward( - self, - hidden_states: torch.Tensor, - attention_mask: torch.Tensor, - causal_attention_mask: torch.Tensor, - output_attentions: Optional[bool] = False, - ) -> Tuple[torch.FloatTensor]: - """ - Args: - hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)` - attention_mask (`torch.FloatTensor`): attention mask of size - `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values. - `(config.encoder_attention_heads,)`. - output_attentions (`bool`, *optional*): - Whether or not to return the attentions tensors of all attention layers. See `attentions` under - returned tensors for more detail. 
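The temporal attention added to this encoder layer works by folding frames out of the batch dimension so that attention runs over time for each spatial token; this small einops sketch (toy shapes, random data) shows the round trip performed by the layer's `rearrange` calls.

```python
import torch
from einops import rearrange

b, t, n, d = 2, 8, 50, 16                 # toy: 2 clips, 8 frames, 50 tokens
x = torch.randn(b * t, n, d)              # frames folded into the batch dim

x_time = rearrange(x, '(b t) n d -> (b n) t d', t=t)
assert x_time.shape == (b * n, t, d)      # attention over time, per token

# ... temporal self-attention would run on x_time here ...

x_back = rearrange(x_time, '(b n) t d -> (b t) n d', n=n)
assert torch.equal(x_back, x)             # pure permutation, values unchanged
```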
- """ - - - if self.add_time_attn: - bt, n, d = hidden_states.shape - t = self.t - - # time embed - if t != 1: - n = hidden_states.shape[1] - hidden_states = rearrange(hidden_states, '(b t) n d -> (b n) t d', t=t) - hidden_states = hidden_states + self.temporal_embedding[:, :t, :] - hidden_states = rearrange(hidden_states, '(b n) t d -> (b t) n d', n=n) - - # time attn - residual = hidden_states - hidden_states = rearrange(hidden_states, '(b t) n d -> (b n) t d', t=t) - # hidden_states = self.layer_norm1(hidden_states) # share layernorm - hidden_states = self.temporal_layer_norm1(hidden_states) - hidden_states, attn_weights = self.temporal_attn( - hidden_states=hidden_states, - attention_mask=attention_mask, - causal_attention_mask=causal_attention_mask, - output_attentions=output_attentions, - ) - hidden_states = residual + rearrange(hidden_states, '(b n) t d -> (b t) n d', n=n) - - residual = hidden_states - hidden_states = rearrange(hidden_states, '(b t) n d -> (b n) t d', t=t) - # hidden_states = self.layer_norm2(hidden_states) # share layernorm - hidden_states = self.temporal_layer_norm2(hidden_states) - hidden_states = self.temporal_mlp(hidden_states) - hidden_states = residual + rearrange(hidden_states, '(b n) t d -> (b t) n d', n=n) - - # spatial attn - residual = hidden_states - - hidden_states = self.layer_norm1(hidden_states) - hidden_states, attn_weights = self.self_attn( - hidden_states=hidden_states, - attention_mask=attention_mask, - causal_attention_mask=causal_attention_mask, - output_attentions=output_attentions, - ) - hidden_states = residual + hidden_states - - residual = hidden_states - hidden_states = self.layer_norm2(hidden_states) - hidden_states = self.mlp(hidden_states) - hidden_states = residual + hidden_states - - outputs = (hidden_states,) - - if output_attentions: - outputs += (attn_weights,) - - return outputs - - - - - - - - - -class CLIPPreTrainedModel(PreTrainedModel): - """ - An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained - models. 
- """ - - config_class = LanguageBindImageConfig - base_model_prefix = "clip" - supports_gradient_checkpointing = True - _keys_to_ignore_on_load_missing = [r"position_ids"] - - def _init_weights(self, module): - """Initialize the weights""" - factor = self.config.initializer_factor - if isinstance(module, CLIPTextEmbeddings): - module.token_embedding.weight.data.normal_(mean=0.0, std=factor * 0.02) - module.position_embedding.weight.data.normal_(mean=0.0, std=factor * 0.02) - elif isinstance(module, CLIPVisionEmbeddings): - factor = self.config.initializer_factor - nn.init.normal_(module.class_embedding, mean=0.0, std=module.embed_dim**-0.5 * factor) - nn.init.normal_(module.patch_embedding.weight, std=module.config.initializer_range * factor) - nn.init.normal_(module.position_embedding.weight, std=module.config.initializer_range * factor) - elif isinstance(module, CLIPAttention): - factor = self.config.initializer_factor - in_proj_std = (module.embed_dim**-0.5) * ((2 * module.config.num_hidden_layers) ** -0.5) * factor - out_proj_std = (module.embed_dim**-0.5) * factor - nn.init.normal_(module.q_proj.weight, std=in_proj_std) - nn.init.normal_(module.k_proj.weight, std=in_proj_std) - nn.init.normal_(module.v_proj.weight, std=in_proj_std) - nn.init.normal_(module.out_proj.weight, std=out_proj_std) - elif isinstance(module, CLIPMLP): - factor = self.config.initializer_factor - in_proj_std = ( - (module.config.hidden_size**-0.5) * ((2 * module.config.num_hidden_layers) ** -0.5) * factor - ) - fc_std = (2 * module.config.hidden_size) ** -0.5 * factor - nn.init.normal_(module.fc1.weight, std=fc_std) - nn.init.normal_(module.fc2.weight, std=in_proj_std) - elif isinstance(module, LanguageBindImage): - nn.init.normal_( - module.text_projection.weight, - std=module.text_embed_dim**-0.5 * self.config.initializer_factor, - ) - nn.init.normal_( - module.visual_projection.weight, - std=module.vision_embed_dim**-0.5 * self.config.initializer_factor, - ) - elif isinstance(module, CLIPVisionModelWithProjection): - nn.init.normal_( - module.visual_projection.weight, - std=self.config.hidden_size**-0.5 * self.config.initializer_factor, - ) - elif isinstance(module, CLIPTextModelWithProjection): - nn.init.normal_( - module.text_projection.weight, - std=self.config.hidden_size**-0.5 * self.config.initializer_factor, - ) - - if isinstance(module, nn.LayerNorm): - module.bias.data.zero_() - module.weight.data.fill_(1.0) - if isinstance(module, nn.Linear) and module.bias is not None: - module.bias.data.zero_() - - def _set_gradient_checkpointing(self, module, value=False): - if isinstance(module, CLIPEncoder): - module.gradient_checkpointing = value - - -CLIP_START_DOCSTRING = r""" - This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the - library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads - etc.) - - This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. - Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage - and behavior. - - Parameters: - config ([`CLIPConfig`]): Model configuration class with all the parameters of the model. - Initializing with a config file does not load the weights associated with the model, only the - configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. 
-""" - -CLIP_TEXT_INPUTS_DOCSTRING = r""" - Args: - input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`): - Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide - it. - - Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and - [`PreTrainedTokenizer.__call__`] for details. - - [What are input IDs?](../glossary#input-ids) - attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*): - Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - - - 1 for tokens that are **not masked**, - - 0 for tokens that are **masked**. - - [What are attention masks?](../glossary#attention-mask) - position_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): - Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0, - config.max_position_embeddings - 1]`. - - [What are position IDs?](../glossary#position-ids) - output_attentions (`bool`, *optional*): - Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned - tensors for more detail. - output_hidden_states (`bool`, *optional*): - Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for - more detail. - return_dict (`bool`, *optional*): - Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. -""" - -CLIP_VISION_INPUTS_DOCSTRING = r""" - Args: - pixel_values (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`): - Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using - [`AutoImageProcessor`]. See [`CLIPImageProcessor.__call__`] for details. - output_attentions (`bool`, *optional*): - Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned - tensors for more detail. - output_hidden_states (`bool`, *optional*): - Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for - more detail. - return_dict (`bool`, *optional*): - Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. -""" - -CLIP_INPUTS_DOCSTRING = r""" - Args: - input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`): - Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide - it. - - Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and - [`PreTrainedTokenizer.__call__`] for details. - - [What are input IDs?](../glossary#input-ids) - attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*): - Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - - - 1 for tokens that are **not masked**, - - 0 for tokens that are **masked**. - - [What are attention masks?](../glossary#attention-mask) - position_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): - Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0, - config.max_position_embeddings - 1]`. - - [What are position IDs?](../glossary#position-ids) - pixel_values (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`): - Pixel values. Padding will be ignored by default should you provide it. 
Pixel values can be obtained using - [`AutoImageProcessor`]. See [`CLIPImageProcessor.__call__`] for details. - return_loss (`bool`, *optional*): - Whether or not to return the contrastive loss. - output_attentions (`bool`, *optional*): - Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned - tensors for more detail. - output_hidden_states (`bool`, *optional*): - Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for - more detail. - return_dict (`bool`, *optional*): - Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. -""" - - -class CLIPEncoder(nn.Module): - """ - Transformer encoder consisting of `config.num_hidden_layers` self attention layers. Each layer is a - [`CLIPEncoderLayer`]. - - Args: - config: CLIPConfig - """ - - def __init__(self, config: LanguageBindImageConfig): - super().__init__() - self.config = config - self.layers = nn.ModuleList([CLIPEncoderLayer(config) for _ in range(config.num_hidden_layers)]) - self.gradient_checkpointing = False - - def forward( - self, - inputs_embeds, - attention_mask: Optional[torch.Tensor] = None, - causal_attention_mask: Optional[torch.Tensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple, BaseModelOutput]: - r""" - Args: - inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`): - Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. - This is useful if you want more control over how to convert `input_ids` indices into associated vectors - than the model's internal embedding lookup matrix. - attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*): - Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - - - 1 for tokens that are **not masked**, - - 0 for tokens that are **masked**. - - [What are attention masks?](../glossary#attention-mask) - causal_attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*): - Causal mask for the text model. Mask values selected in `[0, 1]`: - - - 1 for tokens that are **not masked**, - - 0 for tokens that are **masked**. - - [What are attention masks?](../glossary#attention-mask) - output_attentions (`bool`, *optional*): - Whether or not to return the attentions tensors of all attention layers. See `attentions` under - returned tensors for more detail. - output_hidden_states (`bool`, *optional*): - Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors - for more detail. - return_dict (`bool`, *optional*): - Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. 
- """ - output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - encoder_states = () if output_hidden_states else None - all_attentions = () if output_attentions else None - - hidden_states = inputs_embeds - for idx, encoder_layer in enumerate(self.layers): - if output_hidden_states: - encoder_states = encoder_states + (hidden_states,) - if self.gradient_checkpointing and self.training: - - def create_custom_forward(module): - def custom_forward(*inputs): - return module(*inputs, output_attentions) - - return custom_forward - - layer_outputs = torch.utils.checkpoint.checkpoint( - create_custom_forward(encoder_layer), - hidden_states, - attention_mask, - causal_attention_mask, - ) - else: - layer_outputs = encoder_layer( - hidden_states, - attention_mask, - causal_attention_mask, - output_attentions=output_attentions, - ) - - hidden_states = layer_outputs[0] - - if output_attentions: - all_attentions = all_attentions + (layer_outputs[1],) - - if output_hidden_states: - encoder_states = encoder_states + (hidden_states,) - - if not return_dict: - return tuple(v for v in [hidden_states, encoder_states, all_attentions] if v is not None) - return BaseModelOutput( - last_hidden_state=hidden_states, hidden_states=encoder_states, attentions=all_attentions - ) - - -# Copied from transformers.models.bart.modeling_bart._make_causal_mask -def _make_causal_mask( - input_ids_shape: torch.Size, dtype: torch.dtype, device: torch.device, past_key_values_length: int = 0 -): - """ - Make causal mask used for bi-directional self-attention. 
- """ - bsz, tgt_len = input_ids_shape - mask = torch.full((tgt_len, tgt_len), torch.finfo(dtype).min, device=device) - mask_cond = torch.arange(mask.size(-1), device=device) - mask.masked_fill_(mask_cond < (mask_cond + 1).view(mask.size(-1), 1), 0) - mask = mask.to(dtype) - - if past_key_values_length > 0: - mask = torch.cat([torch.zeros(tgt_len, past_key_values_length, dtype=dtype, device=device), mask], dim=-1) - return mask[None, None, :, :].expand(bsz, 1, tgt_len, tgt_len + past_key_values_length) - - -class CLIPTextTransformer(nn.Module): - def __init__(self, config: CLIPTextConfig): - super().__init__() - self.config = config - embed_dim = config.hidden_size - self.embeddings = CLIPTextEmbeddings(config) - self.encoder = CLIPEncoder(config) - self.final_layer_norm = nn.LayerNorm(embed_dim, eps=config.layer_norm_eps) - - @add_start_docstrings_to_model_forward(CLIP_TEXT_INPUTS_DOCSTRING) - @replace_return_docstrings(output_type=BaseModelOutputWithPooling, config_class=CLIPTextConfig) - def forward( - self, - input_ids: Optional[torch.Tensor] = None, - attention_mask: Optional[torch.Tensor] = None, - position_ids: Optional[torch.Tensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple, BaseModelOutputWithPooling]: - r""" - Returns: - - """ - output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - if input_ids is None: - raise ValueError("You have to specify input_ids") - - input_shape = input_ids.size() - input_ids = input_ids.view(-1, input_shape[-1]) - - hidden_states = self.embeddings(input_ids=input_ids, position_ids=position_ids) - - # CLIP's text model uses causal mask, prepare it here. 
- # https://github.com/openai/CLIP/blob/cfcffb90e69f37bf2ff1e988237a0fbe41f33c04/clip/model.py#L324 - causal_attention_mask = _make_causal_mask(input_shape, hidden_states.dtype, device=hidden_states.device) - # expand attention_mask - if attention_mask is not None: - # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] - attention_mask = _expand_mask(attention_mask, hidden_states.dtype) - - encoder_outputs = self.encoder( - inputs_embeds=hidden_states, - attention_mask=attention_mask, - causal_attention_mask=causal_attention_mask, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - last_hidden_state = encoder_outputs[0] - last_hidden_state = self.final_layer_norm(last_hidden_state) - - # text_embeds.shape = [batch_size, sequence_length, transformer.width] - # take features from the eot embedding (eot_token is the highest number in each sequence) - # casting to torch.int for onnx compatibility: argmax doesn't support int64 inputs with opset 14 - pooled_output = last_hidden_state[ - torch.arange(last_hidden_state.shape[0], device=last_hidden_state.device), - input_ids.to(dtype=torch.int, device=last_hidden_state.device).argmax(dim=-1), - ] - - if not return_dict: - return (last_hidden_state, pooled_output) + encoder_outputs[1:] - - return BaseModelOutputWithPooling( - last_hidden_state=last_hidden_state, - pooler_output=pooled_output, - hidden_states=encoder_outputs.hidden_states, - attentions=encoder_outputs.attentions, - ) - - -@add_start_docstrings( - """The text model from CLIP without any head or projection on top.""", - CLIP_START_DOCSTRING, -) -class CLIPTextModel(CLIPPreTrainedModel): - config_class = CLIPTextConfig - - _no_split_modules = ["CLIPEncoderLayer"] - - def __init__(self, config: CLIPTextConfig): - super().__init__(config) - self.text_model = CLIPTextTransformer(config) - # Initialize weights and apply final processing - self.post_init() - - def get_input_embeddings(self) -> nn.Module: - return self.text_model.embeddings.token_embedding - - def set_input_embeddings(self, value): - self.text_model.embeddings.token_embedding = value - - @add_start_docstrings_to_model_forward(CLIP_TEXT_INPUTS_DOCSTRING) - @replace_return_docstrings(output_type=BaseModelOutputWithPooling, config_class=CLIPTextConfig) - def forward( - self, - input_ids: Optional[torch.Tensor] = None, - attention_mask: Optional[torch.Tensor] = None, - position_ids: Optional[torch.Tensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple, BaseModelOutputWithPooling]: - r""" - Returns: - - Examples: - - ```python - >>> from transformers import AutoTokenizer, CLIPTextModel - - >>> model = CLIPTextModel.from_pretrained("openai/clip-vit-base-patch32") - >>> tokenizer = AutoTokenizer.from_pretrained("openai/clip-vit-base-patch32") - - >>> inputs = tokenizer(["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="pt") - - >>> outputs = model(**inputs) - >>> last_hidden_state = outputs.last_hidden_state - >>> pooled_output = outputs.pooler_output # pooled (EOS token) states - ```""" - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - return self.text_model( - input_ids=input_ids, - attention_mask=attention_mask, - position_ids=position_ids, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - -class 
CLIPVisionTransformer(nn.Module): - def __init__(self, config: CLIPVisionConfig): - super().__init__() - self.config = config - embed_dim = config.hidden_size - - self.embeddings = CLIPVisionEmbeddings(config) - self.patch_dropout = PatchDropout(config.force_patch_dropout) - self.pre_layrnorm = nn.LayerNorm(embed_dim, eps=config.layer_norm_eps) - self.encoder = CLIPEncoder(config) - self.post_layernorm = nn.LayerNorm(embed_dim, eps=config.layer_norm_eps) - - @add_start_docstrings_to_model_forward(CLIP_VISION_INPUTS_DOCSTRING) - @replace_return_docstrings(output_type=BaseModelOutputWithPooling, config_class=CLIPVisionConfig) - def forward( - self, - pixel_values: Optional[torch.FloatTensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple, BaseModelOutputWithPooling]: - r""" - Returns: - - """ - output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - if pixel_values is None: - raise ValueError("You have to specify pixel_values") - ###################################### - if len(pixel_values.shape) == 7: - b_new, pair_new, T, bs_new, channel_new, h_new, w_new = pixel_values.shape - # print(pixel_values.shape) - B = b_new * pair_new * bs_new - pixel_values = pixel_values.reshape(B*T, channel_new, h_new, w_new) - - elif len(pixel_values.shape) == 5: - B, _, T, _, _ = pixel_values.shape - # print(pixel_values.shape) - pixel_values = rearrange(pixel_values, 'b c t h w -> (b t) c h w') - else: - # print(pixel_values.shape) - B, _, _, _ = pixel_values.shape - T = 1 - ########################### - hidden_states = self.embeddings(pixel_values) - - hidden_states = self.patch_dropout(hidden_states, B, T) ############################################## - - hidden_states = self.pre_layrnorm(hidden_states) - - encoder_outputs = self.encoder( - inputs_embeds=hidden_states, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - last_hidden_state = encoder_outputs[0] - pooled_output = last_hidden_state[:, 0, :] - pooled_output = self.post_layernorm(pooled_output) - - pooled_output = pooled_output.reshape(B, T, -1).mean(1) ################################ - - if not return_dict: - return (last_hidden_state, pooled_output) + encoder_outputs[1:] - - return BaseModelOutputWithPooling( - last_hidden_state=last_hidden_state, - pooler_output=pooled_output, - hidden_states=encoder_outputs.hidden_states, - attentions=encoder_outputs.attentions, - ) - - -@add_start_docstrings( - """The vision model from CLIP without any head or projection on top.""", - CLIP_START_DOCSTRING, -) -class CLIPVisionModel(CLIPPreTrainedModel): - config_class = CLIPVisionConfig - main_input_name = "pixel_values" - - def __init__(self, config: CLIPVisionConfig): - super().__init__(config) - self.vision_model = CLIPVisionTransformer(config) - # Initialize weights and apply final processing - self.post_init() - - def get_input_embeddings(self) -> nn.Module: - return self.vision_model.embeddings.patch_embedding - - @add_start_docstrings_to_model_forward(CLIP_VISION_INPUTS_DOCSTRING) - @replace_return_docstrings(output_type=BaseModelOutputWithPooling, config_class=CLIPVisionConfig) - def forward( - self, - 
pixel_values: Optional[torch.FloatTensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple, BaseModelOutputWithPooling]: - r""" - Returns: - - Examples: - - ```python - >>> from PIL import Image - >>> import requests - >>> from transformers import AutoProcessor, CLIPVisionModel - - >>> model = CLIPVisionModel.from_pretrained("openai/clip-vit-base-patch32") - >>> processor = AutoProcessor.from_pretrained("openai/clip-vit-base-patch32") - - >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" - >>> image = Image.open(requests.get(url, stream=True).raw) - - >>> inputs = processor(images=image, return_tensors="pt") - - >>> outputs = model(**inputs) - >>> last_hidden_state = outputs.last_hidden_state - >>> pooled_output = outputs.pooler_output # pooled CLS states - ```""" - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - return self.vision_model( - pixel_values=pixel_values, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - -@add_start_docstrings(CLIP_START_DOCSTRING) -class LanguageBindImage(CLIPPreTrainedModel): - config_class = LanguageBindImageConfig - - def __init__(self, config: LanguageBindImageConfig): - super().__init__(config) - - if not isinstance(config.text_config, CLIPTextConfig): - raise ValueError( - "config.text_config is expected to be of type CLIPTextConfig but is of type" - f" {type(config.text_config)}." - ) - - if not isinstance(config.vision_config, CLIPVisionConfig): - raise ValueError( - "config.vision_config is expected to be of type CLIPVisionConfig but is of type" - f" {type(config.vision_config)}." 
- ) - - text_config = config.text_config - vision_config = config.vision_config - self.add_time_attn = vision_config.add_time_attn - self.lora_r = vision_config.lora_r - self.lora_alpha = vision_config.lora_alpha - self.lora_dropout = vision_config.lora_dropout - - self.projection_dim = config.projection_dim - self.text_embed_dim = text_config.hidden_size - self.vision_embed_dim = vision_config.hidden_size - - self.text_model = CLIPTextTransformer(text_config) - self.vision_model = CLIPVisionTransformer(vision_config) - - self.visual_projection = nn.Linear(self.vision_embed_dim, self.projection_dim, bias=False) - self.text_projection = nn.Linear(self.text_embed_dim, self.projection_dim, bias=False) - self.logit_scale = nn.Parameter(torch.tensor(self.config.logit_scale_init_value)) - - # Initialize weights and apply final processing - self.post_init() - self.convert_to_lora() - self.resize_pos(self.vision_model.embeddings, vision_config) - - def convert_to_lora(self): - if self.lora_r == 0: - return - if self.add_time_attn: - target_modules = ["temporal_attn.k_proj", "temporal_attn.v_proj", - "temporal_attn.q_proj", "temporal_attn.out_proj", - "temporal_mlp.fc1", "temporal_mlp.fc2"] - else: - target_modules = ["k_proj", "v_proj", "q_proj", "out_proj"] - config = LoraConfig( - r=self.lora_r, # 16 - lora_alpha=self.lora_alpha, # 16 - target_modules=target_modules, # self_attn.out_proj - lora_dropout=self.lora_dropout, # 0.1 - bias="none", - modules_to_save=[], - ) - self.vision_model.encoder.is_gradient_checkpointing = False - self.vision_model.encoder = get_peft_model(self.vision_model.encoder, config) - - def resize_pos(self, m, vision_config): - # convert embedding - if vision_config.num_mel_bins!=0 and vision_config.target_length!=0: - m.image_size = [vision_config.num_mel_bins, vision_config.target_length] - m.config.image_size = [m.image_size, m.image_size] if isinstance(m.image_size, int) else m.image_size - # pos resize - old_pos_embed_state_dict = m.position_embedding.state_dict() - old_pos_embed = old_pos_embed_state_dict['weight'] - dtype = old_pos_embed.dtype - grid_size = [m.config.image_size[0] // m.patch_size, m.config.image_size[1] // m.patch_size] - extra_tokens = 1 # FIXME detect different token configs (ie no class token, or more) - new_seq_len = grid_size[0] * grid_size[1] + extra_tokens - if new_seq_len == old_pos_embed.shape[0]: - # m.to(args.device) - return - - m.num_patches = grid_size[0] * grid_size[1] - m.num_positions = m.num_patches + 1 - m.register_buffer("position_ids", torch.arange(m.num_positions).expand((1, -1))) - new_position_embedding = nn.Embedding(m.num_positions, m.embed_dim) - - if extra_tokens: - pos_emb_tok, pos_emb_img = old_pos_embed[:extra_tokens], old_pos_embed[extra_tokens:] - else: - pos_emb_tok, pos_emb_img = None, old_pos_embed - old_grid_size = [int(math.sqrt(len(pos_emb_img)))] * 2 - - # if is_master(args): - # logging.info('Resizing position embedding grid-size from %s to %s', old_grid_size, grid_size) - pos_emb_img = pos_emb_img.reshape(1, old_grid_size[0], old_grid_size[1], -1).permute(0, 3, 1, 2) - pos_emb_img = F.interpolate( - pos_emb_img, - size=grid_size, - mode='bicubic', - antialias=True, - align_corners=False, - ) - pos_emb_img = pos_emb_img.permute(0, 2, 3, 1).reshape(1, grid_size[0] * grid_size[1], -1)[0] - if pos_emb_tok is not None: - new_pos_embed = torch.cat([pos_emb_tok, pos_emb_img], dim=0) - else: - new_pos_embed = pos_emb_img - old_pos_embed_state_dict['weight'] = new_pos_embed.to(dtype) - m.position_embedding = 
new_position_embedding - m.position_embedding.load_state_dict(old_pos_embed_state_dict) - - # m.to(args.device) - - @add_start_docstrings_to_model_forward(CLIP_TEXT_INPUTS_DOCSTRING) - def get_text_features( - self, - input_ids: Optional[torch.Tensor] = None, - attention_mask: Optional[torch.Tensor] = None, - position_ids: Optional[torch.Tensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> torch.FloatTensor: - r""" - Returns: - text_features (`torch.FloatTensor` of shape `(batch_size, output_dim`): The text embeddings obtained by - applying the projection layer to the pooled output of [`CLIPTextModel`]. - - Examples: - - ```python - >>> from transformers import AutoTokenizer, CLIPModel - - >>> model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32") - >>> tokenizer = AutoTokenizer.from_pretrained("openai/clip-vit-base-patch32") - - >>> inputs = tokenizer(["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="pt") - >>> text_features = model.get_text_features(**inputs) - ```""" - # Use CLIP model's config for some fields (if specified) instead of those of vision & text components. - output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - text_outputs = self.text_model( - input_ids=input_ids, - attention_mask=attention_mask, - position_ids=position_ids, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - pooled_output = text_outputs[1] - text_features = self.text_projection(pooled_output) - - return text_features - - @add_start_docstrings_to_model_forward(CLIP_VISION_INPUTS_DOCSTRING) - def get_image_features( - self, - pixel_values: Optional[torch.FloatTensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> torch.FloatTensor: - r""" - Returns: - image_features (`torch.FloatTensor` of shape `(batch_size, output_dim`): The image embeddings obtained by - applying the projection layer to the pooled output of [`CLIPVisionModel`]. - - Examples: - - ```python - >>> from PIL import Image - >>> import requests - >>> from transformers import AutoProcessor, CLIPModel - - >>> model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32") - >>> processor = AutoProcessor.from_pretrained("openai/clip-vit-base-patch32") - - >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" - >>> image = Image.open(requests.get(url, stream=True).raw) - - >>> inputs = processor(images=image, return_tensors="pt") - - >>> image_features = model.get_image_features(**inputs) - ```""" - # Use CLIP model's config for some fields (if specified) instead of those of vision & text components. 
- output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - vision_outputs = self.vision_model( - pixel_values=pixel_values, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - pooled_output = vision_outputs[1] # pooled_output - image_features = self.visual_projection(pooled_output) - - return image_features - - @add_start_docstrings_to_model_forward(CLIP_INPUTS_DOCSTRING) - @replace_return_docstrings(output_type=CLIPOutput, config_class=LanguageBindImageConfig) - def forward( - self, - input_ids: Optional[torch.LongTensor] = None, - pixel_values: Optional[torch.FloatTensor] = None, - attention_mask: Optional[torch.Tensor] = None, - position_ids: Optional[torch.LongTensor] = None, - return_loss: Optional[bool] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple, CLIPOutput]: - r""" - Returns: - - Examples: - - ```python - >>> from PIL import Image - >>> import requests - >>> from transformers import AutoProcessor, CLIPModel - - >>> model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32") - >>> processor = AutoProcessor.from_pretrained("openai/clip-vit-base-patch32") - - >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" - >>> image = Image.open(requests.get(url, stream=True).raw) - - >>> inputs = processor( - ... text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True - ... ) - - >>> outputs = model(**inputs) - >>> logits_per_image = outputs.logits_per_image # this is the image-text similarity score - >>> probs = logits_per_image.softmax(dim=1) # we can take the softmax to get the label probabilities - ```""" - # Use CLIP model's config for some fields (if specified) instead of those of vision & text components. 
- output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - vision_outputs = self.vision_model( - pixel_values=pixel_values, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - text_outputs = self.text_model( - input_ids=input_ids, - attention_mask=attention_mask, - position_ids=position_ids, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - image_embeds = vision_outputs[1] - image_embeds = self.visual_projection(image_embeds) - - text_embeds = text_outputs[1] - text_embeds = self.text_projection(text_embeds) - - # normalized features - image_embeds = image_embeds / image_embeds.norm(p=2, dim=-1, keepdim=True) - text_embeds = text_embeds / text_embeds.norm(p=2, dim=-1, keepdim=True) - - # cosine similarity as logits - logit_scale = self.logit_scale.exp() - logits_per_text = torch.matmul(text_embeds, image_embeds.t()) * logit_scale - logits_per_image = logits_per_text.t() - - loss = None - if return_loss: - loss = clip_loss(logits_per_text) - - if not return_dict: - output = (logits_per_image, logits_per_text, text_embeds, image_embeds, text_outputs, vision_outputs) - return ((loss,) + output) if loss is not None else output - - return CLIPOutput( - loss=loss, - logits_per_image=logits_per_image, - logits_per_text=logits_per_text, - text_embeds=text_embeds, - image_embeds=image_embeds, - text_model_output=text_outputs, - vision_model_output=vision_outputs, - ) \ No newline at end of file diff --git a/spaces/Luelll/ChuanhuChatGPT/modules/__init__.py b/spaces/Luelll/ChuanhuChatGPT/modules/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/MLearningAI/AIart_sources_of_inspiration/README.md b/spaces/MLearningAI/AIart_sources_of_inspiration/README.md deleted file mode 100644 index 01f481de9e0370ad10758da0aaf97143d35645d1..0000000000000000000000000000000000000000 --- a/spaces/MLearningAI/AIart_sources_of_inspiration/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Identifying Painting Authors -emoji: 🎨 -colorFrom: indigo -colorTo: blue -sdk: gradio -sdk_version: 3.12.0 -app_file: app.py -pinned: false -duplicated_from: Datasculptor/AIart_sources_of_inspiration ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Mahiruoshi/Lovelive_Nijigasaki_VITS/attentions.py b/spaces/Mahiruoshi/Lovelive_Nijigasaki_VITS/attentions.py deleted file mode 100644 index 86bc73b5fe98cc7b443e9078553920346c996707..0000000000000000000000000000000000000000 --- a/spaces/Mahiruoshi/Lovelive_Nijigasaki_VITS/attentions.py +++ /dev/null @@ -1,300 +0,0 @@ -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -from modules import LayerNorm - - -class Encoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - 
self.window_size = window_size - - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init)) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - 
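# Added note (not in the original file): when `window_size` is set, the layer learns
# relative-position embeddings (emb_rel_k / emb_rel_v below) over a window of
# 2 * window_size + 1 offsets; `proximal_bias` adds a -log(1 + |i - j|) penalty that
# favours attention to nearby positions; `proximal_init` initialises the key
# projection as a copy of the query projection.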
self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert t_s == t_t, "Relative attention is only available for self-attention." - key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." - scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert t_s == t_t, "Local attention is only available for self-attention." 
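# Added note (not in the original file): the banded mask built just below keeps, for
# each query position, only the keys within block_length steps of it; triu(-block_length)
# zeroes entries too far in the past, tril(block_length) zeroes entries too far in the
# future, and everything outside the band is filled with -1e4 before the softmax.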
- block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s) - output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings) - output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. - pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]])) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]])) - - # Reshape and slice out the padded elements. - x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]])) - x_flat = x.view([batch, heads, length**2 + length*(length -1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. 
- Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/Matthijs/mms-tts-demo/uroman/bin/de-accent.pl b/spaces/Matthijs/mms-tts-demo/uroman/bin/de-accent.pl deleted file mode 100644 index d73ed8361f2a65377e605504b67d74d8fb1a755b..0000000000000000000000000000000000000000 --- a/spaces/Matthijs/mms-tts-demo/uroman/bin/de-accent.pl +++ /dev/null @@ -1,201 +0,0 @@ -#!/usr/bin/perl -w - -sub print_version { - print STDERR "$0 version 1.1\n"; - print STDERR " Author: Ulf Hermjakob\n"; - print STDERR " Last changed: March 14, 2011\n"; -} - -sub print_usage { - print STDERR "$0 [options] < with_accents.txt > without_accents.txt\n"; - print STDERR " -h or -help\n"; - print STDERR " -v or -version\n"; -} - -sub de_accent_string { - local($s) = @_; - - # $s =~ tr/A-Z/a-z/; - unless (0) { - # Latin-1 - if ($s =~ /\xC3[\x80-\xBF]/) { - $s =~ s/(À|Á|Â|Ã|Ä|Å)/A/g; - $s =~ s/Æ/Ae/g; - $s =~ s/Ç/C/g; - $s =~ s/Ð/D/g; - $s =~ s/(È|É|Ê|Ë)/E/g; - $s =~ s/(Ì|Í|Î|Ï)/I/g; - $s =~ s/Ñ/N/g; - $s =~ s/(Ò|Ó|Ô|Õ|Ö|Ø)/O/g; - $s =~ s/(Ù|Ú|Û|Ü)/U/g; - $s =~ s/Þ/Th/g; - $s =~ s/Ý/Y/g; - $s =~ s/(à|á|â|ã|ä|å)/a/g; - $s =~ s/æ/ae/g; - $s =~ s/ç/c/g; - $s =~ s/(è|é|ê|ë)/e/g; - $s =~ s/(ì|í|î|ï)/i/g; - $s =~ s/ð/d/g; - $s =~ s/ñ/n/g; - $s =~ s/(ò|ó|ô|õ|ö)/o/g; - $s =~ s/ß/ss/g; - $s =~ s/þ/th/g; - $s =~ s/(ù|ú|û|ü)/u/g; - $s =~ s/(ý|ÿ)/y/g; - } - # Latin Extended-A - if ($s =~ /[\xC4-\xC5][\x80-\xBF]/) { - $s =~ s/(Ā|Ă|Ą)/A/g; - $s =~ s/(ā|ă|ą)/a/g; - $s =~ s/(Ć|Ĉ|Ċ|Č)/C/g; - $s =~ s/(ć|ĉ|ċ|č)/c/g; - $s =~ s/(Ď|Đ)/D/g; - $s =~ s/(ď|đ)/d/g; - $s =~ s/(Ē|Ĕ|Ė|Ę|Ě)/E/g; - $s =~ s/(ē|ĕ|ė|ę|ě)/e/g; - $s =~ s/(Ĝ|Ğ|Ġ|Ģ)/G/g; - $s =~ s/(ĝ|ğ|ġ|ģ)/g/g; - $s =~ s/(Ĥ|Ħ)/H/g; - $s =~ s/(ĥ|ħ)/h/g; - $s =~ s/(Ĩ|Ī|Ĭ|Į|İ)/I/g; - $s =~ s/(ĩ|ī|ĭ|į|ı)/i/g; - $s =~ s/IJ/Ij/g; - $s =~ s/ij/ij/g; - $s =~ s/Ĵ/J/g; - $s =~ s/ĵ/j/g; - $s =~ s/Ķ/K/g; - $s =~ s/(ķ|ĸ)/k/g; - $s =~ s/(Ĺ|Ļ|Ľ|Ŀ|Ł)/L/g; - $s =~ s/(ļ|ľ|ŀ|ł)/l/g; - $s =~ s/(Ń|Ņ|Ň|Ŋ)/N/g; - $s =~ s/(ń|ņ|ň|ʼn|ŋ)/n/g; - $s 
=~ s/(Ō|Ŏ|Ő)/O/g; - $s =~ s/(ō|ŏ|ő)/o/g; - $s =~ s/Œ/Oe/g; - $s =~ s/œ/oe/g; - $s =~ s/(Ŕ|Ŗ|Ř)/R/g; - $s =~ s/(ŕ|ŗ|ř)/r/g; - $s =~ s/(Ś|Ŝ|Ş|Š)/S/g; - $s =~ s/(ś|ŝ|ş|š|ſ)/s/g; - $s =~ s/(Ţ|Ť|Ŧ)/T/g; - $s =~ s/(ţ|ť|ŧ)/t/g; - $s =~ s/(Ũ|Ū|Ŭ|Ů|Ű|Ų)/U/g; - $s =~ s/(ũ|ū|ŭ|ů|ű|ų)/u/g; - $s =~ s/Ŵ/W/g; - $s =~ s/ŵ/w/g; - $s =~ s/(Ŷ|Ÿ)/Y/g; - $s =~ s/ŷ/y/g; - $s =~ s/(Ź|Ż|Ž)/Z/g; - $s =~ s/(ź|ż|ž)/z/g; - } - # Latin Extended Additional - if ($s =~ /\xE1[\xB8-\xBF][\x80-\xBF]/) { - $s =~ s/(ḁ|ạ|ả|ấ|ầ|ẩ|ẫ|ậ|ắ|ằ|ẳ|ẵ|ặ|ẚ)/a/g; - $s =~ s/(ḃ|ḅ|ḇ)/b/g; - $s =~ s/(ḉ)/c/g; - $s =~ s/(ḋ|ḍ|ḏ|ḑ|ḓ)/d/g; - $s =~ s/(ḕ|ḗ|ḙ|ḛ|ḝ|ẹ|ẻ|ẽ|ế|ề|ể|ễ|ệ)/e/g; - $s =~ s/(ḟ)/f/g; - $s =~ s/(ḡ)/g/g; - $s =~ s/(ḣ|ḥ|ḧ|ḩ|ḫ)/h/g; - $s =~ s/(ḭ|ḯ|ỉ|ị)/i/g; - $s =~ s/(ḱ|ḳ|ḵ)/k/g; - $s =~ s/(ḷ|ḹ|ḻ|ḽ)/l/g; - $s =~ s/(ḿ|ṁ|ṃ)/m/g; - $s =~ s/(ṅ|ṇ|ṉ|ṋ)/m/g; - $s =~ s/(ọ|ỏ|ố|ồ|ổ|ỗ|ộ|ớ|ờ|ở|ỡ|ợ|ṍ|ṏ|ṑ|ṓ)/o/g; - $s =~ s/(ṕ|ṗ)/p/g; - $s =~ s/(ṙ|ṛ|ṝ|ṟ)/r/g; - $s =~ s/(ṡ|ṣ|ṥ|ṧ|ṩ|ẛ)/s/g; - $s =~ s/(ṫ|ṭ|ṯ|ṱ)/t/g; - $s =~ s/(ṳ|ṵ|ṷ|ṹ|ṻ|ụ|ủ|ứ|ừ|ử|ữ|ự)/u/g; - $s =~ s/(ṽ|ṿ)/v/g; - $s =~ s/(ẁ|ẃ|ẅ|ẇ|ẉ|ẘ)/w/g; - $s =~ s/(ẋ|ẍ)/x/g; - $s =~ s/(ẏ|ỳ|ỵ|ỷ|ỹ|ẙ)/y/g; - $s =~ s/(ẑ|ẓ|ẕ)/z/g; - $s =~ s/(Ḁ|Ạ|Ả|Ấ|Ầ|Ẩ|Ẫ|Ậ|Ắ|Ằ|Ẳ|Ẵ|Ặ)/A/g; - $s =~ s/(Ḃ|Ḅ|Ḇ)/B/g; - $s =~ s/(Ḉ)/C/g; - $s =~ s/(Ḋ|Ḍ|Ḏ|Ḑ|Ḓ)/D/g; - $s =~ s/(Ḕ|Ḗ|Ḙ|Ḛ|Ḝ|Ẹ|Ẻ|Ẽ|Ế|Ề|Ể|Ễ|Ệ)/E/g; - $s =~ s/(Ḟ)/F/g; - $s =~ s/(Ḡ)/G/g; - $s =~ s/(Ḣ|Ḥ|Ḧ|Ḩ|Ḫ)/H/g; - $s =~ s/(Ḭ|Ḯ|Ỉ|Ị)/I/g; - $s =~ s/(Ḱ|Ḳ|Ḵ)/K/g; - $s =~ s/(Ḷ|Ḹ|Ḻ|Ḽ)/L/g; - $s =~ s/(Ḿ|Ṁ|Ṃ)/M/g; - $s =~ s/(Ṅ|Ṇ|Ṉ|Ṋ)/N/g; - $s =~ s/(Ṍ|Ṏ|Ṑ|Ṓ|Ọ|Ỏ|Ố|Ồ|Ổ|Ỗ|Ộ|Ớ|Ờ|Ở|Ỡ|Ợ)/O/g; - $s =~ s/(Ṕ|Ṗ)/P/g; - $s =~ s/(Ṙ|Ṛ|Ṝ|Ṟ)/R/g; - $s =~ s/(Ṡ|Ṣ|Ṥ|Ṧ|Ṩ)/S/g; - $s =~ s/(Ṫ|Ṭ|Ṯ|Ṱ)/T/g; - $s =~ s/(Ṳ|Ṵ|Ṷ|Ṹ|Ṻ|Ụ|Ủ|Ứ|Ừ|Ử|Ữ|Ự)/U/g; - $s =~ s/(Ṽ|Ṿ)/V/g; - $s =~ s/(Ẁ|Ẃ|Ẅ|Ẇ|Ẉ)/W/g; - $s =~ s/(Ẍ)/X/g; - $s =~ s/(Ẏ|Ỳ|Ỵ|Ỷ|Ỹ)/Y/g; - $s =~ s/(Ẑ|Ẓ|Ẕ)/Z/g; - } - # Greek letters - if ($s =~ /\xCE[\x86-\xAB]/) { - $s =~ s/ά/α/g; - $s =~ s/έ/ε/g; - $s =~ s/ί/ι/g; - $s =~ s/ϊ/ι/g; - $s =~ s/ΐ/ι/g; - $s =~ s/ό/ο/g; - $s =~ s/ύ/υ/g; - $s =~ s/ϋ/υ/g; - $s =~ s/ΰ/υ/g; - $s =~ s/ώ/ω/g; - $s =~ s/Ά/Α/g; - $s =~ s/Έ/Ε/g; - $s =~ s/Ή/Η/g; - $s =~ s/Ί/Ι/g; - $s =~ s/Ϊ/Ι/g; - $s =~ s/Ύ/Υ/g; - $s =~ s/Ϋ/Υ/g; - $s =~ s/Ώ/Ω/g; - } - # Cyrillic letters - if ($s =~ /\xD0[\x80-\xAF]/) { - $s =~ s/Ѐ/Е/g; - $s =~ s/Ё/Е/g; - $s =~ s/Ѓ/Г/g; - $s =~ s/Ќ/К/g; - $s =~ s/Ѝ/И/g; - $s =~ s/Й/И/g; - $s =~ s/ѐ/е/g; - $s =~ s/ё/е/g; - $s =~ s/ѓ/г/g; - $s =~ s/ќ/к/g; - $s =~ s/ѝ/и/g; - $s =~ s/й/и/g; - } - } - return $s; -} - -while (@ARGV) { - $arg = shift @ARGV; - if ($arg =~ /^-*(h|help)$/i) { - &print_usage; - exit 1; - } elsif ($arg =~ /^-*(v|version)$/i) { - &print_version; - exit 1; - } else { - print STDERR "Ignoring unrecognized argument $arg\n"; - } -} - -$line_number = 0; -while (<>) { - $line_number++; - print &de_accent_string($_); -} -exit 0; - diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/configs/_base_/models/fpn_r50.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/configs/_base_/models/fpn_r50.py deleted file mode 100644 index 86ab327db92e44c14822d65f1c9277cb007f17c1..0000000000000000000000000000000000000000 --- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/configs/_base_/models/fpn_r50.py +++ /dev/null @@ -1,36 +0,0 @@ -# model settings -norm_cfg = dict(type='SyncBN', requires_grad=True) -model = dict( - type='EncoderDecoder', - pretrained='open-mmlab://resnet50_v1c', - backbone=dict( - type='ResNetV1c', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - dilations=(1, 1, 1, 1), - strides=(1, 2, 2, 2), - norm_cfg=norm_cfg, - norm_eval=False, - style='pytorch', - 
contract_dilation=True), - neck=dict( - type='FPN', - in_channels=[256, 512, 1024, 2048], - out_channels=256, - num_outs=4), - decode_head=dict( - type='FPNHead', - in_channels=[256, 256, 256, 256], - in_index=[0, 1, 2, 3], - feature_strides=[4, 8, 16, 32], - channels=128, - dropout_ratio=0.1, - num_classes=19, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)), - # model training and testing settings - train_cfg=dict(), - test_cfg=dict(mode='whole')) diff --git a/spaces/MilliMalinga/moghel-bot/app.py b/spaces/MilliMalinga/moghel-bot/app.py deleted file mode 100644 index 9a08fba896516f7eda55fc333602b24efa7c6dfd..0000000000000000000000000000000000000000 --- a/spaces/MilliMalinga/moghel-bot/app.py +++ /dev/null @@ -1,79 +0,0 @@ -import openai -import gradio as gr -import time -import os -from dotenv import load_dotenv - -load_dotenv() -openai.api_key = os.getenv("OPENAI_API_KEY") - - - -system_message = {"role": "system", "content": "You are a friend named Moghel to young women and girls who would " - "like to understand their bodies better. Introduce yourself with this name. " - "These people are aged from 14 to 35, " - "and most of them do not have a proper educational background, " - "so your responses have to ensure that they still understand. " - "\nThe main focus is on conditions that are rarely talked about, " - "such as PCOS, Bacterial Vaginosis, Endometriosis, " - "Yeast Infections, PMS, and other non-sexually transmitted diseases. " - "\n\nYou also teach these young girls and women about what it takes " - "for a young woman to fall pregnant, for example, sex during the ovulation period. " - "\nYou also debunk common myths, for example by explaining how unreliable " - "the withdrawal method is and how HIV/AIDS and other STIs are actually transmitted. " - "\n\nYou do not explain concepts such as sexual intercourse in graphic detail. " - "\n\nYou always have to ask your friend their preferred name and age. " - "Use this name to address them. " - "\nAlways ask how they are doing and whether there is anything they need to talk about. " - "If the topic is not related to their sexual and reproductive health " - "and/or their mental health, let them know that you cannot help them. " - "\nFor any questions about a diagnosis, ensure that you let the user know " - "that you are not a qualified health professional; even if you " - "give some information around this, your diagnosis is not 100% accurate, " - "and they would still need medical assistance. " - "You are to end a conversation if at any point you are asked about anything " - "that has nothing to do with SRH or mental health, no matter the person's age. 
"} - - -with gr.Blocks() as demo: - chatbot = gr.Chatbot() - msg = gr.Textbox() - clear = gr.Button("Clear") - - state = gr.State([]) - - def user(user_message, history): - return "", history + [[user_message, None]] - - - def bot(history, messages_history): - user_message = history[-1][0] - bot_message, messages_history = ask_gpt(user_message, messages_history) - messages_history += [{"role": "assistant", "content": bot_message}] - history[-1][1] = bot_message - time.sleep(1) - return history, messages_history - - def ask_gpt(message, messages_history): - messages_history += [system_message] - messages_history += [{"role": "user", "content": message}] - response = openai.ChatCompletion.create( - model="gpt-3.5-turbo", - messages=messages_history - ) - return response['choices'][0]['message']['content'], messages_history - - def init_history(messages_history): - messages_history = [] - messages_history += [system_message] - return messages_history - - msg.submit(user, [msg, chatbot], [msg, chatbot], queue=False).then( - bot, [chatbot, state], [chatbot, state] - ) - - clear.click(lambda: None, None, chatbot, queue=False).success(init_history, [state], [state]) - -demo.launch() \ No newline at end of file diff --git a/spaces/Mountchicken/MAERec-Gradio/configs/textrecog/svtr/svtr-large_20e_st_mj.py b/spaces/Mountchicken/MAERec-Gradio/configs/textrecog/svtr/svtr-large_20e_st_mj.py deleted file mode 100644 index 1082d761d53d91ea35a949f02e000b9e0a033926..0000000000000000000000000000000000000000 --- a/spaces/Mountchicken/MAERec-Gradio/configs/textrecog/svtr/svtr-large_20e_st_mj.py +++ /dev/null @@ -1,19 +0,0 @@ -_base_ = [ - 'svtr-tiny_20e_st_mj.py', -] - -model = dict( - preprocessor=dict(output_image_size=(48, 160), ), - encoder=dict( - img_size=[48, 160], - max_seq_len=40, - out_channels=384, - embed_dims=[192, 256, 512], - depth=[3, 9, 9], - num_heads=[6, 8, 16], - mixer_types=['Local'] * 10 + ['Global'] * 11), - decoder=dict(in_channels=384)) - -train_dataloader = dict(batch_size=128, ) - -optim_wrapper = dict(optimizer=dict(lr=2.5 / (10**4))) diff --git a/spaces/Mountchicken/MAERec-Gradio/tools/test.py b/spaces/Mountchicken/MAERec-Gradio/tools/test.py deleted file mode 100644 index 15645f2207ebdb61fd70293f2b2c9602e99b2c61..0000000000000000000000000000000000000000 --- a/spaces/Mountchicken/MAERec-Gradio/tools/test.py +++ /dev/null @@ -1,141 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import argparse -import os -import os.path as osp - -from mmengine.config import Config, DictAction -from mmengine.registry import RUNNERS -from mmengine.runner import Runner - - -def parse_args(): - parser = argparse.ArgumentParser(description='Test (and eval) a model') - parser.add_argument('config', help='Test config file path') - parser.add_argument('checkpoint', help='Checkpoint file') - parser.add_argument( - '--work-dir', - help='The directory to save the file containing evaluation metrics') - parser.add_argument( - '--save-preds', - action='store_true', - help='Dump predictions to a pickle file for offline evaluation') - parser.add_argument( - '--show', action='store_true', help='Show prediction results') - parser.add_argument( - '--show-dir', - help='Directory where painted images will be saved. 
' - 'If specified, it will be automatically saved ' - 'to the work_dir/timestamp/show_dir') - parser.add_argument( - '--wait-time', type=float, default=2, help='The interval of show (s)') - parser.add_argument( - '--cfg-options', - nargs='+', - action=DictAction, - help='Override some settings in the used config, the key-value pair ' - 'in xxx=yyy format will be merged into config file. If the value to ' - 'be overwritten is a list, it should be like key="[a,b]" or key=a,b ' - 'It also allows nested list/tuple values, e.g. key="[(a,b),(c,d)]" ' - 'Note that the quotation marks are necessary and that no white space ' - 'is allowed.') - parser.add_argument( - '--launcher', - choices=['none', 'pytorch', 'slurm', 'mpi'], - default='none', - help='Job launcher') - parser.add_argument( - '--tta', action='store_true', help='Test time augmentation') - # When using PyTorch version >= 2.0.0, the `torch.distributed.launch` - # will pass the `--local-rank` parameter to `tools/test.py` instead - # of `--local_rank`. - parser.add_argument('--local_rank', '--local-rank', type=int, default=0) - args = parser.parse_args() - if 'LOCAL_RANK' not in os.environ: - os.environ['LOCAL_RANK'] = str(args.local_rank) - return args - - -def trigger_visualization_hook(cfg, args): - default_hooks = cfg.default_hooks - if 'visualization' in default_hooks: - visualization_hook = default_hooks['visualization'] - # Turn on visualization - visualization_hook['enable'] = True - visualization_hook['draw_gt'] = True - visualization_hook['draw_pred'] = True - if args.show: - visualization_hook['show'] = True - visualization_hook['wait_time'] = args.wait_time - if args.show_dir: - cfg.visualizer['save_dir'] = args.show_dir - cfg.visualizer['vis_backends'] = [dict(type='LocalVisBackend')] - else: - raise RuntimeError( - 'VisualizationHook must be included in default_hooks.' 
- 'refer to usage ' - '"visualization=dict(type=\'VisualizationHook\')"') - - return cfg - - -def main(): - args = parse_args() - - # load config - cfg = Config.fromfile(args.config) - cfg.launcher = args.launcher - if args.cfg_options is not None: - cfg.merge_from_dict(args.cfg_options) - - # work_dir is determined in this priority: CLI > segment in file > filename - if args.work_dir is not None: - # update configs according to CLI args if args.work_dir is not None - cfg.work_dir = args.work_dir - elif cfg.get('work_dir', None) is None: - # use config filename as default work_dir if cfg.work_dir is None - cfg.work_dir = osp.join('./work_dirs', - osp.splitext(osp.basename(args.config))[0]) - - cfg.load_from = args.checkpoint - - # TODO: It will be supported after refactoring the visualizer - if args.show and args.show_dir: - raise NotImplementedError('--show and --show-dir cannot be set ' - 'at the same time') - - if args.show or args.show_dir: - cfg = trigger_visualization_hook(cfg, args) - - if args.tta: - cfg.test_dataloader.dataset.pipeline = cfg.tta_pipeline - cfg.tta_model.module = cfg.model - cfg.model = cfg.tta_model - - # save predictions - if args.save_preds: - dump_metric = dict( - type='DumpResults', - out_file_path=osp.join( - cfg.work_dir, - f'{osp.basename(args.checkpoint)}_predictions.pkl')) - if isinstance(cfg.test_evaluator, (list, tuple)): - cfg.test_evaluator = list(cfg.test_evaluator) - cfg.test_evaluator.append(dump_metric) - else: - cfg.test_evaluator = [cfg.test_evaluator, dump_metric] - - # build the runner from config - if 'runner_type' not in cfg: - # build the default runner - runner = Runner.from_cfg(cfg) - else: - # build customized runner from the registry - # if 'runner_type' is set in the cfg - runner = RUNNERS.build(cfg) - - # start testing - runner.test() - - -if __name__ == '__main__': - main() diff --git a/spaces/NCTCMumbai/NCTC/models/official/nlp/data/create_pretraining_data.py b/spaces/NCTCMumbai/NCTC/models/official/nlp/data/create_pretraining_data.py deleted file mode 100644 index 79dac57ac8775687673604af6fb2fb50c9f74244..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/official/nlp/data/create_pretraining_data.py +++ /dev/null @@ -1,486 +0,0 @@ -# Copyright 2019 The TensorFlow Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-# ============================================================================== -"""Create masked LM/next sentence masked_lm TF examples for BERT.""" -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function - -import collections -import random - -from absl import app -from absl import flags -from absl import logging -import tensorflow as tf - -from official.nlp.bert import tokenization - -FLAGS = flags.FLAGS - -flags.DEFINE_string("input_file", None, - "Input raw text file (or comma-separated list of files).") - -flags.DEFINE_string( - "output_file", None, - "Output TF example file (or comma-separated list of files).") - -flags.DEFINE_string("vocab_file", None, - "The vocabulary file that the BERT model was trained on.") - -flags.DEFINE_bool( - "do_lower_case", True, - "Whether to lower case the input text. Should be True for uncased " - "models and False for cased models.") - -flags.DEFINE_bool( - "do_whole_word_mask", False, - "Whether to use whole word masking rather than per-WordPiece masking.") - -flags.DEFINE_bool( - "gzip_compress", False, - "Whether to use `GZIP` compress option to get compressed TFRecord files.") - -flags.DEFINE_integer("max_seq_length", 128, "Maximum sequence length.") - -flags.DEFINE_integer("max_predictions_per_seq", 20, - "Maximum number of masked LM predictions per sequence.") - -flags.DEFINE_integer("random_seed", 12345, "Random seed for data generation.") - -flags.DEFINE_integer( - "dupe_factor", 10, - "Number of times to duplicate the input data (with different masks).") - -flags.DEFINE_float("masked_lm_prob", 0.15, "Masked LM probability.") - -flags.DEFINE_float( - "short_seq_prob", 0.1, - "Probability of creating sequences which are shorter than the " - "maximum length.") - - -class TrainingInstance(object): - """A single training instance (sentence pair).""" - - def __init__(self, tokens, segment_ids, masked_lm_positions, masked_lm_labels, - is_random_next): - self.tokens = tokens - self.segment_ids = segment_ids - self.is_random_next = is_random_next - self.masked_lm_positions = masked_lm_positions - self.masked_lm_labels = masked_lm_labels - - def __str__(self): - s = "" - s += "tokens: %s\n" % (" ".join( - [tokenization.printable_text(x) for x in self.tokens])) - s += "segment_ids: %s\n" % (" ".join([str(x) for x in self.segment_ids])) - s += "is_random_next: %s\n" % self.is_random_next - s += "masked_lm_positions: %s\n" % (" ".join( - [str(x) for x in self.masked_lm_positions])) - s += "masked_lm_labels: %s\n" % (" ".join( - [tokenization.printable_text(x) for x in self.masked_lm_labels])) - s += "\n" - return s - - def __repr__(self): - return self.__str__() - - -def write_instance_to_example_files(instances, tokenizer, max_seq_length, - max_predictions_per_seq, output_files, - gzip_compress): - """Create TF example files from `TrainingInstance`s.""" - writers = [] - for output_file in output_files: - writers.append( - tf.io.TFRecordWriter( - output_file, options="GZIP" if gzip_compress else "")) - - writer_index = 0 - - total_written = 0 - for (inst_index, instance) in enumerate(instances): - input_ids = tokenizer.convert_tokens_to_ids(instance.tokens) - input_mask = [1] * len(input_ids) - segment_ids = list(instance.segment_ids) - assert len(input_ids) <= max_seq_length - - while len(input_ids) < max_seq_length: - input_ids.append(0) - input_mask.append(0) - segment_ids.append(0) - - assert len(input_ids) == max_seq_length - assert len(input_mask) == max_seq_length - assert len(segment_ids) 
== max_seq_length - - masked_lm_positions = list(instance.masked_lm_positions) - masked_lm_ids = tokenizer.convert_tokens_to_ids(instance.masked_lm_labels) - masked_lm_weights = [1.0] * len(masked_lm_ids) - - while len(masked_lm_positions) < max_predictions_per_seq: - masked_lm_positions.append(0) - masked_lm_ids.append(0) - masked_lm_weights.append(0.0) - - next_sentence_label = 1 if instance.is_random_next else 0 - - features = collections.OrderedDict() - features["input_ids"] = create_int_feature(input_ids) - features["input_mask"] = create_int_feature(input_mask) - features["segment_ids"] = create_int_feature(segment_ids) - features["masked_lm_positions"] = create_int_feature(masked_lm_positions) - features["masked_lm_ids"] = create_int_feature(masked_lm_ids) - features["masked_lm_weights"] = create_float_feature(masked_lm_weights) - features["next_sentence_labels"] = create_int_feature([next_sentence_label]) - - tf_example = tf.train.Example(features=tf.train.Features(feature=features)) - - writers[writer_index].write(tf_example.SerializeToString()) - writer_index = (writer_index + 1) % len(writers) - - total_written += 1 - - if inst_index < 20: - logging.info("*** Example ***") - logging.info("tokens: %s", " ".join( - [tokenization.printable_text(x) for x in instance.tokens])) - - for feature_name in features.keys(): - feature = features[feature_name] - values = [] - if feature.int64_list.value: - values = feature.int64_list.value - elif feature.float_list.value: - values = feature.float_list.value - logging.info("%s: %s", feature_name, " ".join([str(x) for x in values])) - - for writer in writers: - writer.close() - - logging.info("Wrote %d total instances", total_written) - - -def create_int_feature(values): - feature = tf.train.Feature(int64_list=tf.train.Int64List(value=list(values))) - return feature - - -def create_float_feature(values): - feature = tf.train.Feature(float_list=tf.train.FloatList(value=list(values))) - return feature - - -def create_training_instances(input_files, - tokenizer, - max_seq_length, - dupe_factor, - short_seq_prob, - masked_lm_prob, - max_predictions_per_seq, - rng, - do_whole_word_mask=False): - """Create `TrainingInstance`s from raw text.""" - all_documents = [[]] - - # Input file format: - # (1) One sentence per line. These should ideally be actual sentences, not - # entire paragraphs or arbitrary spans of text. (Because we use the - # sentence boundaries for the "next sentence prediction" task). - # (2) Blank lines between documents. Document boundaries are needed so - # that the "next sentence prediction" task doesn't span between documents. 
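To make the expected corpus format concrete, here is a small, hypothetical example (the file name, sentences, and vocab path are invented for illustration; the flags are the ones defined in this script):

```python
# Toy corpus: one sentence per line, a blank line marks a document boundary.
sample = (
    "The quick brown fox jumps over the lazy dog.\n"
    "It then took a nap under the tree.\n"
    "\n"
    "Transformers use attention to model long-range dependencies.\n"
    "Pre-training pairs masked language modeling with next sentence prediction.\n"
)
with open("toy_corpus.txt", "w") as f:
    f.write(sample)

# Then, assuming a WordPiece vocab file is available:
# python create_pretraining_data.py --input_file=toy_corpus.txt \
#     --output_file=toy_pretrain.tfrecord --vocab_file=vocab.txt
```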
- for input_file in input_files: - with tf.io.gfile.GFile(input_file, "rb") as reader: - while True: - line = tokenization.convert_to_unicode(reader.readline()) - if not line: - break - line = line.strip() - - # Empty lines are used as document delimiters - if not line: - all_documents.append([]) - tokens = tokenizer.tokenize(line) - if tokens: - all_documents[-1].append(tokens) - - # Remove empty documents - all_documents = [x for x in all_documents if x] - rng.shuffle(all_documents) - - vocab_words = list(tokenizer.vocab.keys()) - instances = [] - for _ in range(dupe_factor): - for document_index in range(len(all_documents)): - instances.extend( - create_instances_from_document( - all_documents, document_index, max_seq_length, short_seq_prob, - masked_lm_prob, max_predictions_per_seq, vocab_words, rng, - do_whole_word_mask)) - - rng.shuffle(instances) - return instances - - -def create_instances_from_document( - all_documents, document_index, max_seq_length, short_seq_prob, - masked_lm_prob, max_predictions_per_seq, vocab_words, rng, - do_whole_word_mask=False): - """Creates `TrainingInstance`s for a single document.""" - document = all_documents[document_index] - - # Account for [CLS], [SEP], [SEP] - max_num_tokens = max_seq_length - 3 - - # We *usually* want to fill up the entire sequence since we are padding - # to `max_seq_length` anyways, so short sequences are generally wasted - # computation. However, we *sometimes* - # (i.e., short_seq_prob == 0.1 == 10% of the time) want to use shorter - # sequences to minimize the mismatch between pre-training and fine-tuning. - # The `target_seq_length` is just a rough target however, whereas - # `max_seq_length` is a hard limit. - target_seq_length = max_num_tokens - if rng.random() < short_seq_prob: - target_seq_length = rng.randint(2, max_num_tokens) - - # We DON'T just concatenate all of the tokens from a document into a long - # sequence and choose an arbitrary split point because this would make the - # next sentence prediction task too easy. Instead, we split the input into - # segments "A" and "B" based on the actual "sentences" provided by the user - # input. - instances = [] - current_chunk = [] - current_length = 0 - i = 0 - while i < len(document): - segment = document[i] - current_chunk.append(segment) - current_length += len(segment) - if i == len(document) - 1 or current_length >= target_seq_length: - if current_chunk: - # `a_end` is how many segments from `current_chunk` go into the `A` - # (first) sentence. - a_end = 1 - if len(current_chunk) >= 2: - a_end = rng.randint(1, len(current_chunk) - 1) - - tokens_a = [] - for j in range(a_end): - tokens_a.extend(current_chunk[j]) - - tokens_b = [] - # Random next - is_random_next = False - if len(current_chunk) == 1 or rng.random() < 0.5: - is_random_next = True - target_b_length = target_seq_length - len(tokens_a) - - # This should rarely go for more than one iteration for large - # corpora. However, just to be careful, we try to make sure that - # the random document is not the same as the document - # we're processing. 
- for _ in range(10): - random_document_index = rng.randint(0, len(all_documents) - 1) - if random_document_index != document_index: - break - - random_document = all_documents[random_document_index] - random_start = rng.randint(0, len(random_document) - 1) - for j in range(random_start, len(random_document)): - tokens_b.extend(random_document[j]) - if len(tokens_b) >= target_b_length: - break - # We didn't actually use these segments so we "put them back" so - # they don't go to waste. - num_unused_segments = len(current_chunk) - a_end - i -= num_unused_segments - # Actual next - else: - is_random_next = False - for j in range(a_end, len(current_chunk)): - tokens_b.extend(current_chunk[j]) - truncate_seq_pair(tokens_a, tokens_b, max_num_tokens, rng) - - assert len(tokens_a) >= 1 - assert len(tokens_b) >= 1 - - tokens = [] - segment_ids = [] - tokens.append("[CLS]") - segment_ids.append(0) - for token in tokens_a: - tokens.append(token) - segment_ids.append(0) - - tokens.append("[SEP]") - segment_ids.append(0) - - for token in tokens_b: - tokens.append(token) - segment_ids.append(1) - tokens.append("[SEP]") - segment_ids.append(1) - - (tokens, masked_lm_positions, - masked_lm_labels) = create_masked_lm_predictions( - tokens, masked_lm_prob, max_predictions_per_seq, vocab_words, rng, - do_whole_word_mask) - instance = TrainingInstance( - tokens=tokens, - segment_ids=segment_ids, - is_random_next=is_random_next, - masked_lm_positions=masked_lm_positions, - masked_lm_labels=masked_lm_labels) - instances.append(instance) - current_chunk = [] - current_length = 0 - i += 1 - - return instances - - -MaskedLmInstance = collections.namedtuple("MaskedLmInstance", - ["index", "label"]) - - -def create_masked_lm_predictions(tokens, masked_lm_prob, - max_predictions_per_seq, vocab_words, rng, - do_whole_word_mask): - """Creates the predictions for the masked LM objective.""" - - cand_indexes = [] - for (i, token) in enumerate(tokens): - if token == "[CLS]" or token == "[SEP]": - continue - # Whole Word Masking means that if we mask all of the wordpieces - # corresponding to an original word. When a word has been split into - # WordPieces, the first token does not have any marker and any subsequence - # tokens are prefixed with ##. So whenever we see the ## token, we - # append it to the previous set of word indexes. - # - # Note that Whole Word Masking does *not* change the training code - # at all -- we still predict each WordPiece independently, softmaxed - # over the entire vocabulary. - if (do_whole_word_mask and len(cand_indexes) >= 1 and - token.startswith("##")): - cand_indexes[-1].append(i) - else: - cand_indexes.append([i]) - - rng.shuffle(cand_indexes) - - output_tokens = list(tokens) - - num_to_predict = min(max_predictions_per_seq, - max(1, int(round(len(tokens) * masked_lm_prob)))) - - masked_lms = [] - covered_indexes = set() - for index_set in cand_indexes: - if len(masked_lms) >= num_to_predict: - break - # If adding a whole-word mask would exceed the maximum number of - # predictions, then just skip this candidate. 
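# Added example (illustrative, not from the original file): with
#   tokens = ["[CLS]", "un", "##afford", "##able", "prices", "[SEP]"]
# whole-word masking groups cand_indexes as [[1, 2, 3], [4]], so the three
# WordPieces of "unaffordable" are either masked together or, if that would
# exceed num_to_predict, skipped entirely by the check below; without
# whole-word masking each WordPiece forms its own single-element candidate.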
- if len(masked_lms) + len(index_set) > num_to_predict: - continue - is_any_index_covered = False - for index in index_set: - if index in covered_indexes: - is_any_index_covered = True - break - if is_any_index_covered: - continue - for index in index_set: - covered_indexes.add(index) - - masked_token = None - # 80% of the time, replace with [MASK] - if rng.random() < 0.8: - masked_token = "[MASK]" - else: - # 10% of the time, keep original - if rng.random() < 0.5: - masked_token = tokens[index] - # 10% of the time, replace with random word - else: - masked_token = vocab_words[rng.randint(0, len(vocab_words) - 1)] - - output_tokens[index] = masked_token - - masked_lms.append(MaskedLmInstance(index=index, label=tokens[index])) - assert len(masked_lms) <= num_to_predict - masked_lms = sorted(masked_lms, key=lambda x: x.index) - - masked_lm_positions = [] - masked_lm_labels = [] - for p in masked_lms: - masked_lm_positions.append(p.index) - masked_lm_labels.append(p.label) - - return (output_tokens, masked_lm_positions, masked_lm_labels) - - -def truncate_seq_pair(tokens_a, tokens_b, max_num_tokens, rng): - """Truncates a pair of sequences to a maximum sequence length.""" - while True: - total_length = len(tokens_a) + len(tokens_b) - if total_length <= max_num_tokens: - break - - trunc_tokens = tokens_a if len(tokens_a) > len(tokens_b) else tokens_b - assert len(trunc_tokens) >= 1 - - # We want to sometimes truncate from the front and sometimes from the - # back to add more randomness and avoid biases. - if rng.random() < 0.5: - del trunc_tokens[0] - else: - trunc_tokens.pop() - - -def main(_): - tokenizer = tokenization.FullTokenizer( - vocab_file=FLAGS.vocab_file, do_lower_case=FLAGS.do_lower_case) - - input_files = [] - for input_pattern in FLAGS.input_file.split(","): - input_files.extend(tf.io.gfile.glob(input_pattern)) - - logging.info("*** Reading from input files ***") - for input_file in input_files: - logging.info(" %s", input_file) - - rng = random.Random(FLAGS.random_seed) - instances = create_training_instances( - input_files, tokenizer, FLAGS.max_seq_length, FLAGS.dupe_factor, - FLAGS.short_seq_prob, FLAGS.masked_lm_prob, FLAGS.max_predictions_per_seq, - rng, FLAGS.do_whole_word_mask) - - output_files = FLAGS.output_file.split(",") - logging.info("*** Writing to output files ***") - for output_file in output_files: - logging.info(" %s", output_file) - - write_instance_to_example_files(instances, tokenizer, FLAGS.max_seq_length, - FLAGS.max_predictions_per_seq, output_files, - FLAGS.gzip_compress) - - -if __name__ == "__main__": - flags.mark_flag_as_required("input_file") - flags.mark_flag_as_required("output_file") - flags.mark_flag_as_required("vocab_file") - app.run(main) diff --git a/spaces/NCTCMumbai/NCTC/models/official/nlp/nhnet/testdata/crawled_articles/domain_0.com/url_000.html b/spaces/NCTCMumbai/NCTC/models/official/nlp/nhnet/testdata/crawled_articles/domain_0.com/url_000.html deleted file mode 100644 index 0a8549c1d274dc2ba29862860391e65bca391242..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/official/nlp/nhnet/testdata/crawled_articles/domain_0.com/url_000.html +++ /dev/null @@ -1,3 +0,0 @@ - - -Page Title 0 diff --git a/spaces/NCTCMumbai/NCTC/models/research/audioset/vggish/vggish_train_demo.py b/spaces/NCTCMumbai/NCTC/models/research/audioset/vggish/vggish_train_demo.py deleted file mode 100644 index d8be0f1774549b0b0ec4bdbcf840a16696fa6322..0000000000000000000000000000000000000000 --- 
a/spaces/NCTCMumbai/NCTC/models/research/audioset/vggish/vggish_train_demo.py +++ /dev/null @@ -1,194 +0,0 @@ -# Copyright 2017 The TensorFlow Authors All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# ============================================================================== - -r"""A simple demonstration of running VGGish in training mode. - -This is intended as a toy example that demonstrates how to use the VGGish model -definition within a larger model that adds more layers on top, and then train -the larger model. If you let VGGish train as well, then this allows you to -fine-tune the VGGish model parameters for your application. If you don't let -VGGish train, then you use VGGish as a feature extractor for the layers above -it. - -For this toy task, we are training a classifier to distinguish between three -classes: sine waves, constant signals, and white noise. We generate synthetic -waveforms from each of these classes, convert into shuffled batches of log mel -spectrogram examples with associated labels, and feed the batches into a model -that includes VGGish at the bottom and a couple of additional layers on top. We -also plumb in labels that are associated with the examples, which feed a label -loss used for training. - -Usage: - # Run training for 100 steps using a model checkpoint in the default - # location (vggish_model.ckpt in the current directory). Allow VGGish - # to get fine-tuned. - $ python vggish_train_demo.py --num_batches 100 - - # Same as before but run for fewer steps and don't change VGGish parameters - # and use a checkpoint in a different location - $ python vggish_train_demo.py --num_batches 50 \ - --train_vggish=False \ - --checkpoint /path/to/model/checkpoint -""" - -from __future__ import print_function - -from random import shuffle - -import numpy as np -import tensorflow.compat.v1 as tf -tf.disable_v2_behavior() -import tf_slim as slim - -import vggish_input -import vggish_params -import vggish_slim - -flags = tf.app.flags - -flags.DEFINE_integer( - 'num_batches', 30, - 'Number of batches of examples to feed into the model. Each batch is of ' - 'variable size and contains shuffled examples of each class of audio.') - -flags.DEFINE_boolean( - 'train_vggish', True, - 'If True, allow VGGish parameters to change during training, thus ' - 'fine-tuning VGGish. If False, VGGish parameters are fixed, thus using ' - 'VGGish as a fixed feature extractor.') - -flags.DEFINE_string( - 'checkpoint', 'vggish_model.ckpt', - 'Path to the VGGish checkpoint file.') - -FLAGS = flags.FLAGS - -_NUM_CLASSES = 3 - - -def _get_examples_batch(): - """Returns a shuffled batch of examples of all audio classes. - - Note that this is just a toy function because this is a simple demo intended - to illustrate how the training code might work. 
- - Returns: - a tuple (features, labels) where features is a NumPy array of shape - [batch_size, num_frames, num_bands] where the batch_size is variable and - each row is a log mel spectrogram patch of shape [num_frames, num_bands] - suitable for feeding VGGish, while labels is a NumPy array of shape - [batch_size, num_classes] where each row is a multi-hot label vector that - provides the labels for corresponding rows in features. - """ - # Make a waveform for each class. - num_seconds = 5 - sr = 44100 # Sampling rate. - t = np.linspace(0, num_seconds, int(num_seconds * sr)) # Time axis. - # Random sine wave. - freq = np.random.uniform(100, 1000) - sine = np.sin(2 * np.pi * freq * t) - # Random constant signal. - magnitude = np.random.uniform(-1, 1) - const = magnitude * t - # White noise. - noise = np.random.normal(-1, 1, size=t.shape) - - # Make examples of each signal and corresponding labels. - # Sine is class index 0, Const class index 1, Noise class index 2. - sine_examples = vggish_input.waveform_to_examples(sine, sr) - sine_labels = np.array([[1, 0, 0]] * sine_examples.shape[0]) - const_examples = vggish_input.waveform_to_examples(const, sr) - const_labels = np.array([[0, 1, 0]] * const_examples.shape[0]) - noise_examples = vggish_input.waveform_to_examples(noise, sr) - noise_labels = np.array([[0, 0, 1]] * noise_examples.shape[0]) - - # Shuffle (example, label) pairs across all classes. - all_examples = np.concatenate((sine_examples, const_examples, noise_examples)) - all_labels = np.concatenate((sine_labels, const_labels, noise_labels)) - labeled_examples = list(zip(all_examples, all_labels)) - shuffle(labeled_examples) - - # Separate and return the features and labels. - features = [example for (example, _) in labeled_examples] - labels = [label for (_, label) in labeled_examples] - return (features, labels) - - -def main(_): - with tf.Graph().as_default(), tf.Session() as sess: - # Define VGGish. - embeddings = vggish_slim.define_vggish_slim(FLAGS.train_vggish) - - # Define a shallow classification model and associated training ops on top - # of VGGish. - with tf.variable_scope('mymodel'): - # Add a fully connected layer with 100 units. - num_units = 100 - fc = slim.fully_connected(embeddings, num_units) - - # Add a classifier layer at the end, consisting of parallel logistic - # classifiers, one per class. This allows for multi-class tasks. - logits = slim.fully_connected( - fc, _NUM_CLASSES, activation_fn=None, scope='logits') - tf.sigmoid(logits, name='prediction') - - # Add training ops. - with tf.variable_scope('train'): - global_step = tf.Variable( - 0, name='global_step', trainable=False, - collections=[tf.GraphKeys.GLOBAL_VARIABLES, - tf.GraphKeys.GLOBAL_STEP]) - - # Labels are assumed to be fed as a batch multi-hot vectors, with - # a 1 in the position of each positive class label, and 0 elsewhere. - labels = tf.placeholder( - tf.float32, shape=(None, _NUM_CLASSES), name='labels') - - # Cross-entropy label loss. - xent = tf.nn.sigmoid_cross_entropy_with_logits( - logits=logits, labels=labels, name='xent') - loss = tf.reduce_mean(xent, name='loss_op') - tf.summary.scalar('loss', loss) - - # We use the same optimizer and hyperparameters as used to train VGGish. - optimizer = tf.train.AdamOptimizer( - learning_rate=vggish_params.LEARNING_RATE, - epsilon=vggish_params.ADAM_EPSILON) - optimizer.minimize(loss, global_step=global_step, name='train_op') - - # Initialize all variables in the model, and then load the pre-trained - # VGGish checkpoint. 
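# Illustrative sketch, not part of the original file: the label format described above
# ("batch of multi-hot vectors") in miniature, built the same way _get_examples_batch
# builds it for the three toy classes. The per-class counts here are hypothetical.
import numpy as np

def _demo_multi_hot_labels(num_sine=2, num_const=3, num_noise=1):
    # Class order: sine -> index 0, const -> index 1, noise -> index 2.
    labels = np.concatenate((
        np.array([[1, 0, 0]] * num_sine),
        np.array([[0, 1, 0]] * num_const),
        np.array([[0, 0, 1]] * num_noise),
    ))
    return labels  # shape (num_sine + num_const + num_noise, 3)


assert _demo_multi_hot_labels().shape == (6, 3)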
- sess.run(tf.global_variables_initializer()) - vggish_slim.load_vggish_slim_checkpoint(sess, FLAGS.checkpoint) - - # Locate all the tensors and ops we need for the training loop. - features_tensor = sess.graph.get_tensor_by_name( - vggish_params.INPUT_TENSOR_NAME) - labels_tensor = sess.graph.get_tensor_by_name('mymodel/train/labels:0') - global_step_tensor = sess.graph.get_tensor_by_name( - 'mymodel/train/global_step:0') - loss_tensor = sess.graph.get_tensor_by_name('mymodel/train/loss_op:0') - train_op = sess.graph.get_operation_by_name('mymodel/train/train_op') - - # The training loop. - for _ in range(FLAGS.num_batches): - (features, labels) = _get_examples_batch() - [num_steps, loss, _] = sess.run( - [global_step_tensor, loss_tensor, train_op], - feed_dict={features_tensor: features, labels_tensor: labels}) - print('Step %d: loss %g' % (num_steps, loss)) - -if __name__ == '__main__': - tf.app.run() diff --git a/spaces/NN520/AI/src/components/ui/separator.tsx b/spaces/NN520/AI/src/components/ui/separator.tsx deleted file mode 100644 index 6c55e0b2ca8e2436658a06748aadbff7cd700db0..0000000000000000000000000000000000000000 --- a/spaces/NN520/AI/src/components/ui/separator.tsx +++ /dev/null @@ -1,31 +0,0 @@ -'use client' - -import * as React from 'react' -import * as SeparatorPrimitive from '@radix-ui/react-separator' - -import { cn } from '@/lib/utils' - -const Separator = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->( - ( - { className, orientation = 'horizontal', decorative = true, ...props }, - ref - ) => ( - - ) -) -Separator.displayName = SeparatorPrimitive.Root.displayName - -export { Separator } diff --git a/spaces/NSect/VALL-E-X/README.md b/spaces/NSect/VALL-E-X/README.md deleted file mode 100644 index 9859ef8c84627abc0c4b8f19a3d7b96163c8af01..0000000000000000000000000000000000000000 --- a/spaces/NSect/VALL-E-X/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: VALL E X -emoji: 🎙 -colorFrom: green -colorTo: purple -sdk: gradio -sdk_version: 3.39.0 -app_file: app.py -pinned: false -license: mit -duplicated_from: Plachta/VALL-E-X ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/NbAiLab/whisper-norwegian-small/README.md b/spaces/NbAiLab/whisper-norwegian-small/README.md deleted file mode 100644 index 4634d4e2d56b9a991a3a9dc0813b20bffb8757e0..0000000000000000000000000000000000000000 --- a/spaces/NbAiLab/whisper-norwegian-small/README.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -title: Whisper Norwegian Demo -emoji: 🤫 -colorFrom: indigo -colorTo: red -sdk: gradio -sdk_version: 3.9.1 -app_file: app.py -pinned: false -tags: -- whisper-event -duplicated_from: whisper-event/whisper-demo ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Nesip/Aeala-GPT4-x-AlpacaDente2-30b/app.py b/spaces/Nesip/Aeala-GPT4-x-AlpacaDente2-30b/app.py deleted file mode 100644 index 5fe71c1b827fab4c7958169f89a15d270668a119..0000000000000000000000000000000000000000 --- a/spaces/Nesip/Aeala-GPT4-x-AlpacaDente2-30b/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/Aeala/GPT4-x-AlpacaDente2-30b").launch() \ No newline at end of file diff --git a/spaces/NimaBoscarino/climategan/climategan/__init__.py b/spaces/NimaBoscarino/climategan/climategan/__init__.py deleted file mode 100644 index edfc9bec8573c946217947a2329afd7d2d05ec08..0000000000000000000000000000000000000000 --- 
a/spaces/NimaBoscarino/climategan/climategan/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -from importlib import import_module -from pathlib import Path - -__all__ = [ - import_module(f".{f.stem}", __package__) - for f in Path(__file__).parent.glob("*.py") - if "__" not in f.stem -] -del import_module, Path diff --git a/spaces/Not-Grim-Refer/GitHub-Tool/README.md b/spaces/Not-Grim-Refer/GitHub-Tool/README.md deleted file mode 100644 index 89b664a47711d4fdb677c58f24e1f40739d9d7bc..0000000000000000000000000000000000000000 --- a/spaces/Not-Grim-Refer/GitHub-Tool/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: GitHub-Tool -emoji: 🌍 -colorFrom: blue -colorTo: gray -sdk: streamlit -sdk_version: 1.2.0 -app_file: app.py -pinned: true -license: afl-3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/optim/lr_scheduler/step_lr_scheduler.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/optim/lr_scheduler/step_lr_scheduler.py deleted file mode 100644 index 8cb20068606a4afd2983430b794fa24647de2e7b..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/optim/lr_scheduler/step_lr_scheduler.py +++ /dev/null @@ -1,86 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from collections.abc import Collection -from dataclasses import dataclass, field -from typing import List - -from omegaconf import II - -from fairseq.dataclass import FairseqDataclass -from fairseq.optim.lr_scheduler import FairseqLRScheduler, register_lr_scheduler - - -@dataclass -class StepLRScheduleConfig(FairseqDataclass): - warmup_updates: int = field( - default=0, - metadata={"help": "warmup the learning rate linearly for the first N updates"}, - ) - warmup_init_lr: float = field( - default=-1, - metadata={ - "help": "initial learning rate during warmup phase; default is cfg.lr" - }, - ) - lr: List[float] = field( - default=II("optimization.lr"), - metadata={"help": "max learning rate, must be more than cfg.min_lr"}, - ) - min_lr: float = field(default=0.0, metadata={"help": "min learning rate"}) - lr_deacy_period: int = field(default=25000, metadata={"help": "decay period"}) - lr_decay: float = field(default=0.5, metadata={"help": "decay factor"}) - - -@register_lr_scheduler("step", dataclass=StepLRScheduleConfig) -class StepLRSchedule(FairseqLRScheduler): - """Decay learning rate every k updates by a fixed factor - """ - - def __init__(self, cfg: StepLRScheduleConfig, fairseq_optimizer): - super().__init__(cfg, fairseq_optimizer) - self.max_lr = cfg.lr[0] if isinstance(cfg.lr, Collection) else cfg.lr - self.min_lr = cfg.min_lr - self.lr_deacy_period = cfg.lr_deacy_period - self.lr_decay = cfg.lr_decay - self.warmup_updates = cfg.warmup_updates - self.warmup_init_lr = ( - cfg.warmup_init_lr if cfg.warmup_init_lr >= 0 else self.min_lr - ) - - assert(self.lr_deacy_period > 0) - assert(self.lr_decay <= 1) - assert(self.min_lr >= 0) - assert(self.max_lr > self.min_lr) - - if cfg.warmup_updates > 0: - # linearly warmup for the first cfg.warmup_updates - self.warmup_lr_step = ( - (self.max_lr - self.warmup_init_lr) / self.warmup_updates - ) - else: - self.warmup_lr_step = 1 - - # initial learning rate - self.lr = self.warmup_init_lr - self.optimizer.set_lr(self.lr) - - def step(self, epoch, 
val_loss=None): - """Update the learning rate at the end of the given epoch.""" - super().step(epoch, val_loss) - # we don't change the learning rate at epoch boundaries - return self.optimizer.get_lr() - - def step_update(self, num_updates): - """Update the learning rate after each update.""" - if num_updates < self.cfg.warmup_updates: - self.lr = self.warmup_init_lr + num_updates * self.warmup_lr_step - else: - curr_updates = num_updates - self.cfg.warmup_updates - lr_mult = self.lr_decay ** (curr_updates // self.lr_deacy_period) - self.lr = max(self.max_lr * lr_mult, self.min_lr) - - self.optimizer.set_lr(self.lr) - return self.lr diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/cleaners.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/cleaners.py deleted file mode 100644 index e2e35c1a8cc4c628c5d05802677142c9a2122d2b..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/cleaners.py +++ /dev/null @@ -1,90 +0,0 @@ -""" from https://github.com/keithito/tacotron """ - -''' -Cleaners are transformations that run over the input text at both training and eval time. - -Cleaners can be selected by passing a comma-delimited list of cleaner names as the "cleaners" -hyperparameter. Some cleaners are English-specific. You'll typically want to use: - 1. "english_cleaners" for English text - 2. "transliteration_cleaners" for non-English text that can be transliterated to ASCII using - the Unidecode library (https://pypi.python.org/pypi/Unidecode) - 3. "basic_cleaners" if you do not want to transliterate (in this case, you should also update - the symbols in symbols.py to match your data). -''' - -import re -from unidecode import unidecode -from .numbers import normalize_numbers - - -# Regular expression matching whitespace: -_whitespace_re = re.compile(r'\s+') - -# List of (regular expression, replacement) pairs for abbreviations: -_abbreviations = [(re.compile('\\b%s\\.' 
% x[0], re.IGNORECASE), x[1]) for x in [ - ('mrs', 'misess'), - ('mr', 'mister'), - ('dr', 'doctor'), - ('st', 'saint'), - ('co', 'company'), - ('jr', 'junior'), - ('maj', 'major'), - ('gen', 'general'), - ('drs', 'doctors'), - ('rev', 'reverend'), - ('lt', 'lieutenant'), - ('hon', 'honorable'), - ('sgt', 'sergeant'), - ('capt', 'captain'), - ('esq', 'esquire'), - ('ltd', 'limited'), - ('col', 'colonel'), - ('ft', 'fort'), -]] - - -def expand_abbreviations(text): - for regex, replacement in _abbreviations: - text = re.sub(regex, replacement, text) - return text - - -def expand_numbers(text): - return normalize_numbers(text) - - -def lowercase(text): - return text.lower() - - -def collapse_whitespace(text): - return re.sub(_whitespace_re, ' ', text) - - -def convert_to_ascii(text): - return unidecode(text) - - -def basic_cleaners(text): - '''Basic pipeline that lowercases and collapses whitespace without transliteration.''' - text = lowercase(text) - text = collapse_whitespace(text) - return text - - -def transliteration_cleaners(text): - '''Pipeline for non-English text that transliterates to ASCII.''' - text = convert_to_ascii(text) - text = lowercase(text) - text = collapse_whitespace(text) - return text - - -def english_cleaners(text): - '''Pipeline for English text, including number and abbreviation expansion.''' - text = convert_to_ascii(text) - text = lowercase(text) - text = expand_numbers(text) - text = expand_abbreviations(text) - text = collapse_whitespace(text) - return text diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/wav2vec/unsupervised/data/__init__.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/wav2vec/unsupervised/data/__init__.py deleted file mode 100644 index d0545627efc9a6f9bb180e351ead519a2cb6dea7..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/wav2vec/unsupervised/data/__init__.py +++ /dev/null @@ -1,13 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from .extracted_features_dataset import ExtractedFeaturesDataset -from .random_input_dataset import RandomInputDataset - - -__all__ = [ - "ExtractedFeaturesDataset", - "RandomInputDataset", -] diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/rxf/__init__.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/rxf/__init__.py deleted file mode 100644 index b24cb6b797b4159c9862bab1f882ee6ae95614ab..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/rxf/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from . 
import rxf_src # noqa diff --git a/spaces/OlaWod/FreeVC/speaker_encoder/config.py b/spaces/OlaWod/FreeVC/speaker_encoder/config.py deleted file mode 100644 index 1c21312f3de971bfa008254c6035cebc09f05e4c..0000000000000000000000000000000000000000 --- a/spaces/OlaWod/FreeVC/speaker_encoder/config.py +++ /dev/null @@ -1,45 +0,0 @@ -librispeech_datasets = { - "train": { - "clean": ["LibriSpeech/train-clean-100", "LibriSpeech/train-clean-360"], - "other": ["LibriSpeech/train-other-500"] - }, - "test": { - "clean": ["LibriSpeech/test-clean"], - "other": ["LibriSpeech/test-other"] - }, - "dev": { - "clean": ["LibriSpeech/dev-clean"], - "other": ["LibriSpeech/dev-other"] - }, -} -libritts_datasets = { - "train": { - "clean": ["LibriTTS/train-clean-100", "LibriTTS/train-clean-360"], - "other": ["LibriTTS/train-other-500"] - }, - "test": { - "clean": ["LibriTTS/test-clean"], - "other": ["LibriTTS/test-other"] - }, - "dev": { - "clean": ["LibriTTS/dev-clean"], - "other": ["LibriTTS/dev-other"] - }, -} -voxceleb_datasets = { - "voxceleb1" : { - "train": ["VoxCeleb1/wav"], - "test": ["VoxCeleb1/test_wav"] - }, - "voxceleb2" : { - "train": ["VoxCeleb2/dev/aac"], - "test": ["VoxCeleb2/test_wav"] - } -} - -other_datasets = [ - "LJSpeech-1.1", - "VCTK-Corpus/wav48", -] - -anglophone_nationalites = ["australia", "canada", "ireland", "uk", "usa"] diff --git a/spaces/Omnibus/MusicGen/audiocraft/modules/conditioners.py b/spaces/Omnibus/MusicGen/audiocraft/modules/conditioners.py deleted file mode 100644 index 82792316024b88d4c5c38b0a28f443627771d509..0000000000000000000000000000000000000000 --- a/spaces/Omnibus/MusicGen/audiocraft/modules/conditioners.py +++ /dev/null @@ -1,990 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from collections import defaultdict -from copy import deepcopy -from dataclasses import dataclass, field -from itertools import chain -import logging -import math -import random -import re -import typing as tp -import warnings - -from einops import rearrange -from num2words import num2words -import spacy -from transformers import T5EncoderModel, T5Tokenizer # type: ignore -import torchaudio -import torch -from torch import nn -from torch import Tensor -import torch.nn.functional as F -from torch.nn.utils.rnn import pad_sequence - -from .streaming import StreamingModule -from .transformer import create_sin_embedding -from ..data.audio_dataset import SegmentInfo -from ..utils.autocast import TorchAutocast -from ..utils.utils import hash_trick, length_to_mask, collate - - -logger = logging.getLogger(__name__) -TextCondition = tp.Optional[str] # a text condition can be a string or None (if doesn't exist) -ConditionType = tp.Tuple[Tensor, Tensor] # condition, mask - - -class WavCondition(tp.NamedTuple): - wav: Tensor - length: Tensor - path: tp.List[tp.Optional[str]] = [] - - -def nullify_condition(condition: ConditionType, dim: int = 1): - """This function transforms an input condition to a null condition. - The way it is done by converting it to a single zero vector similarly - to how it is done inside WhiteSpaceTokenizer and NoopTokenizer. - - Args: - condition (ConditionType): a tuple of condition and mask (tp.Tuple[Tensor, Tensor]) - dim (int): the dimension that will be truncated (should be the time dimension) - WARNING!: dim should not be the batch dimension! 
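# Illustrative sketch, not part of the original file: what the nullification described
# above produces for the common case dim=1 (the time dimension). The condition
# collapses to a single all-zero step with an all-zero mask; shapes are hypothetical.
import torch

def _demo_nullify_shapes(B=2, T=7, D=16):
    cond = torch.randn(B, T, D)
    null_cond = 0.0 * cond[:, :1, :]        # [B, 1, D], all zeros
    null_mask = torch.zeros(B, 1).int()     # [B, 1], nothing marked as valid
    return null_cond.shape, null_mask.shape


assert _demo_nullify_shapes() == (torch.Size([2, 1, 16]), torch.Size([2, 1]))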
- Returns: - ConditionType: a tuple of null condition and mask - """ - assert dim != 0, "dim cannot be the batch dimension!" - assert type(condition) == tuple and \ - type(condition[0]) == Tensor and \ - type(condition[1]) == Tensor, "'nullify_condition' got an unexpected input type!" - cond, mask = condition - B = cond.shape[0] - last_dim = cond.dim() - 1 - out = cond.transpose(dim, last_dim) - out = 0. * out[..., :1] - out = out.transpose(dim, last_dim) - mask = torch.zeros((B, 1), device=out.device).int() - assert cond.dim() == out.dim() - return out, mask - - -def nullify_wav(wav: Tensor) -> WavCondition: - """Create a nullified WavCondition from a wav tensor with appropriate shape. - - Args: - wav (Tensor): tensor of shape [B, T] - Returns: - WavCondition: wav condition with nullified wav. - """ - null_wav, _ = nullify_condition((wav, torch.zeros_like(wav)), dim=wav.dim() - 1) - return WavCondition( - wav=null_wav, - length=torch.tensor([0] * wav.shape[0], device=wav.device), - path=['null_wav'] * wav.shape[0] - ) - - -@dataclass -class ConditioningAttributes: - text: tp.Dict[str, tp.Optional[str]] = field(default_factory=dict) - wav: tp.Dict[str, WavCondition] = field(default_factory=dict) - - def __getitem__(self, item): - return getattr(self, item) - - @property - def text_attributes(self): - return self.text.keys() - - @property - def wav_attributes(self): - return self.wav.keys() - - @property - def attributes(self): - return {"text": self.text_attributes, "wav": self.wav_attributes} - - def to_flat_dict(self): - return { - **{f"text.{k}": v for k, v in self.text.items()}, - **{f"wav.{k}": v for k, v in self.wav.items()}, - } - - @classmethod - def from_flat_dict(cls, x): - out = cls() - for k, v in x.items(): - kind, att = k.split(".") - out[kind][att] = v - return out - - -class SegmentWithAttributes(SegmentInfo): - """Base class for all dataclasses that are used for conditioning. - All child classes should implement `to_condition_attributes` that converts - the existing attributes to a dataclass of type ConditioningAttributes. - """ - def to_condition_attributes(self) -> ConditioningAttributes: - raise NotImplementedError() - - -class Tokenizer: - """Base class for all tokenizers - (in case we want to introduce more advances tokenizers in the future). - """ - def __call__(self, texts: tp.List[tp.Optional[str]]) -> tp.Tuple[Tensor, Tensor]: - raise NotImplementedError() - - -class WhiteSpaceTokenizer(Tokenizer): - """This tokenizer should be used for natural language descriptions. - For example: - ["he didn't, know he's going home.", 'shorter sentence'] => - [[78, 62, 31, 4, 78, 25, 19, 34], - [59, 77, 0, 0, 0, 0, 0, 0]] - """ - PUNCTUATIONS = "?:!.,;" - - def __init__(self, n_bins: int, pad_idx: int = 0, language: str = "en_core_web_sm", - lemma: bool = True, stopwords: bool = True) -> None: - self.n_bins = n_bins - self.pad_idx = pad_idx - self.lemma = lemma - self.stopwords = stopwords - try: - self.nlp = spacy.load(language) - except IOError: - spacy.cli.download(language) # type: ignore - self.nlp = spacy.load(language) - - @tp.no_type_check - def __call__( - self, - texts: tp.List[tp.Optional[str]], - return_text: bool = False - ) -> tp.Tuple[Tensor, Tensor]: - """Take a list of strings and convert them to a tensor of indices. - - Args: - texts (tp.List[str]): List of strings. - return_text (bool, optional): Whether to return text as additional tuple item. Defaults to False. - Returns: - tp.Tuple[Tensor, Tensor]: - - Indices of words in the LUT. 
- - And a mask indicating where the padding tokens are - """ - output, lengths = [], [] - texts = deepcopy(texts) - for i, text in enumerate(texts): - # if current sample doesn't have a certain attribute, replace with pad token - if text is None: - output.append(Tensor([self.pad_idx])) - lengths.append(0) - continue - - # convert numbers to words - text = re.sub(r"(\d+)", lambda x: num2words(int(x.group(0))), text) # type: ignore - # normalize text - text = self.nlp(text) # type: ignore - # remove stopwords - if self.stopwords: - text = [w for w in text if not w.is_stop] # type: ignore - # remove punctuations - text = [w for w in text if w.text not in self.PUNCTUATIONS] # type: ignore - # lemmatize if needed - text = [getattr(t, "lemma_" if self.lemma else "text") for t in text] # type: ignore - - texts[i] = " ".join(text) - lengths.append(len(text)) - # convert to tensor - tokens = Tensor([hash_trick(w, self.n_bins) for w in text]) - output.append(tokens) - - mask = length_to_mask(torch.IntTensor(lengths)).int() - padded_output = pad_sequence(output, padding_value=self.pad_idx).int().t() - if return_text: - return padded_output, mask, texts # type: ignore - return padded_output, mask - - -class NoopTokenizer(Tokenizer): - """This tokenizer should be used for global conditioners such as: artist, genre, key, etc. - The difference between this and WhiteSpaceTokenizer is that NoopTokenizer does not split - strings, so "Jeff Buckley" will get it's own index. Whereas WhiteSpaceTokenizer will - split it to ["Jeff", "Buckley"] and return an index per word. - - For example: - ["Queen", "ABBA", "Jeff Buckley"] => [43, 55, 101] - ["Metal", "Rock", "Classical"] => [0, 223, 51] - """ - def __init__(self, n_bins: int, pad_idx: int = 0): - self.n_bins = n_bins - self.pad_idx = pad_idx - - def __call__(self, texts: tp.List[tp.Optional[str]]) -> tp.Tuple[Tensor, Tensor]: - output, lengths = [], [] - for text in texts: - # if current sample doesn't have a certain attribute, replace with pad token - if text is None: - output.append(self.pad_idx) - lengths.append(0) - else: - output.append(hash_trick(text, self.n_bins)) - lengths.append(1) - - tokens = torch.LongTensor(output).unsqueeze(1) - mask = length_to_mask(torch.IntTensor(lengths)).int() - return tokens, mask - - -class BaseConditioner(nn.Module): - """Base model for all conditioner modules. We allow the output dim to be different - than the hidden dim for two reasons: 1) keep our LUTs small when the vocab is large; - 2) make all condition dims consistent. - - Args: - dim (int): Hidden dim of the model (text-encoder/LUT). - output_dim (int): Output dim of the conditioner. - """ - def __init__(self, dim, output_dim): - super().__init__() - self.dim = dim - self.output_dim = output_dim - self.output_proj = nn.Linear(dim, output_dim) - - def tokenize(self, *args, **kwargs) -> tp.Any: - """Should be any part of the processing that will lead to a synchronization - point, e.g. BPE tokenization with transfer to the GPU. - - The returned value will be saved and return later when calling forward(). - """ - raise NotImplementedError() - - def forward(self, inputs: tp.Any) -> ConditionType: - """Gets input that should be used as conditioning (e.g, genre, description or a waveform). - Outputs a ConditionType, after the input data was embedded as a dense vector. - - Returns: - ConditionType: - - A tensor of size [B, T, D] where B is the batch size, T is the length of the - output embedding and D is the dimension of the embedding. 
- - And a mask indicating where the padding tokens. - """ - raise NotImplementedError() - - -class TextConditioner(BaseConditioner): - ... - - -class LUTConditioner(TextConditioner): - """Lookup table TextConditioner. - - Args: - n_bins (int): Number of bins. - dim (int): Hidden dim of the model (text-encoder/LUT). - output_dim (int): Output dim of the conditioner. - tokenizer (str): Name of the tokenizer. - pad_idx (int, optional): Index for padding token. Defaults to 0. - """ - def __init__(self, n_bins: int, dim: int, output_dim: int, tokenizer: str, pad_idx: int = 0): - super().__init__(dim, output_dim) - self.embed = nn.Embedding(n_bins, dim) - self.tokenizer: Tokenizer - if tokenizer == "whitespace": - self.tokenizer = WhiteSpaceTokenizer(n_bins, pad_idx=pad_idx) - elif tokenizer == "noop": - self.tokenizer = NoopTokenizer(n_bins, pad_idx=pad_idx) - else: - raise ValueError(f"unrecognized tokenizer `{tokenizer}`.") - - def tokenize(self, x: tp.List[tp.Optional[str]]) -> tp.Tuple[torch.Tensor, torch.Tensor]: - device = self.embed.weight.device - tokens, mask = self.tokenizer(x) - tokens, mask = tokens.to(device), mask.to(device) - return tokens, mask - - def forward(self, inputs: tp.Tuple[torch.Tensor, torch.Tensor]) -> ConditionType: - tokens, mask = inputs - embeds = self.embed(tokens) - embeds = self.output_proj(embeds) - embeds = (embeds * mask.unsqueeze(-1)) - return embeds, mask - - -class T5Conditioner(TextConditioner): - """T5-based TextConditioner. - - Args: - name (str): Name of the T5 model. - output_dim (int): Output dim of the conditioner. - finetune (bool): Whether to fine-tune T5 at train time. - device (str): Device for T5 Conditioner. - autocast_dtype (tp.Optional[str], optional): Autocast dtype. - word_dropout (float, optional): Word dropout probability. - normalize_text (bool, optional): Whether to apply text normalization. - """ - MODELS = ["t5-small", "t5-base", "t5-large", "t5-3b", "t5-11b", - "google/flan-t5-small", "google/flan-t5-base", "google/flan-t5-large", - "google/flan-t5-xl", "google/flan-t5-xxl"] - MODELS_DIMS = { - "t5-small": 512, - "t5-base": 768, - "t5-large": 1024, - "t5-3b": 1024, - "t5-11b": 1024, - "google/flan-t5-small": 512, - "google/flan-t5-base": 768, - "google/flan-t5-large": 1024, - "google/flan-t5-3b": 1024, - "google/flan-t5-11b": 1024, - } - - def __init__(self, name: str, output_dim: int, finetune: bool, device: str, - autocast_dtype: tp.Optional[str] = 'float32', word_dropout: float = 0., - normalize_text: bool = False): - assert name in self.MODELS, f"unrecognized t5 model name (should in {self.MODELS})" - super().__init__(self.MODELS_DIMS[name], output_dim) - self.device = device - self.name = name - self.finetune = finetune - self.word_dropout = word_dropout - - if autocast_dtype is None or self.device == 'cpu': - self.autocast = TorchAutocast(enabled=False) - if self.device != 'cpu': - logger.warning("T5 has no autocast, this might lead to NaN") - else: - dtype = getattr(torch, autocast_dtype) - assert isinstance(dtype, torch.dtype) - logger.info(f"T5 will be evaluated with autocast as {autocast_dtype}") - self.autocast = TorchAutocast(enabled=True, device_type=self.device, dtype=dtype) - # Let's disable logging temporarily because T5 will vomit some errors otherwise. 
- # thanks https://gist.github.com/simon-weber/7853144 - previous_level = logging.root.manager.disable - logging.disable(logging.ERROR) - with warnings.catch_warnings(): - warnings.simplefilter("ignore") - try: - self.t5_tokenizer = T5Tokenizer.from_pretrained(name) - t5 = T5EncoderModel.from_pretrained(name).train(mode=finetune) - finally: - logging.disable(previous_level) - if finetune: - self.t5 = t5 - else: - # this makes sure that the t5 models is not part - # of the saved checkpoint - self.__dict__["t5"] = t5.to(device) - - self.normalize_text = normalize_text - if normalize_text: - self.text_normalizer = WhiteSpaceTokenizer(1, lemma=True, stopwords=True) - - def tokenize(self, x: tp.List[tp.Optional[str]]) -> tp.Dict[str, torch.Tensor]: - # if current sample doesn't have a certain attribute, replace with empty string - entries: tp.List[str] = [xi if xi is not None else "" for xi in x] - if self.normalize_text: - _, _, entries = self.text_normalizer(entries, return_text=True) - if self.word_dropout > 0. and self.training: - new_entries = [] - for entry in entries: - words = [word for word in entry.split(" ") if random.random() >= self.word_dropout] - new_entries.append(" ".join(words)) - entries = new_entries - - empty_idx = torch.LongTensor([i for i, xi in enumerate(entries) if xi == ""]) - - inputs = self.t5_tokenizer(entries, return_tensors="pt", padding=True).to(self.device) - mask = inputs["attention_mask"] - mask[empty_idx, :] = 0 # zero-out index where the input is non-existant - return inputs - - def forward(self, inputs: tp.Dict[str, torch.Tensor]) -> ConditionType: - mask = inputs["attention_mask"] - with torch.set_grad_enabled(self.finetune), self.autocast: - embeds = self.t5(**inputs).last_hidden_state - embeds = self.output_proj(embeds.to(self.output_proj.weight)) - embeds = (embeds * mask.unsqueeze(-1)) - return embeds, mask - - -class WaveformConditioner(BaseConditioner): - """Base class for all conditioners that take a waveform as input. - Classes that inherit must implement `_get_wav_embedding` that outputs - a continuous tensor, and `_downsampling_factor` that returns the down-sampling - factor of the embedding model. - - Args: - dim (int): The internal representation dimension. - output_dim (int): Output dimension. - device (tp.Union[torch.device, str]): Device. - """ - def __init__(self, dim: int, output_dim: int, device: tp.Union[torch.device, str]): - super().__init__(dim, output_dim) - self.device = device - - def tokenize(self, wav_length: WavCondition) -> WavCondition: - wav, length, path = wav_length - assert length is not None - return WavCondition(wav.to(self.device), length.to(self.device), path) - - def _get_wav_embedding(self, wav: Tensor) -> Tensor: - """Gets as input a wav and returns a dense vector of conditions.""" - raise NotImplementedError() - - def _downsampling_factor(self): - """Returns the downsampling factor of the embedding model.""" - raise NotImplementedError() - - def forward(self, inputs: WavCondition) -> ConditionType: - """ - Args: - input (WavCondition): Tuple of (waveform, lengths). - Returns: - ConditionType: Dense vector representing the conditioning along with its' mask. 
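# Illustrative sketch, not part of the original file: the masking trick used in
# T5Conditioner.tokenize above, in isolation. Entries whose text is None become empty
# strings and their attention-mask rows are zeroed, so a null condition contributes
# nothing downstream. The tensors below are hypothetical stand-ins for tokenizer output.
import torch

def _demo_zero_out_null_entries():
    entries = ["a calm piano piece", None, "upbeat synth pop"]
    entries = [e if e is not None else "" for e in entries]
    empty_idx = torch.LongTensor([i for i, e in enumerate(entries) if e == ""])
    mask = torch.ones(len(entries), 5, dtype=torch.long)  # pretend sequence length 5
    mask[empty_idx, :] = 0                                # null rows fully masked out
    return mask


# _demo_zero_out_null_entries()[1] is all zeros; rows 0 and 2 remain all ones.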
- """ - wav, lengths, path = inputs - with torch.no_grad(): - embeds = self._get_wav_embedding(wav) - embeds = embeds.to(self.output_proj.weight) - embeds = self.output_proj(embeds) - - if lengths is not None: - lengths = lengths / self._downsampling_factor() - mask = length_to_mask(lengths, max_len=embeds.shape[1]).int() # type: ignore - else: - mask = torch.ones_like(embeds) - embeds = (embeds * mask.unsqueeze(2).to(self.device)) - - return embeds, mask - - -class ChromaStemConditioner(WaveformConditioner): - """Chroma conditioner that uses DEMUCS to first filter out drums and bass. The is followed by - the insight the drums and bass often dominate the chroma, leading to the chroma not containing the - information about melody. - - Args: - output_dim (int): Output dimension for the conditioner. - sample_rate (int): Sample rate for the chroma extractor. - n_chroma (int): Number of chroma for the chroma extractor. - radix2_exp (int): Radix2 exponent for the chroma extractor. - duration (float): Duration used during training. This is later used for correct padding - in case we are using chroma as prefix. - match_len_on_eval (bool, optional): If True then all chromas are padded to the training - duration. Defaults to False. - eval_wavs (str, optional): Path to a json egg with waveform, this waveforms are used as - conditions during eval (for cases where we don't want to leak test conditions like MusicCaps). - Defaults to None. - n_eval_wavs (int, optional): Limits the number of waveforms used for conditioning. Defaults to 0. - device (tp.Union[torch.device, str], optional): Device for the conditioner. - **kwargs: Additional parameters for the chroma extractor. - """ - def __init__(self, output_dim: int, sample_rate: int, n_chroma: int, radix2_exp: int, - duration: float, match_len_on_eval: bool = True, eval_wavs: tp.Optional[str] = None, - n_eval_wavs: int = 0, device: tp.Union[torch.device, str] = "cpu", **kwargs): - from demucs import pretrained - super().__init__(dim=n_chroma, output_dim=output_dim, device=device) - self.autocast = TorchAutocast(enabled=device != "cpu", device_type=self.device, dtype=torch.float32) - self.sample_rate = sample_rate - self.match_len_on_eval = match_len_on_eval - self.duration = duration - self.__dict__["demucs"] = pretrained.get_model('htdemucs').to(device) - self.stem2idx = {'drums': 0, 'bass': 1, 'other': 2, 'vocal': 3} - self.stem_idx = torch.LongTensor([self.stem2idx['vocal'], self.stem2idx['other']]).to(device) - self.chroma = ChromaExtractor(sample_rate=sample_rate, n_chroma=n_chroma, radix2_exp=radix2_exp, - device=device, **kwargs) - self.chroma_len = self._get_chroma_len() - - def _downsampling_factor(self): - return self.chroma.winhop - - def _get_chroma_len(self): - """Get length of chroma during training""" - dummy_wav = torch.zeros((1, self.sample_rate * self.duration), device=self.device) - dummy_chr = self.chroma(dummy_wav) - return dummy_chr.shape[1] - - @torch.no_grad() - def _get_filtered_wav(self, wav): - from demucs.apply import apply_model - from demucs.audio import convert_audio - with self.autocast: - wav = convert_audio(wav, self.sample_rate, self.demucs.samplerate, self.demucs.audio_channels) - stems = apply_model(self.demucs, wav, device=self.device) - stems = stems[:, self.stem_idx] # extract stem - stems = stems.sum(1) # merge extracted stems - stems = stems.mean(1, keepdim=True) # mono - stems = convert_audio(stems, self.demucs.samplerate, self.sample_rate, 1) - return stems - - @torch.no_grad() - def _get_wav_embedding(self, 
wav): - # avoid 0-size tensors when we are working with null conds - if wav.shape[-1] == 1: - return self.chroma(wav) - stems = self._get_filtered_wav(wav) - chroma = self.chroma(stems) - - if self.match_len_on_eval: - b, t, c = chroma.shape - if t > self.chroma_len: - chroma = chroma[:, :self.chroma_len] - logger.debug(f'chroma was truncated! ({t} -> {chroma.shape[1]})') - elif t < self.chroma_len: - # chroma = F.pad(chroma, (0, 0, 0, self.chroma_len - t)) - n_repeat = int(math.ceil(self.chroma_len / t)) - chroma = chroma.repeat(1, n_repeat, 1) - chroma = chroma[:, :self.chroma_len] - logger.debug(f'chroma was zero-padded! ({t} -> {chroma.shape[1]})') - return chroma - - -class ChromaExtractor(nn.Module): - """Chroma extraction class, handles chroma extraction and quantization. - - Args: - sample_rate (int): Sample rate. - n_chroma (int): Number of chroma to consider. - radix2_exp (int): Radix2 exponent. - nfft (tp.Optional[int], optional): Number of FFT. - winlen (tp.Optional[int], optional): Window length. - winhop (tp.Optional[int], optional): Window hop size. - argmax (bool, optional): Whether to use argmax. Defaults to False. - norm (float, optional): Norm for chroma normalization. Defaults to inf. - device (tp.Union[torch.device, str], optional): Device to use. Defaults to cpu. - """ - def __init__(self, sample_rate: int, n_chroma: int = 12, radix2_exp: int = 12, - nfft: tp.Optional[int] = None, winlen: tp.Optional[int] = None, winhop: tp.Optional[int] = None, - argmax: bool = False, norm: float = torch.inf, device: tp.Union[torch.device, str] = "cpu"): - super().__init__() - from librosa import filters - self.device = device - self.autocast = TorchAutocast(enabled=device != "cpu", device_type=self.device, dtype=torch.float32) - self.winlen = winlen or 2 ** radix2_exp - self.nfft = nfft or self.winlen - self.winhop = winhop or (self.winlen // 4) - self.sr = sample_rate - self.n_chroma = n_chroma - self.norm = norm - self.argmax = argmax - self.window = torch.hann_window(self.winlen).to(device) - self.fbanks = torch.from_numpy(filters.chroma(sr=sample_rate, n_fft=self.nfft, tuning=0, - n_chroma=self.n_chroma)).to(device) - self.spec = torchaudio.transforms.Spectrogram(n_fft=self.nfft, win_length=self.winlen, - hop_length=self.winhop, power=2, center=True, - pad=0, normalized=True).to(device) - - def forward(self, wav): - with self.autocast: - T = wav.shape[-1] - # in case we are getting a wav that was dropped out (nullified) - # make sure wav length is no less that nfft - if T < self.nfft: - pad = self.nfft - T - r = 0 if pad % 2 == 0 else 1 - wav = F.pad(wav, (pad // 2, pad // 2 + r), 'constant', 0) - assert wav.shape[-1] == self.nfft, f'expected len {self.nfft} but got {wav.shape[-1]}' - spec = self.spec(wav).squeeze(1) - raw_chroma = torch.einsum("cf,...ft->...ct", self.fbanks, spec) - norm_chroma = torch.nn.functional.normalize(raw_chroma, p=self.norm, dim=-2, eps=1e-6) - norm_chroma = rearrange(norm_chroma, "b d t -> b t d") - - if self.argmax: - idx = norm_chroma.argmax(-1, keepdims=True) - norm_chroma[:] = 0 - norm_chroma.scatter_(dim=-1, index=idx, value=1) - - return norm_chroma - - -def dropout_condition(sample: ConditioningAttributes, condition_type: str, condition: str): - """Utility function for nullifying an attribute inside an ConditioningAttributes object. - If the condition is of type "wav", then nullify it using "nullify_condition". - If the condition is of any other type, set its' value to None. - Works in-place. 
- """ - if condition_type not in ["text", "wav"]: - raise ValueError( - "dropout_condition got an unexpected condition type!" - f" expected 'wav' or 'text' but got '{condition_type}'" - ) - - if condition not in getattr(sample, condition_type): - raise ValueError( - "dropout_condition received an unexpected condition!" - f" expected wav={sample.wav.keys()} and text={sample.text.keys()}" - f"but got '{condition}' of type '{condition_type}'!" - ) - - if condition_type == "wav": - wav, length, path = sample.wav[condition] - sample.wav[condition] = nullify_wav(wav) - else: - sample.text[condition] = None - - return sample - - -class DropoutModule(nn.Module): - """Base class for all dropout modules.""" - def __init__(self, seed: int = 1234): - super().__init__() - self.rng = torch.Generator() - self.rng.manual_seed(seed) - - -class AttributeDropout(DropoutModule): - """Applies dropout with a given probability per attribute. This is different from the behavior of - ClassifierFreeGuidanceDropout as this allows for attributes to be dropped out separately. For example, - "artist" can be dropped while "genre" remains. This is in contrast to ClassifierFreeGuidanceDropout - where if "artist" is dropped "genre" must also be dropped. - - Args: - p (tp.Dict[str, float]): A dict mapping between attributes and dropout probability. For example: - ... - "genre": 0.1, - "artist": 0.5, - "wav": 0.25, - ... - active_on_eval (bool, optional): Whether the dropout is active at eval. Default to False. - seed (int, optional): Random seed. - """ - def __init__(self, p: tp.Dict[str, tp.Dict[str, float]], active_on_eval: bool = False, seed: int = 1234): - super().__init__(seed=seed) - self.active_on_eval = active_on_eval - # construct dict that return the values from p otherwise 0 - self.p = {} - for condition_type, probs in p.items(): - self.p[condition_type] = defaultdict(lambda: 0, probs) - - def forward(self, samples: tp.List[ConditioningAttributes]) -> tp.List[ConditioningAttributes]: - """ - Args: - samples (tp.List[ConditioningAttributes]): List of conditions. - Returns: - tp.List[ConditioningAttributes]: List of conditions after certain attributes were set to None. - """ - if not self.training and not self.active_on_eval: - return samples - - samples = deepcopy(samples) - - for condition_type, ps in self.p.items(): # for condition types [text, wav] - for condition, p in ps.items(): # for attributes of each type (e.g., [artist, genre]) - if torch.rand(1, generator=self.rng).item() < p: - for sample in samples: - dropout_condition(sample, condition_type, condition) - - return samples - - def __repr__(self): - return f"AttributeDropout({dict(self.p)})" - - -class ClassifierFreeGuidanceDropout(DropoutModule): - """Applies Classifier Free Guidance dropout, meaning all attributes - are dropped with the same probability. - - Args: - p (float): Probability to apply condition dropout during training. - seed (int): Random seed. - """ - def __init__(self, p: float, seed: int = 1234): - super().__init__(seed=seed) - self.p = p - - def forward(self, samples: tp.List[ConditioningAttributes]) -> tp.List[ConditioningAttributes]: - """ - Args: - samples (tp.List[ConditioningAttributes]): List of conditions. - Returns: - tp.List[ConditioningAttributes]: List of conditions after all attributes were set to None. 
- """ - if not self.training: - return samples - - # decide on which attributes to drop in a batched fashion - drop = torch.rand(1, generator=self.rng).item() < self.p - if not drop: - return samples - - # nullify conditions of all attributes - samples = deepcopy(samples) - - for condition_type in ["wav", "text"]: - for sample in samples: - for condition in sample.attributes[condition_type]: - dropout_condition(sample, condition_type, condition) - - return samples - - def __repr__(self): - return f"ClassifierFreeGuidanceDropout(p={self.p})" - - -class ConditioningProvider(nn.Module): - """Main class to provide conditions given all the supported conditioners. - - Args: - conditioners (dict): Dictionary of conditioners. - merge_text_conditions_p (float, optional): Probability to merge all text sources - into a single text condition. Defaults to 0. - drop_desc_p (float, optional): Probability to drop the original description - when merging all text sources into a single text condition. Defaults to 0. - device (tp.Union[torch.device, str], optional): Device for conditioners and output condition types. - """ - def __init__( - self, - conditioners: tp.Dict[str, BaseConditioner], - merge_text_conditions_p: float = 0, - drop_desc_p: float = 0, - device: tp.Union[torch.device, str] = "cpu", - ): - super().__init__() - self.device = device - self.merge_text_conditions_p = merge_text_conditions_p - self.drop_desc_p = drop_desc_p - self.conditioners = nn.ModuleDict(conditioners) - - @property - def text_conditions(self): - return [k for k, v in self.conditioners.items() if isinstance(v, TextConditioner)] - - @property - def wav_conditions(self): - return [k for k, v in self.conditioners.items() if isinstance(v, WaveformConditioner)] - - @property - def has_wav_condition(self): - return len(self.wav_conditions) > 0 - - def tokenize(self, inputs: tp.List[ConditioningAttributes]) -> tp.Dict[str, tp.Any]: - """Match attributes/wavs with existing conditioners in self, and compute tokenize them accordingly. - This should be called before starting any real GPU work to avoid synchronization points. - This will return a dict matching conditioner names to their arbitrary tokenized representations. - - Args: - inputs (list[ConditioningAttribres]): List of ConditioningAttributes objects containing - text and wav conditions. - """ - assert all([type(x) == ConditioningAttributes for x in inputs]), \ - "got unexpected types input for conditioner! should be tp.List[ConditioningAttributes]" \ - f" but types were {set([type(x) for x in inputs])}" - - output = {} - text = self._collate_text(inputs) - wavs = self._collate_wavs(inputs) - - assert set(text.keys() | wavs.keys()).issubset(set(self.conditioners.keys())), \ - f"got an unexpected attribute! Expected {self.conditioners.keys()}, got {text.keys(), wavs.keys()}" - - for attribute, batch in chain(text.items(), wavs.items()): - output[attribute] = self.conditioners[attribute].tokenize(batch) - return output - - def forward(self, tokenized: tp.Dict[str, tp.Any]) -> tp.Dict[str, ConditionType]: - """Compute pairs of `(embedding, mask)` using the configured conditioners - and the tokenized representations. The output is for example: - - { - "genre": (torch.Tensor([B, 1, D_genre]), torch.Tensor([B, 1])), - "description": (torch.Tensor([B, T_desc, D_desc]), torch.Tensor([B, T_desc])), - ... - } - - Args: - tokenized (dict): Dict of tokenized representations as returned by `tokenize()`. 
- """ - output = {} - for attribute, inputs in tokenized.items(): - condition, mask = self.conditioners[attribute](inputs) - output[attribute] = (condition, mask) - return output - - def _collate_text(self, samples: tp.List[ConditioningAttributes]) -> tp.Dict[str, tp.List[tp.Optional[str]]]: - """Given a list of ConditioningAttributes objects, compile a dictionary where the keys - are the attributes and the values are the aggregated input per attribute. - For example: - Input: - [ - ConditioningAttributes(text={"genre": "Rock", "description": "A rock song with a guitar solo"}, wav=...), - ConditioningAttributes(text={"genre": "Hip-hop", "description": "A hip-hop verse"}, wav=...), - ] - Output: - { - "genre": ["Rock", "Hip-hop"], - "description": ["A rock song with a guitar solo", "A hip-hop verse"] - } - """ - batch_per_attribute: tp.Dict[str, tp.List[tp.Optional[str]]] = defaultdict(list) - - def _merge_conds(cond, merge_text_conditions_p=0, drop_desc_p=0): - def is_valid(k, v): - k_valid = k in ['key', 'bpm', 'genre', 'moods', 'instrument'] - v_valid = v is not None and isinstance(v, (int, float, str, list)) - return k_valid and v_valid - - def process_value(v): - if isinstance(v, (int, float, str)): - return v - if isinstance(v, list): - return ", ".join(v) - else: - RuntimeError(f"unknown type for text value! ({type(v), v})") - - desc = cond.text['description'] - meta_data = "" - if random.uniform(0, 1) < merge_text_conditions_p: - meta_pairs = [f'{k}: {process_value(v)}' for k, v in cond.text.items() if is_valid(k, v)] - random.shuffle(meta_pairs) - meta_data = ". ".join(meta_pairs) - desc = desc if not random.uniform(0, 1) < drop_desc_p else None - - if desc is None: - desc = meta_data if len(meta_data) > 1 else None - else: - desc = desc.rstrip('.') + ". " + meta_data - cond.text['description'] = desc.strip() if desc else None - - if self.training and self.merge_text_conditions_p: - for sample in samples: - _merge_conds(sample, self.merge_text_conditions_p, self.drop_desc_p) - - texts = [x.text for x in samples] - for text in texts: - for condition in self.text_conditions: - batch_per_attribute[condition].append(text[condition]) - - return batch_per_attribute - - def _collate_wavs(self, samples: tp.List[ConditioningAttributes]): - """Generate a dict where the keys are attributes by which we fetch similar wavs, - and the values are Tensors of wavs according to said attribtues. - - *Note*: by the time the samples reach this function, each sample should have some waveform - inside the "wav" attribute. It should be either: - 1. A real waveform - 2. A null waveform due to the sample having no similar waveforms (nullified by the dataset) - 3. A null waveform due to it being dropped in a dropout module (nullified by dropout) - - Args: - samples (tp.List[ConditioningAttributes]): List of ConditioningAttributes samples. - Returns: - dict: A dicionary mapping an attribute name to wavs. 
- """ - wavs = defaultdict(list) - lens = defaultdict(list) - paths = defaultdict(list) - out = {} - - for sample in samples: - for attribute in self.wav_conditions: - wav, length, path = sample.wav[attribute] - wavs[attribute].append(wav.flatten()) - lens[attribute].append(length) - paths[attribute].append(path) - - # stack all wavs to a single tensor - for attribute in self.wav_conditions: - stacked_wav, _ = collate(wavs[attribute], dim=0) - out[attribute] = WavCondition(stacked_wav.unsqueeze(1), - torch.cat(lens['self_wav']), paths[attribute]) # type: ignore - - return out - - -class ConditionFuser(StreamingModule): - """Condition fuser handles the logic to combine the different conditions - to the actual model input. - - Args: - fuse2cond (tp.Dict[str, str]): A dictionary that says how to fuse - each condition. For example: - { - "prepend": ["description"], - "sum": ["genre", "bpm"], - "cross": ["description"], - } - cross_attention_pos_emb (bool, optional): Use positional embeddings in cross attention. - cross_attention_pos_emb_scale (int): Scale for positional embeddings in cross attention if used. - """ - FUSING_METHODS = ["sum", "prepend", "cross", "input_interpolate"] - - def __init__(self, fuse2cond: tp.Dict[str, tp.List[str]], cross_attention_pos_emb: bool = False, - cross_attention_pos_emb_scale: float = 1.0): - super().__init__() - assert all( - [k in self.FUSING_METHODS for k in fuse2cond.keys()] - ), f"got invalid fuse method, allowed methods: {self.FUSING_MEHTODS}" - self.cross_attention_pos_emb = cross_attention_pos_emb - self.cross_attention_pos_emb_scale = cross_attention_pos_emb_scale - self.fuse2cond: tp.Dict[str, tp.List[str]] = fuse2cond - self.cond2fuse: tp.Dict[str, str] = {} - for fuse_method, conditions in fuse2cond.items(): - for condition in conditions: - self.cond2fuse[condition] = fuse_method - - def forward( - self, - input: Tensor, - conditions: tp.Dict[str, ConditionType] - ) -> tp.Tuple[Tensor, tp.Optional[Tensor]]: - """Fuse the conditions to the provided model input. - - Args: - input (Tensor): Transformer input. - conditions (tp.Dict[str, ConditionType]): Dict of conditions. - Returns: - tp.Tuple[Tensor, Tensor]: The first tensor is the transformer input - after the conditions have been fused. The second output tensor is the tensor - used for cross-attention or None if no cross attention inputs exist. 
- """ - B, T, _ = input.shape - - if 'offsets' in self._streaming_state: - first_step = False - offsets = self._streaming_state['offsets'] - else: - first_step = True - offsets = torch.zeros(input.shape[0], dtype=torch.long, device=input.device) - - assert set(conditions.keys()).issubset(set(self.cond2fuse.keys())), \ - f"given conditions contain unknown attributes for fuser, " \ - f"expected {self.cond2fuse.keys()}, got {conditions.keys()}" - cross_attention_output = None - for cond_type, (cond, cond_mask) in conditions.items(): - op = self.cond2fuse[cond_type] - if op == "sum": - input += cond - elif op == "input_interpolate": - cond = rearrange(cond, "b t d -> b d t") - cond = F.interpolate(cond, size=input.shape[1]) - input += rearrange(cond, "b d t -> b t d") - elif op == "prepend": - if first_step: - input = torch.cat([cond, input], dim=1) - elif op == "cross": - if cross_attention_output is not None: - cross_attention_output = torch.cat([cross_attention_output, cond], dim=1) - else: - cross_attention_output = cond - else: - raise ValueError(f"unknown op ({op})") - - if self.cross_attention_pos_emb and cross_attention_output is not None: - positions = torch.arange( - cross_attention_output.shape[1], - device=cross_attention_output.device - ).view(1, -1, 1) - pos_emb = create_sin_embedding(positions, cross_attention_output.shape[-1]) - cross_attention_output = cross_attention_output + self.cross_attention_pos_emb_scale * pos_emb - - if self._is_streaming: - self._streaming_state['offsets'] = offsets + T - - return input, cross_attention_output diff --git a/spaces/PSLD/PSLD/diffusion-posterior-sampling/guided_diffusion/posterior_mean_variance.py b/spaces/PSLD/PSLD/diffusion-posterior-sampling/guided_diffusion/posterior_mean_variance.py deleted file mode 100644 index 50e20687d03f07572b834598b26ee7f5a0b2b4db..0000000000000000000000000000000000000000 --- a/spaces/PSLD/PSLD/diffusion-posterior-sampling/guided_diffusion/posterior_mean_variance.py +++ /dev/null @@ -1,264 +0,0 @@ -from abc import ABC, abstractmethod - -import numpy as np -import torch - -from util.img_utils import dynamic_thresholding - - - -# ==================== -# Model Mean Processor -# ==================== - -__MODEL_MEAN_PROCESSOR__ = {} - -def register_mean_processor(name: str): - def wrapper(cls): - if __MODEL_MEAN_PROCESSOR__.get(name, None): - raise NameError(f"Name {name} is already registerd.") - __MODEL_MEAN_PROCESSOR__[name] = cls - return cls - return wrapper - -def get_mean_processor(name: str, **kwargs): - if __MODEL_MEAN_PROCESSOR__.get(name, None) is None: - raise NameError(f"Name {name} is not defined.") - return __MODEL_MEAN_PROCESSOR__[name](**kwargs) - -class MeanProcessor(ABC): - """Predict x_start and calculate mean value""" - @abstractmethod - def __init__(self, betas, dynamic_threshold, clip_denoised): - self.dynamic_threshold = dynamic_threshold - self.clip_denoised = clip_denoised - - @abstractmethod - def get_mean_and_xstart(self, x, t, model_output): - pass - - def process_xstart(self, x): - if self.dynamic_threshold: - x = dynamic_thresholding(x, s=0.95) - if self.clip_denoised: - x = x.clamp(-1, 1) - return x - -@register_mean_processor(name='previous_x') -class PreviousXMeanProcessor(MeanProcessor): - def __init__(self, betas, dynamic_threshold, clip_denoised): - super().__init__(betas, dynamic_threshold, clip_denoised) - alphas = 1.0 - betas - alphas_cumprod = np.cumprod(alphas, axis=0) - alphas_cumprod_prev = np.append(1.0, alphas_cumprod[:-1]) - - self.posterior_mean_coef1 = betas * 
np.sqrt(alphas_cumprod_prev) / (1.0-alphas_cumprod) - self.posterior_mean_coef2 = (1.0 - alphas_cumprod_prev) * np.sqrt(alphas) / (1.0 - alphas_cumprod) - - def predict_xstart(self, x_t, t, x_prev): - coef1 = extract_and_expand(1.0/self.posterior_mean_coef1, t, x_t) - coef2 = extract_and_expand(self.posterior_mean_coef2/self.posterior_mean_coef1, t, x_t) - return coef1 * x_prev - coef2 * x_t - - def get_mean_and_xstart(self, x, t, model_output): - mean = model_output - pred_xstart = self.process_xstart(self.predict_xstart(x, t, model_output)) - return mean, pred_xstart - -@register_mean_processor(name='start_x') -class StartXMeanProcessor(MeanProcessor): - def __init__(self, betas, dynamic_threshold, clip_denoised): - super().__init__(betas, dynamic_threshold, clip_denoised) - alphas = 1.0 - betas - alphas_cumprod = np.cumprod(alphas, axis=0) - alphas_cumprod_prev = np.append(1.0, alphas_cumprod[:-1]) - - self.posterior_mean_coef1 = betas * np.sqrt(alphas_cumprod_prev) / (1.0-alphas_cumprod) - self.posterior_mean_coef2 = (1.0 - alphas_cumprod_prev) * np.sqrt(alphas) / (1.0 - alphas_cumprod) - - def q_posterior_mean(self, x_start, x_t, t): - """ - Compute the mean of the diffusion posteriro: - q(x_{t-1} | x_t, x_0) - """ - assert x_start.shape == x_t.shape - coef1 = extract_and_expand(self.posterior_mean_coef1, t, x_start) - coef2 = extract_and_expand(self.posterior_mean_coef2, t, x_t) - - return coef1 * x_start + coef2 * x_t - - def get_mean_and_xstart(self, x, t, model_output): - pred_xstart = self.process_xstart(model_output) - mean = self.q_posterior_mean(x_start=pred_xstart, x_t=x, t=t) - - return mean, pred_xstart - -@register_mean_processor(name='epsilon') -class EpsilonXMeanProcessor(MeanProcessor): - def __init__(self, betas, dynamic_threshold, clip_denoised): - super().__init__(betas, dynamic_threshold, clip_denoised) - alphas = 1.0 - betas - alphas_cumprod = np.cumprod(alphas, axis=0) - alphas_cumprod_prev = np.append(1.0, alphas_cumprod[:-1]) - - self.sqrt_recip_alphas_cumprod = np.sqrt(1.0 / alphas_cumprod) - self.sqrt_recipm1_alphas_cumprod = np.sqrt(1.0 / alphas_cumprod - 1) - self.posterior_mean_coef1 = betas * np.sqrt(alphas_cumprod_prev) / (1.0-alphas_cumprod) - self.posterior_mean_coef2 = (1.0 - alphas_cumprod_prev) * np.sqrt(alphas) / (1.0 - alphas_cumprod) - - - def q_posterior_mean(self, x_start, x_t, t): - """ - Compute the mean of the diffusion posteriro: - q(x_{t-1} | x_t, x_0) - """ - assert x_start.shape == x_t.shape - coef1 = extract_and_expand(self.posterior_mean_coef1, t, x_start) - coef2 = extract_and_expand(self.posterior_mean_coef2, t, x_t) - return coef1 * x_start + coef2 * x_t - - def predict_xstart(self, x_t, t, eps): - coef1 = extract_and_expand(self.sqrt_recip_alphas_cumprod, t, x_t) - coef2 = extract_and_expand(self.sqrt_recipm1_alphas_cumprod, t, eps) - return coef1 * x_t - coef2 * eps - - def get_mean_and_xstart(self, x, t, model_output): - pred_xstart = self.process_xstart(self.predict_xstart(x, t, model_output)) - mean = self.q_posterior_mean(pred_xstart, x, t) - - return mean, pred_xstart - -# ========================= -# Model Variance Processor -# ========================= - -__MODEL_VAR_PROCESSOR__ = {} - -def register_var_processor(name: str): - def wrapper(cls): - if __MODEL_VAR_PROCESSOR__.get(name, None): - raise NameError(f"Name {name} is already registerd.") - __MODEL_VAR_PROCESSOR__[name] = cls - return cls - return wrapper - -def get_var_processor(name: str, **kwargs): - if __MODEL_VAR_PROCESSOR__.get(name, None) is None: - raise 
NameError(f"Name {name} is not defined.") - return __MODEL_VAR_PROCESSOR__[name](**kwargs) - -class VarianceProcessor(ABC): - @abstractmethod - def __init__(self, betas): - pass - - @abstractmethod - def get_variance(self, x, t): - pass - -@register_var_processor(name='fixed_small') -class FixedSmallVarianceProcessor(VarianceProcessor): - def __init__(self, betas): - alphas = 1.0 - betas - alphas_cumprod = np.cumprod(alphas, axis=0) - alphas_cumprod_prev = np.append(1.0, alphas_cumprod[:-1]) - # calculations for posterior q(x_{t-1} | x_t, x_0) - self.posterior_variance = ( - betas * (1.0 - alphas_cumprod_prev) / (1.0 - alphas_cumprod) - ) - - def get_variance(self, x, t): - model_variance = self.posterior_variance - model_log_variance = np.log(model_variance) - - model_variance = extract_and_expand(model_variance, t, x) - model_log_variance = extract_and_expand(model_log_variance, t, x) - - return model_variance, model_log_variance - -@register_var_processor(name='fixed_large') -class FixedLargeVarianceProcessor(VarianceProcessor): - def __init__(self, betas): - self.betas = betas - - alphas = 1.0 - betas - alphas_cumprod = np.cumprod(alphas, axis=0) - alphas_cumprod_prev = np.append(1.0, alphas_cumprod[:-1]) - # calculations for posterior q(x_{t-1} | x_t, x_0) - self.posterior_variance = ( - betas * (1.0 - alphas_cumprod_prev) / (1.0 - alphas_cumprod) - ) - - def get_variance(self, x, t): - model_variance = np.append(self.posterior_variance[1], self.betas[1:]) - model_log_variance = np.log(model_variance) - - model_variance = extract_and_expand(model_variance, t, x) - model_log_variance = extract_and_expand(model_log_variance, t, x) - - return model_variance, model_log_variance - -@register_var_processor(name='learned') -class LearnedVarianceProcessor(VarianceProcessor): - def __init__(self, betas): - pass - - def get_variance(self, x, t): - model_log_variance = x - model_variance = torch.exp(model_log_variance) - return model_variance, model_log_variance - -@register_var_processor(name='learned_range') -class LearnedRangeVarianceProcessor(VarianceProcessor): - def __init__(self, betas): - self.betas = betas - - alphas = 1.0 - betas - alphas_cumprod = np.cumprod(alphas, axis=0) - alphas_cumprod_prev = np.append(1.0, alphas_cumprod[:-1]) - - # calculations for posterior q(x_{t-1} | x_t, x_0) - posterior_variance = ( - betas * (1.0 - alphas_cumprod_prev) / (1.0 - alphas_cumprod) - ) - # log calculation clipped because the posterior variance is 0 at the - # beginning of the diffusion chain. 
- self.posterior_log_variance_clipped = np.log( - np.append(posterior_variance[1], posterior_variance[1:]) - ) - - def get_variance(self, x, t): - model_var_values = x - min_log = self.posterior_log_variance_clipped - max_log = np.log(self.betas) - - min_log = extract_and_expand(min_log, t, x) - max_log = extract_and_expand(max_log, t, x) - - # The model_var_values is [-1, 1] for [min_var, max_var] - frac = (model_var_values + 1.0) / 2.0 - model_log_variance = frac * max_log + (1-frac) * min_log - model_variance = torch.exp(model_log_variance) - return model_variance, model_log_variance - -# ================ -# Helper function -# ================ - -def extract_and_expand(array, time, target): - array = torch.from_numpy(array).to(target.device)[time].float() - while array.ndim < target.ndim: - array = array.unsqueeze(-1) - return array.expand_as(target) - - -def expand_as(array, target): - if isinstance(array, np.ndarray): - array = torch.from_numpy(array) - elif isinstance(array, np.float): - array = torch.tensor([array]) - - while array.ndim < target.ndim: - array = array.unsqueeze(-1) - - return array.expand_as(target).to(target.device) diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/language/scheme/spec.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/language/scheme/spec.go deleted file mode 100644 index 841981867de295342b455efd4150a6b245c38b27..0000000000000000000000000000000000000000 Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/language/scheme/spec.go and /dev/null differ diff --git a/spaces/Pie31415/control-animation/annotator/uniformer/mmseg/models/backbones/resnet.py b/spaces/Pie31415/control-animation/annotator/uniformer/mmseg/models/backbones/resnet.py deleted file mode 100644 index 4e52bf048d28ecb069db4728e5f05ad85ac53198..0000000000000000000000000000000000000000 --- a/spaces/Pie31415/control-animation/annotator/uniformer/mmseg/models/backbones/resnet.py +++ /dev/null @@ -1,688 +0,0 @@ -import torch.nn as nn -import torch.utils.checkpoint as cp -from annotator.uniformer.mmcv.cnn import (build_conv_layer, build_norm_layer, build_plugin_layer, - constant_init, kaiming_init) -from annotator.uniformer.mmcv.runner import load_checkpoint -from annotator.uniformer.mmcv.utils.parrots_wrapper import _BatchNorm - -from annotator.uniformer.mmseg.utils import get_root_logger -from ..builder import BACKBONES -from ..utils import ResLayer - - -class BasicBlock(nn.Module): - """Basic block for ResNet.""" - - expansion = 1 - - def __init__(self, - inplanes, - planes, - stride=1, - dilation=1, - downsample=None, - style='pytorch', - with_cp=False, - conv_cfg=None, - norm_cfg=dict(type='BN'), - dcn=None, - plugins=None): - super(BasicBlock, self).__init__() - assert dcn is None, 'Not implemented yet.' - assert plugins is None, 'Not implemented yet.' 
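Stepping back to the diffusion helpers in posterior_mean_variance.py above: `extract_and_expand` is the gather-and-broadcast utility that turns a 1-D per-timestep schedule into a tensor broadcastable against an image batch. A self-contained restatement with a usage example (the schedule and shapes below are made up):

```python
import numpy as np
import torch

def extract_and_expand(array: np.ndarray, time: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    # Gather the schedule entries for each sample's timestep, then append
    # trailing singleton dims until the result broadcasts over `target`.
    out = torch.from_numpy(array).to(target.device)[time].float()
    while out.ndim < target.ndim:
        out = out.unsqueeze(-1)
    return out.expand_as(target)

betas = np.linspace(1e-4, 2e-2, 1000)      # illustrative beta schedule
x_t = torch.randn(4, 3, 64, 64)            # fake noisy image batch
t = torch.randint(0, 1000, (4,))           # one timestep index per sample

coef = extract_and_expand(betas, t, x_t)
print(coef.shape)                          # torch.Size([4, 3, 64, 64])
```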
- - self.norm1_name, norm1 = build_norm_layer(norm_cfg, planes, postfix=1) - self.norm2_name, norm2 = build_norm_layer(norm_cfg, planes, postfix=2) - - self.conv1 = build_conv_layer( - conv_cfg, - inplanes, - planes, - 3, - stride=stride, - padding=dilation, - dilation=dilation, - bias=False) - self.add_module(self.norm1_name, norm1) - self.conv2 = build_conv_layer( - conv_cfg, planes, planes, 3, padding=1, bias=False) - self.add_module(self.norm2_name, norm2) - - self.relu = nn.ReLU(inplace=True) - self.downsample = downsample - self.stride = stride - self.dilation = dilation - self.with_cp = with_cp - - @property - def norm1(self): - """nn.Module: normalization layer after the first convolution layer""" - return getattr(self, self.norm1_name) - - @property - def norm2(self): - """nn.Module: normalization layer after the second convolution layer""" - return getattr(self, self.norm2_name) - - def forward(self, x): - """Forward function.""" - - def _inner_forward(x): - identity = x - - out = self.conv1(x) - out = self.norm1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.norm2(out) - - if self.downsample is not None: - identity = self.downsample(x) - - out += identity - - return out - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(_inner_forward, x) - else: - out = _inner_forward(x) - - out = self.relu(out) - - return out - - -class Bottleneck(nn.Module): - """Bottleneck block for ResNet. - - If style is "pytorch", the stride-two layer is the 3x3 conv layer, if it is - "caffe", the stride-two layer is the first 1x1 conv layer. - """ - - expansion = 4 - - def __init__(self, - inplanes, - planes, - stride=1, - dilation=1, - downsample=None, - style='pytorch', - with_cp=False, - conv_cfg=None, - norm_cfg=dict(type='BN'), - dcn=None, - plugins=None): - super(Bottleneck, self).__init__() - assert style in ['pytorch', 'caffe'] - assert dcn is None or isinstance(dcn, dict) - assert plugins is None or isinstance(plugins, list) - if plugins is not None: - allowed_position = ['after_conv1', 'after_conv2', 'after_conv3'] - assert all(p['position'] in allowed_position for p in plugins) - - self.inplanes = inplanes - self.planes = planes - self.stride = stride - self.dilation = dilation - self.style = style - self.with_cp = with_cp - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.dcn = dcn - self.with_dcn = dcn is not None - self.plugins = plugins - self.with_plugins = plugins is not None - - if self.with_plugins: - # collect plugins for conv1/conv2/conv3 - self.after_conv1_plugins = [ - plugin['cfg'] for plugin in plugins - if plugin['position'] == 'after_conv1' - ] - self.after_conv2_plugins = [ - plugin['cfg'] for plugin in plugins - if plugin['position'] == 'after_conv2' - ] - self.after_conv3_plugins = [ - plugin['cfg'] for plugin in plugins - if plugin['position'] == 'after_conv3' - ] - - if self.style == 'pytorch': - self.conv1_stride = 1 - self.conv2_stride = stride - else: - self.conv1_stride = stride - self.conv2_stride = 1 - - self.norm1_name, norm1 = build_norm_layer(norm_cfg, planes, postfix=1) - self.norm2_name, norm2 = build_norm_layer(norm_cfg, planes, postfix=2) - self.norm3_name, norm3 = build_norm_layer( - norm_cfg, planes * self.expansion, postfix=3) - - self.conv1 = build_conv_layer( - conv_cfg, - inplanes, - planes, - kernel_size=1, - stride=self.conv1_stride, - bias=False) - self.add_module(self.norm1_name, norm1) - fallback_on_stride = False - if self.with_dcn: - fallback_on_stride = dcn.pop('fallback_on_stride', False) - if not 
self.with_dcn or fallback_on_stride: - self.conv2 = build_conv_layer( - conv_cfg, - planes, - planes, - kernel_size=3, - stride=self.conv2_stride, - padding=dilation, - dilation=dilation, - bias=False) - else: - assert self.conv_cfg is None, 'conv_cfg must be None for DCN' - self.conv2 = build_conv_layer( - dcn, - planes, - planes, - kernel_size=3, - stride=self.conv2_stride, - padding=dilation, - dilation=dilation, - bias=False) - - self.add_module(self.norm2_name, norm2) - self.conv3 = build_conv_layer( - conv_cfg, - planes, - planes * self.expansion, - kernel_size=1, - bias=False) - self.add_module(self.norm3_name, norm3) - - self.relu = nn.ReLU(inplace=True) - self.downsample = downsample - - if self.with_plugins: - self.after_conv1_plugin_names = self.make_block_plugins( - planes, self.after_conv1_plugins) - self.after_conv2_plugin_names = self.make_block_plugins( - planes, self.after_conv2_plugins) - self.after_conv3_plugin_names = self.make_block_plugins( - planes * self.expansion, self.after_conv3_plugins) - - def make_block_plugins(self, in_channels, plugins): - """make plugins for block. - - Args: - in_channels (int): Input channels of plugin. - plugins (list[dict]): List of plugins cfg to build. - - Returns: - list[str]: List of the names of plugin. - """ - assert isinstance(plugins, list) - plugin_names = [] - for plugin in plugins: - plugin = plugin.copy() - name, layer = build_plugin_layer( - plugin, - in_channels=in_channels, - postfix=plugin.pop('postfix', '')) - assert not hasattr(self, name), f'duplicate plugin {name}' - self.add_module(name, layer) - plugin_names.append(name) - return plugin_names - - def forward_plugin(self, x, plugin_names): - """Forward function for plugins.""" - out = x - for name in plugin_names: - out = getattr(self, name)(x) - return out - - @property - def norm1(self): - """nn.Module: normalization layer after the first convolution layer""" - return getattr(self, self.norm1_name) - - @property - def norm2(self): - """nn.Module: normalization layer after the second convolution layer""" - return getattr(self, self.norm2_name) - - @property - def norm3(self): - """nn.Module: normalization layer after the third convolution layer""" - return getattr(self, self.norm3_name) - - def forward(self, x): - """Forward function.""" - - def _inner_forward(x): - identity = x - - out = self.conv1(x) - out = self.norm1(out) - out = self.relu(out) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv1_plugin_names) - - out = self.conv2(out) - out = self.norm2(out) - out = self.relu(out) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv2_plugin_names) - - out = self.conv3(out) - out = self.norm3(out) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv3_plugin_names) - - if self.downsample is not None: - identity = self.downsample(x) - - out += identity - - return out - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(_inner_forward, x) - else: - out = _inner_forward(x) - - out = self.relu(out) - - return out - - -@BACKBONES.register_module() -class ResNet(nn.Module): - """ResNet backbone. - - Args: - depth (int): Depth of resnet, from {18, 34, 50, 101, 152}. - in_channels (int): Number of input image channels. Default" 3. - stem_channels (int): Number of stem channels. Default: 64. - base_channels (int): Number of base channels of res layer. Default: 64. - num_stages (int): Resnet stages, normally 4. - strides (Sequence[int]): Strides of the first block of each stage. 
- dilations (Sequence[int]): Dilation of each stage. - out_indices (Sequence[int]): Output from which stages. - style (str): `pytorch` or `caffe`. If set to "pytorch", the stride-two - layer is the 3x3 conv layer, otherwise the stride-two layer is - the first 1x1 conv layer. - deep_stem (bool): Replace 7x7 conv in input stem with 3 3x3 conv - avg_down (bool): Use AvgPool instead of stride conv when - downsampling in the bottleneck. - frozen_stages (int): Stages to be frozen (stop grad and set eval mode). - -1 means not freezing any parameters. - norm_cfg (dict): Dictionary to construct and config norm layer. - norm_eval (bool): Whether to set norm layers to eval mode, namely, - freeze running stats (mean and var). Note: Effect on Batch Norm - and its variants only. - plugins (list[dict]): List of plugins for stages, each dict contains: - - - cfg (dict, required): Cfg dict to build plugin. - - - position (str, required): Position inside block to insert plugin, - options: 'after_conv1', 'after_conv2', 'after_conv3'. - - - stages (tuple[bool], optional): Stages to apply plugin, length - should be same as 'num_stages' - multi_grid (Sequence[int]|None): Multi grid dilation rates of last - stage. Default: None - contract_dilation (bool): Whether contract first dilation of each layer - Default: False - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. - zero_init_residual (bool): Whether to use zero init for last norm layer - in resblocks to let them behave as identity. - - Example: - >>> from annotator.uniformer.mmseg.models import ResNet - >>> import torch - >>> self = ResNet(depth=18) - >>> self.eval() - >>> inputs = torch.rand(1, 3, 32, 32) - >>> level_outputs = self.forward(inputs) - >>> for level_out in level_outputs: - ... 
print(tuple(level_out.shape)) - (1, 64, 8, 8) - (1, 128, 4, 4) - (1, 256, 2, 2) - (1, 512, 1, 1) - """ - - arch_settings = { - 18: (BasicBlock, (2, 2, 2, 2)), - 34: (BasicBlock, (3, 4, 6, 3)), - 50: (Bottleneck, (3, 4, 6, 3)), - 101: (Bottleneck, (3, 4, 23, 3)), - 152: (Bottleneck, (3, 8, 36, 3)) - } - - def __init__(self, - depth, - in_channels=3, - stem_channels=64, - base_channels=64, - num_stages=4, - strides=(1, 2, 2, 2), - dilations=(1, 1, 1, 1), - out_indices=(0, 1, 2, 3), - style='pytorch', - deep_stem=False, - avg_down=False, - frozen_stages=-1, - conv_cfg=None, - norm_cfg=dict(type='BN', requires_grad=True), - norm_eval=False, - dcn=None, - stage_with_dcn=(False, False, False, False), - plugins=None, - multi_grid=None, - contract_dilation=False, - with_cp=False, - zero_init_residual=True): - super(ResNet, self).__init__() - if depth not in self.arch_settings: - raise KeyError(f'invalid depth {depth} for resnet') - self.depth = depth - self.stem_channels = stem_channels - self.base_channels = base_channels - self.num_stages = num_stages - assert num_stages >= 1 and num_stages <= 4 - self.strides = strides - self.dilations = dilations - assert len(strides) == len(dilations) == num_stages - self.out_indices = out_indices - assert max(out_indices) < num_stages - self.style = style - self.deep_stem = deep_stem - self.avg_down = avg_down - self.frozen_stages = frozen_stages - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.with_cp = with_cp - self.norm_eval = norm_eval - self.dcn = dcn - self.stage_with_dcn = stage_with_dcn - if dcn is not None: - assert len(stage_with_dcn) == num_stages - self.plugins = plugins - self.multi_grid = multi_grid - self.contract_dilation = contract_dilation - self.zero_init_residual = zero_init_residual - self.block, stage_blocks = self.arch_settings[depth] - self.stage_blocks = stage_blocks[:num_stages] - self.inplanes = stem_channels - - self._make_stem_layer(in_channels, stem_channels) - - self.res_layers = [] - for i, num_blocks in enumerate(self.stage_blocks): - stride = strides[i] - dilation = dilations[i] - dcn = self.dcn if self.stage_with_dcn[i] else None - if plugins is not None: - stage_plugins = self.make_stage_plugins(plugins, i) - else: - stage_plugins = None - # multi grid is applied to last layer only - stage_multi_grid = multi_grid if i == len( - self.stage_blocks) - 1 else None - planes = base_channels * 2**i - res_layer = self.make_res_layer( - block=self.block, - inplanes=self.inplanes, - planes=planes, - num_blocks=num_blocks, - stride=stride, - dilation=dilation, - style=self.style, - avg_down=self.avg_down, - with_cp=with_cp, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - dcn=dcn, - plugins=stage_plugins, - multi_grid=stage_multi_grid, - contract_dilation=contract_dilation) - self.inplanes = planes * self.block.expansion - layer_name = f'layer{i+1}' - self.add_module(layer_name, res_layer) - self.res_layers.append(layer_name) - - self._freeze_stages() - - self.feat_dim = self.block.expansion * base_channels * 2**( - len(self.stage_blocks) - 1) - - def make_stage_plugins(self, plugins, stage_idx): - """make plugins for ResNet 'stage_idx'th stage . - - Currently we support to insert 'context_block', - 'empirical_attention_block', 'nonlocal_block' into the backbone like - ResNet/ResNeXt. They could be inserted after conv1/conv2/conv3 of - Bottleneck. - - An example of plugins format could be : - >>> plugins=[ - ... dict(cfg=dict(type='xxx', arg1='xxx'), - ... stages=(False, True, True, True), - ... position='after_conv2'), - ... 
dict(cfg=dict(type='yyy'), - ... stages=(True, True, True, True), - ... position='after_conv3'), - ... dict(cfg=dict(type='zzz', postfix='1'), - ... stages=(True, True, True, True), - ... position='after_conv3'), - ... dict(cfg=dict(type='zzz', postfix='2'), - ... stages=(True, True, True, True), - ... position='after_conv3') - ... ] - >>> self = ResNet(depth=18) - >>> stage_plugins = self.make_stage_plugins(plugins, 0) - >>> assert len(stage_plugins) == 3 - - Suppose 'stage_idx=0', the structure of blocks in the stage would be: - conv1-> conv2->conv3->yyy->zzz1->zzz2 - Suppose 'stage_idx=1', the structure of blocks in the stage would be: - conv1-> conv2->xxx->conv3->yyy->zzz1->zzz2 - - If stages is missing, the plugin would be applied to all stages. - - Args: - plugins (list[dict]): List of plugins cfg to build. The postfix is - required if multiple same type plugins are inserted. - stage_idx (int): Index of stage to build - - Returns: - list[dict]: Plugins for current stage - """ - stage_plugins = [] - for plugin in plugins: - plugin = plugin.copy() - stages = plugin.pop('stages', None) - assert stages is None or len(stages) == self.num_stages - # whether to insert plugin into current stage - if stages is None or stages[stage_idx]: - stage_plugins.append(plugin) - - return stage_plugins - - def make_res_layer(self, **kwargs): - """Pack all blocks in a stage into a ``ResLayer``.""" - return ResLayer(**kwargs) - - @property - def norm1(self): - """nn.Module: the normalization layer named "norm1" """ - return getattr(self, self.norm1_name) - - def _make_stem_layer(self, in_channels, stem_channels): - """Make stem layer for ResNet.""" - if self.deep_stem: - self.stem = nn.Sequential( - build_conv_layer( - self.conv_cfg, - in_channels, - stem_channels // 2, - kernel_size=3, - stride=2, - padding=1, - bias=False), - build_norm_layer(self.norm_cfg, stem_channels // 2)[1], - nn.ReLU(inplace=True), - build_conv_layer( - self.conv_cfg, - stem_channels // 2, - stem_channels // 2, - kernel_size=3, - stride=1, - padding=1, - bias=False), - build_norm_layer(self.norm_cfg, stem_channels // 2)[1], - nn.ReLU(inplace=True), - build_conv_layer( - self.conv_cfg, - stem_channels // 2, - stem_channels, - kernel_size=3, - stride=1, - padding=1, - bias=False), - build_norm_layer(self.norm_cfg, stem_channels)[1], - nn.ReLU(inplace=True)) - else: - self.conv1 = build_conv_layer( - self.conv_cfg, - in_channels, - stem_channels, - kernel_size=7, - stride=2, - padding=3, - bias=False) - self.norm1_name, norm1 = build_norm_layer( - self.norm_cfg, stem_channels, postfix=1) - self.add_module(self.norm1_name, norm1) - self.relu = nn.ReLU(inplace=True) - self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1) - - def _freeze_stages(self): - """Freeze stages param and norm stats.""" - if self.frozen_stages >= 0: - if self.deep_stem: - self.stem.eval() - for param in self.stem.parameters(): - param.requires_grad = False - else: - self.norm1.eval() - for m in [self.conv1, self.norm1]: - for param in m.parameters(): - param.requires_grad = False - - for i in range(1, self.frozen_stages + 1): - m = getattr(self, f'layer{i}') - m.eval() - for param in m.parameters(): - param.requires_grad = False - - def init_weights(self, pretrained=None): - """Initialize the weights in backbone. - - Args: - pretrained (str, optional): Path to pre-trained weights. - Defaults to None. 
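Before the weight-initialization code below, it is worth spelling out what `_freeze_stages` above actually does: freezing a stage means both disabling gradients and keeping the stage in eval mode, so BatchNorm running statistics stop updating. A minimal plain-PyTorch sketch of that idea (not the mmcv API):

```python
import torch.nn as nn

def freeze(module: nn.Module) -> None:
    module.eval()                     # stop BatchNorm running-stat updates
    for p in module.parameters():
        p.requires_grad = False       # stop gradient updates

stage = nn.Sequential(nn.Conv2d(64, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU())
freeze(stage)
print(all(not p.requires_grad for p in stage.parameters()))  # True
```

Because a later `.train()` call would flip the modules back into training mode, `ResNet.train()` below re-applies `_freeze_stages()` every time.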
- """ - if isinstance(pretrained, str): - logger = get_root_logger() - load_checkpoint(self, pretrained, strict=False, logger=logger) - elif pretrained is None: - for m in self.modules(): - if isinstance(m, nn.Conv2d): - kaiming_init(m) - elif isinstance(m, (_BatchNorm, nn.GroupNorm)): - constant_init(m, 1) - - if self.dcn is not None: - for m in self.modules(): - if isinstance(m, Bottleneck) and hasattr( - m, 'conv2_offset'): - constant_init(m.conv2_offset, 0) - - if self.zero_init_residual: - for m in self.modules(): - if isinstance(m, Bottleneck): - constant_init(m.norm3, 0) - elif isinstance(m, BasicBlock): - constant_init(m.norm2, 0) - else: - raise TypeError('pretrained must be a str or None') - - def forward(self, x): - """Forward function.""" - if self.deep_stem: - x = self.stem(x) - else: - x = self.conv1(x) - x = self.norm1(x) - x = self.relu(x) - x = self.maxpool(x) - outs = [] - for i, layer_name in enumerate(self.res_layers): - res_layer = getattr(self, layer_name) - x = res_layer(x) - if i in self.out_indices: - outs.append(x) - return tuple(outs) - - def train(self, mode=True): - """Convert the model into training mode while keep normalization layer - freezed.""" - super(ResNet, self).train(mode) - self._freeze_stages() - if mode and self.norm_eval: - for m in self.modules(): - # trick: eval have effect on BatchNorm only - if isinstance(m, _BatchNorm): - m.eval() - - -@BACKBONES.register_module() -class ResNetV1c(ResNet): - """ResNetV1c variant described in [1]_. - - Compared with default ResNet(ResNetV1b), ResNetV1c replaces the 7x7 conv - in the input stem with three 3x3 convs. - - References: - .. [1] https://arxiv.org/pdf/1812.01187.pdf - """ - - def __init__(self, **kwargs): - super(ResNetV1c, self).__init__( - deep_stem=True, avg_down=False, **kwargs) - - -@BACKBONES.register_module() -class ResNetV1d(ResNet): - """ResNetV1d variant described in [1]_. - - Compared with default ResNet(ResNetV1b), ResNetV1d replaces the 7x7 conv in - the input stem with three 3x3 convs. And in the downsampling block, a 2x2 - avg_pool with stride 2 is added before conv, whose stride is changed to 1. - """ - - def __init__(self, **kwargs): - super(ResNetV1d, self).__init__( - deep_stem=True, avg_down=True, **kwargs) diff --git a/spaces/Priyanka-Kumavat/Document-Summarization/README.md b/spaces/Priyanka-Kumavat/Document-Summarization/README.md deleted file mode 100644 index d81ef579c2b4de7c56dd578819f779df4fd288a1..0000000000000000000000000000000000000000 --- a/spaces/Priyanka-Kumavat/Document-Summarization/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Document Summarization -emoji: 🌖 -colorFrom: yellow -colorTo: gray -sdk: gradio -sdk_version: 3.50.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ProteinDesignLab/protpardelle/core/protein.py b/spaces/ProteinDesignLab/protpardelle/core/protein.py deleted file mode 100644 index 11f8cc22168b916f02748eda9164c55c64803a07..0000000000000000000000000000000000000000 --- a/spaces/ProteinDesignLab/protpardelle/core/protein.py +++ /dev/null @@ -1,341 +0,0 @@ -# Copyright 2021 DeepMind Technologies Limited -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -"""Protein data type. -Adapted from original code by alexechu. -""" -import dataclasses -import io -from typing import Any, Mapping, Optional -from core import residue_constants -from Bio.PDB import PDBParser -import numpy as np - -FeatureDict = Mapping[str, np.ndarray] -ModelOutput = Mapping[str, Any] # Is a nested dict. - -# Complete sequence of chain IDs supported by the PDB format. -PDB_CHAIN_IDS = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789" -PDB_MAX_CHAINS = len(PDB_CHAIN_IDS) # := 62. - - -@dataclasses.dataclass(frozen=True) -class Protein: - """Protein structure representation.""" - - # Cartesian coordinates of atoms in angstroms. The atom types correspond to - # residue_constants.atom_types, i.e. the first three are N, CA, CB. - atom_positions: np.ndarray # [num_res, num_atom_type, 3] - - # Amino-acid type for each residue represented as an integer between 0 and - # 20, where 20 is 'X'. - aatype: np.ndarray # [num_res] - - # Binary float mask to indicate presence of a particular atom. 1.0 if an atom - # is present and 0.0 if not. This should be used for loss masking. - atom_mask: np.ndarray # [num_res, num_atom_type] - - # Residue index as used in PDB. It is not necessarily continuous or 0-indexed. - residue_index: np.ndarray # [num_res] - - # 0-indexed number corresponding to the chain in the protein that this residue - # belongs to. - chain_index: np.ndarray # [num_res] - - # B-factors, or temperature factors, of each residue (in sq. angstroms units), - # representing the displacement of the residue from its ground truth mean - # value. - b_factors: np.ndarray # [num_res, num_atom_type] - - def __post_init__(self): - if len(np.unique(self.chain_index)) > PDB_MAX_CHAINS: - raise ValueError( - f"Cannot build an instance with more than {PDB_MAX_CHAINS} chains " - "because these cannot be written to PDB format." - ) - - -def from_pdb_string( - pdb_str: str, chain_id: Optional[str] = None, protein_only: bool = False -) -> Protein: - """Takes a PDB string and constructs a Protein object. - - WARNING: All non-standard residue types will be converted into UNK. All - non-standard atoms will be ignored. - - Args: - pdb_str: The contents of the pdb file - chain_id: If chain_id is specified (e.g. A), then only that chain - is parsed. Otherwise all chains are parsed. - - Returns: - A new `Protein` parsed from the pdb contents. - """ - pdb_fh = io.StringIO(pdb_str) - parser = PDBParser(QUIET=True) - structure = parser.get_structure("none", pdb_fh) - models = list(structure.get_models()) - if len(models) != 1: - raise ValueError( - f"Only single model PDBs are supported. Found {len(models)} models." - ) - model = models[0] - - atom_positions = [] - aatype = [] - atom_mask = [] - residue_index = [] - chain_ids = [] - b_factors = [] - - for chain in model: - if chain_id is not None and chain.id != chain_id: - continue - for res in chain: - if protein_only and res.id[0] != " ": - continue - if res.id[2] != " ": - pass - # raise ValueError( - # f"PDB contains an insertion code at chain {chain.id} and residue " - # f"index {res.id[1]}. These are not supported." 
- # ) - res_shortname = residue_constants.restype_3to1.get(res.resname, "X") - restype_idx = residue_constants.restype_order.get( - res_shortname, residue_constants.restype_num - ) - pos = np.zeros((residue_constants.atom_type_num, 3)) - mask = np.zeros((residue_constants.atom_type_num,)) - res_b_factors = np.zeros((residue_constants.atom_type_num,)) - for atom in res: - if atom.name not in residue_constants.atom_types: - continue - pos[residue_constants.atom_order[atom.name]] = atom.coord - mask[residue_constants.atom_order[atom.name]] = 1.0 - res_b_factors[residue_constants.atom_order[atom.name]] = atom.bfactor - if np.sum(mask) < 0.5: - # If no known atom positions are reported for the residue then skip it. - continue - aatype.append(restype_idx) - atom_positions.append(pos) - atom_mask.append(mask) - residue_index.append(res.id[1]) - chain_ids.append(chain.id) - b_factors.append(res_b_factors) - - # Chain IDs are usually characters so map these to ints. - unique_chain_ids = np.unique(chain_ids) - chain_id_mapping = {cid: n for n, cid in enumerate(unique_chain_ids)} - chain_index = np.array([chain_id_mapping[cid] for cid in chain_ids]) - - return Protein( - atom_positions=np.array(atom_positions), - atom_mask=np.array(atom_mask), - aatype=np.array(aatype), - residue_index=np.array(residue_index), - chain_index=chain_index, - b_factors=np.array(b_factors), - ) - - -def _chain_end(atom_index, end_resname, chain_name, residue_index) -> str: - chain_end = "TER" - return ( - f"{chain_end:<6}{atom_index:>5} {end_resname:>3} " - f"{chain_name:>1}{residue_index:>4}" - ) - - -def are_atoms_bonded(res3name, atom1_name, atom2_name): - lookup_table = residue_constants.standard_residue_bonds - for bond in lookup_table[res3name]: - if bond.atom1_name == atom1_name and bond.atom2_name == atom2_name: - return True - elif bond.atom1_name == atom2_name and bond.atom2_name == atom1_name: - return True - return False - - -def to_pdb(prot: Protein, conect=False) -> str: - """Converts a `Protein` instance to a PDB string. - - Args: - prot: The protein to convert to PDB. - - Returns: - PDB string. - """ - restypes = residue_constants.restypes + ["X"] - res_1to3 = lambda r: residue_constants.restype_1to3.get(restypes[r], "UNK") - atom_types = residue_constants.atom_types - - pdb_lines = [] - - atom_mask = prot.atom_mask - aatype = prot.aatype - atom_positions = prot.atom_positions - residue_index = prot.residue_index.astype(np.int32) - chain_index = prot.chain_index.astype(np.int32) - b_factors = prot.b_factors - - if np.any(aatype > residue_constants.restype_num): - raise ValueError("Invalid aatypes.") - - # Construct a mapping from chain integer indices to chain ID strings. - chain_ids = {} - for i in np.unique(chain_index): # np.unique gives sorted output. - if i >= PDB_MAX_CHAINS: - raise ValueError( - f"The PDB format supports at most {PDB_MAX_CHAINS} chains." - ) - chain_ids[i] = PDB_CHAIN_IDS[i] - - pdb_lines.append("MODEL 1") - atom_index = 1 - last_chain_index = chain_index[0] - conect_lines = [] - # Add all atom sites. - for i in range(aatype.shape[0]): - # Close the previous chain if in a multichain PDB. - if last_chain_index != chain_index[i]: - pdb_lines.append( - _chain_end( - atom_index, - res_1to3(aatype[i - 1]), - chain_ids[chain_index[i - 1]], - residue_index[i - 1], - ) - ) - last_chain_index = chain_index[i] - atom_index += 1 # Atom index increases at the TER symbol. 
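The TER and ATOM records assembled in this loop are strict fixed-width text. The sketch below only illustrates how Python format specs produce such columns; the field values are made up and the spacing is approximate, while `to_pdb()` itself follows the real PDB column layout:

```python
record_type, atom_index, res_name, chain_id, res_idx = "TER", 123, "ALA", "A", 42

# "<6" left-aligns the record name in 6 columns, ">5" right-aligns the serial
# number in 5 columns, and so on; every finished line is padded to 80 columns.
ter_line = f"{record_type:<6}{atom_index:>5}      {res_name:>3} {chain_id:>1}{res_idx:>4}"
print(repr(ter_line.ljust(80)))
```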
- - res_name_3 = res_1to3(aatype[i]) - atoms_appended_for_res = [] - for atom_name, pos, mask, b_factor in zip( - atom_types, atom_positions[i], atom_mask[i], b_factors[i] - ): - if mask < 0.5: - continue - - record_type = "ATOM" - name = atom_name if len(atom_name) == 4 else f" {atom_name}" - alt_loc = "" - insertion_code = "" - occupancy = 1.00 - element = atom_name[0] # Protein supports only C, N, O, S, this works. - charge = "" - # PDB is a columnar format, every space matters here! - atom_line = ( - f"{record_type:<6}{atom_index:>5} {name:<4}{alt_loc:>1}" - f"{res_name_3:>3} {chain_ids[chain_index[i]]:>1}" - f"{residue_index[i]:>4}{insertion_code:>1} " - f"{pos[0]:>8.3f}{pos[1]:>8.3f}{pos[2]:>8.3f}" - f"{occupancy:>6.2f}{b_factor:>6.2f} " - f"{element:>2}{charge:>2}" - ) - pdb_lines.append(atom_line) - - for prev_atom_idx, prev_atom in atoms_appended_for_res: - if are_atoms_bonded(res_name_3, atom_name, prev_atom): - conect_line = f"CONECT{prev_atom_idx:5d}{atom_index:5d}\n" - conect_lines.append(conect_line) - atoms_appended_for_res.append((atom_index, atom_name)) - if atom_name == "N": - n_atom_idx = atom_index - if atom_name == "C": - c_atom_idx = atom_index - - atom_index += 1 - - if i > 0: - conect_line = f"CONECT{prev_c_atom_idx:5d}{n_atom_idx:5d}\n" - conect_lines.append(conect_line) - prev_c_atom_idx = c_atom_idx - - # Close the final chain. - pdb_lines.append( - _chain_end( - atom_index, - res_1to3(aatype[-1]), - chain_ids[chain_index[-1]], - residue_index[-1], - ) - ) - pdb_lines.append("ENDMDL") - pdb_lines.append("END") - - # Pad all lines to 80 characters. - pdb_lines = [line.ljust(80) for line in pdb_lines] - pdb_str = "\n".join(pdb_lines) + "\n" # Add terminating newline. - if conect: - conect_str = "".join(conect_lines) + "\n" - return pdb_str, conect_str - return pdb_str - - -def ideal_atom_mask(prot: Protein) -> np.ndarray: - """Computes an ideal atom mask. - - `Protein.atom_mask` typically is defined according to the atoms that are - reported in the PDB. This function computes a mask according to heavy atoms - that should be present in the given sequence of amino acids. - - Args: - prot: `Protein` whose fields are `numpy.ndarray` objects. - - Returns: - An ideal atom mask. - """ - return residue_constants.STANDARD_ATOM_MASK[prot.aatype] - - -def from_prediction( - features: FeatureDict, - result: ModelOutput, - b_factors: Optional[np.ndarray] = None, - remove_leading_feature_dimension: bool = True, -) -> Protein: - """Assembles a protein from a prediction. - - Args: - features: Dictionary holding model inputs. - result: Dictionary holding model outputs. - b_factors: (Optional) B-factors to use for the protein. - remove_leading_feature_dimension: Whether to remove the leading dimension - of the `features` values. - - Returns: - A protein instance. 
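A hypothetical call to `from_prediction` (the array contents are dummies; the 37-atom layout matches the atom37 convention used throughout this file, and the leading batch dimension on the features is stripped inside the function):

```python
import numpy as np

num_res = 4
features = {
    "aatype": np.zeros((1, num_res), dtype=np.int64),    # [1, num_res]
    "residue_index": np.arange(num_res)[None],           # [1, num_res]
}
result = {"structure_module": {
    "final_atom_positions": np.zeros((num_res, 37, 3)),
    "final_atom_mask": np.ones((num_res, 37)),
}}

prot = from_prediction(features, result)
print(prot.atom_positions.shape, prot.residue_index)     # (4, 37, 3) [1 2 3 4]
```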
- """ - fold_output = result["structure_module"] - - def _maybe_remove_leading_dim(arr: np.ndarray) -> np.ndarray: - return arr[0] if remove_leading_feature_dimension else arr - - if "asym_id" in features: - chain_index = _maybe_remove_leading_dim(features["asym_id"]) - else: - chain_index = np.zeros_like(_maybe_remove_leading_dim(features["aatype"])) - - if b_factors is None: - b_factors = np.zeros_like(fold_output["final_atom_mask"]) - - return Protein( - aatype=_maybe_remove_leading_dim(features["aatype"]), - atom_positions=fold_output["final_atom_positions"], - atom_mask=fold_output["final_atom_mask"], - residue_index=_maybe_remove_leading_dim(features["residue_index"]) + 1, - chain_index=chain_index, - b_factors=b_factors, - ) diff --git a/spaces/ProteinDesignLab/protpardelle/modules.py b/spaces/ProteinDesignLab/protpardelle/modules.py deleted file mode 100644 index 2dcf66c24193c9cd753c648a2dd0549d7c160369..0000000000000000000000000000000000000000 --- a/spaces/ProteinDesignLab/protpardelle/modules.py +++ /dev/null @@ -1,696 +0,0 @@ -""" -https://github.com/ProteinDesignLab/protpardelle -License: MIT -Author: Alex Chu - -Neural network modules. Many of these are adapted from open source modules. -""" -from typing import List, Sequence, Optional - -from einops import rearrange, reduce, repeat -from einops.layers.torch import Rearrange -import numpy as np -from rotary_embedding_torch import RotaryEmbedding -import torch -import torch.nn as nn -import torch.nn.functional as F -from transformers import AutoTokenizer, EsmModel - -from core import protein_mpnn -from core import residue_constants -from core import utils - - -######################################## -# Adapted from https://github.com/ermongroup/ddim - - -def downsample(x): - return nn.functional.avg_pool2d(x, 2, 2, ceil_mode=True) - - -def upsample_coords(x, shape): - new_l, new_w = shape - return nn.functional.interpolate(x, size=(new_l, new_w), mode="nearest") - - -######################################## -# Adapted from https://github.com/aqlaboratory/openfold - - -def permute_final_dims(tensor: torch.Tensor, inds: List[int]): - zero_index = -1 * len(inds) - first_inds = list(range(len(tensor.shape[:zero_index]))) - return tensor.contiguous().permute(first_inds + [zero_index + i for i in inds]) - - -def lddt( - all_atom_pred_pos: torch.Tensor, - all_atom_positions: torch.Tensor, - all_atom_mask: torch.Tensor, - cutoff: float = 15.0, - eps: float = 1e-10, - per_residue: bool = True, -) -> torch.Tensor: - n = all_atom_mask.shape[-2] - dmat_true = torch.sqrt( - eps - + torch.sum( - (all_atom_positions[..., None, :] - all_atom_positions[..., None, :, :]) - ** 2, - dim=-1, - ) - ) - - dmat_pred = torch.sqrt( - eps - + torch.sum( - (all_atom_pred_pos[..., None, :] - all_atom_pred_pos[..., None, :, :]) ** 2, - dim=-1, - ) - ) - dists_to_score = ( - (dmat_true < cutoff) - * all_atom_mask - * permute_final_dims(all_atom_mask, (1, 0)) - * (1.0 - torch.eye(n, device=all_atom_mask.device)) - ) - - dist_l1 = torch.abs(dmat_true - dmat_pred) - - score = ( - (dist_l1 < 0.5).type(dist_l1.dtype) - + (dist_l1 < 1.0).type(dist_l1.dtype) - + (dist_l1 < 2.0).type(dist_l1.dtype) - + (dist_l1 < 4.0).type(dist_l1.dtype) - ) - score = score * 0.25 - - dims = (-1,) if per_residue else (-2, -1) - norm = 1.0 / (eps + torch.sum(dists_to_score, dim=dims)) - score = norm * (eps + torch.sum(dists_to_score * score, dim=dims)) - - return score - - -class RelativePositionalEncoding(nn.Module): - def __init__(self, attn_dim=8, max_rel_idx=32): - 
super().__init__() - self.max_rel_idx = max_rel_idx - self.n_rel_pos = 2 * self.max_rel_idx + 1 - self.linear = nn.Linear(self.n_rel_pos, attn_dim) - - def forward(self, residue_index): - d_ij = residue_index[..., None] - residue_index[..., None, :] - v_bins = torch.arange(self.n_rel_pos).to(d_ij.device) - self.max_rel_idx - idxs = (d_ij[..., None] - v_bins[None, None]).abs().argmin(-1) - p_ij = nn.functional.one_hot(idxs, num_classes=self.n_rel_pos) - embeddings = self.linear(p_ij.float()) - return embeddings - - -######################################## -# Adapted from https://github.com/NVlabs/edm - - -class Noise_Embedding(nn.Module): - def __init__(self, num_channels, max_positions=10000, endpoint=False): - super().__init__() - self.num_channels = num_channels - self.max_positions = max_positions - self.endpoint = endpoint - - def forward(self, x): - freqs = torch.arange( - start=0, end=self.num_channels // 2, dtype=torch.float32, device=x.device - ) - freqs = freqs / (self.num_channels // 2 - (1 if self.endpoint else 0)) - freqs = (1 / self.max_positions) ** freqs - x = x.outer(freqs.to(x.dtype)) - x = torch.cat([x.cos(), x.sin()], dim=1) - return x - - -######################################## -# Adapted from github.com/lucidrains -# https://github.com/lucidrains/denoising-diffusion-pytorch -# https://github.com/lucidrains/recurrent-interface-network-pytorch - - -def exists(x): - return x is not None - - -def default(val, d): - if exists(val): - return val - return d() if callable(d) else d - - -def posemb_sincos_1d(patches, temperature=10000, residue_index=None): - _, n, dim, device, dtype = *patches.shape, patches.device, patches.dtype - - n = torch.arange(n, device=device) if residue_index is None else residue_index - assert (dim % 2) == 0, "feature dimension must be multiple of 2 for sincos emb" - omega = torch.arange(dim // 2, device=device) / (dim // 2 - 1) - omega = 1.0 / (temperature**omega) - - n = n[..., None] * omega - pe = torch.cat((n.sin(), n.cos()), dim=-1) - return pe.type(dtype) - - -class LayerNorm(nn.Module): - def __init__(self, dim): - super().__init__() - self.gamma = nn.Parameter(torch.ones(dim)) - self.register_buffer("beta", torch.zeros(dim)) - - def forward(self, x): - return F.layer_norm(x, x.shape[-1:], self.gamma, self.beta) - - -class NoiseConditioningBlock(nn.Module): - def __init__(self, n_in_channel, n_out_channel): - super().__init__() - self.block = nn.Sequential( - Noise_Embedding(n_in_channel), - nn.Linear(n_in_channel, n_out_channel), - nn.SiLU(), - nn.Linear(n_out_channel, n_out_channel), - Rearrange("b d -> b 1 d"), - ) - - def forward(self, noise_level): - return self.block(noise_level) - - -class TimeCondResnetBlock(nn.Module): - def __init__( - self, nic, noc, cond_nc, conv_layer=nn.Conv2d, dropout=0.1, n_norm_in_groups=4 - ): - super().__init__() - self.block1 = nn.Sequential( - nn.GroupNorm(num_groups=nic // n_norm_in_groups, num_channels=nic), - nn.SiLU(), - conv_layer(nic, noc, 3, 1, 1), - ) - self.cond_proj = nn.Linear(cond_nc, noc * 2) - self.mid_norm = nn.GroupNorm(num_groups=noc // 4, num_channels=noc) - self.dropout = dropout if dropout is None else nn.Dropout(dropout) - self.block2 = nn.Sequential( - nn.GroupNorm(num_groups=noc // 4, num_channels=noc), - nn.SiLU(), - conv_layer(noc, noc, 3, 1, 1), - ) - self.mismatch = False - if nic != noc: - self.mismatch = True - self.conv_match = conv_layer(nic, noc, 1, 1, 0) - - def forward(self, x, time=None): - h = self.block1(x) - - if time is not None: - h = self.mid_norm(h) - scale, 
shift = self.cond_proj(time).chunk(2, dim=-1) - h = (h * (utils.expand(scale, h) + 1)) + utils.expand(shift, h) - - if self.dropout is not None: - h = self.dropout(h) - - h = self.block2(h) - - if self.mismatch: - x = self.conv_match(x) - - return x + h - - -class TimeCondAttention(nn.Module): - def __init__( - self, - dim, - dim_context=None, - heads=4, - dim_head=32, - norm=False, - norm_context=False, - time_cond_dim=None, - attn_bias_dim=None, - rotary_embedding_module=None, - ): - super().__init__() - hidden_dim = dim_head * heads - dim_context = default(dim_context, dim) - - self.time_cond = None - - if exists(time_cond_dim): - self.time_cond = nn.Sequential(nn.SiLU(), nn.Linear(time_cond_dim, dim * 2)) - - nn.init.zeros_(self.time_cond[-1].weight) - nn.init.zeros_(self.time_cond[-1].bias) - - self.scale = dim_head**-0.5 - self.heads = heads - - self.norm = LayerNorm(dim) if norm else nn.Identity() - self.norm_context = LayerNorm(dim_context) if norm_context else nn.Identity() - - self.attn_bias_proj = None - if attn_bias_dim is not None: - self.attn_bias_proj = nn.Sequential( - Rearrange("b a i j -> b i j a"), - nn.Linear(attn_bias_dim, heads), - Rearrange("b i j a -> b a i j"), - ) - - self.to_q = nn.Linear(dim, hidden_dim, bias=False) - self.to_kv = nn.Linear(dim_context, hidden_dim * 2, bias=False) - self.to_out = nn.Linear(hidden_dim, dim, bias=False) - nn.init.zeros_(self.to_out.weight) - - self.use_rope = False - if rotary_embedding_module is not None: - self.use_rope = True - self.rope = rotary_embedding_module - - def forward(self, x, context=None, time=None, attn_bias=None, seq_mask=None): - # attn_bias is b, c, i, j - h = self.heads - has_context = exists(context) - - context = default(context, x) - - if x.shape[-1] != self.norm.gamma.shape[-1]: - print(context.shape, x.shape, self.norm.gamma.shape) - - x = self.norm(x) - - if exists(time): - scale, shift = self.time_cond(time).chunk(2, dim=-1) - x = (x * (scale + 1)) + shift - - if has_context: - context = self.norm_context(context) - - if seq_mask is not None: - x = x * seq_mask[..., None] - - qkv = (self.to_q(x), *self.to_kv(context).chunk(2, dim=-1)) - q, k, v = map(lambda t: rearrange(t, "b n (h d) -> b h n d", h=h), qkv) - - q = q * self.scale - - if self.use_rope: - q = self.rope.rotate_queries_or_keys(q) - k = self.rope.rotate_queries_or_keys(k) - - sim = torch.einsum("b h i d, b h j d -> b h i j", q, k) - if attn_bias is not None: - if self.attn_bias_proj is not None: - attn_bias = self.attn_bias_proj(attn_bias) - sim += attn_bias - if seq_mask is not None: - attn_mask = torch.einsum("b i, b j -> b i j", seq_mask, seq_mask)[:, None] - sim -= (1 - attn_mask) * 1e6 - attn = sim.softmax(dim=-1) - - out = torch.einsum("b h i j, b h j d -> b h i d", attn, v) - out = rearrange(out, "b h n d -> b n (h d)") - out = self.to_out(out) - if seq_mask is not None: - out = out * seq_mask[..., None] - return out - - -class TimeCondFeedForward(nn.Module): - def __init__(self, dim, mult=4, dim_out=None, time_cond_dim=None, dropout=0.1): - super().__init__() - if dim_out is None: - dim_out = dim - self.norm = LayerNorm(dim) - - self.time_cond = None - self.dropout = None - inner_dim = int(dim * mult) - - if exists(time_cond_dim): - self.time_cond = nn.Sequential( - nn.SiLU(), - nn.Linear(time_cond_dim, inner_dim * 2), - ) - - nn.init.zeros_(self.time_cond[-1].weight) - nn.init.zeros_(self.time_cond[-1].bias) - - self.linear_in = nn.Linear(dim, inner_dim) - self.nonlinearity = nn.SiLU() - if dropout is not None: - self.dropout = 
nn.Dropout(dropout) - self.linear_out = nn.Linear(inner_dim, dim_out) - nn.init.zeros_(self.linear_out.weight) - nn.init.zeros_(self.linear_out.bias) - - def forward(self, x, time=None): - x = self.norm(x) - x = self.linear_in(x) - x = self.nonlinearity(x) - - if exists(time): - scale, shift = self.time_cond(time).chunk(2, dim=-1) - x = (x * (scale + 1)) + shift - - if exists(self.dropout): - x = self.dropout(x) - - return self.linear_out(x) - - -class TimeCondTransformer(nn.Module): - def __init__( - self, - dim, - depth, - heads, - dim_head, - time_cond_dim, - attn_bias_dim=None, - mlp_inner_dim_mult=4, - position_embedding_type: str = "rotary", - ): - super().__init__() - - self.rope = None - self.pos_emb_type = position_embedding_type - if position_embedding_type == "rotary": - self.rope = RotaryEmbedding(dim=32) - elif position_embedding_type == "relative": - self.relpos = nn.Sequential( - RelativePositionalEncoding(attn_dim=heads), - Rearrange("b i j d -> b d i j"), - ) - - self.layers = nn.ModuleList([]) - for _ in range(depth): - self.layers.append( - nn.ModuleList( - [ - TimeCondAttention( - dim, - heads=heads, - dim_head=dim_head, - norm=True, - time_cond_dim=time_cond_dim, - attn_bias_dim=attn_bias_dim, - rotary_embedding_module=self.rope, - ), - TimeCondFeedForward( - dim, mlp_inner_dim_mult, time_cond_dim=time_cond_dim - ), - ] - ) - ) - - def forward( - self, - x, - time=None, - attn_bias=None, - context=None, - seq_mask=None, - residue_index=None, - ): - if self.pos_emb_type == "absolute": - pos_emb = posemb_sincos_1d(x) - x = x + pos_emb - elif self.pos_emb_type == "absolute_residx": - assert residue_index is not None - pos_emb = posemb_sincos_1d(x, residue_index=residue_index) - x = x + pos_emb - elif self.pos_emb_type == "relative": - assert residue_index is not None - pos_emb = self.relpos(residue_index) - attn_bias = pos_emb if attn_bias is None else attn_bias + pos_emb - if seq_mask is not None: - x = x * seq_mask[..., None] - - for i, (attn, ff) in enumerate(self.layers): - x = x + attn( - x, context=context, time=time, attn_bias=attn_bias, seq_mask=seq_mask - ) - x = x + ff(x, time=time) - if seq_mask is not None: - x = x * seq_mask[..., None] - - return x - - -class TimeCondUViT(nn.Module): - def __init__( - self, - *, - seq_len: int, - dim: int, - patch_size: int = 1, - depth: int = 6, - heads: int = 8, - dim_head: int = 32, - n_filt_per_layer: List[int] = [], - n_blocks_per_layer: int = 2, - n_atoms: int = 37, - channels_per_atom: int = 6, - attn_bias_dim: int = None, - time_cond_dim: int = None, - conv_skip_connection: bool = False, - position_embedding_type: str = "rotary", - ): - super().__init__() - - # Initialize configuration params - if time_cond_dim is None: - time_cond_dim = dim * 4 - self.position_embedding_type = position_embedding_type - channels = channels_per_atom - self.n_conv_layers = n_conv_layers = len(n_filt_per_layer) - if n_conv_layers > 0: - post_conv_filt = n_filt_per_layer[-1] - self.conv_skip_connection = conv_skip_connection and n_conv_layers == 1 - transformer_seq_len = seq_len // (2**n_conv_layers) - assert transformer_seq_len % patch_size == 0 - num_patches = transformer_seq_len // patch_size - dim_a = post_conv_atom_dim = max(1, n_atoms // (2 ** (n_conv_layers - 1))) - if n_conv_layers == 0: - patch_dim = patch_size * n_atoms * channels_per_atom - patch_dim_out = patch_size * n_atoms * 3 - dim_a = n_atoms - elif conv_skip_connection and n_conv_layers == 1: - patch_dim = patch_size * (channels + post_conv_filt) * post_conv_atom_dim - 
patch_dim_out = patch_size * post_conv_filt * post_conv_atom_dim - elif n_conv_layers > 0: - patch_dim = patch_dim_out = patch_size * post_conv_filt * post_conv_atom_dim - - # Make downsampling conv - # Downsamples n-1 times where n is n_conv_layers - down_conv = [] - block_in = channels - for i, nf in enumerate(n_filt_per_layer): - block_out = nf - layer = [] - for j in range(n_blocks_per_layer): - n_groups = 2 if i == 0 and j == 0 else 4 - layer.append( - TimeCondResnetBlock( - block_in, block_out, time_cond_dim, n_norm_in_groups=n_groups - ) - ) - block_in = block_out - down_conv.append(nn.ModuleList(layer)) - self.down_conv = nn.ModuleList(down_conv) - - # Make transformer - self.to_patch_embedding = nn.Sequential( - Rearrange("b c (n p) a -> b n (p c a)", p=patch_size), - nn.Linear(patch_dim, dim), - LayerNorm(dim), - ) - self.transformer = TimeCondTransformer( - dim, - depth, - heads, - dim_head, - time_cond_dim, - attn_bias_dim=attn_bias_dim, - position_embedding_type=position_embedding_type, - ) - self.from_patch = nn.Sequential( - LayerNorm(dim), - nn.Linear(dim, patch_dim_out), - Rearrange("b n (p c a) -> b c (n p) a", p=patch_size, a=dim_a), - ) - nn.init.zeros_(self.from_patch[-2].weight) - nn.init.zeros_(self.from_patch[-2].bias) - - # Make upsampling conv - up_conv = [] - for i, nf in enumerate(reversed(n_filt_per_layer)): - skip_in = nf - block_out = nf - layer = [] - for j in range(n_blocks_per_layer): - layer.append( - TimeCondResnetBlock(block_in + skip_in, block_out, time_cond_dim) - ) - block_in = block_out - up_conv.append(nn.ModuleList(layer)) - self.up_conv = nn.ModuleList(up_conv) - - # Conv out - if n_conv_layers > 0: - self.conv_out = nn.Sequential( - nn.GroupNorm(num_groups=block_out // 4, num_channels=block_out), - nn.SiLU(), - nn.Conv2d(block_out, channels // 2, 3, 1, 1), - ) - - def forward( - self, coords, time_cond, pair_bias=None, seq_mask=None, residue_index=None - ): - if self.n_conv_layers > 0: # pad up to even dims - coords = F.pad(coords, (0, 0, 0, 0, 0, 1, 0, 0)) - - x = rearr_coords = rearrange(coords, "b n a c -> b c n a") - hiddens = [] - for i, layer in enumerate(self.down_conv): - for block in layer: - x = block(x, time=time_cond) - hiddens.append(x) - if i != self.n_conv_layers - 1: - x = downsample(x) - - if self.conv_skip_connection: - x = torch.cat([x, rearr_coords], 1) - - x = self.to_patch_embedding(x) - # if self.position_embedding_type == 'absolute': - # pos_emb = posemb_sincos_1d(x) - # x = x + pos_emb - if seq_mask is not None and x.shape[1] == seq_mask.shape[1]: - x *= seq_mask[..., None] - x = self.transformer( - x, - time=time_cond, - attn_bias=pair_bias, - seq_mask=seq_mask, - residue_index=residue_index, - ) - x = self.from_patch(x) - - for i, layer in enumerate(self.up_conv): - for block in layer: - x = torch.cat([x, hiddens.pop()], 1) - x = block(x, time=time_cond) - if i != self.n_conv_layers - 1: - x = upsample_coords(x, hiddens[-1].shape[2:]) - - if self.n_conv_layers > 0: - x = self.conv_out(x) - x = x[..., :-1, :] # drop even-dims padding - - x = rearrange(x, "b c n a -> b n a c") - return x - - -######################################## - - -class LinearWarmupCosineDecay(torch.optim.lr_scheduler._LRScheduler): - def __init__( - self, - optimizer, - max_lr, - warmup_steps=1000, - decay_steps=int(1e6), - min_lr=1e-6, - **kwargs, - ): - self.max_lr = max_lr - self.min_lr = min_lr - self.warmup_steps = warmup_steps - self.decay_steps = decay_steps - self.total_steps = warmup_steps + decay_steps - super(LinearWarmupCosineDecay, 
self).__init__(optimizer, **kwargs) - - def get_lr(self): - # TODO double check for off-by-one errors - if self.last_epoch < self.warmup_steps: - curr_lr = self.last_epoch / self.warmup_steps * self.max_lr - return [curr_lr for group in self.optimizer.param_groups] - elif self.last_epoch < self.total_steps: - time = (self.last_epoch - self.warmup_steps) / self.decay_steps * np.pi - curr_lr = self.min_lr + (self.max_lr - self.min_lr) * 0.5 * ( - 1 + np.cos(time) - ) - return [curr_lr for group in self.optimizer.param_groups] - else: - return [self.min_lr for group in self.optimizer.param_groups] - - -class NoiseConditionalProteinMPNN(nn.Module): - def __init__( - self, - n_channel=128, - n_layers=3, - n_neighbors=32, - time_cond_dim=None, - vocab_size=21, - input_S_is_embeddings=False, - ): - super().__init__() - self.n_channel = n_channel - self.n_layers = n_layers - self.n_neighbors = n_neighbors - self.time_cond_dim = time_cond_dim - self.vocab_size = vocab_size - self.bb_idxs_if_atom37 = [ - residue_constants.atom_order[a] for a in ["N", "CA", "C", "O"] - ] - - self.mpnn = protein_mpnn.ProteinMPNN( - num_letters=vocab_size, - node_features=n_channel, - edge_features=n_channel, - hidden_dim=n_channel, - num_encoder_layers=n_layers, - num_decoder_layers=n_layers, - vocab=vocab_size, - k_neighbors=n_neighbors, - augment_eps=0.0, - dropout=0.1, - ca_only=False, - time_cond_dim=time_cond_dim, - input_S_is_embeddings=input_S_is_embeddings, - ) - - def forward( - self, denoised_coords, noisy_aatype, seq_mask, residue_index, time_cond - ): - if denoised_coords.shape[-2] == 37: - denoised_coords = denoised_coords[:, :, self.bb_idxs_if_atom37] - - node_embs, encoder_embs = self.mpnn( - X=denoised_coords, - S=noisy_aatype, - mask=seq_mask, - chain_M=seq_mask, - residue_idx=residue_index, - chain_encoding_all=seq_mask, - randn=None, - use_input_decoding_order=False, - decoding_order=None, - causal_mask=False, - time_cond=time_cond, - return_node_embs=True, - ) - return node_embs, encoder_embs diff --git a/spaces/Realcat/image-matching-webui/third_party/TopicFM/src/datasets/sampler.py b/spaces/Realcat/image-matching-webui/third_party/TopicFM/src/datasets/sampler.py deleted file mode 100644 index 131111c4cf69cd8770058dfac2be717aa183978e..0000000000000000000000000000000000000000 --- a/spaces/Realcat/image-matching-webui/third_party/TopicFM/src/datasets/sampler.py +++ /dev/null @@ -1,90 +0,0 @@ -import torch -from torch.utils.data import Sampler, ConcatDataset - - -class RandomConcatSampler(Sampler): - """Random sampler for ConcatDataset. At each epoch, `n_samples_per_subset` samples will be draw from each subset - in the ConcatDataset. If `subset_replacement` is ``True``, sampling within each subset will be done with replacement. - However, it is impossible to sample data without replacement between epochs, unless bulding a stateful sampler lived along the entire training phase. - - For current implementation, the randomness of sampling is ensured no matter the sampler is recreated across epochs or not and call `torch.manual_seed()` or not. - Args: - shuffle (bool): shuffle the random sampled indices across all sub-datsets. - repeat (int): repeatedly use the sampled indices multiple times for training. - [arXiv:1902.05509, arXiv:1901.09335] - NOTE: Don't re-initialize the sampler between epochs (will lead to repeated samples) - NOTE: This sampler behaves differently with DistributedSampler. - It assume the dataset is splitted across ranks instead of replicated. 
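A hypothetical end-to-end use of this sampler (the subset sizes, batch size and seed below are invented):

```python
import torch
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

subsets = [TensorDataset(torch.arange(n, dtype=torch.float32)) for n in (100, 250, 40)]
dataset = ConcatDataset(subsets)

sampler = RandomConcatSampler(
    dataset,
    n_samples_per_subset=32,    # 32 indices drawn from each of the 3 subsets
    subset_replacement=True,
    shuffle=True,
    repeat=1,
    seed=66,
)
loader = DataLoader(dataset, batch_size=8, sampler=sampler)
print(len(sampler))             # 3 * 32 * 1 = 96 samples per epoch
```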
- TODO: Add a `set_epoch()` method to fullfill sampling without replacement across epochs. - ref: https://github.com/PyTorchLightning/pytorch-lightning/blob/e9846dd758cfb1500eb9dba2d86f6912eb487587/pytorch_lightning/trainer/training_loop.py#L373 - """ - - def __init__( - self, - data_source: ConcatDataset, - n_samples_per_subset: int, - subset_replacement: bool = True, - shuffle: bool = True, - repeat: int = 1, - seed: int = None, - ): - if not isinstance(data_source, ConcatDataset): - raise TypeError("data_source should be torch.utils.data.ConcatDataset") - - self.data_source = data_source - self.n_subset = len(self.data_source.datasets) - self.n_samples_per_subset = n_samples_per_subset - self.n_samples = self.n_subset * self.n_samples_per_subset * repeat - self.subset_replacement = subset_replacement - self.repeat = repeat - self.shuffle = shuffle - self.generator = torch.manual_seed(seed) - assert self.repeat >= 1 - - def __len__(self): - return self.n_samples - - def __iter__(self): - indices = [] - # sample from each sub-dataset - for d_idx in range(self.n_subset): - low = 0 if d_idx == 0 else self.data_source.cumulative_sizes[d_idx - 1] - high = self.data_source.cumulative_sizes[d_idx] - if self.subset_replacement: - rand_tensor = torch.randint( - low, - high, - (self.n_samples_per_subset,), - generator=self.generator, - dtype=torch.int64, - ) - else: # sample without replacement - len_subset = len(self.data_source.datasets[d_idx]) - rand_tensor = torch.randperm(len_subset, generator=self.generator) + low - if len_subset >= self.n_samples_per_subset: - rand_tensor = rand_tensor[: self.n_samples_per_subset] - else: # padding with replacement - rand_tensor_replacement = torch.randint( - low, - high, - (self.n_samples_per_subset - len_subset,), - generator=self.generator, - dtype=torch.int64, - ) - rand_tensor = torch.cat([rand_tensor, rand_tensor_replacement]) - indices.append(rand_tensor) - indices = torch.cat(indices) - if self.shuffle: # shuffle the sampled dataset (from multiple subsets) - rand_tensor = torch.randperm(len(indices), generator=self.generator) - indices = indices[rand_tensor] - - # repeat the sampled indices (can be used for RepeatAugmentation or pure RepeatSampling) - if self.repeat > 1: - repeat_indices = [indices.clone() for _ in range(self.repeat - 1)] - if self.shuffle: - _choice = lambda x: x[torch.randperm(len(x), generator=self.generator)] - repeat_indices = map(_choice, repeat_indices) - indices = torch.cat([indices, *repeat_indices], 0) - - assert indices.shape[0] == self.n_samples - return iter(indices.tolist()) diff --git a/spaces/Reself/StableVideo/ldm/modules/distributions/__init__.py b/spaces/Reself/StableVideo/ldm/modules/distributions/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Ritori/TTS_Yui/meldataset.py b/spaces/Ritori/TTS_Yui/meldataset.py deleted file mode 100644 index 44b0bf45aaeaa88896bd6d64e0821dfc5399f5bd..0000000000000000000000000000000000000000 --- a/spaces/Ritori/TTS_Yui/meldataset.py +++ /dev/null @@ -1,168 +0,0 @@ -import math -import os -import random -import torch -import torch.utils.data -import numpy as np -from librosa.util import normalize -from scipy.io.wavfile import read -from librosa.filters import mel as librosa_mel_fn - -MAX_WAV_VALUE = 32768.0 - - -def load_wav(full_path): - sampling_rate, data = read(full_path) - return data, sampling_rate - - -def dynamic_range_compression(x, C=1, clip_val=1e-5): - return 
np.log(np.clip(x, a_min=clip_val, a_max=None) * C) - - -def dynamic_range_decompression(x, C=1): - return np.exp(x) / C - - -def dynamic_range_compression_torch(x, C=1, clip_val=1e-5): - return torch.log(torch.clamp(x, min=clip_val) * C) - - -def dynamic_range_decompression_torch(x, C=1): - return torch.exp(x) / C - - -def spectral_normalize_torch(magnitudes): - output = dynamic_range_compression_torch(magnitudes) - return output - - -def spectral_de_normalize_torch(magnitudes): - output = dynamic_range_decompression_torch(magnitudes) - return output - - -mel_basis = {} -hann_window = {} - - -def mel_spectrogram(y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global mel_basis, hann_window - if fmax not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[str(fmax)+'_'+str(y.device)] = torch.from_numpy(mel).float().to(y.device) - hann_window[str(y.device)] = torch.hann_window(win_size).to(y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[str(y.device)], - center=center, pad_mode='reflect', normalized=False, onesided=True) - - spec = torch.sqrt(spec.pow(2).sum(-1)+(1e-9)) - - spec = torch.matmul(mel_basis[str(fmax)+'_'+str(y.device)], spec) - spec = spectral_normalize_torch(spec) - - return spec - - -def get_dataset_filelist(a): - with open(a.input_training_file, 'r', encoding='utf-8') as fi: - training_files = [os.path.join(a.input_wavs_dir, x.split('|')[0]) - for x in fi.read().split('\n') if len(x) > 0] - - with open(a.input_validation_file, 'r', encoding='utf-8') as fi: - validation_files = [os.path.join(a.input_wavs_dir, x.split('|')[0]) - for x in fi.read().split('\n') if len(x) > 0] - return training_files, validation_files - - -class MelDataset(torch.utils.data.Dataset): - def __init__(self, training_files, segment_size, n_fft, num_mels, - hop_size, win_size, sampling_rate, fmin, fmax, split=True, shuffle=True, n_cache_reuse=1, - device=None, fmax_loss=None, fine_tuning=False, base_mels_path=None): - self.audio_files = training_files - random.seed(1234) - if shuffle: - random.shuffle(self.audio_files) - self.segment_size = segment_size - self.sampling_rate = sampling_rate - self.split = split - self.n_fft = n_fft - self.num_mels = num_mels - self.hop_size = hop_size - self.win_size = win_size - self.fmin = fmin - self.fmax = fmax - self.fmax_loss = fmax_loss - self.cached_wav = None - self.n_cache_reuse = n_cache_reuse - self._cache_ref_count = 0 - self.device = device - self.fine_tuning = fine_tuning - self.base_mels_path = base_mels_path - - def __getitem__(self, index): - filename = self.audio_files[index] - if self._cache_ref_count == 0: - audio, sampling_rate = load_wav(filename) - audio = audio / MAX_WAV_VALUE - if not self.fine_tuning: - audio = normalize(audio) * 0.95 - self.cached_wav = audio - if sampling_rate != self.sampling_rate: - raise ValueError("{} SR doesn't match target {} SR".format( - sampling_rate, self.sampling_rate)) - self._cache_ref_count = self.n_cache_reuse - else: - audio = self.cached_wav - self._cache_ref_count -= 1 - - audio = torch.FloatTensor(audio) - audio = audio.unsqueeze(0) - - if not self.fine_tuning: - if self.split: - if audio.size(1) >= self.segment_size: - 
max_audio_start = audio.size(1) - self.segment_size - audio_start = random.randint(0, max_audio_start) - audio = audio[:, audio_start:audio_start+self.segment_size] - else: - audio = torch.nn.functional.pad(audio, (0, self.segment_size - audio.size(1)), 'constant') - - mel = mel_spectrogram(audio, self.n_fft, self.num_mels, - self.sampling_rate, self.hop_size, self.win_size, self.fmin, self.fmax, - center=False) - else: - mel = np.load( - os.path.join(self.base_mels_path, os.path.splitext(filename)[0] + '.npy')) - mel = torch.from_numpy(mel) - - if len(mel.shape) < 3: - mel = mel.unsqueeze(0) - - if self.split: - frames_per_seg = math.ceil(self.segment_size / self.hop_size) - - if audio.size(1) >= self.segment_size: - mel_start = random.randint(0, mel.size(2) - frames_per_seg - 1) - mel = mel[:, :, mel_start:mel_start + frames_per_seg] - audio = audio[:, mel_start * self.hop_size:(mel_start + frames_per_seg) * self.hop_size] - else: - mel = torch.nn.functional.pad(mel, (0, frames_per_seg - mel.size(2)), 'constant') - audio = torch.nn.functional.pad(audio, (0, self.segment_size - audio.size(1)), 'constant') - - mel_loss = mel_spectrogram(audio, self.n_fft, self.num_mels, - self.sampling_rate, self.hop_size, self.win_size, self.fmin, self.fmax_loss, - center=False) - - return (mel.squeeze(), audio.squeeze(0), filename, mel_loss.squeeze()) - - def __len__(self): - return len(self.audio_files) diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmseg/models/backbones/unet.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmseg/models/backbones/unet.py deleted file mode 100644 index 82caa16a94c195c192a2a920fb7bc7e60f0f3ce3..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmseg/models/backbones/unet.py +++ /dev/null @@ -1,429 +0,0 @@ -import torch.nn as nn -import torch.utils.checkpoint as cp -from annotator.uniformer.mmcv.cnn import (UPSAMPLE_LAYERS, ConvModule, build_activation_layer, - build_norm_layer, constant_init, kaiming_init) -from annotator.uniformer.mmcv.runner import load_checkpoint -from annotator.uniformer.mmcv.utils.parrots_wrapper import _BatchNorm - -from annotator.uniformer.mmseg.utils import get_root_logger -from ..builder import BACKBONES -from ..utils import UpConvBlock - - -class BasicConvBlock(nn.Module): - """Basic convolutional block for UNet. - - This module consists of several plain convolutional layers. - - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of output channels. - num_convs (int): Number of convolutional layers. Default: 2. - stride (int): Whether use stride convolution to downsample - the input feature map. If stride=2, it only uses stride convolution - in the first convolutional layer to downsample the input feature - map. Options are 1 or 2. Default: 1. - dilation (int): Whether use dilated convolution to expand the - receptive field. Set dilation rate of each convolutional layer and - the dilation rate of the first convolutional layer is always 1. - Default: 1. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. Default: False. - conv_cfg (dict | None): Config dict for convolution layer. - Default: None. - norm_cfg (dict | None): Config dict for normalization layer. - Default: dict(type='BN'). - act_cfg (dict | None): Config dict for activation layer in ConvModule. - Default: dict(type='ReLU'). - dcn (bool): Use deformable convolution in convolutional layer or not. 
- Default: None. - plugins (dict): plugins for convolutional layers. Default: None. - """ - - def __init__(self, - in_channels, - out_channels, - num_convs=2, - stride=1, - dilation=1, - with_cp=False, - conv_cfg=None, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU'), - dcn=None, - plugins=None): - super(BasicConvBlock, self).__init__() - assert dcn is None, 'Not implemented yet.' - assert plugins is None, 'Not implemented yet.' - - self.with_cp = with_cp - convs = [] - for i in range(num_convs): - convs.append( - ConvModule( - in_channels=in_channels if i == 0 else out_channels, - out_channels=out_channels, - kernel_size=3, - stride=stride if i == 0 else 1, - dilation=1 if i == 0 else dilation, - padding=1 if i == 0 else dilation, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg)) - - self.convs = nn.Sequential(*convs) - - def forward(self, x): - """Forward function.""" - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(self.convs, x) - else: - out = self.convs(x) - return out - - -@UPSAMPLE_LAYERS.register_module() -class DeconvModule(nn.Module): - """Deconvolution upsample module in decoder for UNet (2X upsample). - - This module uses deconvolution to upsample feature map in the decoder - of UNet. - - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of output channels. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. Default: False. - norm_cfg (dict | None): Config dict for normalization layer. - Default: dict(type='BN'). - act_cfg (dict | None): Config dict for activation layer in ConvModule. - Default: dict(type='ReLU'). - kernel_size (int): Kernel size of the convolutional layer. Default: 4. - """ - - def __init__(self, - in_channels, - out_channels, - with_cp=False, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU'), - *, - kernel_size=4, - scale_factor=2): - super(DeconvModule, self).__init__() - - assert (kernel_size - scale_factor >= 0) and\ - (kernel_size - scale_factor) % 2 == 0,\ - f'kernel_size should be greater than or equal to scale_factor '\ - f'and (kernel_size - scale_factor) should be even numbers, '\ - f'while the kernel size is {kernel_size} and scale_factor is '\ - f'{scale_factor}.' - - stride = scale_factor - padding = (kernel_size - scale_factor) // 2 - self.with_cp = with_cp - deconv = nn.ConvTranspose2d( - in_channels, - out_channels, - kernel_size=kernel_size, - stride=stride, - padding=padding) - - norm_name, norm = build_norm_layer(norm_cfg, out_channels) - activate = build_activation_layer(act_cfg) - self.deconv_upsamping = nn.Sequential(deconv, norm, activate) - - def forward(self, x): - """Forward function.""" - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(self.deconv_upsamping, x) - else: - out = self.deconv_upsamping(x) - return out - - -@UPSAMPLE_LAYERS.register_module() -class InterpConv(nn.Module): - """Interpolation upsample module in decoder for UNet. - - This module uses interpolation to upsample feature map in the decoder - of UNet. It consists of one interpolation upsample layer and one - convolutional layer. It can be one interpolation upsample layer followed - by one convolutional layer (conv_first=False) or one convolutional layer - followed by one interpolation upsample layer (conv_first=True). - - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of output channels. - with_cp (bool): Use checkpoint or not. 
Using checkpoint will save some - memory while slowing down the training speed. Default: False. - norm_cfg (dict | None): Config dict for normalization layer. - Default: dict(type='BN'). - act_cfg (dict | None): Config dict for activation layer in ConvModule. - Default: dict(type='ReLU'). - conv_cfg (dict | None): Config dict for convolution layer. - Default: None. - conv_first (bool): Whether convolutional layer or interpolation - upsample layer first. Default: False. It means interpolation - upsample layer followed by one convolutional layer. - kernel_size (int): Kernel size of the convolutional layer. Default: 1. - stride (int): Stride of the convolutional layer. Default: 1. - padding (int): Padding of the convolutional layer. Default: 1. - upsample_cfg (dict): Interpolation config of the upsample layer. - Default: dict( - scale_factor=2, mode='bilinear', align_corners=False). - """ - - def __init__(self, - in_channels, - out_channels, - with_cp=False, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU'), - *, - conv_cfg=None, - conv_first=False, - kernel_size=1, - stride=1, - padding=0, - upsample_cfg=dict( - scale_factor=2, mode='bilinear', align_corners=False)): - super(InterpConv, self).__init__() - - self.with_cp = with_cp - conv = ConvModule( - in_channels, - out_channels, - kernel_size=kernel_size, - stride=stride, - padding=padding, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - upsample = nn.Upsample(**upsample_cfg) - if conv_first: - self.interp_upsample = nn.Sequential(conv, upsample) - else: - self.interp_upsample = nn.Sequential(upsample, conv) - - def forward(self, x): - """Forward function.""" - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(self.interp_upsample, x) - else: - out = self.interp_upsample(x) - return out - - -@BACKBONES.register_module() -class UNet(nn.Module): - """UNet backbone. - U-Net: Convolutional Networks for Biomedical Image Segmentation. - https://arxiv.org/pdf/1505.04597.pdf - - Args: - in_channels (int): Number of input image channels. Default" 3. - base_channels (int): Number of base channels of each stage. - The output channels of the first stage. Default: 64. - num_stages (int): Number of stages in encoder, normally 5. Default: 5. - strides (Sequence[int 1 | 2]): Strides of each stage in encoder. - len(strides) is equal to num_stages. Normally the stride of the - first stage in encoder is 1. If strides[i]=2, it uses stride - convolution to downsample in the correspondence encoder stage. - Default: (1, 1, 1, 1, 1). - enc_num_convs (Sequence[int]): Number of convolutional layers in the - convolution block of the correspondence encoder stage. - Default: (2, 2, 2, 2, 2). - dec_num_convs (Sequence[int]): Number of convolutional layers in the - convolution block of the correspondence decoder stage. - Default: (2, 2, 2, 2). - downsamples (Sequence[int]): Whether use MaxPool to downsample the - feature map after the first stage of encoder - (stages: [1, num_stages)). If the correspondence encoder stage use - stride convolution (strides[i]=2), it will never use MaxPool to - downsample, even downsamples[i-1]=True. - Default: (True, True, True, True). - enc_dilations (Sequence[int]): Dilation rate of each stage in encoder. - Default: (1, 1, 1, 1, 1). - dec_dilations (Sequence[int]): Dilation rate of each stage in decoder. - Default: (1, 1, 1, 1). - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. Default: False. 
- conv_cfg (dict | None): Config dict for convolution layer. - Default: None. - norm_cfg (dict | None): Config dict for normalization layer. - Default: dict(type='BN'). - act_cfg (dict | None): Config dict for activation layer in ConvModule. - Default: dict(type='ReLU'). - upsample_cfg (dict): The upsample config of the upsample module in - decoder. Default: dict(type='InterpConv'). - norm_eval (bool): Whether to set norm layers to eval mode, namely, - freeze running stats (mean and var). Note: Effect on Batch Norm - and its variants only. Default: False. - dcn (bool): Use deformable convolution in convolutional layer or not. - Default: None. - plugins (dict): plugins for convolutional layers. Default: None. - - Notice: - The input image size should be divisible by the whole downsample rate - of the encoder. More detail of the whole downsample rate can be found - in UNet._check_input_divisible. - - """ - - def __init__(self, - in_channels=3, - base_channels=64, - num_stages=5, - strides=(1, 1, 1, 1, 1), - enc_num_convs=(2, 2, 2, 2, 2), - dec_num_convs=(2, 2, 2, 2), - downsamples=(True, True, True, True), - enc_dilations=(1, 1, 1, 1, 1), - dec_dilations=(1, 1, 1, 1), - with_cp=False, - conv_cfg=None, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU'), - upsample_cfg=dict(type='InterpConv'), - norm_eval=False, - dcn=None, - plugins=None): - super(UNet, self).__init__() - assert dcn is None, 'Not implemented yet.' - assert plugins is None, 'Not implemented yet.' - assert len(strides) == num_stages, \ - 'The length of strides should be equal to num_stages, '\ - f'while the strides is {strides}, the length of '\ - f'strides is {len(strides)}, and the num_stages is '\ - f'{num_stages}.' - assert len(enc_num_convs) == num_stages, \ - 'The length of enc_num_convs should be equal to num_stages, '\ - f'while the enc_num_convs is {enc_num_convs}, the length of '\ - f'enc_num_convs is {len(enc_num_convs)}, and the num_stages is '\ - f'{num_stages}.' - assert len(dec_num_convs) == (num_stages-1), \ - 'The length of dec_num_convs should be equal to (num_stages-1), '\ - f'while the dec_num_convs is {dec_num_convs}, the length of '\ - f'dec_num_convs is {len(dec_num_convs)}, and the num_stages is '\ - f'{num_stages}.' - assert len(downsamples) == (num_stages-1), \ - 'The length of downsamples should be equal to (num_stages-1), '\ - f'while the downsamples is {downsamples}, the length of '\ - f'downsamples is {len(downsamples)}, and the num_stages is '\ - f'{num_stages}.' - assert len(enc_dilations) == num_stages, \ - 'The length of enc_dilations should be equal to num_stages, '\ - f'while the enc_dilations is {enc_dilations}, the length of '\ - f'enc_dilations is {len(enc_dilations)}, and the num_stages is '\ - f'{num_stages}.' - assert len(dec_dilations) == (num_stages-1), \ - 'The length of dec_dilations should be equal to (num_stages-1), '\ - f'while the dec_dilations is {dec_dilations}, the length of '\ - f'dec_dilations is {len(dec_dilations)}, and the num_stages is '\ - f'{num_stages}.' 
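        # A worked example of what the defaults above imply (a sketch, assuming
        # base_channels=64, num_stages=5, strides all 1, downsamples all True):
        # encoder stage outputs are 64 -> 128 -> 256 -> 512 -> 1024 channels;
        # decoder stage i fuses base_channels * 2**i features with a
        # base_channels * 2**(i - 1) skip connection; and the whole downsample
        # rate is 2**4 = 16, so input H and W must be divisible by 16
        # (see _check_input_divisible below).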
- self.num_stages = num_stages - self.strides = strides - self.downsamples = downsamples - self.norm_eval = norm_eval - self.base_channels = base_channels - - self.encoder = nn.ModuleList() - self.decoder = nn.ModuleList() - - for i in range(num_stages): - enc_conv_block = [] - if i != 0: - if strides[i] == 1 and downsamples[i - 1]: - enc_conv_block.append(nn.MaxPool2d(kernel_size=2)) - upsample = (strides[i] != 1 or downsamples[i - 1]) - self.decoder.append( - UpConvBlock( - conv_block=BasicConvBlock, - in_channels=base_channels * 2**i, - skip_channels=base_channels * 2**(i - 1), - out_channels=base_channels * 2**(i - 1), - num_convs=dec_num_convs[i - 1], - stride=1, - dilation=dec_dilations[i - 1], - with_cp=with_cp, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg, - upsample_cfg=upsample_cfg if upsample else None, - dcn=None, - plugins=None)) - - enc_conv_block.append( - BasicConvBlock( - in_channels=in_channels, - out_channels=base_channels * 2**i, - num_convs=enc_num_convs[i], - stride=strides[i], - dilation=enc_dilations[i], - with_cp=with_cp, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg, - dcn=None, - plugins=None)) - self.encoder.append((nn.Sequential(*enc_conv_block))) - in_channels = base_channels * 2**i - - def forward(self, x): - self._check_input_divisible(x) - enc_outs = [] - for enc in self.encoder: - x = enc(x) - enc_outs.append(x) - dec_outs = [x] - for i in reversed(range(len(self.decoder))): - x = self.decoder[i](enc_outs[i], x) - dec_outs.append(x) - - return dec_outs - - def train(self, mode=True): - """Convert the model into training mode while keep normalization layer - freezed.""" - super(UNet, self).train(mode) - if mode and self.norm_eval: - for m in self.modules(): - # trick: eval have effect on BatchNorm only - if isinstance(m, _BatchNorm): - m.eval() - - def _check_input_divisible(self, x): - h, w = x.shape[-2:] - whole_downsample_rate = 1 - for i in range(1, self.num_stages): - if self.strides[i] == 2 or self.downsamples[i - 1]: - whole_downsample_rate *= 2 - assert (h % whole_downsample_rate == 0) \ - and (w % whole_downsample_rate == 0),\ - f'The input image size {(h, w)} should be divisible by the whole '\ - f'downsample rate {whole_downsample_rate}, when num_stages is '\ - f'{self.num_stages}, strides is {self.strides}, and downsamples '\ - f'is {self.downsamples}.' - - def init_weights(self, pretrained=None): - """Initialize the weights in backbone. - - Args: - pretrained (str, optional): Path to pre-trained weights. - Defaults to None. 
- """ - if isinstance(pretrained, str): - logger = get_root_logger() - load_checkpoint(self, pretrained, strict=False, logger=logger) - elif pretrained is None: - for m in self.modules(): - if isinstance(m, nn.Conv2d): - kaiming_init(m) - elif isinstance(m, (_BatchNorm, nn.GroupNorm)): - constant_init(m, 1) - else: - raise TypeError('pretrained must be a str or None') diff --git a/spaces/SIGGRAPH2022/Text2Human/Text2Human/train_vqvae.py b/spaces/SIGGRAPH2022/Text2Human/Text2Human/train_vqvae.py deleted file mode 100644 index 107702af553e9acb6281586b447279006b304e24..0000000000000000000000000000000000000000 --- a/spaces/SIGGRAPH2022/Text2Human/Text2Human/train_vqvae.py +++ /dev/null @@ -1,132 +0,0 @@ -import argparse -import logging -import os -import os.path as osp -import random -import time - -import torch - -from data.segm_attr_dataset import DeepFashionAttrSegmDataset -from models import create_model -from utils.logger import MessageLogger, get_root_logger, init_tb_logger -from utils.options import dict2str, dict_to_nonedict, parse -from utils.util import make_exp_dirs - - -def main(): - # options - parser = argparse.ArgumentParser() - parser.add_argument('-opt', type=str, help='Path to option YAML file.') - args = parser.parse_args() - opt = parse(args.opt, is_train=True) - - # mkdir and loggers - make_exp_dirs(opt) - log_file = osp.join(opt['path']['log'], f"train_{opt['name']}.log") - logger = get_root_logger( - logger_name='base', log_level=logging.INFO, log_file=log_file) - logger.info(dict2str(opt)) - # initialize tensorboard logger - tb_logger = None - if opt['use_tb_logger'] and 'debug' not in opt['name']: - tb_logger = init_tb_logger(log_dir='./tb_logger/' + opt['name']) - - # convert to NoneDict, which returns None for missing keys - opt = dict_to_nonedict(opt) - - # set up data loader - train_dataset = DeepFashionAttrSegmDataset( - img_dir=opt['train_img_dir'], - segm_dir=opt['segm_dir'], - pose_dir=opt['pose_dir'], - ann_dir=opt['train_ann_file'], - xflip=True) - train_loader = torch.utils.data.DataLoader( - dataset=train_dataset, - batch_size=opt['batch_size'], - shuffle=True, - num_workers=opt['num_workers'], - persistent_workers=True, - drop_last=True) - logger.info(f'Number of train set: {len(train_dataset)}.') - opt['max_iters'] = opt['num_epochs'] * len( - train_dataset) // opt['batch_size'] - - val_dataset = DeepFashionAttrSegmDataset( - img_dir=opt['train_img_dir'], - segm_dir=opt['segm_dir'], - pose_dir=opt['pose_dir'], - ann_dir=opt['val_ann_file']) - val_loader = torch.utils.data.DataLoader( - dataset=val_dataset, batch_size=1, shuffle=False) - logger.info(f'Number of val set: {len(val_dataset)}.') - - test_dataset = DeepFashionAttrSegmDataset( - img_dir=opt['test_img_dir'], - segm_dir=opt['segm_dir'], - pose_dir=opt['pose_dir'], - ann_dir=opt['test_ann_file']) - test_loader = torch.utils.data.DataLoader( - dataset=test_dataset, batch_size=1, shuffle=False) - logger.info(f'Number of test set: {len(test_dataset)}.') - - current_iter = 0 - best_epoch = None - best_loss = 100000 - - model = create_model(opt) - - data_time, iter_time = 0, 0 - current_iter = 0 - - # create message logger (formatted outputs) - msg_logger = MessageLogger(opt, current_iter, tb_logger) - - for epoch in range(opt['num_epochs']): - lr = model.update_learning_rate(epoch) - - for _, batch_data in enumerate(train_loader): - data_time = time.time() - data_time - - current_iter += 1 - - model.optimize_parameters(batch_data, current_iter) - - iter_time = time.time() - iter_time - if current_iter 
% opt['print_freq'] == 0: - log_vars = {'epoch': epoch, 'iter': current_iter} - log_vars.update({'lrs': [lr]}) - log_vars.update({'time': iter_time, 'data_time': data_time}) - log_vars.update(model.get_current_log()) - msg_logger(log_vars) - - data_time = time.time() - iter_time = time.time() - - if epoch % opt['val_freq'] == 0: - save_dir = f'{opt["path"]["visualization"]}/valset/epoch_{epoch:03d}' # noqa - os.makedirs(save_dir, exist_ok=opt['debug']) - val_loss_total = model.inference(val_loader, save_dir) - - save_dir = f'{opt["path"]["visualization"]}/testset/epoch_{epoch:03d}' # noqa - os.makedirs(save_dir, exist_ok=opt['debug']) - test_loss_total = model.inference(test_loader, save_dir) - - logger.info(f'Epoch: {epoch}, ' - f'val_loss_total: {val_loss_total}, ' - f'test_loss_total: {test_loss_total}.') - - if test_loss_total < best_loss: - best_epoch = epoch - best_loss = test_loss_total - - logger.info(f'Best epoch: {best_epoch}, ' - f'Best test loss: {best_loss: .4f}.') - - # save model - model.save_network(f'{opt["path"]["models"]}/epoch{epoch}.pth') - - -if __name__ == '__main__': - main() diff --git a/spaces/Smithsonian/amazonian_fish_classifier/README.md b/spaces/Smithsonian/amazonian_fish_classifier/README.md deleted file mode 100644 index 458f4cc5800eac191b920c3ac65ccca81600d369..0000000000000000000000000000000000000000 --- a/spaces/Smithsonian/amazonian_fish_classifier/README.md +++ /dev/null @@ -1,17 +0,0 @@ ---- -title: Amazonian Fish Classifier -emoji: 🐠 -colorFrom: green -colorTo: pink -sdk: streamlit -sdk_version: 1.20.0 -app_file: app.py -pinned: false -license: mit ---- - -This is a demonstration app of the two machine learning models described in the paper: - -> Robillard, A., Trizna, M. G., Ruiz-Tafur, K., Panduro, E. D., de Santana, C. D., White, A. E., Dikow, R. B., Deichmann, J. 2023. Application of a Deep Learning Image Classifier for Identification of Amazonian Fishes. *Ecology and Evolution* [https://doi.org/10.1002/ece3.9987](https://doi.org/10.1002/ece3.9987) - -The models weights and image files used to train the models are available on FigShare at [https://doi.org/10.25573/data.c.5761097.v1](https://doi.org/10.25573/data.c.5761097.v1) \ No newline at end of file diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/attr/setters.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/attr/setters.py deleted file mode 100644 index 12ed6750df35b96e2ccde24a9752dca22929188d..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/attr/setters.py +++ /dev/null @@ -1,73 +0,0 @@ -# SPDX-License-Identifier: MIT - -""" -Commonly used hooks for on_setattr. -""" - - -from . import _config -from .exceptions import FrozenAttributeError - - -def pipe(*setters): - """ - Run all *setters* and return the return value of the last one. - - .. versionadded:: 20.1.0 - """ - - def wrapped_pipe(instance, attrib, new_value): - rv = new_value - - for setter in setters: - rv = setter(instance, attrib, rv) - - return rv - - return wrapped_pipe - - -def frozen(_, __, ___): - """ - Prevent an attribute to be modified. - - .. versionadded:: 20.1.0 - """ - raise FrozenAttributeError() - - -def validate(instance, attrib, new_value): - """ - Run *attrib*'s validator on *new_value* if it has one. - - .. 
versionadded:: 20.1.0 - """ - if _config._run_validators is False: - return new_value - - v = attrib.validator - if not v: - return new_value - - v(instance, attrib, new_value) - - return new_value - - -def convert(instance, attrib, new_value): - """ - Run *attrib*'s converter -- if it has one -- on *new_value* and return the - result. - - .. versionadded:: 20.1.0 - """ - c = attrib.converter - if c: - return c(new_value) - - return new_value - - -# Sentinel for disabling class-wide *on_setattr* hooks for certain attributes. -# autodata stopped working, so the docstring is inlined in the API docs. -NO_OP = object() diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/chromadb/api/types.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/chromadb/api/types.py deleted file mode 100644 index ae228666e715506f92c1f6129e9b38c3ad43b170..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/chromadb/api/types.py +++ /dev/null @@ -1,292 +0,0 @@ -from typing import Any, Optional, Union, Dict, Sequence, TypeVar, List -from typing_extensions import Literal, TypedDict, Protocol -import chromadb.errors as errors -from chromadb.types import ( - Metadata, - Vector, - LiteralValue, - LogicalOperator, - WhereOperator, - OperatorExpression, - Where, - WhereDocumentOperator, - WhereDocument, -) - -# Re-export types from chromadb.types -__all__ = ["Metadata", "Where", "WhereDocument"] - -ID = str -IDs = List[ID] - -Embedding = Vector -Embeddings = List[Embedding] - -Metadatas = List[Metadata] - -CollectionMetadata = Dict[Any, Any] - -Document = str -Documents = List[Document] - -Parameter = TypeVar("Parameter", Embedding, Document, Metadata, ID) -T = TypeVar("T") -OneOrMany = Union[T, List[T]] - -# This should ust be List[Literal["documents", "embeddings", "metadatas", "distances"]] -# However, this provokes an incompatibility with the Overrides library and Python 3.7 -Include = List[ - Union[ - Literal["documents"], - Literal["embeddings"], - Literal["metadatas"], - Literal["distances"], - ] -] - -# Re-export types from chromadb.types -LiteralValue = LiteralValue -LogicalOperator = LogicalOperator -WhereOperator = WhereOperator -OperatorExpression = OperatorExpression -Where = Where -WhereDocumentOperator = WhereDocumentOperator - - -class GetResult(TypedDict): - ids: List[ID] - embeddings: Optional[List[Embedding]] - documents: Optional[List[Document]] - metadatas: Optional[List[Metadata]] - - -class QueryResult(TypedDict): - ids: List[IDs] - embeddings: Optional[List[List[Embedding]]] - documents: Optional[List[List[Document]]] - metadatas: Optional[List[List[Metadata]]] - distances: Optional[List[List[float]]] - - -class IndexMetadata(TypedDict): - dimensionality: int - # The current number of elements in the index (total = additions - deletes) - curr_elements: int - # The auto-incrementing ID of the last inserted element, never decreases so - # can be used as a count of total historical size. Should increase by 1 every add. - # Assume cannot overflow - total_elements_added: int - time_created: float - - -class EmbeddingFunction(Protocol): - def __call__(self, texts: Documents) -> Embeddings: - ... 
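# A minimal sketch (not part of the original module) of a class satisfying the
# EmbeddingFunction protocol above: any callable that takes Documents and
# returns Embeddings works. The length/checksum features below are only a
# stand-in for a real embedding model.
class ToyEmbeddingFunction:
    """Maps each document to a fixed-length pseudo-embedding."""

    def __call__(self, texts: Documents) -> Embeddings:
        return [[float(len(text)), float(sum(map(ord, text)) % 997)] for text in texts]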
- - -def maybe_cast_one_to_many( - target: OneOrMany[Parameter], -) -> List[Parameter]: - """Infers if target is Embedding, Metadata, or Document and casts it to a many object if its one""" - - if isinstance(target, Sequence): - # One Document or ID - if isinstance(target, str) and target is not None: - return [target] - # One Embedding - if isinstance(target[0], (int, float)): - return [target] # type: ignore - # One Metadata dict - if isinstance(target, dict): - return [target] - # Already a sequence - return target # type: ignore - - -def validate_ids(ids: IDs) -> IDs: - """Validates ids to ensure it is a list of strings""" - if not isinstance(ids, list): - raise ValueError(f"Expected IDs to be a list, got {ids}") - if len(ids) == 0: - raise ValueError(f"Expected IDs to be a non-empty list, got {ids}") - for id in ids: - if not isinstance(id, str): - raise ValueError(f"Expected ID to be a str, got {id}") - if len(ids) != len(set(ids)): - dups = set([x for x in ids if ids.count(x) > 1]) - raise errors.DuplicateIDError( - f"Expected IDs to be unique, found duplicates for: {dups}" - ) - return ids - - -def validate_metadata(metadata: Metadata) -> Metadata: - """Validates metadata to ensure it is a dictionary of strings to strings, ints, or floats""" - if not isinstance(metadata, dict): - raise ValueError(f"Expected metadata to be a dict, got {metadata}") - for key, value in metadata.items(): - if not isinstance(key, str): - raise ValueError(f"Expected metadata key to be a str, got {key}") - if not isinstance(value, (str, int, float)): - raise ValueError( - f"Expected metadata value to be a str, int, or float, got {value}" - ) - return metadata - - -def validate_metadatas(metadatas: Metadatas) -> Metadatas: - """Validates metadatas to ensure it is a list of dictionaries of strings to strings, ints, or floats""" - if not isinstance(metadatas, list): - raise ValueError(f"Expected metadatas to be a list, got {metadatas}") - for metadata in metadatas: - validate_metadata(metadata) - return metadatas - - -def validate_where(where: Where) -> Where: - """ - Validates where to ensure it is a dictionary of strings to strings, ints, floats or operator expressions, - or in the case of $and and $or, a list of where expressions - """ - if not isinstance(where, dict): - raise ValueError(f"Expected where to be a dict, got {where}") - if len(where) != 1: - raise ValueError(f"Expected where to have exactly one operator, got {where}") - for key, value in where.items(): - if not isinstance(key, str): - raise ValueError(f"Expected where key to be a str, got {key}") - if ( - key != "$and" - and key != "$or" - and not isinstance(value, (str, int, float, dict)) - ): - raise ValueError( - f"Expected where value to be a str, int, float, or operator expression, got {value}" - ) - if key == "$and" or key == "$or": - if not isinstance(value, list): - raise ValueError( - f"Expected where value for $and or $or to be a list of where expressions, got {value}" - ) - if len(value) <= 1: - raise ValueError( - f"Expected where value for $and or $or to be a list with at least two where expressions, got {value}" - ) - for where_expression in value: - validate_where(where_expression) - # Value is a operator expression - if isinstance(value, dict): - # Ensure there is only one operator - if len(value) != 1: - raise ValueError( - f"Expected operator expression to have exactly one operator, got {value}" - ) - - for operator, operand in value.items(): - # Only numbers can be compared with gt, gte, lt, lte - if operator in ["$gt", 
"$gte", "$lt", "$lte"]: - if not isinstance(operand, (int, float)): - raise ValueError( - f"Expected operand value to be an int or a float for operator {operator}, got {operand}" - ) - - if operator not in ["$gt", "$gte", "$lt", "$lte", "$ne", "$eq"]: - raise ValueError( - f"Expected where operator to be one of $gt, $gte, $lt, $lte, $ne, $eq, got {operator}" - ) - - if not isinstance(operand, (str, int, float)): - raise ValueError( - f"Expected where operand value to be a str, int, or float, got {operand}" - ) - return where - - -def validate_where_document(where_document: WhereDocument) -> WhereDocument: - """ - Validates where_document to ensure it is a dictionary of WhereDocumentOperator to strings, or in the case of $and and $or, - a list of where_document expressions - """ - if not isinstance(where_document, dict): - raise ValueError( - f"Expected where document to be a dictionary, got {where_document}" - ) - if len(where_document) != 1: - raise ValueError( - f"Expected where document to have exactly one operator, got {where_document}" - ) - for operator, operand in where_document.items(): - if operator not in ["$contains", "$and", "$or"]: - raise ValueError( - f"Expected where document operator to be one of $contains, $and, $or, got {operator}" - ) - if operator == "$and" or operator == "$or": - if not isinstance(operand, list): - raise ValueError( - f"Expected document value for $and or $or to be a list of where document expressions, got {operand}" - ) - if len(operand) <= 1: - raise ValueError( - f"Expected document value for $and or $or to be a list with at least two where document expressions, got {operand}" - ) - for where_document_expression in operand: - validate_where_document(where_document_expression) - # Value is a $contains operator - elif not isinstance(operand, str): - raise ValueError( - f"Expected where document operand value for operator $contains to be a str, got {operand}" - ) - return where_document - - -def validate_include(include: Include, allow_distances: bool) -> Include: - """Validates include to ensure it is a list of strings. Since get does not allow distances, allow_distances is used - to control if distances is allowed""" - - if not isinstance(include, list): - raise ValueError(f"Expected include to be a list, got {include}") - for item in include: - if not isinstance(item, str): - raise ValueError(f"Expected include item to be a str, got {item}") - allowed_values = ["embeddings", "documents", "metadatas"] - if allow_distances: - allowed_values.append("distances") - if item not in allowed_values: - raise ValueError( - f"Expected include item to be one of {', '.join(allowed_values)}, got {item}" - ) - return include - - -def validate_n_results(n_results: int) -> int: - """Validates n_results to ensure it is a positive Integer. Since hnswlib does not allow n_results to be negative.""" - # Check Number of requested results - if not isinstance(n_results, int): - raise ValueError( - f"Expected requested number of results to be a int, got {n_results}" - ) - if n_results <= 0: - raise TypeError( - f"Number of requested results {n_results}, cannot be negative, or zero." 
- ) - return n_results - - -def validate_embeddings(embeddings: Embeddings) -> Embeddings: - """Validates embeddings to ensure it is a list of list of ints, or floats""" - if not isinstance(embeddings, list): - raise ValueError(f"Expected embeddings to be a list, got {embeddings}") - if len(embeddings) == 0: - raise ValueError( - f"Expected embeddings to be a list with at least one item, got {embeddings}" - ) - if not all([isinstance(e, list) for e in embeddings]): - raise ValueError( - f"Expected each embedding in the embeddings to be a list, got {embeddings}" - ) - for embedding in embeddings: - if not all([isinstance(value, (int, float)) for value in embedding]): - raise ValueError( - f"Expected each value in the embedding to be a int or float, got {embeddings}" - ) - return embeddings diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_referrers.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_referrers.py deleted file mode 100644 index c7e1bfaf473a22bfddb2ef8a7e1b6aea5c4c4c7d..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_referrers.py +++ /dev/null @@ -1,257 +0,0 @@ -import sys -from _pydevd_bundle import pydevd_xml -from os.path import basename -from _pydev_bundle import pydev_log -from urllib.parse import unquote_plus -from _pydevd_bundle.pydevd_constants import IS_PY311_OR_GREATER - - -#=================================================================================================== -# print_var_node -#=================================================================================================== -def print_var_node(xml_node, stream): - name = xml_node.getAttribute('name') - value = xml_node.getAttribute('value') - val_type = xml_node.getAttribute('type') - - found_as = xml_node.getAttribute('found_as') - stream.write('Name: ') - stream.write(unquote_plus(name)) - stream.write(', Value: ') - stream.write(unquote_plus(value)) - stream.write(', Type: ') - stream.write(unquote_plus(val_type)) - if found_as: - stream.write(', Found as: %s' % (unquote_plus(found_as),)) - stream.write('\n') - - -#=================================================================================================== -# print_referrers -#=================================================================================================== -def print_referrers(obj, stream=None): - if stream is None: - stream = sys.stdout - result = get_referrer_info(obj) - from xml.dom.minidom import parseString - dom = parseString(result) - - xml = dom.getElementsByTagName('xml')[0] - for node in xml.childNodes: - if node.nodeType == node.TEXT_NODE: - continue - - if node.localName == 'for': - stream.write('Searching references for: ') - for child in node.childNodes: - if child.nodeType == node.TEXT_NODE: - continue - print_var_node(child, stream) - - elif node.localName == 'var': - stream.write('Referrer found: ') - print_var_node(node, stream) - - else: - sys.stderr.write('Unhandled node: %s\n' % (node,)) - - return result - - -#=================================================================================================== -# get_referrer_info -#=================================================================================================== -def get_referrer_info(searched_obj): - DEBUG = 0 - if DEBUG: - sys.stderr.write('Getting referrers info.\n') - try: - try: - if searched_obj is None: - 
ret = ['\n'] - - ret.append('\n') - ret.append(pydevd_xml.var_to_xml( - searched_obj, - 'Skipping getting referrers for None', - additional_in_xml=' id="%s"' % (id(searched_obj),))) - ret.append('\n') - ret.append('') - ret = ''.join(ret) - return ret - - obj_id = id(searched_obj) - - try: - if DEBUG: - sys.stderr.write('Getting referrers...\n') - import gc - referrers = gc.get_referrers(searched_obj) - except: - pydev_log.exception() - ret = ['\n'] - - ret.append('\n') - ret.append(pydevd_xml.var_to_xml( - searched_obj, - 'Exception raised while trying to get_referrers.', - additional_in_xml=' id="%s"' % (id(searched_obj),))) - ret.append('\n') - ret.append('') - ret = ''.join(ret) - return ret - - if DEBUG: - sys.stderr.write('Found %s referrers.\n' % (len(referrers),)) - - curr_frame = sys._getframe() - frame_type = type(curr_frame) - - # Ignore this frame and any caller frame of this frame - - ignore_frames = {} # Should be a set, but it's not available on all python versions. - while curr_frame is not None: - if basename(curr_frame.f_code.co_filename).startswith('pydev'): - ignore_frames[curr_frame] = 1 - curr_frame = curr_frame.f_back - - ret = ['\n'] - - ret.append('\n') - if DEBUG: - sys.stderr.write('Searching Referrers of obj with id="%s"\n' % (obj_id,)) - - ret.append(pydevd_xml.var_to_xml( - searched_obj, - 'Referrers of obj with id="%s"' % (obj_id,))) - ret.append('\n') - - curr_frame = sys._getframe() - all_objects = None - - for r in referrers: - try: - if r in ignore_frames: - continue # Skip the references we may add ourselves - except: - pass # Ok: unhashable type checked... - - if r is referrers: - continue - - if r is curr_frame.f_locals: - continue - - r_type = type(r) - r_id = str(id(r)) - - representation = str(r_type) - - found_as = '' - if r_type == frame_type: - if DEBUG: - sys.stderr.write('Found frame referrer: %r\n' % (r,)) - for key, val in r.f_locals.items(): - if val is searched_obj: - found_as = key - break - - elif r_type == dict: - if DEBUG: - sys.stderr.write('Found dict referrer: %r\n' % (r,)) - - # Try to check if it's a value in the dict (and under which key it was found) - for key, val in r.items(): - if val is searched_obj: - found_as = key - if DEBUG: - sys.stderr.write(' Found as %r in dict\n' % (found_as,)) - break - - # Ok, there's one annoying thing: many times we find it in a dict from an instance, - # but with this we don't directly have the class, only the dict, so, to workaround that - # we iterate over all reachable objects ad check if one of those has the given dict. - if all_objects is None: - all_objects = gc.get_objects() - - for x in all_objects: - try: - if getattr(x, '__dict__', None) is r: - r = x - r_type = type(x) - r_id = str(id(r)) - representation = str(r_type) - break - except: - pass # Just ignore any error here (i.e.: ReferenceError, etc.) - - elif r_type in (tuple, list): - if DEBUG: - sys.stderr.write('Found tuple referrer: %r\n' % (r,)) - - for i, x in enumerate(r): - if x is searched_obj: - found_as = '%s[%s]' % (r_type.__name__, i) - if DEBUG: - sys.stderr.write(' Found as %s in tuple: \n' % (found_as,)) - break - - elif IS_PY311_OR_GREATER: - # Up to Python 3.10, gc.get_referrers for an instance actually returned the - # object.__dict__, but on Python 3.11 it returns the actual object, so, - # handling is a bit easier (we don't need the workaround from the dict - # case to find the actual instance, we just need to find the attribute name). 
- if DEBUG: - sys.stderr.write('Found dict referrer: %r\n' % (r,)) - - dct = getattr(r, '__dict__', None) - if dct: - # Try to check if it's a value in the dict (and under which key it was found) - for key, val in dct.items(): - if val is searched_obj: - found_as = key - if DEBUG: - sys.stderr.write(' Found as %r in object instance\n' % (found_as,)) - break - - if found_as: - if not isinstance(found_as, str): - found_as = str(found_as) - found_as = ' found_as="%s"' % (pydevd_xml.make_valid_xml_value(found_as),) - - ret.append(pydevd_xml.var_to_xml( - r, - representation, - additional_in_xml=' id="%s"%s' % (r_id, found_as))) - finally: - if DEBUG: - sys.stderr.write('Done searching for references.\n') - - # If we have any exceptions, don't keep dangling references from this frame to any of our objects. - all_objects = None - referrers = None - searched_obj = None - r = None - x = None - key = None - val = None - curr_frame = None - ignore_frames = None - except: - pydev_log.exception() - ret = ['\n'] - - ret.append('\n') - ret.append(pydevd_xml.var_to_xml( - searched_obj, - 'Error getting referrers for:', - additional_in_xml=' id="%s"' % (id(searched_obj),))) - ret.append('\n') - ret.append('') - ret = ''.join(ret) - return ret - - ret.append('') - ret = ''.join(ret) - return ret - diff --git a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/ops/modulated_deform_conv.py b/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/ops/modulated_deform_conv.py deleted file mode 100644 index 75559579cf053abcc99538606cbb88c723faf783..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/ops/modulated_deform_conv.py +++ /dev/null @@ -1,282 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import math - -import torch -import torch.nn as nn -from torch.autograd import Function -from torch.autograd.function import once_differentiable -from torch.nn.modules.utils import _pair, _single - -from annotator.uniformer.mmcv.utils import deprecated_api_warning -from ..cnn import CONV_LAYERS -from ..utils import ext_loader, print_log - -ext_module = ext_loader.load_ext( - '_ext', - ['modulated_deform_conv_forward', 'modulated_deform_conv_backward']) - - -class ModulatedDeformConv2dFunction(Function): - - @staticmethod - def symbolic(g, input, offset, mask, weight, bias, stride, padding, - dilation, groups, deform_groups): - input_tensors = [input, offset, mask, weight] - if bias is not None: - input_tensors.append(bias) - return g.op( - 'mmcv::MMCVModulatedDeformConv2d', - *input_tensors, - stride_i=stride, - padding_i=padding, - dilation_i=dilation, - groups_i=groups, - deform_groups_i=deform_groups) - - @staticmethod - def forward(ctx, - input, - offset, - mask, - weight, - bias=None, - stride=1, - padding=0, - dilation=1, - groups=1, - deform_groups=1): - if input is not None and input.dim() != 4: - raise ValueError( - f'Expected 4D tensor as input, got {input.dim()}D tensor \ - instead.') - ctx.stride = _pair(stride) - ctx.padding = _pair(padding) - ctx.dilation = _pair(dilation) - ctx.groups = groups - ctx.deform_groups = deform_groups - ctx.with_bias = bias is not None - if not ctx.with_bias: - bias = input.new_empty(0) # fake tensor - # When pytorch version >= 1.6.0, amp is adopted for fp16 mode; - # amp won't cast the type of model (float32), but "offset" is cast - # to float16 by nn.Conv2d automatically, leading to the type - # mismatch with input (when it is float32) or weight. 
- # The flag for whether to use fp16 or amp is the type of "offset", - # we cast weight and input to temporarily support fp16 and amp - # whatever the pytorch version is. - input = input.type_as(offset) - weight = weight.type_as(input) - ctx.save_for_backward(input, offset, mask, weight, bias) - output = input.new_empty( - ModulatedDeformConv2dFunction._output_size(ctx, input, weight)) - ctx._bufs = [input.new_empty(0), input.new_empty(0)] - ext_module.modulated_deform_conv_forward( - input, - weight, - bias, - ctx._bufs[0], - offset, - mask, - output, - ctx._bufs[1], - kernel_h=weight.size(2), - kernel_w=weight.size(3), - stride_h=ctx.stride[0], - stride_w=ctx.stride[1], - pad_h=ctx.padding[0], - pad_w=ctx.padding[1], - dilation_h=ctx.dilation[0], - dilation_w=ctx.dilation[1], - group=ctx.groups, - deformable_group=ctx.deform_groups, - with_bias=ctx.with_bias) - return output - - @staticmethod - @once_differentiable - def backward(ctx, grad_output): - input, offset, mask, weight, bias = ctx.saved_tensors - grad_input = torch.zeros_like(input) - grad_offset = torch.zeros_like(offset) - grad_mask = torch.zeros_like(mask) - grad_weight = torch.zeros_like(weight) - grad_bias = torch.zeros_like(bias) - grad_output = grad_output.contiguous() - ext_module.modulated_deform_conv_backward( - input, - weight, - bias, - ctx._bufs[0], - offset, - mask, - ctx._bufs[1], - grad_input, - grad_weight, - grad_bias, - grad_offset, - grad_mask, - grad_output, - kernel_h=weight.size(2), - kernel_w=weight.size(3), - stride_h=ctx.stride[0], - stride_w=ctx.stride[1], - pad_h=ctx.padding[0], - pad_w=ctx.padding[1], - dilation_h=ctx.dilation[0], - dilation_w=ctx.dilation[1], - group=ctx.groups, - deformable_group=ctx.deform_groups, - with_bias=ctx.with_bias) - if not ctx.with_bias: - grad_bias = None - - return (grad_input, grad_offset, grad_mask, grad_weight, grad_bias, - None, None, None, None, None) - - @staticmethod - def _output_size(ctx, input, weight): - channels = weight.size(0) - output_size = (input.size(0), channels) - for d in range(input.dim() - 2): - in_size = input.size(d + 2) - pad = ctx.padding[d] - kernel = ctx.dilation[d] * (weight.size(d + 2) - 1) + 1 - stride_ = ctx.stride[d] - output_size += ((in_size + (2 * pad) - kernel) // stride_ + 1, ) - if not all(map(lambda s: s > 0, output_size)): - raise ValueError( - 'convolution input is too small (output would be ' + - 'x'.join(map(str, output_size)) + ')') - return output_size - - -modulated_deform_conv2d = ModulatedDeformConv2dFunction.apply - - -class ModulatedDeformConv2d(nn.Module): - - @deprecated_api_warning({'deformable_groups': 'deform_groups'}, - cls_name='ModulatedDeformConv2d') - def __init__(self, - in_channels, - out_channels, - kernel_size, - stride=1, - padding=0, - dilation=1, - groups=1, - deform_groups=1, - bias=True): - super(ModulatedDeformConv2d, self).__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.kernel_size = _pair(kernel_size) - self.stride = _pair(stride) - self.padding = _pair(padding) - self.dilation = _pair(dilation) - self.groups = groups - self.deform_groups = deform_groups - # enable compatibility with nn.Conv2d - self.transposed = False - self.output_padding = _single(0) - - self.weight = nn.Parameter( - torch.Tensor(out_channels, in_channels // groups, - *self.kernel_size)) - if bias: - self.bias = nn.Parameter(torch.Tensor(out_channels)) - else: - self.register_parameter('bias', None) - self.init_weights() - - def init_weights(self): - n = self.in_channels - for k in 
self.kernel_size: - n *= k - stdv = 1. / math.sqrt(n) - self.weight.data.uniform_(-stdv, stdv) - if self.bias is not None: - self.bias.data.zero_() - - def forward(self, x, offset, mask): - return modulated_deform_conv2d(x, offset, mask, self.weight, self.bias, - self.stride, self.padding, - self.dilation, self.groups, - self.deform_groups) - - -@CONV_LAYERS.register_module('DCNv2') -class ModulatedDeformConv2dPack(ModulatedDeformConv2d): - """A ModulatedDeformable Conv Encapsulation that acts as normal Conv - layers. - - Args: - in_channels (int): Same as nn.Conv2d. - out_channels (int): Same as nn.Conv2d. - kernel_size (int or tuple[int]): Same as nn.Conv2d. - stride (int): Same as nn.Conv2d, while tuple is not supported. - padding (int): Same as nn.Conv2d, while tuple is not supported. - dilation (int): Same as nn.Conv2d, while tuple is not supported. - groups (int): Same as nn.Conv2d. - bias (bool or str): If specified as `auto`, it will be decided by the - norm_cfg. Bias will be set as True if norm_cfg is None, otherwise - False. - """ - - _version = 2 - - def __init__(self, *args, **kwargs): - super(ModulatedDeformConv2dPack, self).__init__(*args, **kwargs) - self.conv_offset = nn.Conv2d( - self.in_channels, - self.deform_groups * 3 * self.kernel_size[0] * self.kernel_size[1], - kernel_size=self.kernel_size, - stride=self.stride, - padding=self.padding, - dilation=self.dilation, - bias=True) - self.init_weights() - - def init_weights(self): - super(ModulatedDeformConv2dPack, self).init_weights() - if hasattr(self, 'conv_offset'): - self.conv_offset.weight.data.zero_() - self.conv_offset.bias.data.zero_() - - def forward(self, x): - out = self.conv_offset(x) - o1, o2, mask = torch.chunk(out, 3, dim=1) - offset = torch.cat((o1, o2), dim=1) - mask = torch.sigmoid(mask) - return modulated_deform_conv2d(x, offset, mask, self.weight, self.bias, - self.stride, self.padding, - self.dilation, self.groups, - self.deform_groups) - - def _load_from_state_dict(self, state_dict, prefix, local_metadata, strict, - missing_keys, unexpected_keys, error_msgs): - version = local_metadata.get('version', None) - - if version is None or version < 2: - # the key is different in early versions - # In version < 2, ModulatedDeformConvPack - # loads previous benchmark models. - if (prefix + 'conv_offset.weight' not in state_dict - and prefix[:-1] + '_offset.weight' in state_dict): - state_dict[prefix + 'conv_offset.weight'] = state_dict.pop( - prefix[:-1] + '_offset.weight') - if (prefix + 'conv_offset.bias' not in state_dict - and prefix[:-1] + '_offset.bias' in state_dict): - state_dict[prefix + - 'conv_offset.bias'] = state_dict.pop(prefix[:-1] + - '_offset.bias') - - if version is not None and version > 1: - print_log( - f'ModulatedDeformConvPack {prefix.rstrip(".")} is upgraded to ' - 'version 2.', - logger='root') - - super()._load_from_state_dict(state_dict, prefix, local_metadata, - strict, missing_keys, unexpected_keys, - error_msgs) diff --git a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/evaluation/lvis_evaluation.py b/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/evaluation/lvis_evaluation.py deleted file mode 100644 index 0604feaaf42ffd072e3cb91f395204f818fa709a..0000000000000000000000000000000000000000 --- a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/evaluation/lvis_evaluation.py +++ /dev/null @@ -1,380 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-import copy -import itertools -import json -import logging -import os -import pickle -from collections import OrderedDict -import torch - -import detectron2.utils.comm as comm -from detectron2.config import CfgNode -from detectron2.data import MetadataCatalog -from detectron2.structures import Boxes, BoxMode, pairwise_iou -from detectron2.utils.file_io import PathManager -from detectron2.utils.logger import create_small_table - -from .coco_evaluation import instances_to_coco_json -from .evaluator import DatasetEvaluator - - -class LVISEvaluator(DatasetEvaluator): - """ - Evaluate object proposal and instance detection/segmentation outputs using - LVIS's metrics and evaluation API. - """ - - def __init__( - self, - dataset_name, - tasks=None, - distributed=True, - output_dir=None, - *, - max_dets_per_image=None, - ): - """ - Args: - dataset_name (str): name of the dataset to be evaluated. - It must have the following corresponding metadata: - "json_file": the path to the LVIS format annotation - tasks (tuple[str]): tasks that can be evaluated under the given - configuration. A task is one of "bbox", "segm". - By default, will infer this automatically from predictions. - distributed (True): if True, will collect results from all ranks for evaluation. - Otherwise, will evaluate the results in the current process. - output_dir (str): optional, an output directory to dump results. - max_dets_per_image (None or int): limit on maximum detections per image in evaluating AP - This limit, by default of the LVIS dataset, is 300. - """ - from lvis import LVIS - - self._logger = logging.getLogger(__name__) - - if tasks is not None and isinstance(tasks, CfgNode): - self._logger.warn( - "COCO Evaluator instantiated using config, this is deprecated behavior." - " Please pass in explicit arguments instead." - ) - self._tasks = None # Infering it from predictions should be better - else: - self._tasks = tasks - - self._distributed = distributed - self._output_dir = output_dir - self._max_dets_per_image = max_dets_per_image - - self._cpu_device = torch.device("cpu") - - self._metadata = MetadataCatalog.get(dataset_name) - json_file = PathManager.get_local_path(self._metadata.json_file) - self._lvis_api = LVIS(json_file) - # Test set json files do not contain annotations (evaluation must be - # performed using the LVIS evaluation server). - self._do_evaluation = len(self._lvis_api.get_ann_ids()) > 0 - - def reset(self): - self._predictions = [] - - def process(self, inputs, outputs): - """ - Args: - inputs: the inputs to a LVIS model (e.g., GeneralizedRCNN). - It is a list of dict. Each dict corresponds to an image and - contains keys like "height", "width", "file_name", "image_id". - outputs: the outputs of a LVIS model. It is a list of dicts with key - "instances" that contains :class:`Instances`. 
- """ - for input, output in zip(inputs, outputs): - prediction = {"image_id": input["image_id"]} - - if "instances" in output: - instances = output["instances"].to(self._cpu_device) - prediction["instances"] = instances_to_coco_json(instances, input["image_id"]) - if "proposals" in output: - prediction["proposals"] = output["proposals"].to(self._cpu_device) - self._predictions.append(prediction) - - def evaluate(self): - if self._distributed: - comm.synchronize() - predictions = comm.gather(self._predictions, dst=0) - predictions = list(itertools.chain(*predictions)) - - if not comm.is_main_process(): - return - else: - predictions = self._predictions - - if len(predictions) == 0: - self._logger.warning("[LVISEvaluator] Did not receive valid predictions.") - return {} - - if self._output_dir: - PathManager.mkdirs(self._output_dir) - file_path = os.path.join(self._output_dir, "instances_predictions.pth") - with PathManager.open(file_path, "wb") as f: - torch.save(predictions, f) - - self._results = OrderedDict() - if "proposals" in predictions[0]: - self._eval_box_proposals(predictions) - if "instances" in predictions[0]: - self._eval_predictions(predictions) - # Copy so the caller can do whatever with results - return copy.deepcopy(self._results) - - def _tasks_from_predictions(self, predictions): - for pred in predictions: - if "segmentation" in pred: - return ("bbox", "segm") - return ("bbox",) - - def _eval_predictions(self, predictions): - """ - Evaluate predictions. Fill self._results with the metrics of the tasks. - - Args: - predictions (list[dict]): list of outputs from the model - """ - self._logger.info("Preparing results in the LVIS format ...") - lvis_results = list(itertools.chain(*[x["instances"] for x in predictions])) - tasks = self._tasks or self._tasks_from_predictions(lvis_results) - - # LVIS evaluator can be used to evaluate results for COCO dataset categories. - # In this case `_metadata` variable will have a field with COCO-specific category mapping. - if hasattr(self._metadata, "thing_dataset_id_to_contiguous_id"): - reverse_id_mapping = { - v: k for k, v in self._metadata.thing_dataset_id_to_contiguous_id.items() - } - for result in lvis_results: - result["category_id"] = reverse_id_mapping[result["category_id"]] - else: - # unmap the category ids for LVIS (from 0-indexed to 1-indexed) - for result in lvis_results: - result["category_id"] += 1 - - if self._output_dir: - file_path = os.path.join(self._output_dir, "lvis_instances_results.json") - self._logger.info("Saving results to {}".format(file_path)) - with PathManager.open(file_path, "w") as f: - f.write(json.dumps(lvis_results)) - f.flush() - - if not self._do_evaluation: - self._logger.info("Annotations are not available for evaluation.") - return - - self._logger.info("Evaluating predictions ...") - for task in sorted(tasks): - res = _evaluate_predictions_on_lvis( - self._lvis_api, - lvis_results, - task, - max_dets_per_image=self._max_dets_per_image, - class_names=self._metadata.get("thing_classes"), - ) - self._results[task] = res - - def _eval_box_proposals(self, predictions): - """ - Evaluate the box proposals in predictions. - Fill self._results with the metrics for "box_proposals" task. - """ - if self._output_dir: - # Saving generated box proposals to file. - # Predicted box_proposals are in XYXY_ABS mode. 
- bbox_mode = BoxMode.XYXY_ABS.value - ids, boxes, objectness_logits = [], [], [] - for prediction in predictions: - ids.append(prediction["image_id"]) - boxes.append(prediction["proposals"].proposal_boxes.tensor.numpy()) - objectness_logits.append(prediction["proposals"].objectness_logits.numpy()) - - proposal_data = { - "boxes": boxes, - "objectness_logits": objectness_logits, - "ids": ids, - "bbox_mode": bbox_mode, - } - with PathManager.open(os.path.join(self._output_dir, "box_proposals.pkl"), "wb") as f: - pickle.dump(proposal_data, f) - - if not self._do_evaluation: - self._logger.info("Annotations are not available for evaluation.") - return - - self._logger.info("Evaluating bbox proposals ...") - res = {} - areas = {"all": "", "small": "s", "medium": "m", "large": "l"} - for limit in [100, 1000]: - for area, suffix in areas.items(): - stats = _evaluate_box_proposals(predictions, self._lvis_api, area=area, limit=limit) - key = "AR{}@{:d}".format(suffix, limit) - res[key] = float(stats["ar"].item() * 100) - self._logger.info("Proposal metrics: \n" + create_small_table(res)) - self._results["box_proposals"] = res - - -# inspired from Detectron: -# https://github.com/facebookresearch/Detectron/blob/a6a835f5b8208c45d0dce217ce9bbda915f44df7/detectron/datasets/json_dataset_evaluator.py#L255 # noqa -def _evaluate_box_proposals(dataset_predictions, lvis_api, thresholds=None, area="all", limit=None): - """ - Evaluate detection proposal recall metrics. This function is a much - faster alternative to the official LVIS API recall evaluation code. However, - it produces slightly different results. - """ - # Record max overlap value for each gt box - # Return vector of overlap values - areas = { - "all": 0, - "small": 1, - "medium": 2, - "large": 3, - "96-128": 4, - "128-256": 5, - "256-512": 6, - "512-inf": 7, - } - area_ranges = [ - [0 ** 2, 1e5 ** 2], # all - [0 ** 2, 32 ** 2], # small - [32 ** 2, 96 ** 2], # medium - [96 ** 2, 1e5 ** 2], # large - [96 ** 2, 128 ** 2], # 96-128 - [128 ** 2, 256 ** 2], # 128-256 - [256 ** 2, 512 ** 2], # 256-512 - [512 ** 2, 1e5 ** 2], - ] # 512-inf - assert area in areas, "Unknown area range: {}".format(area) - area_range = area_ranges[areas[area]] - gt_overlaps = [] - num_pos = 0 - - for prediction_dict in dataset_predictions: - predictions = prediction_dict["proposals"] - - # sort predictions in descending order - # TODO maybe remove this and make it explicit in the documentation - inds = predictions.objectness_logits.sort(descending=True)[1] - predictions = predictions[inds] - - ann_ids = lvis_api.get_ann_ids(img_ids=[prediction_dict["image_id"]]) - anno = lvis_api.load_anns(ann_ids) - gt_boxes = [ - BoxMode.convert(obj["bbox"], BoxMode.XYWH_ABS, BoxMode.XYXY_ABS) for obj in anno - ] - gt_boxes = torch.as_tensor(gt_boxes).reshape(-1, 4) # guard against no boxes - gt_boxes = Boxes(gt_boxes) - gt_areas = torch.as_tensor([obj["area"] for obj in anno]) - - if len(gt_boxes) == 0 or len(predictions) == 0: - continue - - valid_gt_inds = (gt_areas >= area_range[0]) & (gt_areas <= area_range[1]) - gt_boxes = gt_boxes[valid_gt_inds] - - num_pos += len(gt_boxes) - - if len(gt_boxes) == 0: - continue - - if limit is not None and len(predictions) > limit: - predictions = predictions[:limit] - - overlaps = pairwise_iou(predictions.proposal_boxes, gt_boxes) - - _gt_overlaps = torch.zeros(len(gt_boxes)) - for j in range(min(len(predictions), len(gt_boxes))): - # find which proposal box maximally covers each gt box - # and get the iou amount of coverage for each gt box - 
max_overlaps, argmax_overlaps = overlaps.max(dim=0) - - # find which gt box is 'best' covered (i.e. 'best' = most iou) - gt_ovr, gt_ind = max_overlaps.max(dim=0) - assert gt_ovr >= 0 - # find the proposal box that covers the best covered gt box - box_ind = argmax_overlaps[gt_ind] - # record the iou coverage of this gt box - _gt_overlaps[j] = overlaps[box_ind, gt_ind] - assert _gt_overlaps[j] == gt_ovr - # mark the proposal box and the gt box as used - overlaps[box_ind, :] = -1 - overlaps[:, gt_ind] = -1 - - # append recorded iou coverage level - gt_overlaps.append(_gt_overlaps) - gt_overlaps = ( - torch.cat(gt_overlaps, dim=0) if len(gt_overlaps) else torch.zeros(0, dtype=torch.float32) - ) - gt_overlaps, _ = torch.sort(gt_overlaps) - - if thresholds is None: - step = 0.05 - thresholds = torch.arange(0.5, 0.95 + 1e-5, step, dtype=torch.float32) - recalls = torch.zeros_like(thresholds) - # compute recall for each iou threshold - for i, t in enumerate(thresholds): - recalls[i] = (gt_overlaps >= t).float().sum() / float(num_pos) - # ar = 2 * np.trapz(recalls, thresholds) - ar = recalls.mean() - return { - "ar": ar, - "recalls": recalls, - "thresholds": thresholds, - "gt_overlaps": gt_overlaps, - "num_pos": num_pos, - } - - -def _evaluate_predictions_on_lvis( - lvis_gt, lvis_results, iou_type, max_dets_per_image=None, class_names=None -): - """ - Args: - iou_type (str): - max_dets_per_image (None or int): limit on maximum detections per image in evaluating AP - This limit, by default of the LVIS dataset, is 300. - class_names (None or list[str]): if provided, will use it to predict - per-category AP. - - Returns: - a dict of {metric name: score} - """ - metrics = { - "bbox": ["AP", "AP50", "AP75", "APs", "APm", "APl", "APr", "APc", "APf"], - "segm": ["AP", "AP50", "AP75", "APs", "APm", "APl", "APr", "APc", "APf"], - }[iou_type] - - logger = logging.getLogger(__name__) - - if len(lvis_results) == 0: # TODO: check if needed - logger.warn("No predictions from the model!") - return {metric: float("nan") for metric in metrics} - - if iou_type == "segm": - lvis_results = copy.deepcopy(lvis_results) - # When evaluating mask AP, if the results contain bbox, LVIS API will - # use the box area as the area of the instance, instead of the mask area. - # This leads to a different definition of small/medium/large. - # We remove the bbox field to let mask AP use mask area. 
- for c in lvis_results: - c.pop("bbox", None) - - if max_dets_per_image is None: - max_dets_per_image = 300 # Default for LVIS dataset - - from lvis import LVISEval, LVISResults - - logger.info(f"Evaluating with max detections per image = {max_dets_per_image}") - lvis_results = LVISResults(lvis_gt, lvis_results, max_dets=max_dets_per_image) - lvis_eval = LVISEval(lvis_gt, lvis_results, iou_type) - lvis_eval.run() - lvis_eval.print_results() - - # Pull the standard metrics from the LVIS results - results = lvis_eval.get_results() - results = {metric: float(results[metric] * 100) for metric in metrics} - logger.info("Evaluation results for {}: \n".format(iou_type) + create_small_table(results)) - return results diff --git a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/docs/tutorials/datasets.md b/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/docs/tutorials/datasets.md deleted file mode 100644 index 91103f64264aa6f3059611c5fe06ecd65bcb986f..0000000000000000000000000000000000000000 --- a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/docs/tutorials/datasets.md +++ /dev/null @@ -1,290 +0,0 @@ -# Use Custom Datasets - -This document explains how the dataset APIs -([DatasetCatalog](../modules/data.html#detectron2.data.DatasetCatalog), [MetadataCatalog](../modules/data.html#detectron2.data.MetadataCatalog)) -work, and how to use them to add custom datasets. - -Datasets that have builtin support in detectron2 are listed in [builtin datasets](builtin_datasets.md). -If you want to use a custom dataset while also reusing detectron2's data loaders, -you will need to: - -1. __Register__ your dataset (i.e., tell detectron2 how to obtain your dataset). -2. Optionally, __register metadata__ for your dataset. - -Next, we explain the above two concepts in detail. - -The [Colab tutorial](https://colab.research.google.com/drive/16jcaJoc6bCFAQ96jDe2HwtXj7BMD_-m5) -has a live example of how to register and train on a dataset of custom formats. - -### Register a Dataset - -To let detectron2 know how to obtain a dataset named "my_dataset", users need to implement -a function that returns the items in your dataset and then tell detectron2 about this -function: -```python -def my_dataset_function(): - ... - return list[dict] in the following format - -from detectron2.data import DatasetCatalog -DatasetCatalog.register("my_dataset", my_dataset_function) -# later, to access the data: -data: List[Dict] = DatasetCatalog.get("my_dataset") -``` - -Here, the snippet associates a dataset named "my_dataset" with a function that returns the data. -The function must return the same data (with same order) if called multiple times. -The registration stays effective until the process exits. - -The function can do arbitrary things and should return the data in `list[dict]`, each dict in either -of the following formats: -1. Detectron2's standard dataset dict, described below. This will make it work with many other builtin - features in detectron2, so it's recommended to use it when it's sufficient. -2. Any custom format. You can also return arbitrary dicts in your own format, - such as adding extra keys for new tasks. - Then you will need to handle them properly downstream as well. - See below for more details. - -#### Standard Dataset Dicts - -For standard tasks -(instance detection, instance/semantic/panoptic segmentation, keypoint detection), -we load the original dataset into `list[dict]` with a specification similar to COCO's annotations. 
-This is our standard representation for a dataset. - -Each dict contains information about one image. -The dict may have the following fields, -and the required fields vary based on what the dataloader or the task needs (see more below). - -```eval_rst -.. list-table:: - :header-rows: 1 - - * - Task - - Fields - * - Common - - file_name, height, width, image_id - - * - Instance detection/segmentation - - annotations - - * - Semantic segmentation - - sem_seg_file_name - - * - Panoptic segmentation - - pan_seg_file_name, segments_info -``` - -+ `file_name`: the full path to the image file. -+ `height`, `width`: integer. The shape of the image. -+ `image_id` (str or int): a unique id that identifies this image. Required by many - evaluators to identify the images, but a dataset may use it for different purposes. -+ `annotations` (list[dict]): Required by __instance detection/segmentation or keypoint detection__ tasks. - Each dict corresponds to annotations of one instance in this image, and - may contain the following keys: - + `bbox` (list[float], required): list of 4 numbers representing the bounding box of the instance. - + `bbox_mode` (int, required): the format of bbox. It must be a member of - [structures.BoxMode](../modules/structures.html#detectron2.structures.BoxMode). - Currently supports: `BoxMode.XYXY_ABS`, `BoxMode.XYWH_ABS`. - + `category_id` (int, required): an integer in the range [0, num_categories-1] representing the category label. - The value num_categories is reserved to represent the "background" category, if applicable. - + `segmentation` (list[list[float]] or dict): the segmentation mask of the instance. - + If `list[list[float]]`, it represents a list of polygons, one for each connected component - of the object. Each `list[float]` is one simple polygon in the format of `[x1, y1, ..., xn, yn]` (n≥3). - The Xs and Ys are absolute coordinates in unit of pixels. - + If `dict`, it represents the per-pixel segmentation mask in COCO's compressed RLE format. - The dict should have keys "size" and "counts". You can convert a uint8 segmentation mask of 0s and - 1s into such dict by `pycocotools.mask.encode(np.asarray(mask, order="F"))`. - `cfg.INPUT.MASK_FORMAT` must be set to `bitmask` if using the default data loader with such format. - + `keypoints` (list[float]): in the format of [x1, y1, v1,..., xn, yn, vn]. - v[i] means the [visibility](http://cocodataset.org/#format-data) of this keypoint. - `n` must be equal to the number of keypoint categories. - The Xs and Ys are absolute real-value coordinates in range [0, W or H]. - - (Note that the keypoint coordinates in COCO format are integers in range [0, W-1 or H-1], which is different - from our standard format. Detectron2 adds 0.5 to COCO keypoint coordinates to convert them from discrete - pixel indices to floating point coordinates.) - + `iscrowd`: 0 (default) or 1. Whether this instance is labeled as COCO's "crowd - region". Don't include this field if you don't know what it means. - - If `annotations` is an empty list, it means the image is labeled to have no objects. - Such images will by default be removed from training, - but can be included using `DATALOADER.FILTER_EMPTY_ANNOTATIONS`. - -+ `sem_seg_file_name` (str): - The full path to the semantic segmentation ground truth file. - It should be a grayscale image whose pixel values are integer labels. -+ `pan_seg_file_name` (str): - The full path to panoptic segmentation ground truth file. 
- It should be an RGB image whose pixel values are integer ids encoded using the - [panopticapi.utils.id2rgb](https://github.com/cocodataset/panopticapi/) function. - The ids are defined by `segments_info`. - If an id does not appear in `segments_info`, the pixel is considered unlabeled - and is usually ignored in training & evaluation. -+ `segments_info` (list[dict]): defines the meaning of each id in panoptic segmentation ground truth. - Each dict has the following keys: - + `id` (int): integer that appears in the ground truth image. - + `category_id` (int): an integer in the range [0, num_categories-1] representing the category label. - + `iscrowd`: 0 (default) or 1. Whether this instance is labeled as COCO's "crowd region". - - -```eval_rst - -.. note:: - - The PanopticFPN model does not use the panoptic segmentation - format defined here, but a combination of both instance segmentation and semantic segmentation data - format. See :doc:`builtin_datasets` for instructions on COCO. - -``` - -Fast R-CNN (with pre-computed proposals) models are rarely used today. -To train a Fast R-CNN, the following extra keys are needed: - -+ `proposal_boxes` (array): 2D numpy array with shape (K, 4) representing K precomputed proposal boxes for this image. -+ `proposal_objectness_logits` (array): numpy array with shape (K, ), which corresponds to the objectness - logits of proposals in 'proposal_boxes'. -+ `proposal_bbox_mode` (int): the format of the precomputed proposal bbox. - It must be a member of - [structures.BoxMode](../modules/structures.html#detectron2.structures.BoxMode). - Default is `BoxMode.XYXY_ABS`. - - - -#### Custom Dataset Dicts for New Tasks - -In the `list[dict]` that your dataset function returns, the dictionary can also have __arbitrary custom data__. -This will be useful for a new task that needs extra information not covered -by the standard dataset dicts. In this case, you need to make sure the downstream code can handle your data -correctly. Usually this requires writing a new `mapper` for the dataloader (see [Use Custom Dataloaders](./data_loading.md)). - -When designing a custom format, note that all dicts are stored in memory -(sometimes serialized and with multiple copies). -To save memory, each dict is meant to contain __small__ but sufficient information -about each sample, such as file names and annotations. -Loading full samples typically happens in the data loader. - -For attributes shared among the entire dataset, use `Metadata` (see below). -To avoid extra memory, do not save such information inside each sample. - -### "Metadata" for Datasets - -Each dataset is associated with some metadata, accessible through -`MetadataCatalog.get(dataset_name).some_metadata`. -Metadata is a key-value mapping that contains information that's shared among -the entire dataset, and usually is used to interpret what's in the dataset, e.g., -names of classes, colors of classes, root of files, etc. -This information will be useful for augmentation, evaluation, visualization, logging, etc. -The structure of metadata depends on what is needed from the corresponding downstream code. - -If you register a new dataset through `DatasetCatalog.register`, -you may also want to add its corresponding metadata through -`MetadataCatalog.get(dataset_name).some_key = some_value`, to enable any features that need the metadata. 
-You can do it like this (using the metadata key "thing_classes" as an example): - -```python -from detectron2.data import MetadataCatalog -MetadataCatalog.get("my_dataset").thing_classes = ["person", "dog"] -``` - -Here is a list of metadata keys that are used by builtin features in detectron2. -If you add your own dataset without these metadata, some features may be -unavailable to you: - -* `thing_classes` (list[str]): Used by all instance detection/segmentation tasks. - A list of names for each instance/thing category. - If you load a COCO format dataset, it will be automatically set by the function `load_coco_json`. - -* `thing_colors` (list[tuple(r, g, b)]): Pre-defined color (in [0, 255]) for each thing category. - Used for visualization. If not given, random colors will be used. - -* `stuff_classes` (list[str]): Used by semantic and panoptic segmentation tasks. - A list of names for each stuff category. - -* `stuff_colors` (list[tuple(r, g, b)]): Pre-defined color (in [0, 255]) for each stuff category. - Used for visualization. If not given, random colors are used. - -* `ignore_label` (int): Used by semantic and panoptic segmentation tasks. Pixels in ground-truth - annotations with this category label should be ignored in evaluation. Typically these are "unlabeled" - pixels. - -* `keypoint_names` (list[str]): Used by keypoint detection. A list of names for each keypoint. - -* `keypoint_flip_map` (list[tuple[str]]): Used by keypoint detection. A list of pairs of names, - where each pair are the two keypoints that should be flipped if the image is - flipped horizontally during augmentation. -* `keypoint_connection_rules`: list[tuple(str, str, (r, g, b))]. Each tuple specifies a pair of keypoints - that are connected and the color (in [0, 255]) to use for the line between them when visualized. - -Some additional metadata that are specific to the evaluation of certain datasets (e.g. COCO): - -* `thing_dataset_id_to_contiguous_id` (dict[int->int]): Used by all instance detection/segmentation tasks in the COCO format. - A mapping from instance class ids in the dataset to contiguous ids in range [0, #class). - Will be automatically set by the function `load_coco_json`. - -* `stuff_dataset_id_to_contiguous_id` (dict[int->int]): Used when generating prediction json files for - semantic/panoptic segmentation. - A mapping from semantic segmentation class ids in the dataset - to contiguous ids in [0, num_categories). It is useful for evaluation only. - -* `json_file`: The COCO annotation json file. Used by COCO evaluation for COCO-format datasets. -* `panoptic_root`, `panoptic_json`: Used by COCO-format panoptic evaluation. -* `evaluator_type`: Used by the builtin main training script to select - evaluator. Don't use it in a new training script. - You can just provide the [DatasetEvaluator](../modules/evaluation.html#detectron2.evaluation.DatasetEvaluator) - for your dataset directly in your main script. - -```eval_rst -.. note:: - - In recognition, sometimes we use the term "thing" for instance-level tasks, - and "stuff" for semantic segmentation tasks. - Both are used in panoptic segmentation tasks. - For background on the concept of "thing" and "stuff", see - `On Seeing Stuff: The Perception of Materials by Humans and Machines - `_. 
-``` - -### Register a COCO Format Dataset - -If your instance-level (detection, segmentation, keypoint) dataset is already a json file in the COCO format, -the dataset and its associated metadata can be registered easily with: -```python -from detectron2.data.datasets import register_coco_instances -register_coco_instances("my_dataset", {}, "json_annotation.json", "path/to/image/dir") -``` - -If your dataset is in COCO format but needs to be further processed, or has extra custom per-instance annotations, -the [load_coco_json](../modules/data.html#detectron2.data.datasets.load_coco_json) -function might be useful. - -### Update the Config for New Datasets - -Once you've registered the dataset, you can use the name of the dataset (e.g., "my_dataset" in -the example above) in `cfg.DATASETS.{TRAIN,TEST}`. -There are other configs you might want to change to train or evaluate on new datasets: - -* `MODEL.ROI_HEADS.NUM_CLASSES` and `MODEL.RETINANET.NUM_CLASSES` are the number of thing classes - for R-CNN and RetinaNet models, respectively. -* `MODEL.ROI_KEYPOINT_HEAD.NUM_KEYPOINTS` sets the number of keypoints for Keypoint R-CNN. - You'll also need to set [Keypoint OKS](http://cocodataset.org/#keypoints-eval) - with `TEST.KEYPOINT_OKS_SIGMAS` for evaluation. -* `MODEL.SEM_SEG_HEAD.NUM_CLASSES` sets the number of stuff classes for Semantic FPN & Panoptic FPN. -* `TEST.DETECTIONS_PER_IMAGE` controls the maximum number of objects to be detected. - Set it to a larger number if test images may contain >100 objects. -* If you're training Fast R-CNN (with precomputed proposals), `DATASETS.PROPOSAL_FILES_{TRAIN,TEST}` - need to match the datasets. The format of proposal files is documented - [here](../modules/data.html#detectron2.data.load_proposals_into_dataset). - -New models -(e.g. [TensorMask](../../projects/TensorMask), -[PointRend](../../projects/PointRend)) -often have similar configs of their own that need to be changed as well. - -```eval_rst -.. tip:: - - After changing the number of classes, certain layers in a pre-trained model will become incompatible - and therefore cannot be loaded to the new model. - This is expected, and loading such pre-trained models will produce warnings about such layers. 
-``` diff --git a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/meta_arch/centernet_detector.py b/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/meta_arch/centernet_detector.py deleted file mode 100644 index b7525c7b31cbbca504442e9a0dc8fb5005ea91b3..0000000000000000000000000000000000000000 --- a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/meta_arch/centernet_detector.py +++ /dev/null @@ -1,69 +0,0 @@ -import math -import json -import numpy as np -import torch -from torch import nn - -from detectron2.modeling.meta_arch.build import META_ARCH_REGISTRY -from detectron2.modeling import build_backbone, build_proposal_generator -from detectron2.modeling import detector_postprocess -from detectron2.structures import ImageList - -@META_ARCH_REGISTRY.register() -class CenterNetDetector(nn.Module): - def __init__(self, cfg): - super().__init__() - self.mean, self.std = cfg.MODEL.PIXEL_MEAN, cfg.MODEL.PIXEL_STD - self.register_buffer("pixel_mean", torch.Tensor(cfg.MODEL.PIXEL_MEAN).view(-1, 1, 1)) - self.register_buffer("pixel_std", torch.Tensor(cfg.MODEL.PIXEL_STD).view(-1, 1, 1)) - - self.backbone = build_backbone(cfg) - self.proposal_generator = build_proposal_generator( - cfg, self.backbone.output_shape()) # TODO: change to a more precise name - - - def forward(self, batched_inputs): - if not self.training: - return self.inference(batched_inputs) - images = self.preprocess_image(batched_inputs) - features = self.backbone(images.tensor) - gt_instances = [x["instances"].to(self.device) for x in batched_inputs] - - _, proposal_losses = self.proposal_generator( - images, features, gt_instances) - return proposal_losses - - - @property - def device(self): - return self.pixel_mean.device - - - @torch.no_grad() - def inference(self, batched_inputs, do_postprocess=True): - images = self.preprocess_image(batched_inputs) - inp = images.tensor - features = self.backbone(inp) - proposals, _ = self.proposal_generator(images, features, None) - - processed_results = [] - for results_per_image, input_per_image, image_size in zip( - proposals, batched_inputs, images.image_sizes): - if do_postprocess: - height = input_per_image.get("height", image_size[0]) - width = input_per_image.get("width", image_size[1]) - r = detector_postprocess(results_per_image, height, width) - processed_results.append({"instances": r}) - else: - r = results_per_image - processed_results.append(r) - return processed_results - - def preprocess_image(self, batched_inputs): - """ - Normalize, pad and batch the input images. 
- """ - images = [x["image"].to(self.device) for x in batched_inputs] - images = [(x - self.pixel_mean) / self.pixel_std for x in images] - images = ImageList.from_tensors(images, self.backbone.size_divisibility) - return images diff --git a/spaces/ThirdEyeData/Object-Detection-Using-FRCNN/app.py b/spaces/ThirdEyeData/Object-Detection-Using-FRCNN/app.py deleted file mode 100644 index 7d07ed3ecdaf0b9a585b84f61047640eb8268aab..0000000000000000000000000000000000000000 --- a/spaces/ThirdEyeData/Object-Detection-Using-FRCNN/app.py +++ /dev/null @@ -1,133 +0,0 @@ -import streamlit as st -import torch -import torchvision -import torchvision.transforms as transforms -from torchvision.models.detection.faster_rcnn import FastRCNNPredictor -from torchvision.transforms import ToTensor -from PIL import Image, ImageDraw -import cv2 -import numpy as np -import pandas as pd -import os - -import tempfile -from tempfile import NamedTemporaryFile - -# Create an FRCNN model instance with the same structure as the saved model -model = torchvision.models.detection.fasterrcnn_resnet50_fpn(num_classes=91) - -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - -# Load the saved parameters into the model (map_location keeps this working on CPU-only machines) -model.load_state_dict(torch.load("frcnn_model.pth", map_location=device)) - -# Define the classes for object detection -classes = [ - '__background__', 'person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', - 'train', 'truck', 'boat', 'traffic light', 'fire hydrant', 'N/A', 'stop sign', - 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow', - 'elephant', 'bear', 'zebra', 'giraffe', 'N/A', 'backpack', 'umbrella', 'N/A', - 'N/A', 'handbag', 'tie', 'suitcase', 'frisbee', 'skis', 'snowboard', - 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', - 'surfboard', 'tennis racket', 'bottle', 'N/A', 'wine glass', 'cup', 'fork', - 'knife', 'spoon', 'bowl', 'banana', 'apple', 'sandwich', 'orange', - 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch', - 'potted plant', 'bed', 'N/A', 'dining table', 'N/A', 'N/A', 'toilet', 'N/A', - 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone', 'microwave', - 'oven', 'toaster', 'sink', 'refrigerator', 'N/A', 'book', 'clock', 'vase', - 'scissors', 'teddy bear', 'hair drier', 'toothbrush' - ] - -# Set the confidence score threshold for keeping detections; predictions scoring below it are discarded -threshold = 0.5 - -st.title(""" Object Detection Using Faster-RCNN """) - -# st.subheader("Prediction of Object Detection") - -st.write(""" The Faster R-CNN (Region-based Convolutional Neural Network) is a powerful object detection model that combines deep - learning with region proposal networks to achieve highly accurate object detection in images. - It is trained on a large dataset of images and can detect a wide range of objects with high precision and recall. - The model is based on the ResNet-50 architecture, which allows it to capture complex visual features from the input image. - It uses a two-stage approach, first proposing regions of interest (RoIs) in the image and then classifying and refining the - object boundaries within these RoIs. This approach makes it extremely efficient and accurate in detecting multiple objects - in a single image. 
- """) - -images = ["test2.jpg","img7.jpg","img20.jpg","img23.jpg","test1.jpg","img18.jpg","img3.jpg","img15.jpg","img17.jpg"] -with st.sidebar: - st.write("Choose an Image") - st.image(images) - -# define the function to perform object detection on an image -def detect_objects(image_path): - # load the image - image = Image.open(image_path).convert('RGB') - - # convert the image to a tensor - image_tensor = ToTensor()(image).to(device) - - # run the image through the model to get the predictions - model.eval() - with torch.no_grad(): - predictions = model([image_tensor]) - - # filter out the predictions below the threshold - scores = predictions[0]['scores'].cpu().numpy() - boxes = predictions[0]['boxes'].cpu().numpy() - labels = predictions[0]['labels'].cpu().numpy() - mask = scores > threshold - scores = scores[mask] - boxes = boxes[mask] - labels = labels[mask] - - # create a new image with the predicted objects outlined in rectangles - draw = ImageDraw.Draw(image) - for box, label in zip(boxes, labels): - - # draw the rectangle around the object - draw.rectangle([(box[0], box[1]), (box[2], box[3])], outline='red') - - # write the object class above the rectangle - class_name = classes[label] - draw.text((box[0], box[1]), class_name, fill='yellow') - - # show the image - st.write("Objects detected in the image are: ") - st.image(image, use_column_width=True) - # st.image.show() - - -file = st.file_uploader('Upload an Image', type=(["jpeg", "jpg", "png"])) - - -if file is None: - st.write("Please upload an image file") -else: - image = Image.open(file) - st.write("Input Image") - st.image(image, use_column_width=True) - with NamedTemporaryFile(dir='.', suffix='.') as f: - f.write(file.getbuffer()) - # your_function_which_takes_a_path(f.name) - detect_objects(f.name) - -# if file is None: -# st.write("Please upload an image file") -# else: -# image = Image.open(file) -# st.write("Input Image") -# st.image(image, use_column_width=True) -# with NamedTemporaryFile(dir='.', suffix='.jpeg') as f: # this line gives an error and only accepts .jpeg, so the snippet above is used instead, -# f.write(file.getbuffer()) # which will accept all image formats. -# # your_function_which_takes_a_path(f.name) -# detect_objects(f.name) - -st.write(""" This Streamlit app provides a user-friendly interface for uploading an image and visualizing the output of the Faster R-CNN - model. It displays the uploaded image along with the predicted objects highlighted with bounding box overlays. The app allows - users to explore the detected objects in the image, providing valuable insights and understanding of the model's predictions. - It can be used for a wide range of applications, such as object recognition, image analysis, and visual storytelling. - Whether it's identifying objects in real-world images or understanding the capabilities of state-of-the-art object detection - models, this Streamlit app powered by Faster R-CNN is a powerful tool for computer vision tasks. 
- """) - diff --git a/spaces/Trangluna2002/AI_Cover_Gen/src/infer_pack/commons.py b/spaces/Trangluna2002/AI_Cover_Gen/src/infer_pack/commons.py deleted file mode 100644 index 54470986f37825b35d90d7efa7437d1c26b87215..0000000000000000000000000000000000000000 --- a/spaces/Trangluna2002/AI_Cover_Gen/src/infer_pack/commons.py +++ /dev/null @@ -1,166 +0,0 @@ -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size * dilation - dilation) / 2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += ( - 0.5 * (torch.exp(2.0 * logs_p) + ((m_p - m_q) ** 2)) * torch.exp(-2.0 * logs_q) - ) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def slice_segments2(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d(length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = math.log(float(max_timescale) / float(min_timescale)) / ( - num_timescales - 1 - ) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment - ) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): 
- n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2, 3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): - parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1.0 / norm_type) - return total_norm diff --git a/spaces/Vicent3/ocr-endpoint/style.css b/spaces/Vicent3/ocr-endpoint/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/Vicent3/ocr-endpoint/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/VoiceHero69/changer/webui/modules/download.py b/spaces/VoiceHero69/changer/webui/modules/download.py deleted file mode 100644 index c85fc65837e89f91a7f5176e933945a958cdade4..0000000000000000000000000000000000000000 --- a/spaces/VoiceHero69/changer/webui/modules/download.py +++ /dev/null @@ -1,54 +0,0 @@ -import os.path - -import gradio -import huggingface_hub -import webui.modules.models as mod - -model_types = ['text-to-speech', 'automatic-speech-recognition', 'audio-to-audio', 'rvc'] - - -class AutoModel: - def __init__(self, repo_id, model_type): - self.repo_id = repo_id - self.model_type = model_type - - def __str__(self): - return self.repo_id - - -def get_rvc_models(): - path = os.path.join('data', 'models', 'rvc') - output = [] - for f in os.listdir(path): - f_path = os.path.join(path, f) - if os.path.isdir(f_path): - for f2 in os.listdir(f_path): - if f2.endswith('.pth') and f2 not in ['f0D40k.pth', 'f0G40k.pth', 'f0D48k.pth', 'f0G48k.pth']: - output.append(os.path.join(f, f2)) - # Don't allow files anymore, it's bugged. 
- # elif os.path.isfile(f_path): - # if f.endswith('.pth') and f not in ['f0D40k.pth', 'f0G40k.pth', 'f0D48k.pth', 'f0G48k.pth']: - # output.append(f) - return output - - -def fill_models(model_type: str): - if model_type == 'text-to-speech': - return [m for m in mod.all_tts() if not m.no_install] - if model_type == 'rvc': - return get_rvc_models() - return [model.modelId for model in - huggingface_hub.list_models(filter=huggingface_hub.ModelFilter(task=model_type), sort='downloads')] - - -def get_file_name(repo_id: str): - return repo_id.replace('/', '--') - - -def hub_download(repo_id: str, model_type: str): - try: - huggingface_hub.snapshot_download(repo_id, local_dir_use_symlinks=False, - local_dir=f'data/models/{model_type}/{get_file_name(repo_id)}') - except Exception as e: - return [f'

    {str(e)}

    ', gradio.Dropdown.update()] - return [f"Successfully downloaded {repo_id}", mod.refresh_choices()] diff --git a/spaces/XS-1/BW_IMAGE_VIDEO_COLORIZER/fastai/callbacks/oversampling.py b/spaces/XS-1/BW_IMAGE_VIDEO_COLORIZER/fastai/callbacks/oversampling.py deleted file mode 100644 index 0254185f8f1b89362e60ff64fffaecb7cbc5b193..0000000000000000000000000000000000000000 --- a/spaces/XS-1/BW_IMAGE_VIDEO_COLORIZER/fastai/callbacks/oversampling.py +++ /dev/null @@ -1,22 +0,0 @@ -from ..torch_core import * -from ..basic_data import DataBunch -from ..callback import * -from ..basic_train import Learner,LearnerCallback -from torch.utils.data.sampler import WeightedRandomSampler - -__all__ = ['OverSamplingCallback'] - - - -class OverSamplingCallback(LearnerCallback): - def __init__(self,learn:Learner,weights:torch.Tensor=None): - super().__init__(learn) - self.labels = self.learn.data.train_dl.dataset.y.items - _, counts = np.unique(self.labels,return_counts=True) - self.weights = (weights if weights is not None else - torch.DoubleTensor((1/counts)[self.labels])) - self.label_counts = np.bincount([self.learn.data.train_dl.dataset.y[i].data for i in range(len(self.learn.data.train_dl.dataset))]) - self.total_len_oversample = int(self.learn.data.c*np.max(self.label_counts)) - - def on_train_begin(self, **kwargs): - self.learn.data.train_dl.dl.batch_sampler = BatchSampler(WeightedRandomSampler(self.weights,self.total_len_oversample), self.learn.data.train_dl.batch_size,False) \ No newline at end of file diff --git a/spaces/Xanthius/llama-token-counter/README.md b/spaces/Xanthius/llama-token-counter/README.md deleted file mode 100644 index 4cf23803d668bddf27e2ea65cf149889e95922f2..0000000000000000000000000000000000000000 --- a/spaces/Xanthius/llama-token-counter/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Llama Token Counter -emoji: 📈 -colorFrom: blue -colorTo: yellow -sdk: gradio -sdk_version: 3.24.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Xhaheen/tasweer/index.html b/spaces/Xhaheen/tasweer/index.html deleted file mode 100644 index 45c685a445e9421effe287fc577ea629d5df832a..0000000000000000000000000000000000000000 --- a/spaces/Xhaheen/tasweer/index.html +++ /dev/null @@ -1,64 +0,0 @@ - - - - - - - - - - - - - - - - - - - - -
    - - - \ No newline at end of file diff --git a/spaces/YONG627/456123/yolov5-code-main/utils/google_app_engine/Dockerfile b/spaces/YONG627/456123/yolov5-code-main/utils/google_app_engine/Dockerfile deleted file mode 100644 index 0155618f475104e9858b81470339558156c94e13..0000000000000000000000000000000000000000 --- a/spaces/YONG627/456123/yolov5-code-main/utils/google_app_engine/Dockerfile +++ /dev/null @@ -1,25 +0,0 @@ -FROM gcr.io/google-appengine/python - -# Create a virtualenv for dependencies. This isolates these packages from -# system-level packages. -# Use -p python3 or -p python3.7 to select python version. Default is version 2. -RUN virtualenv /env -p python3 - -# Setting these environment variables are the same as running -# source /env/bin/activate. -ENV VIRTUAL_ENV /env -ENV PATH /env/bin:$PATH - -RUN apt-get update && apt-get install -y python-opencv - -# Copy the application's requirements.txt and run pip to install all -# dependencies into the virtualenv. -ADD requirements.txt /app/requirements.txt -RUN pip install -r /app/requirements.txt - -# Add the application source code. -ADD . /app - -# Run a WSGI server to serve the application. gunicorn must be declared as -# a dependency in requirements.txt. -CMD gunicorn -b :$PORT main:app diff --git a/spaces/Yan233th/so-vits-svc-models/modules/mel_processing.py b/spaces/Yan233th/so-vits-svc-models/modules/mel_processing.py deleted file mode 100644 index 99c5b35beb83f3b288af0fac5b49ebf2c69f062c..0000000000000000000000000000000000000000 --- a/spaces/Yan233th/so-vits-svc-models/modules/mel_processing.py +++ /dev/null @@ -1,112 +0,0 @@ -import math -import os -import random -import torch -from torch import nn -import torch.nn.functional as F -import torch.utils.data -import numpy as np -import librosa -import librosa.util as librosa_util -from librosa.util import normalize, pad_center, tiny -from scipy.signal import get_window -from scipy.io.wavfile import read -from librosa.filters import mel as librosa_mel_fn - -MAX_WAV_VALUE = 32768.0 - - -def dynamic_range_compression_torch(x, C=1, clip_val=1e-5): - """ - PARAMS - ------ - C: compression factor - """ - return torch.log(torch.clamp(x, min=clip_val) * C) - - -def dynamic_range_decompression_torch(x, C=1): - """ - PARAMS - ------ - C: compression factor used to compress - """ - return torch.exp(x) / C - - -def spectral_normalize_torch(magnitudes): - output = dynamic_range_compression_torch(magnitudes) - return output - - -def spectral_de_normalize_torch(magnitudes): - output = dynamic_range_decompression_torch(magnitudes) - return output - - -mel_basis = {} -hann_window = {} - - -def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - return spec - - -def 
spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax): - global mel_basis - dtype_device = str(spec.dtype) + '_' + str(spec.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sr=sampling_rate, n_fft=n_fft, n_mels=num_mels, fmin=fmin, fmax=fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=spec.dtype, device=spec.device) - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - return spec - - -def mel_spectrogram_torch(y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global mel_basis, hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sr=sampling_rate, n_fft=n_fft, n_mels=num_mels, fmin=fmin, fmax=fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=y.dtype, device=y.device) - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - - return spec diff --git a/spaces/YazawaSunrise/so-vits-svc-LoveLive/commons.py b/spaces/YazawaSunrise/so-vits-svc-LoveLive/commons.py deleted file mode 100644 index 074888006392e956ce204d8368362dbb2cd4e304..0000000000000000000000000000000000000000 --- a/spaces/YazawaSunrise/so-vits-svc-LoveLive/commons.py +++ /dev/null @@ -1,188 +0,0 @@ -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - -def slice_pitch_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, idx_str:idx_end] - return ret - -def rand_slice_segments_with_pitch(x, pitch, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - ret_pitch = slice_pitch_segments(pitch, ids_str, segment_size) - return ret, ret_pitch, ids_str - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - dilation)/2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def intersperse(lst, item): - result = [item] * (len(lst) * 2 + 1) - result[1::2] = lst - return result - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += 0.5 * 
(torch.exp(2. * logs_p) + ((m_p - m_q)**2)) * torch.exp(-2. * logs_q) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def rand_spec_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d( - length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = ( - math.log(float(max_timescale) / float(min_timescale)) / - (num_timescales - 1)) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - 
cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2,3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): - parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1. / norm_type) - return total_norm diff --git a/spaces/Yiqin/ChatVID/model/vision/grit_src/grit/modeling/roi_heads/grit_fast_rcnn.py b/spaces/Yiqin/ChatVID/model/vision/grit_src/grit/modeling/roi_heads/grit_fast_rcnn.py deleted file mode 100644 index 5d03daabac26aecf214baf1f743c97a5d7486bf7..0000000000000000000000000000000000000000 --- a/spaces/Yiqin/ChatVID/model/vision/grit_src/grit/modeling/roi_heads/grit_fast_rcnn.py +++ /dev/null @@ -1,126 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# Modified by Jialian Wu from https://github.com/facebookresearch/Detic/blob/main/detic/modeling/roi_heads/detic_fast_rcnn.py -import torch -from fvcore.nn import giou_loss, smooth_l1_loss -from torch import nn -from torch.nn import functional as F -import fvcore.nn.weight_init as weight_init -from detectron2.config import configurable -from detectron2.layers import ShapeSpec, batched_nms, cat, cross_entropy, nonzero_tuple -from detectron2.modeling.roi_heads.fast_rcnn import FastRCNNOutputLayers -from detectron2.modeling.roi_heads.fast_rcnn import _log_classification_stats - - -__all__ = ["GRiTFastRCNNOutputLayers"] - - -class GRiTFastRCNNOutputLayers(FastRCNNOutputLayers): - @configurable - def __init__( - self, - input_shape: ShapeSpec, - **kwargs, - ): - super().__init__( - input_shape=input_shape, - **kwargs, - ) - - input_size = input_shape.channels * \ - (input_shape.width or 1) * (input_shape.height or 1) - - self.bbox_pred = nn.Sequential( - nn.Linear(input_size, input_size), - nn.ReLU(inplace=True), - nn.Linear(input_size, 4) - ) - weight_init.c2_xavier_fill(self.bbox_pred[0]) - nn.init.normal_(self.bbox_pred[-1].weight, std=0.001) - nn.init.constant_(self.bbox_pred[-1].bias, 0) - - @classmethod - def from_config(cls, cfg, input_shape): - ret = super().from_config(cfg, input_shape) - return ret - - def losses(self, predictions, proposals): - scores, proposal_deltas = predictions - gt_classes = ( - cat([p.gt_classes for p in proposals], dim=0) if len(proposals) else torch.empty(0) - ) - num_classes = self.num_classes - _log_classification_stats(scores, gt_classes) - - if len(proposals): - proposal_boxes = cat([p.proposal_boxes.tensor for p in proposals], dim=0) # Nx4 - assert not proposal_boxes.requires_grad, "Proposals should not require gradients!" 
- gt_boxes = cat( - [(p.gt_boxes if p.has("gt_boxes") else p.proposal_boxes).tensor for p in proposals], - dim=0, - ) - else: - proposal_boxes = gt_boxes = torch.empty((0, 4), device=proposal_deltas.device) - - loss_cls = self.softmax_cross_entropy_loss(scores, gt_classes) - return { - "loss_cls": loss_cls, - "loss_box_reg": self.box_reg_loss( - proposal_boxes, gt_boxes, proposal_deltas, gt_classes, - num_classes=num_classes) - } - - def softmax_cross_entropy_loss(self, pred_class_logits, gt_classes): - if pred_class_logits.numel() == 0: - return pred_class_logits.new_zeros([1])[0] - - loss = F.cross_entropy( - pred_class_logits, gt_classes, reduction="mean") - return loss - - def box_reg_loss( - self, proposal_boxes, gt_boxes, pred_deltas, gt_classes, - num_classes=-1): - num_classes = num_classes if num_classes > 0 else self.num_classes - box_dim = proposal_boxes.shape[1] - fg_inds = nonzero_tuple((gt_classes >= 0) & (gt_classes < num_classes))[0] - if pred_deltas.shape[1] == box_dim: - fg_pred_deltas = pred_deltas[fg_inds] - else: - fg_pred_deltas = pred_deltas.view(-1, self.num_classes, box_dim)[ - fg_inds, gt_classes[fg_inds] - ] - - if self.box_reg_loss_type == "smooth_l1": - gt_pred_deltas = self.box2box_transform.get_deltas( - proposal_boxes[fg_inds], - gt_boxes[fg_inds], - ) - loss_box_reg = smooth_l1_loss( - fg_pred_deltas, gt_pred_deltas, self.smooth_l1_beta, reduction="sum" - ) - elif self.box_reg_loss_type == "giou": - fg_pred_boxes = self.box2box_transform.apply_deltas( - fg_pred_deltas, proposal_boxes[fg_inds] - ) - loss_box_reg = giou_loss(fg_pred_boxes, gt_boxes[fg_inds], reduction="sum") - else: - raise ValueError(f"Invalid bbox reg loss type '{self.box_reg_loss_type}'") - return loss_box_reg / max(gt_classes.numel(), 1.0) - - def predict_probs(self, predictions, proposals): - scores = predictions[0] - num_inst_per_image = [len(p) for p in proposals] - probs = F.softmax(scores, dim=-1) - return probs.split(num_inst_per_image, dim=0) - - def forward(self, x): - if x.dim() > 2: - x = torch.flatten(x, start_dim=1) - scores = [] - - cls_scores = self.cls_score(x) - scores.append(cls_scores) - scores = torch.cat(scores, dim=1) - - proposal_deltas = self.bbox_pred(x) - return scores, proposal_deltas \ No newline at end of file diff --git a/spaces/YouLiXiya/Mobile-SAM/sam_extension/distillation_models/__init__.py b/spaces/YouLiXiya/Mobile-SAM/sam_extension/distillation_models/__init__.py deleted file mode 100644 index 03ffdeaa458e001339a6bf6772135b2233f2f925..0000000000000000000000000000000000000000 --- a/spaces/YouLiXiya/Mobile-SAM/sam_extension/distillation_models/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -from .dino import DINO -from .sam import SAMEncoderViT, DINOSAMViT -from .fastertinyvit import FasterTinyViT -# from .flashvision_transformer import FlashVisionTransformer \ No newline at end of file diff --git a/spaces/Yudha515/Rvc-Models/setup.py b/spaces/Yudha515/Rvc-Models/setup.py deleted file mode 100644 index 78a172b7c90003b689bde40b49cc8fe1fb8107d4..0000000000000000000000000000000000000000 --- a/spaces/Yudha515/Rvc-Models/setup.py +++ /dev/null @@ -1,65 +0,0 @@ -""" - Copyright (c) Meta Platforms, Inc. and affiliates. - All rights reserved. - - This source code is licensed under the license found in the - LICENSE file in the root directory of this source tree. 
- -""" - -from pathlib import Path - -from setuptools import setup, find_packages - - -NAME = 'audiocraft' -DESCRIPTION = 'Audio research library for PyTorch' - -URL = 'https://github.com/fairinternal/audiocraft' -AUTHOR = 'FAIR Speech & Audio' -EMAIL = 'defossez@meta.com' -REQUIRES_PYTHON = '>=3.8.0' - -for line in open('audiocraft/__init__.py'): - line = line.strip() - if '__version__' in line: - context = {} - exec(line, context) - VERSION = context['__version__'] - -HERE = Path(__file__).parent - -try: - with open(HERE / "README.md", encoding='utf-8') as f: - long_description = '\n' + f.read() -except FileNotFoundError: - long_description = DESCRIPTION - -REQUIRED = [i.strip() for i in open(HERE / 'requirements.txt') if not i.startswith('#')] - -setup( - name=NAME, - version=VERSION, - description=DESCRIPTION, - author_email=EMAIL, - long_description=long_description, - long_description_content_type='text/markdown', - author=AUTHOR, - url=URL, - python_requires=REQUIRES_PYTHON, - install_requires=REQUIRED, - extras_require={ - 'dev': ['coverage', 'flake8', 'mypy', 'pdoc3', 'pytest'], - }, - packages=find_packages(), - package_data={'audiocraft': ['py.typed']}, - include_package_data=True, - license='MIT License', - classifiers=[ - # Trove classifiers - # Full list: https://pypi.python.org/pypi?%3Aaction=list_classifiers - 'License :: OSI Approved :: MIT License', - 'Topic :: Multimedia :: Sound/Audio', - 'Topic :: Scientific/Engineering :: Artificial Intelligence', - ], -) diff --git a/spaces/Yukki-Yui/moe-tts/modules.py b/spaces/Yukki-Yui/moe-tts/modules.py deleted file mode 100644 index 9c7fd9cd6eb8b7e0ec0e08957e970744a374a924..0000000000000000000000000000000000000000 --- a/spaces/Yukki-Yui/moe-tts/modules.py +++ /dev/null @@ -1,390 +0,0 @@ -import copy -import math -import numpy as np -import scipy -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm - -import commons -from commons import init_weights, get_padding -from transforms import piecewise_rational_quadratic_transform - - -LRELU_SLOPE = 0.1 - - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - -class ConvReluNorm(nn.Module): - def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." 
- - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential( - nn.ReLU(), - nn.Dropout(p_dropout)) - for _ in range(n_layers-1): - self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dilated and Depth-Separable Convolution - """ - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size ** i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size, - groups=channels, dilation=dilation, padding=padding - )) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0): - super(WN, self).__init__() - assert(kernel_size % 2 == 1) - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight') - - for i in range(n_layers): - dilation = dilation_rate ** i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size, - dilation=dilation, padding=padding) - in_layer = torch.nn.utils.weight_norm(in_layer, name='weight') - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight') - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g)
- - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply( - x_in, - g_l, - n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:,:self.hidden_channels,:] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:,self.hidden_channels:,:] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]))) - ]) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))) - ]) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))) - ]) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = 
torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels,1)) - self.logs = nn.Parameter(torch.zeros(channels,1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1,2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels]*2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1,2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - -class ConvFlow(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.) - self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] 
- - unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_derivatives = h[..., 2 * self.num_bins:] - - x1, logabsdet = piecewise_rational_quadratic_transform(x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails='linear', - tail_bound=self.tail_bound - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1,2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/Yuliang/ECON/lib/common/BNI_utils.py b/spaces/Yuliang/ECON/lib/common/BNI_utils.py deleted file mode 100644 index 235910fb51c263c51410afc247795f224456dd78..0000000000000000000000000000000000000000 --- a/spaces/Yuliang/ECON/lib/common/BNI_utils.py +++ /dev/null @@ -1,731 +0,0 @@ -import os -import os.path as osp - -import cupy as cp -import cv2 -import numpy as np -import torch -import trimesh -from cupyx.scipy.sparse import ( - coo_matrix, - csr_matrix, - diags, - hstack, - spdiags, - vstack, -) -from cupyx.scipy.sparse.linalg import cg -from PIL import Image -from tqdm.auto import tqdm - -from lib.dataset.mesh_util import clean_floats - - -def find_max_list(lst): - list_len = [len(i) for i in lst] - max_id = np.argmax(np.array(list_len)) - return lst[max_id] - - -def interpolate_pts(pts, diff_ids): - - pts_extend = np.around((pts[diff_ids] + pts[diff_ids - 1]) * 0.5).astype(np.int32) - pts = np.insert(pts, diff_ids, pts_extend, axis=0) - - return pts - - -def align_pts(pts1, pts2): - - diff_num = abs(len(pts1) - len(pts2)) - diff_ids = np.sort(np.random.choice(min(len(pts2), len(pts1)), diff_num, replace=True)) - - if len(pts1) > len(pts2): - pts2 = interpolate_pts(pts2, diff_ids) - elif len(pts2) > len(pts1): - pts1 = interpolate_pts(pts1, diff_ids) - else: - pass - - return pts1, pts2 - - -def repeat_pts(pts1, pts2): - - coverage_mask = ((pts1[:, None, :] == pts2[None, :, :]).sum(axis=2) == 2.).any(axis=1) - - return coverage_mask - - -def find_contour(mask, method='all'): - - if method == 'all': - - contours, _ = cv2.findContours(mask.astype(np.uint8), cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE) - else: - contours, _ = cv2.findContours( - mask.astype(np.uint8), cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE - ) - - contour_cloth = np.array(find_max_list(contours))[:, 0, :] - - return contour_cloth - - -def mean_value_cordinates(inner_pts, contour_pts): - - body_edges_a = np.sqrt(((inner_pts[:, None] - contour_pts[None, :])**2).sum(axis=2)) - body_edges_c = np.roll(body_edges_a, shift=-1, axis=1) - body_edges_b = np.sqrt(((contour_pts - np.roll(contour_pts, shift=-1, axis=0))**2).sum(axis=1)) - - body_edges = np.concatenate([ - body_edges_a[..., None], body_edges_c[..., None], - np.repeat(body_edges_b[None, :, None], axis=0, repeats=len(inner_pts)) - ], - axis=-1) - - body_cos = (body_edges[:, :, 0]**2 + body_edges[:, :, 1]**2 - - body_edges[:, :, 2]**2) / (2 * body_edges[:, :, 0] * body_edges[:, :, 1]) - body_tan_half = np.sqrt( - (1. - np.clip(body_cos, a_max=1., a_min=-1.)) / np.clip(1. + body_cos, 1e-6, 2.) 
- ) - - w = (body_tan_half + np.roll(body_tan_half, shift=1, axis=1)) / body_edges_a - w /= w.sum(axis=1, keepdims=True) - - return w - - -def get_dst_mat(contour_body, contour_cloth): - - dst_mat = ((contour_body[:, None, :] - contour_cloth[None, :, :])**2).sum(axis=2) - - return dst_mat - - -def dispCorres(img_size, contour1, contour2, phi, dir_path): - - contour1 = contour1[None, :, None, :].astype(np.int32) - contour2 = contour2[None, :, None, :].astype(np.int32) - - disp = np.zeros((img_size, img_size, 3), dtype=np.uint8) - cv2.drawContours(disp, contour1, -1, (0, 255, 0), 1) # green - cv2.drawContours(disp, contour2, -1, (255, 0, 0), 1) # blue - - for i in range(contour1.shape[1]): # do not show all the points when display - # cv2.circle(disp, (contour1[0, i, 0, 0], contour1[0, i, 0, 1]), 1, - # (255, 0, 0), -1) - corresPoint = contour2[0, phi[i], 0] - # cv2.circle(disp, (corresPoint[0], corresPoint[1]), 1, (0, 255, 0), -1) - cv2.line( - disp, (contour1[0, i, 0, 0], contour1[0, i, 0, 1]), (corresPoint[0], corresPoint[1]), - (255, 255, 255), 1 - ) - - cv2.imwrite(osp.join(dir_path, "corres.png"), disp) - - -def remove_stretched_faces(verts, faces): - - mesh = trimesh.Trimesh(verts, faces) - camera_ray = np.array([0.0, 0.0, 1.0]) - faces_cam_angles = np.dot(mesh.face_normals, camera_ray) - - # cos(90-20)=0.34 cos(90-10)=0.17, 10~20 degree - faces_mask = np.abs(faces_cam_angles) > 2e-1 - - mesh.update_faces(faces_mask) - mesh.remove_unreferenced_vertices() - - return mesh.vertices, mesh.faces - - -def tensor2arr(t, mask=False): - if not mask: - return t.squeeze(0).permute(1, 2, 0).detach().cpu().numpy() - else: - mask = t.squeeze(0).abs().sum(dim=0, keepdim=True) - return (mask != mask[:, 0, 0]).float().squeeze(0).detach().cpu().numpy() - - -def arr2png(t): - return ((t + 1.0) * 0.5 * 255.0).astype(np.uint8) - - -def depth2arr(t): - - return t.float().detach().cpu().numpy() - - -def depth2png(t): - - t_copy = t.copy() - t_bg = t_copy[0, 0] - valid_region = np.logical_and(t > -1.0, t != t_bg) - t_copy[valid_region] -= t_copy[valid_region].min() - t_copy[valid_region] /= t_copy[valid_region].max() - t_copy[valid_region] = (1. 
- t_copy[valid_region]) * 255.0 - t_copy[~valid_region] = 0.0 - - return t_copy[..., None].astype(np.uint8) - - -def verts_transform(t, depth_scale): - - t_copy = t.clone() - t_copy *= depth_scale * 0.5 - t_copy += depth_scale * 0.5 - t_copy = t_copy[:, [1, 0, 2]] * torch.Tensor([2.0, 2.0, -2.0]) + torch.Tensor([ - 0.0, 0.0, depth_scale - ]) - - return t_copy - - -def verts_inverse_transform(t, depth_scale): - - t_copy = t.clone() - t_copy -= torch.tensor([0.0, 0.0, depth_scale]) - t_copy /= torch.tensor([2.0, 2.0, -2.0]) - t_copy -= depth_scale * 0.5 - t_copy /= depth_scale * 0.5 - t_copy = t_copy[:, [1, 0, 2]] - - return t_copy - - -def depth_inverse_transform(t, depth_scale): - - t_copy = t.clone() - t_copy -= torch.tensor(depth_scale) - t_copy /= torch.tensor(-2.0) - t_copy -= depth_scale * 0.5 - t_copy /= depth_scale * 0.5 - - return t_copy - - -# BNI related - - -def move_left(mask): - return cp.pad(mask, ((0, 0), (0, 1)), "constant", constant_values=0)[:, 1:] - - -def move_right(mask): - return cp.pad(mask, ((0, 0), (1, 0)), "constant", constant_values=0)[:, :-1] - - -def move_top(mask): - return cp.pad(mask, ((0, 1), (0, 0)), "constant", constant_values=0)[1:, :] - - -def move_bottom(mask): - return cp.pad(mask, ((1, 0), (0, 0)), "constant", constant_values=0)[:-1, :] - - -def move_top_left(mask): - return cp.pad(mask, ((0, 1), (0, 1)), "constant", constant_values=0)[1:, 1:] - - -def move_top_right(mask): - return cp.pad(mask, ((0, 1), (1, 0)), "constant", constant_values=0)[1:, :-1] - - -def move_bottom_left(mask): - return cp.pad(mask, ((1, 0), (0, 1)), "constant", constant_values=0)[:-1, 1:] - - -def move_bottom_right(mask): - return cp.pad(mask, ((1, 0), (1, 0)), "constant", constant_values=0)[:-1, :-1] - - -def generate_dx_dy_new(mask, nz_horizontal, nz_vertical, step_size=1): - # pixel coordinates - # ^ vertical positive - # | - # | - # | - # o ---> horizontal positive - num_pixel = cp.sum(mask) - - pixel_idx = cp.zeros_like(mask, dtype=int) - pixel_idx[mask] = cp.arange(num_pixel) - - has_left_mask = cp.logical_and(move_right(mask), mask) - has_right_mask = cp.logical_and(move_left(mask), mask) - has_bottom_mask = cp.logical_and(move_top(mask), mask) - has_top_mask = cp.logical_and(move_bottom(mask), mask) - - nz_left = nz_horizontal[has_left_mask[mask]] - nz_right = nz_horizontal[has_right_mask[mask]] - nz_top = nz_vertical[has_top_mask[mask]] - nz_bottom = nz_vertical[has_bottom_mask[mask]] - - data = cp.stack([-nz_left / step_size, nz_left / step_size], -1).flatten() - indices = cp.stack((pixel_idx[move_left(has_left_mask)], pixel_idx[has_left_mask]), - -1).flatten() - indptr = cp.concatenate([cp.array([0]), cp.cumsum(has_left_mask[mask].astype(int) * 2)]) - D_horizontal_neg = csr_matrix((data, indices, indptr), shape=(num_pixel, num_pixel)) - - data = cp.stack([-nz_right / step_size, nz_right / step_size], -1).flatten() - indices = cp.stack((pixel_idx[has_right_mask], pixel_idx[move_right(has_right_mask)]), - -1).flatten() - indptr = cp.concatenate([cp.array([0]), cp.cumsum(has_right_mask[mask].astype(int) * 2)]) - D_horizontal_pos = csr_matrix((data, indices, indptr), shape=(num_pixel, num_pixel)) - - data = cp.stack([-nz_top / step_size, nz_top / step_size], -1).flatten() - indices = cp.stack((pixel_idx[has_top_mask], pixel_idx[move_top(has_top_mask)]), -1).flatten() - indptr = cp.concatenate([cp.array([0]), cp.cumsum(has_top_mask[mask].astype(int) * 2)]) - D_vertical_pos = csr_matrix((data, indices, indptr), shape=(num_pixel, num_pixel)) - - data = 
cp.stack([-nz_bottom / step_size, nz_bottom / step_size], -1).flatten() - indices = cp.stack((pixel_idx[move_bottom(has_bottom_mask)], pixel_idx[has_bottom_mask]), - -1).flatten() - indptr = cp.concatenate([cp.array([0]), cp.cumsum(has_bottom_mask[mask].astype(int) * 2)]) - D_vertical_neg = csr_matrix((data, indices, indptr), shape=(num_pixel, num_pixel)) - - return D_horizontal_pos, D_horizontal_neg, D_vertical_pos, D_vertical_neg - - -def generate_dx_dy(mask, nz_horizontal, nz_vertical, step_size=1): - # pixel coordinates - # ^ vertical positive - # | - # | - # | - # o ---> horizontal positive - num_pixel = cp.sum(mask) - - pixel_idx = cp.zeros_like(mask, dtype=int) - pixel_idx[mask] = cp.arange(num_pixel) - - has_left_mask = cp.logical_and(move_right(mask), mask) - has_right_mask = cp.logical_and(move_left(mask), mask) - has_bottom_mask = cp.logical_and(move_top(mask), mask) - has_top_mask = cp.logical_and(move_bottom(mask), mask) - - nz_left = nz_horizontal[has_left_mask[mask]] - nz_right = nz_horizontal[has_right_mask[mask]] - nz_top = nz_vertical[has_top_mask[mask]] - nz_bottom = nz_vertical[has_bottom_mask[mask]] - - data = cp.stack([-nz_left / step_size, nz_left / step_size], -1).flatten() - indices = cp.stack((pixel_idx[move_left(has_left_mask)], pixel_idx[has_left_mask]), - -1).flatten() - indptr = cp.concatenate([cp.array([0]), cp.cumsum(has_left_mask[mask].astype(int) * 2)]) - D_horizontal_neg = csr_matrix((data, indices, indptr), shape=(num_pixel, num_pixel)) - - data = cp.stack([-nz_right / step_size, nz_right / step_size], -1).flatten() - indices = cp.stack((pixel_idx[has_right_mask], pixel_idx[move_right(has_right_mask)]), - -1).flatten() - indptr = cp.concatenate([cp.array([0]), cp.cumsum(has_right_mask[mask].astype(int) * 2)]) - D_horizontal_pos = csr_matrix((data, indices, indptr), shape=(num_pixel, num_pixel)) - - data = cp.stack([-nz_top / step_size, nz_top / step_size], -1).flatten() - indices = cp.stack((pixel_idx[has_top_mask], pixel_idx[move_top(has_top_mask)]), -1).flatten() - indptr = cp.concatenate([cp.array([0]), cp.cumsum(has_top_mask[mask].astype(int) * 2)]) - D_vertical_pos = csr_matrix((data, indices, indptr), shape=(num_pixel, num_pixel)) - - data = cp.stack([-nz_bottom / step_size, nz_bottom / step_size], -1).flatten() - indices = cp.stack((pixel_idx[move_bottom(has_bottom_mask)], pixel_idx[has_bottom_mask]), - -1).flatten() - indptr = cp.concatenate([cp.array([0]), cp.cumsum(has_bottom_mask[mask].astype(int) * 2)]) - D_vertical_neg = csr_matrix((data, indices, indptr), shape=(num_pixel, num_pixel)) - - return D_horizontal_pos, D_horizontal_neg, D_vertical_pos, D_vertical_neg - - -def construct_facets_from(mask): - idx = cp.zeros_like(mask, dtype=int) - idx[mask] = cp.arange(cp.sum(mask)) - - facet_move_top_mask = move_top(mask) - facet_move_left_mask = move_left(mask) - facet_move_top_left_mask = move_top_left(mask) - facet_top_left_mask = ( - facet_move_top_mask * facet_move_left_mask * facet_move_top_left_mask * mask - ) - facet_top_right_mask = move_right(facet_top_left_mask) - facet_bottom_left_mask = move_bottom(facet_top_left_mask) - facet_bottom_right_mask = move_bottom_right(facet_top_left_mask) - - return cp.hstack(( - 4 * cp.ones((cp.sum(facet_top_left_mask).item(), 1)), - idx[facet_top_left_mask][:, None], - idx[facet_bottom_left_mask][:, None], - idx[facet_bottom_right_mask][:, None], - idx[facet_top_right_mask][:, None], - )).astype(int) - - -def map_depth_map_to_point_clouds(depth_map, mask, K=None, step_size=1): - # y - # | z - # | / - # 
|/ - # o ---x - H, W = mask.shape - yy, xx = cp.meshgrid(cp.arange(W), cp.arange(H)) - xx = cp.flip(xx, axis=0) - - if K is None: - vertices = cp.zeros((H, W, 3)) - vertices[..., 0] = xx * step_size - vertices[..., 1] = yy * step_size - vertices[..., 2] = depth_map - vertices = vertices[mask] - else: - u = cp.zeros((H, W, 3)) - u[..., 0] = xx - u[..., 1] = yy - u[..., 2] = 1 - u = u[mask].T # 3 x m - vertices = (cp.linalg.inv(K) @ u).T * depth_map[mask, cp.newaxis] # m x 3 - - return vertices - - -def sigmoid(x, k=1): - return 1 / (1 + cp.exp(-k * x)) - - -def boundary_excluded_mask(mask): - top_mask = cp.pad(mask, ((1, 0), (0, 0)), "constant", constant_values=0)[:-1, :] - bottom_mask = cp.pad(mask, ((0, 1), (0, 0)), "constant", constant_values=0)[1:, :] - left_mask = cp.pad(mask, ((0, 0), (1, 0)), "constant", constant_values=0)[:, :-1] - right_mask = cp.pad(mask, ((0, 0), (0, 1)), "constant", constant_values=0)[:, 1:] - be_mask = top_mask * bottom_mask * left_mask * right_mask * mask - - # discard single point - top_mask = cp.pad(be_mask, ((1, 0), (0, 0)), "constant", constant_values=0)[:-1, :] - bottom_mask = cp.pad(be_mask, ((0, 1), (0, 0)), "constant", constant_values=0)[1:, :] - left_mask = cp.pad(be_mask, ((0, 0), (1, 0)), "constant", constant_values=0)[:, :-1] - right_mask = cp.pad(be_mask, ((0, 0), (0, 1)), "constant", constant_values=0)[:, 1:] - bes_mask = (top_mask + bottom_mask + left_mask + right_mask).astype(bool) - be_mask = cp.logical_and(be_mask, bes_mask) - return be_mask - - -def create_boundary_matrix(mask): - num_pixel = cp.sum(mask) - pixel_idx = cp.zeros_like(mask, dtype=int) - pixel_idx[mask] = cp.arange(num_pixel) - - be_mask = boundary_excluded_mask(mask) - boundary_mask = cp.logical_xor(be_mask, mask) - diag_data_term = boundary_mask[mask].astype(int) - B = diags(diag_data_term) - - num_boundary_pixel = cp.sum(boundary_mask).item() - data_term = cp.concatenate((cp.ones(num_boundary_pixel), -cp.ones(num_boundary_pixel))) - row_idx = cp.tile(cp.arange(num_boundary_pixel), 2) - col_idx = cp.concatenate((pixel_idx[boundary_mask], pixel_idx[boundary_mask] + num_pixel)) - B_full = coo_matrix((data_term, (row_idx, col_idx)), shape=(num_boundary_pixel, 2 * num_pixel)) - return B, B_full - - -def double_side_bilateral_normal_integration( - normal_front, - normal_back, - normal_mask, - depth_front=None, - depth_back=None, - depth_mask=None, - k=2, - lambda_normal_back=1, - lambda_depth_front=1e-4, - lambda_depth_back=1e-2, - lambda_boundary_consistency=1, - step_size=1, - max_iter=150, - tol=1e-4, - cg_max_iter=5000, - cg_tol=1e-3, - cut_intersection=True, -): - - # To avoid confusion, we list the coordinate systems in this code as follows - # - # pixel coordinates camera coordinates normal coordinates (the main paper's Fig. 1 (a)) - # u x y - # | | z | - # | | / o -- x - # | |/ / - # o --- v o --- y z - # (bottom left) - # (o is the optical center; - # xy-plane is parallel to the image plane; - # +z is the viewing direction.) - # - # The input normal map should be defined in the normal coordinates. - # The camera matrix K should be defined in the camera coordinates. 
- # K = [[fx, 0, cx], - # [0, fy, cy], - # [0, 0, 1]] - - num_normals = cp.sum(normal_mask) - normal_map_front = cp.asarray(normal_front) - normal_map_back = cp.asarray(normal_back) - normal_mask = cp.asarray(normal_mask) - if depth_mask is not None: - depth_map_front = cp.asarray(depth_front) - depth_map_back = cp.asarray(depth_back) - depth_mask = cp.asarray(depth_mask) - - # transfer the normal map from the normal coordinates to the camera coordinates - nx_front = normal_map_front[normal_mask, 1] - ny_front = normal_map_front[normal_mask, 0] - nz_front = -normal_map_front[normal_mask, 2] - del normal_map_front - - nx_back = normal_map_back[normal_mask, 1] - ny_back = normal_map_back[normal_mask, 0] - nz_back = -normal_map_back[normal_mask, 2] - del normal_map_back - - # right, left, top, bottom - A3_f, A4_f, A1_f, A2_f = generate_dx_dy( - normal_mask, nz_horizontal=nz_front, nz_vertical=nz_front, step_size=step_size - ) - A3_b, A4_b, A1_b, A2_b = generate_dx_dy( - normal_mask, nz_horizontal=nz_back, nz_vertical=nz_back, step_size=step_size - ) - - has_left_mask = cp.logical_and(move_right(normal_mask), normal_mask) - has_right_mask = cp.logical_and(move_left(normal_mask), normal_mask) - has_bottom_mask = cp.logical_and(move_top(normal_mask), normal_mask) - has_top_mask = cp.logical_and(move_bottom(normal_mask), normal_mask) - - top_boundnary_mask = cp.logical_xor(has_top_mask, normal_mask)[normal_mask] - bottom_boundary_mask = cp.logical_xor(has_bottom_mask, normal_mask)[normal_mask] - left_boundary_mask = cp.logical_xor(has_left_mask, normal_mask)[normal_mask] - right_boudnary_mask = cp.logical_xor(has_right_mask, normal_mask)[normal_mask] - - A_front_data = vstack((A1_f, A2_f, A3_f, A4_f)) - A_front_zero = csr_matrix(A_front_data.shape) - A_front = hstack([A_front_data, A_front_zero]) - - A_back_data = vstack((A1_b, A2_b, A3_b, A4_b)) - A_back_zero = csr_matrix(A_back_data.shape) - A_back = hstack([A_back_zero, A_back_data]) - - b_front = cp.concatenate((-nx_front, -nx_front, -ny_front, -ny_front)) - b_back = cp.concatenate((-nx_back, -nx_back, -ny_back, -ny_back)) - - # initialization - W_front = spdiags( - 0.5 * cp.ones(4 * num_normals), 0, 4 * num_normals, 4 * num_normals, format="csr" - ) - W_back = spdiags( - 0.5 * cp.ones(4 * num_normals), 0, 4 * num_normals, 4 * num_normals, format="csr" - ) - - z_front = cp.zeros(num_normals, float) - z_back = cp.zeros(num_normals, float) - z_combined = cp.concatenate((z_front, z_back)) - - B, B_full = create_boundary_matrix(normal_mask) - B_mat = lambda_boundary_consistency * coo_matrix(B_full.get().T @ B_full.get()) #bug - - energy_list = [] - - if depth_mask is not None: - depth_mask_flat = depth_mask[normal_mask].astype(bool) # shape: (num_normals,) - z_prior_front = depth_map_front[normal_mask] # shape: (num_normals,) - z_prior_front[~depth_mask_flat] = 0 - z_prior_back = depth_map_back[normal_mask] - z_prior_back[~depth_mask_flat] = 0 - m = depth_mask[normal_mask].astype(int) - M = diags(m) - - energy = (A_front @ z_combined - b_front).T @ W_front @ (A_front @ z_combined - b_front) + \ - lambda_normal_back * (A_back @ z_combined - b_back).T @ W_back @ (A_back @ z_combined - b_back) + \ - lambda_depth_front * (z_front - z_prior_front).T @ M @ (z_front - z_prior_front) + \ - lambda_depth_back * (z_back - z_prior_back).T @ M @ (z_back - z_prior_back) + \ - lambda_boundary_consistency * (z_back - z_front).T @ B @ (z_back - z_front) - - depth_map_front_est = cp.ones_like(normal_mask, float) * cp.nan - depth_map_back_est = 
cp.ones_like(normal_mask, float) * cp.nan - - facets_back = cp.asnumpy(construct_facets_from(normal_mask)) - faces_back = np.concatenate((facets_back[:, [1, 4, 3]], facets_back[:, [1, 3, 2]]), axis=0) - faces_front = np.concatenate((facets_back[:, [1, 2, 3]], facets_back[:, [1, 3, 4]]), axis=0) - - for i in range(max_iter): - A_mat_front = A_front_data.T @ W_front @ A_front_data - b_vec_front = A_front_data.T @ W_front @ b_front - - A_mat_back = A_back_data.T @ W_back @ A_back_data - b_vec_back = A_back_data.T @ W_back @ b_back - if depth_mask is not None: - b_vec_front += lambda_depth_front * M @ z_prior_front - b_vec_back += lambda_depth_back * M @ z_prior_back - A_mat_front += lambda_depth_front * M - A_mat_back += lambda_depth_back * M - offset_front = cp.mean((z_prior_front - z_combined[:num_normals])[depth_mask_flat]) - offset_back = cp.mean((z_prior_back - z_combined[num_normals:])[depth_mask_flat]) - z_combined[:num_normals] = z_combined[:num_normals] + offset_front - z_combined[num_normals:] = z_combined[num_normals:] + offset_back - - - A_mat_combined = hstack([vstack((A_mat_front, csr_matrix((num_normals, num_normals)))), \ - vstack((csr_matrix((num_normals, num_normals)), A_mat_back))]) + B_mat - b_vec_combined = cp.concatenate((b_vec_front, b_vec_back)) - - D = spdiags( - 1 / cp.clip(A_mat_combined.diagonal(), 1e-5, None), 0, 2 * num_normals, 2 * num_normals, - "csr" - ) # Jacob preconditioner - - z_combined, _ = cg( - A_mat_combined, b_vec_combined, M=D, x0=z_combined, maxiter=cg_max_iter, tol=cg_tol - ) - z_front = z_combined[:num_normals] - z_back = z_combined[num_normals:] - wu_f = sigmoid((A2_f.dot(z_front))**2 - (A1_f.dot(z_front))**2, k) # top - wv_f = sigmoid((A4_f.dot(z_front))**2 - (A3_f.dot(z_front))**2, k) # right - wu_f[top_boundnary_mask] = 0.5 - wu_f[bottom_boundary_mask] = 0.5 - wv_f[left_boundary_mask] = 0.5 - wv_f[right_boudnary_mask] = 0.5 - W_front = spdiags( - cp.concatenate((wu_f, 1 - wu_f, wv_f, 1 - wv_f)), - 0, - 4 * num_normals, - 4 * num_normals, - format="csr" - ) - - wu_b = sigmoid((A2_b.dot(z_back))**2 - (A1_b.dot(z_back))**2, k) # top - wv_b = sigmoid((A4_b.dot(z_back))**2 - (A3_b.dot(z_back))**2, k) # right - wu_b[top_boundnary_mask] = 0.5 - wu_b[bottom_boundary_mask] = 0.5 - wv_b[left_boundary_mask] = 0.5 - wv_b[right_boudnary_mask] = 0.5 - W_back = spdiags( - cp.concatenate((wu_b, 1 - wu_b, wv_b, 1 - wv_b)), - 0, - 4 * num_normals, - 4 * num_normals, - format="csr" - ) - - energy_old = energy - energy = (A_front_data @ z_front - b_front).T @ W_front @ (A_front_data @ z_front - b_front) + \ - lambda_normal_back * (A_back_data @ z_back - b_back).T @ W_back @ (A_back_data @ z_back - b_back) + \ - lambda_depth_front * (z_front - z_prior_front).T @ M @ (z_front - z_prior_front) + \ - lambda_depth_back * (z_back - z_prior_back).T @ M @ (z_back - z_prior_back) +\ - lambda_boundary_consistency * (z_back - z_front).T @ B @ (z_back - z_front) - - energy_list.append(energy) - relative_energy = cp.abs(energy - energy_old) / energy_old - - # print(f"step {i + 1}/{max_iter} energy: {energy:.3e}" - # f" relative energy: {relative_energy:.3e}") - - if False: - # intermediate results - depth_map_front_est[normal_mask] = z_front - depth_map_back_est[normal_mask] = z_back - vertices_front = cp.asnumpy( - map_depth_map_to_point_clouds( - depth_map_front_est, normal_mask, K=None, step_size=step_size - ) - ) - vertices_back = cp.asnumpy( - map_depth_map_to_point_clouds( - depth_map_back_est, normal_mask, K=None, step_size=step_size - ) - ) - - vertices_front, 
faces_front_ = remove_stretched_faces(vertices_front, faces_front) - vertices_back, faces_back_ = remove_stretched_faces(vertices_back, faces_back) - - F_verts = verts_inverse_transform(torch.as_tensor(vertices_front).float(), 256.0) - B_verts = verts_inverse_transform(torch.as_tensor(vertices_back).float(), 256.0) - - F_B_verts = torch.cat((F_verts, B_verts), dim=0) - F_B_faces = torch.cat(( - torch.as_tensor(faces_front_).long(), - torch.as_tensor(faces_back_).long() + faces_front_.max() + 1 - ), - dim=0) - - front_surf = trimesh.Trimesh(F_verts, faces_front_) - back_surf = trimesh.Trimesh(B_verts, faces_back_) - double_surf = trimesh.Trimesh(F_B_verts, F_B_faces) - - bini_dir = "/home/yxiu/Code/ECON/log/bini/OBJ" - front_surf.export(osp.join(bini_dir, f"{i:04d}_F.obj")) - back_surf.export(osp.join(bini_dir, f"{i:04d}_B.obj")) - double_surf.export(osp.join(bini_dir, f"{i:04d}_FB.obj")) - - if relative_energy < tol: - break - # del A1, A2, A3, A4, nx, ny - - depth_map_front_est[normal_mask] = z_front - depth_map_back_est[normal_mask] = z_back - - if cut_intersection: - # manually cut the intersection - normal_mask[depth_map_front_est >= depth_map_back_est] = False - depth_map_front_est[~normal_mask] = cp.nan - depth_map_back_est[~normal_mask] = cp.nan - - vertices_front = cp.asnumpy( - map_depth_map_to_point_clouds( - depth_map_front_est, normal_mask, K=None, step_size=step_size - ) - ) - vertices_back = cp.asnumpy( - map_depth_map_to_point_clouds(depth_map_back_est, normal_mask, K=None, step_size=step_size) - ) - - facets_back = cp.asnumpy(construct_facets_from(normal_mask)) - faces_back = np.concatenate((facets_back[:, [1, 4, 3]], facets_back[:, [1, 3, 2]]), axis=0) - faces_front = np.concatenate((facets_back[:, [1, 2, 3]], facets_back[:, [1, 3, 4]]), axis=0) - - vertices_front, faces_front = remove_stretched_faces(vertices_front, faces_front) - vertices_back, faces_back = remove_stretched_faces(vertices_back, faces_back) - - front_mesh = clean_floats(trimesh.Trimesh(vertices_front, faces_front)) - back_mesh = clean_floats(trimesh.Trimesh(vertices_back, faces_back)) - - result = { - "F_verts": torch.as_tensor(front_mesh.vertices).float(), "F_faces": torch.as_tensor( - front_mesh.faces - ).long(), "B_verts": torch.as_tensor(back_mesh.vertices).float(), "B_faces": - torch.as_tensor(back_mesh.faces).long(), "F_depth": - torch.as_tensor(depth_map_front_est).float(), "B_depth": - torch.as_tensor(depth_map_back_est).float() - } - - return result - - -def save_normal_tensor(in_tensor, idx, png_path, thickness=0.0): - - os.makedirs(os.path.dirname(png_path), exist_ok=True) - - normal_F_arr = tensor2arr(in_tensor["normal_F"][idx:idx + 1]) - normal_B_arr = tensor2arr(in_tensor["normal_B"][idx:idx + 1]) - mask_normal_arr = tensor2arr(in_tensor["image"][idx:idx + 1], True) - - depth_F_arr = depth2arr(in_tensor["depth_F"][idx]) - depth_B_arr = depth2arr(in_tensor["depth_B"][idx]) - - BNI_dict = {} - - # clothed human - BNI_dict["normal_F"] = normal_F_arr - BNI_dict["normal_B"] = normal_B_arr - BNI_dict["mask"] = mask_normal_arr > 0. - BNI_dict["depth_F"] = depth_F_arr - 100. - thickness - BNI_dict["depth_B"] = 100. 
- depth_B_arr + thickness - BNI_dict["depth_mask"] = depth_F_arr != -1.0 - - np.save(png_path + ".npy", BNI_dict, allow_pickle=True) - - return BNI_dict diff --git a/spaces/Yuras/CorpusBy/app.py b/spaces/Yuras/CorpusBy/app.py deleted file mode 100644 index b26f27d5576cf8baee09f4dd5e5e06db61c91009..0000000000000000000000000000000000000000 --- a/spaces/Yuras/CorpusBy/app.py +++ /dev/null @@ -1 +0,0 @@ -print(1+1) \ No newline at end of file diff --git a/spaces/Yuyang2022/Translation_yue_to_any/README.md b/spaces/Yuyang2022/Translation_yue_to_any/README.md deleted file mode 100644 index 5def656991046d90054dfc2878e91a21296dfbc8..0000000000000000000000000000000000000000 --- a/spaces/Yuyang2022/Translation_yue_to_any/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Translation Yue To Any -emoji: 📉 -colorFrom: yellow -colorTo: indigo -sdk: gradio -sdk_version: 3.12.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Zeebra/chatGPT_whisper_AI_voice_assistant/config.py b/spaces/Zeebra/chatGPT_whisper_AI_voice_assistant/config.py deleted file mode 100644 index 6165001954ddfaf82fb5ee159b2b059cb4d05675..0000000000000000000000000000000000000000 --- a/spaces/Zeebra/chatGPT_whisper_AI_voice_assistant/config.py +++ /dev/null @@ -1,3 +0,0 @@ -API_KEYS = { - 'openai':'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', -} \ No newline at end of file diff --git a/spaces/Zubia/clipdemo/README.md b/spaces/Zubia/clipdemo/README.md deleted file mode 100644 index 4e9bb24229aa68167258cebbdaa0717093b5632d..0000000000000000000000000000000000000000 --- a/spaces/Zubia/clipdemo/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Clipdemo -emoji: 📉 -colorFrom: pink -colorTo: purple -sdk: gradio -sdk_version: 3.16.2 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/ops/voxelize.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/ops/voxelize.py deleted file mode 100644 index ca3226a4fbcbfe58490fa2ea8e1c16b531214121..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/ops/voxelize.py +++ /dev/null @@ -1,132 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from torch import nn -from torch.autograd import Function -from torch.nn.modules.utils import _pair - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext( - '_ext', ['dynamic_voxelize_forward', 'hard_voxelize_forward']) - - -class _Voxelization(Function): - - @staticmethod - def forward(ctx, - points, - voxel_size, - coors_range, - max_points=35, - max_voxels=20000): - """Convert kitti points(N, >=3) to voxels. - - Args: - points (torch.Tensor): [N, ndim]. Points[:, :3] contain xyz points - and points[:, 3:] contain other information like reflectivity. - voxel_size (tuple or float): The size of voxel with the shape of - [3]. - coors_range (tuple or float): The coordinate range of voxel with - the shape of [6]. - max_points (int, optional): maximum points contained in a voxel. if - max_points=-1, it means using dynamic_voxelize. Default: 35. - max_voxels (int, optional): maximum voxels this function create. - for second, 20000 is a good choice. Users should shuffle points - before call this function because max_voxels may drop points. - Default: 20000. 
- - Returns: - voxels_out (torch.Tensor): Output voxels with the shape of [M, - max_points, ndim]. Only contain points and returned when - max_points != -1. - coors_out (torch.Tensor): Output coordinates with the shape of - [M, 3]. - num_points_per_voxel_out (torch.Tensor): Num points per voxel with - the shape of [M]. Only returned when max_points != -1. - """ - if max_points == -1 or max_voxels == -1: - coors = points.new_zeros(size=(points.size(0), 3), dtype=torch.int) - ext_module.dynamic_voxelize_forward(points, coors, voxel_size, - coors_range, 3) - return coors - else: - voxels = points.new_zeros( - size=(max_voxels, max_points, points.size(1))) - coors = points.new_zeros(size=(max_voxels, 3), dtype=torch.int) - num_points_per_voxel = points.new_zeros( - size=(max_voxels, ), dtype=torch.int) - voxel_num = ext_module.hard_voxelize_forward( - points, voxels, coors, num_points_per_voxel, voxel_size, - coors_range, max_points, max_voxels, 3) - # select the valid voxels - voxels_out = voxels[:voxel_num] - coors_out = coors[:voxel_num] - num_points_per_voxel_out = num_points_per_voxel[:voxel_num] - return voxels_out, coors_out, num_points_per_voxel_out - - -voxelization = _Voxelization.apply - - -class Voxelization(nn.Module): - """Convert kitti points(N, >=3) to voxels. - - Please refer to `PVCNN `_ for more - details. - - Args: - voxel_size (tuple or float): The size of voxel with the shape of [3]. - point_cloud_range (tuple or float): The coordinate range of voxel with - the shape of [6]. - max_num_points (int): maximum points contained in a voxel. if - max_points=-1, it means using dynamic_voxelize. - max_voxels (int, optional): maximum voxels this function create. - for second, 20000 is a good choice. Users should shuffle points - before call this function because max_voxels may drop points. - Default: 20000. 
- """ - - def __init__(self, - voxel_size, - point_cloud_range, - max_num_points, - max_voxels=20000): - super().__init__() - - self.voxel_size = voxel_size - self.point_cloud_range = point_cloud_range - self.max_num_points = max_num_points - if isinstance(max_voxels, tuple): - self.max_voxels = max_voxels - else: - self.max_voxels = _pair(max_voxels) - - point_cloud_range = torch.tensor( - point_cloud_range, dtype=torch.float32) - voxel_size = torch.tensor(voxel_size, dtype=torch.float32) - grid_size = (point_cloud_range[3:] - - point_cloud_range[:3]) / voxel_size - grid_size = torch.round(grid_size).long() - input_feat_shape = grid_size[:2] - self.grid_size = grid_size - # the origin shape is as [x-len, y-len, z-len] - # [w, h, d] -> [d, h, w] - self.pcd_shape = [*input_feat_shape, 1][::-1] - - def forward(self, input): - if self.training: - max_voxels = self.max_voxels[0] - else: - max_voxels = self.max_voxels[1] - - return voxelization(input, self.voxel_size, self.point_cloud_range, - self.max_num_points, max_voxels) - - def __repr__(self): - s = self.__class__.__name__ + '(' - s += 'voxel_size=' + str(self.voxel_size) - s += ', point_cloud_range=' + str(self.point_cloud_range) - s += ', max_num_points=' + str(self.max_num_points) - s += ', max_voxels=' + str(self.max_voxels) - s += ')' - return s diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/ops/info.py b/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/ops/info.py deleted file mode 100644 index 29f2e5598ae2bb5866ccd15a7d3b4de33c0cd14d..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/ops/info.py +++ /dev/null @@ -1,36 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import glob -import os - -import torch - -if torch.__version__ == 'parrots': - import parrots - - def get_compiler_version(): - return 'GCC ' + parrots.version.compiler - - def get_compiling_cuda_version(): - return parrots.version.cuda -else: - from ..utils import ext_loader - ext_module = ext_loader.load_ext( - '_ext', ['get_compiler_version', 'get_compiling_cuda_version']) - - def get_compiler_version(): - return ext_module.get_compiler_version() - - def get_compiling_cuda_version(): - return ext_module.get_compiling_cuda_version() - - -def get_onnxruntime_op_path(): - wildcard = os.path.join( - os.path.abspath(os.path.dirname(os.path.dirname(__file__))), - '_ext_ort.*.so') - - paths = glob.glob(wildcard) - if len(paths) > 0: - return paths[0] - else: - return '' diff --git a/spaces/aiditi/nvidia_denoiser/stft_loss.py b/spaces/aiditi/nvidia_denoiser/stft_loss.py deleted file mode 100644 index b6ec78702088ea127cf21be7a9a78a8b7016893c..0000000000000000000000000000000000000000 --- a/spaces/aiditi/nvidia_denoiser/stft_loss.py +++ /dev/null @@ -1,184 +0,0 @@ -# Adapted from https://github.com/kan-bayashi/ParallelWaveGAN - -# Original Copyright 2019 Tomoki Hayashi -# MIT License (https://opensource.org/licenses/MIT) - -"""STFT-based Loss modules.""" - -import torch -import torch.nn.functional as F - -from distutils.version import LooseVersion - -is_pytorch_17plus = LooseVersion(torch.__version__) >= LooseVersion("1.7") - - -def stft(x, fft_size, hop_size, win_length, window): - """Perform STFT and convert to magnitude spectrogram. - Args: - x (Tensor): Input signal tensor (B, T). - fft_size (int): FFT size. - hop_size (int): Hop size. - win_length (int): Window length. - window (str): Window function type. 
- Returns: - Tensor: Magnitude spectrogram (B, #frames, fft_size // 2 + 1). - - """ - if is_pytorch_17plus: - x_stft = torch.stft( - x, fft_size, hop_size, win_length, window, return_complex=False - ) - else: - x_stft = torch.stft(x, fft_size, hop_size, win_length, window) - real = x_stft[..., 0] - imag = x_stft[..., 1] - - # NOTE(kan-bayashi): clamp is needed to avoid nan or inf - return torch.sqrt(torch.clamp(real**2 + imag**2, min=1e-7)).transpose(2, 1) - - -class SpectralConvergenceLoss(torch.nn.Module): - """Spectral convergence loss module.""" - - def __init__(self): - """Initialize spectral convergence loss module.""" - super(SpectralConvergenceLoss, self).__init__() - - def forward(self, x_mag, y_mag): - """Calculate forward propagation. - - Args: - x_mag (Tensor): Magnitude spectrogram of predicted signal (B, #frames, #freq_bins). - y_mag (Tensor): Magnitude spectrogram of groundtruth signal (B, #frames, #freq_bins). - - Returns: - Tensor: Spectral convergence loss value. - - """ - return torch.norm(y_mag - x_mag, p="fro") / torch.norm(y_mag, p="fro") - - -class LogSTFTMagnitudeLoss(torch.nn.Module): - """Log STFT magnitude loss module.""" - - def __init__(self): - """Initialize log STFT magnitude loss module.""" - super(LogSTFTMagnitudeLoss, self).__init__() - - def forward(self, x_mag, y_mag): - """Calculate forward propagation. - - Args: - x_mag (Tensor): Magnitude spectrogram of predicted signal (B, #frames, #freq_bins). - y_mag (Tensor): Magnitude spectrogram of groundtruth signal (B, #frames, #freq_bins). - - Returns: - Tensor: Log STFT magnitude loss value. - - """ - return F.l1_loss(torch.log(y_mag), torch.log(x_mag)) - - -class STFTLoss(torch.nn.Module): - """STFT loss module.""" - - def __init__( - self, fft_size=1024, shift_size=120, win_length=600, window="hann_window", - band="full" - ): - """Initialize STFT loss module.""" - super(STFTLoss, self).__init__() - self.fft_size = fft_size - self.shift_size = shift_size - self.win_length = win_length - self.band = band - - self.spectral_convergence_loss = SpectralConvergenceLoss() - self.log_stft_magnitude_loss = LogSTFTMagnitudeLoss() - # NOTE(kan-bayashi): Use register_buffer to fix #223 - self.register_buffer("window", getattr(torch, window)(win_length)) - - def forward(self, x, y): - """Calculate forward propagation. - - Args: - x (Tensor): Predicted signal (B, T). - y (Tensor): Groundtruth signal (B, T). - - Returns: - Tensor: Spectral convergence loss value. - Tensor: Log STFT magnitude loss value. - - """ - x_mag = stft(x, self.fft_size, self.shift_size, self.win_length, self.window) - y_mag = stft(y, self.fft_size, self.shift_size, self.win_length, self.window) - - if self.band == "high": - freq_mask_ind = x_mag.shape[1] // 2 # only select high frequency bands - sc_loss = self.spectral_convergence_loss(x_mag[:,freq_mask_ind:,:], y_mag[:,freq_mask_ind:,:]) - mag_loss = self.log_stft_magnitude_loss(x_mag[:,freq_mask_ind:,:], y_mag[:,freq_mask_ind:,:]) - elif self.band == "full": - sc_loss = self.spectral_convergence_loss(x_mag, y_mag) - mag_loss = self.log_stft_magnitude_loss(x_mag, y_mag) - else: - raise NotImplementedError - - return sc_loss, mag_loss - - -class MultiResolutionSTFTLoss(torch.nn.Module): - """Multi resolution STFT loss module.""" - - def __init__( - self, fft_sizes=[1024, 2048, 512], hop_sizes=[120, 240, 50], win_lengths=[600, 1200, 240], - window="hann_window", sc_lambda=0.1, mag_lambda=0.1, band="full" - ): - """Initialize Multi resolution STFT loss module.
- - Args: - fft_sizes (list): List of FFT sizes. - hop_sizes (list): List of hop sizes. - win_lengths (list): List of window lengths. - window (str): Window function type. - *_lambda (float): a balancing factor across different losses. - band (str): high-band or full-band loss - - """ - super(MultiResolutionSTFTLoss, self).__init__() - self.sc_lambda = sc_lambda - self.mag_lambda = mag_lambda - - assert len(fft_sizes) == len(hop_sizes) == len(win_lengths) - self.stft_losses = torch.nn.ModuleList() - for fs, ss, wl in zip(fft_sizes, hop_sizes, win_lengths): - self.stft_losses += [STFTLoss(fs, ss, wl, window, band)] - - def forward(self, x, y): - """Calculate forward propagation. - - Args: - x (Tensor): Predicted signal (B, T) or (B, #subband, T). - y (Tensor): Groundtruth signal (B, T) or (B, #subband, T). - - Returns: - Tensor: Multi resolution spectral convergence loss value. - Tensor: Multi resolution log STFT magnitude loss value. - - """ - if len(x.shape) == 3: - x = x.view(-1, x.size(2)) # (B, C, T) -> (B x C, T) - y = y.view(-1, y.size(2)) # (B, C, T) -> (B x C, T) - sc_loss = 0.0 - mag_loss = 0.0 - for f in self.stft_losses: - sc_l, mag_l = f(x, y) - sc_loss += sc_l - mag_loss += mag_l - - sc_loss *= self.sc_lambda - sc_loss /= len(self.stft_losses) - mag_loss *= self.mag_lambda - mag_loss /= len(self.stft_losses) - - return sc_loss, mag_loss \ No newline at end of file diff --git a/spaces/akhaliq/lama/predict.py b/spaces/akhaliq/lama/predict.py deleted file mode 100644 index 878b7988c113778f48ec3f940d2031a30c12e03f..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/lama/predict.py +++ /dev/null @@ -1,89 +0,0 @@ -#!/usr/bin/env python3 - -# Example command: -# ./bin/predict.py \ -# model.path= \ -# indir= \ -# outdir= - -import logging -import os -import sys -import traceback - -from saicinpainting.evaluation.utils import move_to_device - -os.environ['OMP_NUM_THREADS'] = '1' -os.environ['OPENBLAS_NUM_THREADS'] = '1' -os.environ['MKL_NUM_THREADS'] = '1' -os.environ['VECLIB_MAXIMUM_THREADS'] = '1' -os.environ['NUMEXPR_NUM_THREADS'] = '1' - -import cv2 -import hydra -import numpy as np -import torch -import tqdm -import yaml -from omegaconf import OmegaConf -from torch.utils.data._utils.collate import default_collate - -from saicinpainting.training.data.datasets import make_default_val_dataset -from saicinpainting.training.trainers import load_checkpoint -from saicinpainting.utils import register_debug_signal_handlers - -LOGGER = logging.getLogger(__name__) - - -@hydra.main(config_path='configs/prediction', config_name='default.yaml') -def main(predict_config: OmegaConf): - try: - register_debug_signal_handlers() # kill -10 will result in traceback dumped into log - - device = torch.device(predict_config.device) - - train_config_path = os.path.join(predict_config.model.path, 'config.yaml') - with open(train_config_path, 'r') as f: - train_config = OmegaConf.create(yaml.safe_load(f)) - - train_config.training_model.predict_only = True - - out_ext = predict_config.get('out_ext', '.png') - - checkpoint_path = os.path.join(predict_config.model.path, - 'models', - predict_config.model.checkpoint) - model = load_checkpoint(train_config, checkpoint_path, strict=False, map_location='cpu') - model.freeze() - model.to(device) - - if not predict_config.indir.endswith('/'): - predict_config.indir += '/' - - dataset = make_default_val_dataset(predict_config.indir, **predict_config.dataset) - with torch.no_grad(): - for img_i in tqdm.trange(len(dataset)): - mask_fname = 
dataset.mask_filenames[img_i] - cur_out_fname = os.path.join( - predict_config.outdir, - os.path.splitext(mask_fname[len(predict_config.indir):])[0] + out_ext - ) - os.makedirs(os.path.dirname(cur_out_fname), exist_ok=True) - - batch = move_to_device(default_collate([dataset[img_i]]), device) - batch['mask'] = (batch['mask'] > 0) * 1 - batch = model(batch) - cur_res = batch[predict_config.out_key][0].permute(1, 2, 0).detach().cpu().numpy() - - cur_res = np.clip(cur_res * 255, 0, 255).astype('uint8') - cur_res = cv2.cvtColor(cur_res, cv2.COLOR_RGB2BGR) - cv2.imwrite(cur_out_fname, cur_res) - except KeyboardInterrupt: - LOGGER.warning('Interrupted by user') - except Exception as ex: - LOGGER.critical(f'Prediction failed due to {ex}:\n{traceback.format_exc()}') - sys.exit(1) - - -if __name__ == '__main__': - main() diff --git a/spaces/akhaliq/layout-parser/README.md b/spaces/akhaliq/layout-parser/README.md deleted file mode 100644 index b9266dca21bdf875397bec4c3c098fe2c401571d..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/layout-parser/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: Layout Parser -emoji: 🐨 -colorFrom: indigo -colorTo: purple -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. 
diff --git a/spaces/albertvillanova/datasets-tagging/tagging_app.py b/spaces/albertvillanova/datasets-tagging/tagging_app.py deleted file mode 100644 index 67c902f4630b618ccab601b7e81bb4488dd424b4..0000000000000000000000000000000000000000 --- a/spaces/albertvillanova/datasets-tagging/tagging_app.py +++ /dev/null @@ -1,400 +0,0 @@ -import json -import logging -from pathlib import Path -from typing import Callable, Dict, List, Tuple - -import langcodes as lc -import streamlit as st -import yaml -from datasets.utils.metadata import ( - DatasetMetadata, - known_creators, - known_licenses, - known_multilingualities, - known_size_categories, - known_task_ids, -) - -from apputils import new_state - -st.set_page_config( - page_title="HF Dataset Tagging App", - page_icon="https://huggingface.co/front/assets/huggingface_logo.svg", - layout="wide", - initial_sidebar_state="auto", -) - -# XXX: restyling errors as streamlit does not respect whitespaces on `st.error` and doesn't scroll horizontally, which -# generally makes things easier when reading error reports -st.markdown( - """ - -""", - unsafe_allow_html=True, -) - -######################## -## Helper functions -######################## - - -def load_ds_datas() -> Dict[str, Dict[str, Dict]]: - metada_exports = sorted( - [f for f in Path.cwd().iterdir() if f.name.startswith("metadata_")], - key=lambda f: f.lstat().st_mtime, - reverse=True, - ) - if len(metada_exports) == 0: - raise ValueError("need to run ./build_metada_file.py at least once") - with metada_exports[0].open() as fi: - logging.info(f"loaded {metada_exports[0]}") - return json.load(fi) - - -def split_known(vals: List[str], okset: List[str]) -> Tuple[List[str], List[str]]: - if vals is None: - return [], [] - return [v for v in vals if v in okset], [v for v in vals if v not in okset] - - -def multiselect( - w: st.delta_generator.DeltaGenerator, - title: str, - markdown: str, - values: List[str], - valid_set: List[str], - format_func: Callable = str, -): - valid_values, invalid_values = split_known(values, valid_set) - w.markdown(f"#### {title}") - if len(invalid_values) > 0: - w.markdown("Found the following invalid values:") - w.error(invalid_values) - return w.multiselect(markdown, valid_set, default=valid_values, format_func=format_func) - - -def validate_dict(w: st.delta_generator.DeltaGenerator, state_dict: Dict): - try: - DatasetMetadata(**state_dict) - w.markdown("✅ This is a valid tagset! 🤗") - except Exception as e: - w.markdown("❌ This is an invalid tagset, here are the errors in it:") - w.error(e) - - -def map_num_examples_to_size_categories(n: int) -> str: - if n <= 0: - size_cat = "unknown" - elif n < 1000: - size_cat = "n<1K" - elif n < 10000: - size_cat = "1K bool: - return sum(len(v) if v is not None else 0 for v in state.values()) == 0 - - -state = new_state() -datasets_md = load_ds_datas() -dataset_ids = list(datasets_md.keys()) -dataset_id_to_metadata = {name: mds["metadata"] for name, mds in datasets_md.items()} -dataset_id_to_infos = {name: mds["infos"] for name, mds in datasets_md.items()} - - -######################## -## Dataset selection -######################## - - -st.sidebar.markdown( - """ -# HuggingFace Dataset Tagger - -This app aims to make it easier to add structured tags to the datasets present in the library. 
- -""" -) - - -queryparams = st.experimental_get_query_params() -preload = queryparams.get("preload_dataset", list()) -preloaded_id = None -initial_state = None -initial_infos, initial_info_cfg = None, None -dataset_selector_index = 0 - -if len(preload) == 1 and preload[0] in dataset_ids: - preloaded_id, *_ = preload - initial_state = dataset_id_to_metadata.get(preloaded_id) - initial_infos = dataset_id_to_infos.get(preloaded_id) - initial_info_cfg = next(iter(initial_infos)) if initial_infos is not None else None # pick first available config - state = initial_state or new_state() - dataset_selector_index = dataset_ids.index(preloaded_id) - -preloaded_id = st.sidebar.selectbox( - label="Choose dataset to load tag set from", options=dataset_ids, index=dataset_selector_index -) - -leftbtn, rightbtn = st.sidebar.columns(2) -if leftbtn.button("pre-load"): - initial_state = dataset_id_to_metadata[preloaded_id] - initial_infos = dataset_id_to_infos[preloaded_id] - initial_info_cfg = next(iter(initial_infos)) # pick first available config - state = initial_state or new_state() - st.experimental_set_query_params(preload_dataset=preloaded_id) -if not is_state_empty(state): - if rightbtn.button("flush state"): - state = new_state() - initial_state = None - preloaded_id = None - st.experimental_set_query_params() - -if preloaded_id is not None and initial_state is not None: - st.sidebar.markdown( - f""" ---- -The current base tagset is [`{preloaded_id}`](https://huggingface.co/datasets/{preloaded_id}) -""" - ) - validate_dict(st.sidebar, initial_state) - st.sidebar.markdown( - f""" -Here is the matching yaml block: - -```yaml -{yaml.dump(initial_state)} -``` -""" - ) - - -leftcol, _, rightcol = st.columns([12, 1, 12]) - -# -# DATASET NAME -# -leftcol.markdown("### Dataset name") -state["pretty_name"] = leftcol.text_area( - "Pick a nice descriptive name for the dataset", -) - - - -# -# TASKS -# -leftcol.markdown("### Supported tasks") -state["task_categories"] = multiselect( - leftcol, - "Task category", - "What categories of task does the dataset support?", - values=state["task_categories"], - valid_set=list(known_task_ids.keys()), - format_func=lambda tg: f"{tg}: {known_task_ids[tg]['description']}", -) -task_specifics = [] -for task_category in state["task_categories"]: - specs = multiselect( - leftcol, - f"Specific _{task_category}_ tasks", - f"What specific tasks does the dataset support?", - values=[ts for ts in (state["task_ids"] or []) if ts in known_task_ids[task_category]["options"]], - valid_set=known_task_ids[task_category]["options"], - ) - if "other" in specs: - other_task = leftcol.text_input( - "You selected 'other' task. Please enter a short hyphen-separated description for the task:", - value="my-task-description", - ) - leftcol.write(f"Registering {task_category}-other-{other_task} task") - specs[specs.index("other")] = f"{task_category}-other-{other_task}" - task_specifics += specs -state["task_ids"] = task_specifics - - -# -# LANGUAGES -# -leftcol.markdown("### Languages") -state["multilinguality"] = multiselect( - leftcol, - "Monolingual?", - "Does the dataset contain more than one language?", - values=state["multilinguality"], - valid_set=list(known_multilingualities.keys()), - format_func=lambda m: f"{m} : {known_multilingualities[m]}", -) - -if "other" in state["multilinguality"]: - other_multilinguality = leftcol.text_input( - "You selected 'other' type of multilinguality. 
Please enter a short hyphen-separated description:", - value="my-multilinguality", - ) - leftcol.write(f"Registering other-{other_multilinguality} multilinguality") - state["multilinguality"][state["multilinguality"].index("other")] = f"other-{other_multilinguality}" - -valid_values, invalid_values = list(), list() -for langtag in state["languages"]: - try: - lc.get(langtag) - valid_values.append(langtag) - except: - invalid_values.append(langtag) -leftcol.markdown("#### Languages") -if len(invalid_values) > 0: - leftcol.markdown("Found the following invalid values:") - leftcol.error(invalid_values) - -langtags = leftcol.text_area( - "What languages are represented in the dataset? expected format is BCP47 tags separated for ';' e.g. 'en-US;fr-FR'", - value=";".join(valid_values), -) -state["languages"] = langtags.strip().split(";") if langtags.strip() != "" else [] - - -# -# DATASET CREATORS & ORIGINS -# -leftcol.markdown("### Dataset creators") -state["language_creators"] = multiselect( - leftcol, - "Data origin", - "Where does the text in the dataset come from?", - values=state["language_creators"], - valid_set=known_creators["language"], -) -state["annotations_creators"] = multiselect( - leftcol, - "Annotations origin", - "Where do the annotations in the dataset come from?", - values=state["annotations_creators"], - valid_set=known_creators["annotations"], -) - - -# -# LICENSES -# -state["licenses"] = multiselect( - leftcol, - "Licenses", - "What licenses is the dataset under?", - valid_set=list(known_licenses.keys()), - values=state["licenses"], - format_func=lambda l: f"{l} : {known_licenses[l]}", -) -if "other" in state["licenses"]: - other_license = st.text_input( - "You selected 'other' type of license. Please enter a short hyphen-separated description:", - value="my-license", - ) - st.write(f"Registering other-{other_license} license") - state["licenses"][state["licenses"].index("other")] = f"other-{other_license}" - - -# -# LINK TO SUPPORTED DATASETS -# -pre_select_ext_a = [] -if "original" in state["source_datasets"]: - pre_select_ext_a += ["original"] -if any([p.startswith("extended") for p in state["source_datasets"]]): - pre_select_ext_a += ["extended"] -state["source_datasets"] = multiselect( - leftcol, - "Relations to existing work", - "Does the dataset contain original data and/or was it extended from other datasets?", - values=pre_select_ext_a, - valid_set=["original", "extended"], -) - -if "extended" in state["source_datasets"]: - pre_select_ext_b = [p.split("|")[1] for p in state["source_datasets"] if p.startswith("extended|")] - extended_sources = multiselect( - leftcol, - "Linked datasets", - "Which other datasets does this one use data from?", - values=pre_select_ext_b, - valid_set=dataset_ids + ["other"], - ) - # flush placeholder - state["source_datasets"].remove("extended") - state["source_datasets"] += [f"extended|{src}" for src in extended_sources] - - -# -# SIZE CATEGORY -# -leftcol.markdown("### Size category") -logging.info(initial_infos[initial_info_cfg]["splits"] if initial_infos is not None else 0) -initial_num_examples = ( - sum([dct.get("num_examples", 0) for _split, dct in initial_infos[initial_info_cfg].get("splits", dict()).items()]) - if initial_infos is not None - else -1 -) -initial_size_cats = map_num_examples_to_size_categories(initial_num_examples) -leftcol.markdown(f"Computed size category from automatically generated dataset info to: `{initial_size_cats}`") -current_size_cats = state.get("size_categories") or ["unknown"] -ok, nonok = 
split_known(current_size_cats, known_size_categories) -if len(nonok) > 0: - leftcol.markdown(f"**Found bad codes in existing tagset**:\n{nonok}") -else: - state["size_categories"] = [initial_size_cats] - - -######################## -## Show results -######################## - -rightcol.markdown( - f""" -### Finalized tag set - -""" -) -if is_state_empty(state): - rightcol.markdown("❌ This is an invalid tagset: it's empty!") -else: - validate_dict(rightcol, state) - - -rightcol.markdown( - f""" - -```yaml -{yaml.dump(state)} -``` ---- -#### Arbitrary yaml validator - -This is a standalone tool, it is useful to check for errors on an existing tagset or modifying directly the text rather than the UI on the left. -""", -) - -yamlblock = rightcol.text_area("Input your yaml here") -if yamlblock.strip() != "": - inputdict = yaml.safe_load(yamlblock) - validate_dict(rightcol, inputdict) diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/pep517/colorlog.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/pep517/colorlog.py deleted file mode 100644 index 69c8a59d3d4e038450aa37ec5b801914b817e675..0000000000000000000000000000000000000000 --- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/pep517/colorlog.py +++ /dev/null @@ -1,115 +0,0 @@ -"""Nicer log formatting with colours. - -Code copied from Tornado, Apache licensed. -""" -# Copyright 2012 Facebook -# -# Licensed under the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. You may obtain -# a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the -# License for the specific language governing permissions and limitations -# under the License. - -import logging -import sys - -try: - import curses -except ImportError: - curses = None - - -def _stderr_supports_color(): - color = False - if curses and hasattr(sys.stderr, 'isatty') and sys.stderr.isatty(): - try: - curses.setupterm() - if curses.tigetnum("colors") > 0: - color = True - except Exception: - pass - return color - - -class LogFormatter(logging.Formatter): - """Log formatter with colour support - """ - DEFAULT_COLORS = { - logging.INFO: 2, # Green - logging.WARNING: 3, # Yellow - logging.ERROR: 1, # Red - logging.CRITICAL: 1, - } - - def __init__(self, color=True, datefmt=None): - r""" - :arg bool color: Enables color support. - :arg string fmt: Log message format. - It will be applied to the attributes dict of log records. The - text between ``%(color)s`` and ``%(end_color)s`` will be colored - depending on the level if color support is on. - :arg dict colors: color mappings from logging level to terminal color - code - :arg string datefmt: Datetime format. - Used for formatting ``(asctime)`` placeholder in ``prefix_fmt``. - .. versionchanged:: 3.2 - Added ``fmt`` and ``datefmt`` arguments. - """ - logging.Formatter.__init__(self, datefmt=datefmt) - self._colors = {} - if color and _stderr_supports_color(): - # The curses module has some str/bytes confusion in - # python3. Until version 3.2.3, most methods return - # bytes, but only accept strings. In addition, we want to - # output these strings with the logging module, which - # works with unicode strings. 
The explicit calls to - # unicode() below are harmless in python2 but will do the - # right conversion in python 3. - fg_color = (curses.tigetstr("setaf") or - curses.tigetstr("setf") or "") - if (3, 0) < sys.version_info < (3, 2, 3): - fg_color = str(fg_color, "ascii") - - for levelno, code in self.DEFAULT_COLORS.items(): - self._colors[levelno] = str( - curses.tparm(fg_color, code), "ascii") - self._normal = str(curses.tigetstr("sgr0"), "ascii") - - scr = curses.initscr() - self.termwidth = scr.getmaxyx()[1] - curses.endwin() - else: - self._normal = '' - # Default width is usually 80, but too wide is - # worse than too narrow - self.termwidth = 70 - - def formatMessage(self, record): - mlen = len(record.message) - right_text = '{initial}-{name}'.format(initial=record.levelname[0], - name=record.name) - if mlen + len(right_text) < self.termwidth: - space = ' ' * (self.termwidth - (mlen + len(right_text))) - else: - space = ' ' - - if record.levelno in self._colors: - start_color = self._colors[record.levelno] - end_color = self._normal - else: - start_color = end_color = '' - - return record.message + space + start_color + right_text + end_color - - -def enable_colourful_output(level=logging.INFO): - handler = logging.StreamHandler() - handler.setFormatter(LogFormatter()) - logging.root.addHandler(handler) - logging.root.setLevel(level) diff --git a/spaces/algomuffin/jojo_fork/e4e/models/stylegan2/op/upfirdn2d.py b/spaces/algomuffin/jojo_fork/e4e/models/stylegan2/op/upfirdn2d.py deleted file mode 100644 index 7bc5a1e331c2bbb1893ac748cfd0f144ff0651b4..0000000000000000000000000000000000000000 --- a/spaces/algomuffin/jojo_fork/e4e/models/stylegan2/op/upfirdn2d.py +++ /dev/null @@ -1,184 +0,0 @@ -import os - -import torch -from torch.autograd import Function -from torch.utils.cpp_extension import load - -module_path = os.path.dirname(__file__) -upfirdn2d_op = load( - 'upfirdn2d', - sources=[ - os.path.join(module_path, 'upfirdn2d.cpp'), - os.path.join(module_path, 'upfirdn2d_kernel.cu'), - ], -) - - -class UpFirDn2dBackward(Function): - @staticmethod - def forward( - ctx, grad_output, kernel, grad_kernel, up, down, pad, g_pad, in_size, out_size - ): - up_x, up_y = up - down_x, down_y = down - g_pad_x0, g_pad_x1, g_pad_y0, g_pad_y1 = g_pad - - grad_output = grad_output.reshape(-1, out_size[0], out_size[1], 1) - - grad_input = upfirdn2d_op.upfirdn2d( - grad_output, - grad_kernel, - down_x, - down_y, - up_x, - up_y, - g_pad_x0, - g_pad_x1, - g_pad_y0, - g_pad_y1, - ) - grad_input = grad_input.view(in_size[0], in_size[1], in_size[2], in_size[3]) - - ctx.save_for_backward(kernel) - - pad_x0, pad_x1, pad_y0, pad_y1 = pad - - ctx.up_x = up_x - ctx.up_y = up_y - ctx.down_x = down_x - ctx.down_y = down_y - ctx.pad_x0 = pad_x0 - ctx.pad_x1 = pad_x1 - ctx.pad_y0 = pad_y0 - ctx.pad_y1 = pad_y1 - ctx.in_size = in_size - ctx.out_size = out_size - - return grad_input - - @staticmethod - def backward(ctx, gradgrad_input): - kernel, = ctx.saved_tensors - - gradgrad_input = gradgrad_input.reshape(-1, ctx.in_size[2], ctx.in_size[3], 1) - - gradgrad_out = upfirdn2d_op.upfirdn2d( - gradgrad_input, - kernel, - ctx.up_x, - ctx.up_y, - ctx.down_x, - ctx.down_y, - ctx.pad_x0, - ctx.pad_x1, - ctx.pad_y0, - ctx.pad_y1, - ) - # gradgrad_out = gradgrad_out.view(ctx.in_size[0], ctx.out_size[0], ctx.out_size[1], ctx.in_size[3]) - gradgrad_out = gradgrad_out.view( - ctx.in_size[0], ctx.in_size[1], ctx.out_size[0], ctx.out_size[1] - ) - - return gradgrad_out, None, None, None, None, None, None, None, None - - -class 
UpFirDn2d(Function): - @staticmethod - def forward(ctx, input, kernel, up, down, pad): - up_x, up_y = up - down_x, down_y = down - pad_x0, pad_x1, pad_y0, pad_y1 = pad - - kernel_h, kernel_w = kernel.shape - batch, channel, in_h, in_w = input.shape - ctx.in_size = input.shape - - input = input.reshape(-1, in_h, in_w, 1) - - ctx.save_for_backward(kernel, torch.flip(kernel, [0, 1])) - - out_h = (in_h * up_y + pad_y0 + pad_y1 - kernel_h) // down_y + 1 - out_w = (in_w * up_x + pad_x0 + pad_x1 - kernel_w) // down_x + 1 - ctx.out_size = (out_h, out_w) - - ctx.up = (up_x, up_y) - ctx.down = (down_x, down_y) - ctx.pad = (pad_x0, pad_x1, pad_y0, pad_y1) - - g_pad_x0 = kernel_w - pad_x0 - 1 - g_pad_y0 = kernel_h - pad_y0 - 1 - g_pad_x1 = in_w * up_x - out_w * down_x + pad_x0 - up_x + 1 - g_pad_y1 = in_h * up_y - out_h * down_y + pad_y0 - up_y + 1 - - ctx.g_pad = (g_pad_x0, g_pad_x1, g_pad_y0, g_pad_y1) - - out = upfirdn2d_op.upfirdn2d( - input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1 - ) - # out = out.view(major, out_h, out_w, minor) - out = out.view(-1, channel, out_h, out_w) - - return out - - @staticmethod - def backward(ctx, grad_output): - kernel, grad_kernel = ctx.saved_tensors - - grad_input = UpFirDn2dBackward.apply( - grad_output, - kernel, - grad_kernel, - ctx.up, - ctx.down, - ctx.pad, - ctx.g_pad, - ctx.in_size, - ctx.out_size, - ) - - return grad_input, None, None, None, None - - -def upfirdn2d(input, kernel, up=1, down=1, pad=(0, 0)): - out = UpFirDn2d.apply( - input, kernel, (up, up), (down, down), (pad[0], pad[1], pad[0], pad[1]) - ) - - return out - - -def upfirdn2d_native( - input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1 -): - _, in_h, in_w, minor = input.shape - kernel_h, kernel_w = kernel.shape - - out = input.view(-1, in_h, 1, in_w, 1, minor) - out = F.pad(out, [0, 0, 0, up_x - 1, 0, 0, 0, up_y - 1]) - out = out.view(-1, in_h * up_y, in_w * up_x, minor) - - out = F.pad( - out, [0, 0, max(pad_x0, 0), max(pad_x1, 0), max(pad_y0, 0), max(pad_y1, 0)] - ) - out = out[ - :, - max(-pad_y0, 0): out.shape[1] - max(-pad_y1, 0), - max(-pad_x0, 0): out.shape[2] - max(-pad_x1, 0), - :, - ] - - out = out.permute(0, 3, 1, 2) - out = out.reshape( - [-1, 1, in_h * up_y + pad_y0 + pad_y1, in_w * up_x + pad_x0 + pad_x1] - ) - w = torch.flip(kernel, [0, 1]).view(1, 1, kernel_h, kernel_w) - out = F.conv2d(out, w) - out = out.reshape( - -1, - minor, - in_h * up_y + pad_y0 + pad_y1 - kernel_h + 1, - in_w * up_x + pad_x0 + pad_x1 - kernel_w + 1, - ) - out = out.permute(0, 2, 3, 1) - - return out[:, ::down_y, ::down_x, :] diff --git a/spaces/allberto/Porn_Merge_V1.3/README.md b/spaces/allberto/Porn_Merge_V1.3/README.md deleted file mode 100644 index 783cd699f60ec40994b275e632f654041aaf04f7..0000000000000000000000000000000000000000 --- a/spaces/allberto/Porn_Merge_V1.3/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Porn Merge V1.3 -emoji: 📊 -colorFrom: red -colorTo: blue -sdk: gradio -sdk_version: 3.50.2 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/alsalemi/pv-segment-01/gradio-app.py b/spaces/alsalemi/pv-segment-01/gradio-app.py deleted file mode 100644 index 45ce0c4b300fb90cfedf17bc63af6f6c8ca0a57a..0000000000000000000000000000000000000000 --- a/spaces/alsalemi/pv-segment-01/gradio-app.py +++ /dev/null @@ -1,64 +0,0 @@ -# IMPORTS -from PIL import Image -import numpy as np -import gradio as gr -from functions import * 
- -# CONFIG -dataset_path = 'sample_data/' -model_path = 'models/' - -# DATASET -dataset = LesionDataset(dataset_path, get_transform(train=True), one_class_mode=one_class_mode, linecolors=linecolors, batch_no=batch_no, downsample=downsample, excluded_classes=excluded_classes) -indices = range(len(dataset.imgs)) -labels = [os.path.basename(os.path.dirname(dataset.imgs[i])) for i in range(len(dataset))] -title = 'Precision Vision Lesion Segmentation Demo' -description='Scroll to the bottom of this page to select an image to automatically annotate its lesion.' - -# SUPPORTING FUNCTIONS -def predict_oed(input_img, index, gt_radio_label): - model = get_model_instance_segmentation(num_classes) - model.to(device) - model.load_state_dict(torch.load(model_path + model_filename, map_location=device)) - print(index) - index = int(index) if index is not None else 0 - fig = eval_one_image(model, dataset, index, device) - return fig2img(fig) - -# MAIN -def main(): - demo_images = [dataset.imgs[indices[i]] for i in range(len(indices))] - # examples = [[demo_images[0], indices[0]], [demo_images[1], indices[1]], [demo_images[2], indices[2]]] - examples = list(map(list, zip(demo_images, indices, labels))) - - # Gradio modules - gr_img = gr.Image( - height=350, - width=350, - type="pil", - interactive=False, - show_download_button=False, - source='canvas', - label="Input Image", - value=examples[0][0], - ) - gt_radio_no = gr.Radio(choices=indices, - label='No.', - visible=False,) - gt_radio_label= gr.Radio(choices=labels, - label='Label', - visible=False,) - demo = gr.Interface( - predict_oed, - [gr_img, gt_radio_no, gt_radio_label], - "image", - allow_flagging='never', - examples=examples, - title=title, - description=description, - cache_examples=False, - ) - demo.launch() - -if __name__ == "__main__": - main() \ No newline at end of file diff --git a/spaces/alvanlii/FROMAGe/app.py b/spaces/alvanlii/FROMAGe/app.py deleted file mode 100644 index d1d3b6043b66ef4824c97a69783238ffeff086ed..0000000000000000000000000000000000000000 --- a/spaces/alvanlii/FROMAGe/app.py +++ /dev/null @@ -1,122 +0,0 @@ -import os, time, copy -os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "False" - -from PIL import Image - -import gradio as gr - -import numpy as np -import torch -from transformers import logging -logging.set_verbosity_error() - -from fromage import models -from fromage import utils - -BASE_WIDTH = 512 -MODEL_DIR = './fromage_model/fromage_vis4' - - -class ChatBotCheese: - def __init__(self): - from huggingface_hub import hf_hub_download - model_ckpt_path = hf_hub_download("alvanlii/fromage", "pretrained_ckpt.pth.tar") - self.model = models.load_fromage(MODEL_DIR, model_ckpt_path) - self.curr_image = None - - def add_image(self, state, image_in): - state = state + [(f'', "Ok, now type your message")] - self.curr_image = Image.open(image_in.name).convert('RGB') - return state, state - - def save_im(self, image_pil): - file_name = f"{int(time.time())}_{np.random.randint(100)}.png" - image_pil.save(file_name) - return file_name - - def chat(self, input_text, state, ret_scale_factor, num_ims, num_words, temp, chat_state): - chat_state.append(f'Q: {input_text} \nA:') - chat_history = "".join(chat_state) - model_input = [] - # print(chat_history) - if self.curr_image is not None: - model_input = [self.curr_image, chat_history] - else: - model_input = [chat_history] - - model_outputs = self.model.generate_for_images_and_texts(model_input, max_num_rets=num_ims, num_words=num_words, ret_scale_factor=ret_scale_factor, 
temperature=temp) - chat_state.append(' '.join([s for s in model_outputs if type(s) == str]) + '\n') - - im_names = [] - if len(model_outputs) > 1: - im_names = [self.save_im(im) for im in model_outputs[1]] - - response = model_outputs[0] - for im_name in im_names: - response += f'' - state.append((input_text, response.replace("[RET]", ""))) - self.curr_image = None - return state, state, chat_state - - def reset(self): - self.curr_image = None - return [], [], [] - - def main(self): - with gr.Blocks(css="#chatbot {height:600px; overflow-y:auto;}") as demo: - gr.Markdown( - """ - ### FROMAGe: Grounding Language Models to Images for Multimodal Generation - Jing Yu Koh, Ruslan Salakhutdinov, Daniel Fried
    - [Paper](https://arxiv.org/abs/2301.13823) [Github](https://github.com/kohjingyu/fromage) [Official Demo](https://huggingface.co/spaces/jykoh/fromage)
    - This is an unofficial Gradio demo for the paper FROMAGe
    - - Instructions (in order): - - [Optional] Upload an image (the button with a photo emoji) - - [Optional] Change the parameters - - Send a message by typing into the box and pressing Enter on your keyboard - - Ask about the image! Tell it to find similar images, or ones with different styles. - - Check out the examples at the bottom! - ##### Notes - - Please be kind to it! - - It retrieves images from a database, and does not edit images - - If it returns nothing, try resetting and refreshing the page - """ - ) - - chatbot = gr.Chatbot(elem_id="chatbot") - gr_state = gr.State([]) - gr_chat_state = gr.State([]) - - with gr.Row(): - with gr.Column(scale=0.85): - txt = gr.Textbox(show_label=False, placeholder="Upload an image first [Optional]. Then enter text and press enter,").style(container=False) - with gr.Column(scale=0.15, min_width=0): - btn = gr.UploadButton("🖼️", file_types=["image"]) - - with gr.Row(): - with gr.Column(scale=0.20, min_width=0): - reset_btn = gr.Button("Reset Messages") - gr_ret_scale_factor = gr.Number(value=1.0, label="Increased prob of returning images", interactive=True) - gr_num_ims = gr.Number(value=3, precision=1, label="Max # of Images returned", interactive=True) - gr_num_words = gr.Number(value=32, precision=1, label="Max # of words returned", interactive=True) - gr_temp = gr.Number(value=0.0, label="Temperature", interactive=True) - - with gr.Row(): - gr.Image("example_1.png", label="Example 1") - gr.Image("example_2.png", label="Example 2") - gr.Image("example_3.png", label="Example 3") - - - txt.submit(self.chat, [txt, gr_state, gr_ret_scale_factor, gr_num_ims, gr_num_words, gr_temp, gr_chat_state], [gr_state, chatbot, gr_chat_state]) - txt.submit(lambda :"", None, txt) - btn.upload(self.add_image, [gr_state, btn], [gr_state, chatbot]) - reset_btn.click(self.reset, [], [gr_state, chatbot, gr_chat_state]) - - demo.launch(share=False, server_name="0.0.0.0") - -def main(): - cheddar = ChatBotCheese() - cheddar.main() - -if __name__ == "__main__": - main() \ No newline at end of file diff --git a/spaces/amarchheda/ChordDuplicate/portaudio/pablio/test_w_saw.c b/spaces/amarchheda/ChordDuplicate/portaudio/pablio/test_w_saw.c deleted file mode 100644 index 2303d20e7fc278da7ae98a90e6f917f5d4a11b9b..0000000000000000000000000000000000000000 --- a/spaces/amarchheda/ChordDuplicate/portaudio/pablio/test_w_saw.c +++ /dev/null @@ -1,114 +0,0 @@ -/* - * $Id$ - * test_w_saw.c - * Generate stereo sawtooth waveforms. - * - * Author: Phil Burk, http://www.softsynth.com - * - * This program uses PABLIO, the Portable Audio Blocking I/O Library. - * PABLIO is built on top of PortAudio, the Portable Audio Library. - * - * For more information see: http://www.portaudio.com - * Copyright (c) 1999-2000 Ross Bencina and Phil Burk - * - * Permission is hereby granted, free of charge, to any person obtaining - * a copy of this software and associated documentation files - * (the "Software"), to deal in the Software without restriction, - * including without limitation the rights to use, copy, modify, merge, - * publish, distribute, sublicense, and/or sell copies of the Software, - * and to permit persons to whom the Software is furnished to do so, - * subject to the following conditions: - * - * The above copyright notice and this permission notice shall be - * included in all copies or substantial portions of the Software. 
- * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, - * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR - * ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF - * CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION - * WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - */ - -/* - * The text above constitutes the entire PortAudio license; however, - * the PortAudio community also makes the following non-binding requests: - * - * Any person wishing to distribute modifications to the Software is - * requested to send the modifications to the original developer so that - * they can be incorporated into the canonical version. It is also - * requested that these non-binding requests be included along with the - * license above. - */ - -#include -#include -#include -#include "pablio.h" -#include - -#define SAMPLE_RATE (44100) -#define NUM_SECONDS (6) -#define SAMPLES_PER_FRAME (2) - -#define FREQUENCY (220.0f) -#define PHASE_INCREMENT (2.0f * FREQUENCY / SAMPLE_RATE) -#define FRAMES_PER_BLOCK (100) - -float samples[FRAMES_PER_BLOCK][SAMPLES_PER_FRAME]; -float phases[SAMPLES_PER_FRAME]; - -/*******************************************************************/ -int main(void); -int main(void) -{ - int i,j; - PaError err; - PABLIO_Stream *aOutStream; - - printf("Generate sawtooth waves using PABLIO.\n"); - fflush(stdout); - - /* Open simplified blocking I/O layer on top of PortAudio. */ - err = OpenAudioStream( &aOutStream, SAMPLE_RATE, paFloat32, - (PABLIO_WRITE | PABLIO_STEREO) ); - if( err != paNoError ) goto error; - - /* Initialize oscillator phases. */ - phases[0] = 0.0; - phases[1] = 0.0; - - for( i=0; i<(NUM_SECONDS * SAMPLE_RATE); i += FRAMES_PER_BLOCK ) - { - /* Generate sawtooth waveforms in a block for efficiency. */ - for( j=0; j 1.0f ) phases[0] -= 2.0f; - samples[j][0] = phases[0]; - - /* On the second channel, generate a sawtooth wave a fifth higher. */ - phases[1] += PHASE_INCREMENT * (3.0f / 2.0f); - if( phases[1] > 1.0f ) phases[1] -= 2.0f; - samples[j][1] = phases[1]; - } - - /* Write samples to output. */ - WriteAudioStream( aOutStream, samples, FRAMES_PER_BLOCK ); - } - - CloseAudioStream( aOutStream ); - - printf("Sawtooth sound test complete.\n" ); - fflush(stdout); - return 0; - -error: - fprintf( stderr, "An error occurred while using PABLIO\n" ); - fprintf( stderr, "Error number: %d\n", err ); - fprintf( stderr, "Error message: %s\n", Pa_GetErrorText( err ) ); - return -1; -} diff --git a/spaces/anakin87/who-killed-laura-palmer/crawler/tpcrawler/tpcrawler/spiders/__init__.py b/spaces/anakin87/who-killed-laura-palmer/crawler/tpcrawler/tpcrawler/spiders/__init__.py deleted file mode 100644 index ebd689ac51d69c5e1dbbe80083c2b20a39f8bb79..0000000000000000000000000000000000000000 --- a/spaces/anakin87/who-killed-laura-palmer/crawler/tpcrawler/tpcrawler/spiders/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -# This package will contain the spiders of your Scrapy project -# -# Please refer to the documentation for information on how to create and manage -# your spiders. 
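The deleted `spiders/__init__.py` above only notes that spider modules belong in that package. As a rough illustration of what such a module could contain, here is a minimal Scrapy spider sketch; the spider name, start URL, and CSS selector are assumptions for demonstration and do not come from the tpcrawler project.

```python
# Minimal illustrative Scrapy spider (hypothetical name, URL, and selector).
import scrapy


class ExampleSpider(scrapy.Spider):
    name = "example"                      # referenced by `scrapy crawl example`
    start_urls = ["https://example.com"]  # placeholder starting page

    def parse(self, response):
        # Extract each h2 heading text on the page and yield it as an item.
        for title in response.css("h2::text").getall():
            yield {"title": title.strip()}
```

A spider like this would live as its own module inside the `spiders/` package and be run with `scrapy crawl example` from the project root.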
diff --git a/spaces/aodianyun/stable-diffusion-webui/modules/ui_tempdir.py b/spaces/aodianyun/stable-diffusion-webui/modules/ui_tempdir.py deleted file mode 100644 index 126f73a21d71070887fd094beaf0fe6d7e12df9c..0000000000000000000000000000000000000000 --- a/spaces/aodianyun/stable-diffusion-webui/modules/ui_tempdir.py +++ /dev/null @@ -1,82 +0,0 @@ -import os -import tempfile -from collections import namedtuple -from pathlib import Path - -import gradio as gr - -from PIL import PngImagePlugin - -from modules import shared - - -Savedfile = namedtuple("Savedfile", ["name"]) - - -def register_tmp_file(gradio, filename): - if hasattr(gradio, 'temp_file_sets'): # gradio 3.15 - gradio.temp_file_sets[0] = gradio.temp_file_sets[0] | {os.path.abspath(filename)} - - if hasattr(gradio, 'temp_dirs'): # gradio 3.9 - gradio.temp_dirs = gradio.temp_dirs | {os.path.abspath(os.path.dirname(filename))} - - -def check_tmp_file(gradio, filename): - if hasattr(gradio, 'temp_file_sets'): - return any([filename in fileset for fileset in gradio.temp_file_sets]) - - if hasattr(gradio, 'temp_dirs'): - return any(Path(temp_dir).resolve() in Path(filename).resolve().parents for temp_dir in gradio.temp_dirs) - - return False - - -def save_pil_to_file(pil_image, dir=None): - already_saved_as = getattr(pil_image, 'already_saved_as', None) - if already_saved_as and os.path.isfile(already_saved_as): - register_tmp_file(shared.demo, already_saved_as) - - file_obj = Savedfile(already_saved_as) - return file_obj - - if shared.opts.temp_dir != "": - dir = shared.opts.temp_dir - - use_metadata = False - metadata = PngImagePlugin.PngInfo() - for key, value in pil_image.info.items(): - if isinstance(key, str) and isinstance(value, str): - metadata.add_text(key, value) - use_metadata = True - - file_obj = tempfile.NamedTemporaryFile(delete=False, suffix=".png", dir=dir) - pil_image.save(file_obj, pnginfo=(metadata if use_metadata else None)) - return file_obj - - -# override save to file function so that it also writes PNG info -gr.processing_utils.save_pil_to_file = save_pil_to_file - - -def on_tmpdir_changed(): - if shared.opts.temp_dir == "" or shared.demo is None: - return - - os.makedirs(shared.opts.temp_dir, exist_ok=True) - - register_tmp_file(shared.demo, os.path.join(shared.opts.temp_dir, "x")) - - -def cleanup_tmpdr(): - temp_dir = shared.opts.temp_dir - if temp_dir == "" or not os.path.isdir(temp_dir): - return - - for root, dirs, files in os.walk(temp_dir, topdown=False): - for name in files: - _, extension = os.path.splitext(name) - if extension != ".png": - continue - - filename = os.path.join(root, name) - os.remove(filename) diff --git a/spaces/arakimk/SakamataFontDCGAN/README.md b/spaces/arakimk/SakamataFontDCGAN/README.md deleted file mode 100644 index d1e018f4284836ebf9f6624af5f740bbb7db4e4c..0000000000000000000000000000000000000000 --- a/spaces/arakimk/SakamataFontDCGAN/README.md +++ /dev/null @@ -1,18 +0,0 @@ ---- -title: SakamataFontDCGAN -emoji: 🎣 -colorFrom: red -colorTo: blue -sdk: gradio -sdk_version: 3.29.0 -app_file: app.py -pinned: false -license: other ---- - -This is experimental project that make fake handwritten character by DCGAN. - -Dataset: [SakamataFontProject](https://github.com/sakamata-ch/SakamataFontProject) - -This project working under Hololive Derivative Works Guidelines. -You have to read and agree for guideline if you want to use artifact of this project. 
\ No newline at end of file diff --git a/spaces/artificialguybr/video-dubbing/TTS/tests/tts_tests2/test_glow_tts_train.py b/spaces/artificialguybr/video-dubbing/TTS/tests/tts_tests2/test_glow_tts_train.py deleted file mode 100644 index 0a8e226b65edf1da6ed477422d579c420ecdf74d..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/TTS/tests/tts_tests2/test_glow_tts_train.py +++ /dev/null @@ -1,73 +0,0 @@ -import glob -import json -import os -import shutil - -from trainer import get_last_checkpoint - -from tests import get_device_id, get_tests_output_path, run_cli -from TTS.tts.configs.glow_tts_config import GlowTTSConfig - -config_path = os.path.join(get_tests_output_path(), "test_model_config.json") -output_path = os.path.join(get_tests_output_path(), "train_outputs") - - -config = GlowTTSConfig( - batch_size=2, - eval_batch_size=8, - num_loader_workers=0, - num_eval_loader_workers=0, - text_cleaner="english_cleaners", - use_phonemes=True, - phoneme_language="en-us", - phoneme_cache_path="tests/data/ljspeech/phoneme_cache/", - run_eval=True, - test_delay_epochs=-1, - epochs=1, - print_step=1, - print_eval=True, - test_sentences=[ - "Be a voice, not an echo.", - ], - data_dep_init_steps=1.0, -) -config.audio.do_trim_silence = True -config.audio.trim_db = 60 -config.save_json(config_path) - -# train the model for one epoch -command_train = ( - f"CUDA_VISIBLE_DEVICES='{get_device_id()}' python TTS/bin/train_tts.py --config_path {config_path} " - f"--coqpit.output_path {output_path} " - "--coqpit.datasets.0.formatter ljspeech " - "--coqpit.datasets.0.meta_file_train metadata.csv " - "--coqpit.datasets.0.meta_file_val metadata.csv " - "--coqpit.datasets.0.path tests/data/ljspeech " - "--coqpit.datasets.0.meta_file_attn_mask tests/data/ljspeech/metadata_attn_mask.txt " - "--coqpit.test_delay_epochs 0" -) -run_cli(command_train) - -# Find latest folder -continue_path = max(glob.glob(os.path.join(output_path, "*/")), key=os.path.getmtime) - -# Inference using TTS API -continue_config_path = os.path.join(continue_path, "config.json") -continue_restore_path, _ = get_last_checkpoint(continue_path) -out_wav_path = os.path.join(get_tests_output_path(), "output.wav") - -# Check integrity of the config -with open(continue_config_path, "r", encoding="utf-8") as f: - config_loaded = json.load(f) -assert config_loaded["characters"] is not None -assert config_loaded["output_path"] in continue_path -assert config_loaded["test_delay_epochs"] == 0 - -# Load the model and run inference -inference_command = f"CUDA_VISIBLE_DEVICES='{get_device_id()}' tts --text 'This is an example.' --config_path {continue_config_path} --model_path {continue_restore_path} --out_path {out_wav_path}" -run_cli(inference_command) - -# restore the model and continue training for one more epoch -command_train = f"CUDA_VISIBLE_DEVICES='{get_device_id()}' python TTS/bin/train_tts.py --continue_path {continue_path} " -run_cli(command_train) -shutil.rmtree(continue_path) diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/Hash/KMAC256.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/Hash/KMAC256.py deleted file mode 100644 index 2be8e2f3d57aabf8cafbdc422cf6e74e44ae2df9..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/Hash/KMAC256.py +++ /dev/null @@ -1,74 +0,0 @@ -# =================================================================== -# -# Copyright (c) 2021, Legrandin -# All rights reserved. 
-# -# Redistribution and use in source and binary forms, with or without -# modification, are permitted provided that the following conditions -# are met: -# -# 1. Redistributions of source code must retain the above copyright -# notice, this list of conditions and the following disclaimer. -# 2. Redistributions in binary form must reproduce the above copyright -# notice, this list of conditions and the following disclaimer in -# the documentation and/or other materials provided with the -# distribution. -# -# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS -# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT -# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS -# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE -# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, -# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, -# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; -# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER -# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT -# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN -# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE -# POSSIBILITY OF SUCH DAMAGE. -# =================================================================== - -from Crypto.Util.py3compat import is_bytes - -from .KMAC128 import KMAC_Hash -from . import cSHAKE256 - - -def new(**kwargs): - """Create a new KMAC256 object. - - Args: - key (bytes/bytearray/memoryview): - The key to use to compute the MAC. - It must be at least 256 bits long (32 bytes). - data (bytes/bytearray/memoryview): - Optional. The very first chunk of the message to authenticate. - It is equivalent to an early call to :meth:`KMAC_Hash.update`. - mac_len (integer): - Optional. The size of the authentication tag, in bytes. - Default is 64. Minimum is 8. - custom (bytes/bytearray/memoryview): - Optional. A customization byte string (``S`` in SP 800-185). 
- - Returns: - A :class:`KMAC_Hash` hash object - """ - - key = kwargs.pop("key", None) - if not is_bytes(key): - raise TypeError("You must pass a key to KMAC256") - if len(key) < 32: - raise ValueError("The key must be at least 256 bits long (32 bytes)") - - data = kwargs.pop("data", None) - - mac_len = kwargs.pop("mac_len", 64) - if mac_len < 8: - raise ValueError("'mac_len' must be 8 bytes or more") - - custom = kwargs.pop("custom", b"") - - if kwargs: - raise TypeError("Unknown parameters: " + str(kwargs)) - - return KMAC_Hash(data, key, mac_len, custom, "20", cSHAKE256, 136) diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/vegalite/schema.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/vegalite/schema.py deleted file mode 100644 index fd7f3df7eb6cc79e627a98c474754ee50825a45b..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/vegalite/schema.py +++ /dev/null @@ -1,3 +0,0 @@ -"""Altair schema wrappers""" -# flake8: noqa -from .v4.schema import * diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/anyio/_core/_exceptions.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/anyio/_core/_exceptions.py deleted file mode 100644 index db2bbcf59336da6f9b61726995e00e0d33bfbc70..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/anyio/_core/_exceptions.py +++ /dev/null @@ -1,93 +0,0 @@ -from traceback import format_exception -from typing import List - - -class BrokenResourceError(Exception): - """ - Raised when trying to use a resource that has been rendered unusable due to external causes - (e.g. a send stream whose peer has disconnected). - """ - - -class BrokenWorkerProcess(Exception): - """ - Raised by :func:`run_sync_in_process` if the worker process terminates abruptly or otherwise - misbehaves. - """ - - -class BusyResourceError(Exception): - """Raised when two tasks are trying to read from or write to the same resource concurrently.""" - - def __init__(self, action: str): - super().__init__(f"Another task is already {action} this resource") - - -class ClosedResourceError(Exception): - """Raised when trying to use a resource that has been closed.""" - - -class DelimiterNotFound(Exception): - """ - Raised during :meth:`~anyio.streams.buffered.BufferedByteReceiveStream.receive_until` if the - maximum number of bytes has been read without the delimiter being found. - """ - - def __init__(self, max_bytes: int) -> None: - super().__init__( - f"The delimiter was not found among the first {max_bytes} bytes" - ) - - -class EndOfStream(Exception): - """Raised when trying to read from a stream that has been closed from the other end.""" - - -class ExceptionGroup(BaseException): - """ - Raised when multiple exceptions have been raised in a task group. 
- - :var ~typing.Sequence[BaseException] exceptions: the sequence of exceptions raised together - """ - - SEPARATOR = "----------------------------\n" - - exceptions: List[BaseException] - - def __str__(self) -> str: - tracebacks = [ - "".join(format_exception(type(exc), exc, exc.__traceback__)) - for exc in self.exceptions - ] - return ( - f"{len(self.exceptions)} exceptions were raised in the task group:\n" - f"{self.SEPARATOR}{self.SEPARATOR.join(tracebacks)}" - ) - - def __repr__(self) -> str: - exception_reprs = ", ".join(repr(exc) for exc in self.exceptions) - return f"<{self.__class__.__name__}: {exception_reprs}>" - - -class IncompleteRead(Exception): - """ - Raised during :meth:`~anyio.streams.buffered.BufferedByteReceiveStream.receive_exactly` or - :meth:`~anyio.streams.buffered.BufferedByteReceiveStream.receive_until` if the - connection is closed before the requested amount of bytes has been read. - """ - - def __init__(self) -> None: - super().__init__( - "The stream was closed before the read operation could be completed" - ) - - -class TypedAttributeLookupError(LookupError): - """ - Raised by :meth:`~anyio.TypedAttributeProvider.extra` when the given typed attribute is not - found and no default value has been given. - """ - - -class WouldBlock(Exception): - """Raised by ``X_nowait`` functions if ``X()`` would block.""" diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/audioread/exceptions.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/audioread/exceptions.py deleted file mode 100644 index 26e03fc39c07c1c505a7a7ea8e67a147676ed002..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/audioread/exceptions.py +++ /dev/null @@ -1,25 +0,0 @@ -# This file is part of audioread. -# Copyright 2013, Adrian Sampson. -# -# Permission is hereby granted, free of charge, to any person obtaining -# a copy of this software and associated documentation files (the -# "Software"), to deal in the Software without restriction, including -# without limitation the rights to use, copy, modify, merge, publish, -# distribute, sublicense, and/or sell copies of the Software, and to -# permit persons to whom the Software is furnished to do so, subject to -# the following conditions: -# -# The above copyright notice and this permission notice shall be -# included in all copies or substantial portions of the Software. - - -class DecodeError(Exception): - """The base exception class for all decoding errors raised by this - package. - """ - - -class NoBackendError(DecodeError): - """The file could not be decoded by any backend. Either no backends - are available or each available backend failed to decode the file. 
- """ diff --git a/spaces/aseuteurideu/audio_deepfake_detector/main.py b/spaces/aseuteurideu/audio_deepfake_detector/main.py deleted file mode 100644 index ccb42fe8925fb550abe7dd681d39b66b47e58c21..0000000000000000000000000000000000000000 --- a/spaces/aseuteurideu/audio_deepfake_detector/main.py +++ /dev/null @@ -1,247 +0,0 @@ -import os -import argparse -from tqdm import tqdm -import torch.nn as nn -import tensorflow as tf -import torch.optim as optim - -from models.TMC import ETMC, ce_loss -import torchvision.transforms as transforms -from data.dfdt_dataset import FakeAVCelebDatasetTrain, FakeAVCelebDatasetVal - - -from utils.utils import * -from utils.logger import create_logger -from sklearn.metrics import accuracy_score -from torch.utils.tensorboard import SummaryWriter - -# Define the audio_args dictionary -audio_args = { - 'nb_samp': 64600, - 'first_conv': 1024, - 'in_channels': 1, - 'filts': [20, [20, 20], [20, 128], [128, 128]], - 'blocks': [2, 4], - 'nb_fc_node': 1024, - 'gru_node': 1024, - 'nb_gru_layer': 3, -} - - -def get_args(parser): - parser.add_argument("--batch_size", type=int, default=8) - parser.add_argument("--data_dir", type=str, default="datasets/train/fakeavceleb*") - parser.add_argument("--LOAD_SIZE", type=int, default=256) - parser.add_argument("--FINE_SIZE", type=int, default=224) - parser.add_argument("--dropout", type=float, default=0.2) - parser.add_argument("--gradient_accumulation_steps", type=int, default=1) - parser.add_argument("--hidden", nargs="*", type=int, default=[]) - parser.add_argument("--hidden_sz", type=int, default=768) - parser.add_argument("--img_embed_pool_type", type=str, default="avg", choices=["max", "avg"]) - parser.add_argument("--img_hidden_sz", type=int, default=1024) - parser.add_argument("--include_bn", type=int, default=True) - parser.add_argument("--lr", type=float, default=1e-4) - parser.add_argument("--lr_factor", type=float, default=0.3) - parser.add_argument("--lr_patience", type=int, default=10) - parser.add_argument("--max_epochs", type=int, default=500) - parser.add_argument("--n_workers", type=int, default=12) - parser.add_argument("--name", type=str, default="MMDF") - parser.add_argument("--num_image_embeds", type=int, default=1) - parser.add_argument("--patience", type=int, default=20) - parser.add_argument("--savedir", type=str, default="./savepath/") - parser.add_argument("--seed", type=int, default=1) - parser.add_argument("--n_classes", type=int, default=2) - parser.add_argument("--annealing_epoch", type=int, default=10) - parser.add_argument("--device", type=str, default='cpu') - parser.add_argument("--pretrained_image_encoder", type=bool, default = False) - parser.add_argument("--freeze_image_encoder", type=bool, default = True) - parser.add_argument("--pretrained_audio_encoder", type = bool, default=False) - parser.add_argument("--freeze_audio_encoder", type = bool, default = True) - parser.add_argument("--augment_dataset", type = bool, default = True) - - for key, value in audio_args.items(): - parser.add_argument(f"--{key}", type=type(value), default=value) - -def get_optimizer(model, args): - optimizer = optim.Adam(model.parameters(), lr=args.lr, weight_decay=1e-5) - return optimizer - - -def get_scheduler(optimizer, args): - return optim.lr_scheduler.ReduceLROnPlateau( - optimizer, "max", patience=args.lr_patience, verbose=True, factor=args.lr_factor - ) - -def model_forward(i_epoch, model, args, ce_loss, batch): - rgb, spec, tgt = batch['video_reshaped'], batch['spectrogram'], batch['label_map'] - rgb_pt = 
torch.Tensor(rgb.numpy()) - spec = spec.numpy() - spec_pt = torch.Tensor(spec) - tgt_pt = torch.Tensor(tgt.numpy()) - - if torch.cuda.is_available(): - rgb_pt, spec_pt, tgt_pt = rgb_pt.cuda(), spec_pt.cuda(), tgt_pt.cuda() - - # depth_alpha, rgb_alpha, depth_rgb_alpha = model(rgb_pt, spec_pt) - - # loss = ce_loss(tgt_pt, depth_alpha, args.n_classes, i_epoch, args.annealing_epoch) + \ - # ce_loss(tgt_pt, rgb_alpha, args.n_classes, i_epoch, args.annealing_epoch) + \ - # ce_loss(tgt_pt, depth_rgb_alpha, args.n_classes, i_epoch, args.annealing_epoch) - # return loss, depth_alpha, rgb_alpha, depth_rgb_alpha, tgt_pt - - depth_alpha, rgb_alpha, pseudo_alpha, depth_rgb_alpha = model(rgb_pt, spec_pt) - - loss = ce_loss(tgt_pt, depth_alpha, args.n_classes, i_epoch, args.annealing_epoch) + \ - ce_loss(tgt_pt, rgb_alpha, args.n_classes, i_epoch, args.annealing_epoch) + \ - ce_loss(tgt_pt, pseudo_alpha, args.n_classes, i_epoch, args.annealing_epoch) + \ - ce_loss(tgt_pt, depth_rgb_alpha, args.n_classes, i_epoch, args.annealing_epoch) - return loss, depth_alpha, rgb_alpha, depth_rgb_alpha, tgt_pt - - - -def model_eval(i_epoch, data, model, args, criterion): - model.eval() - with torch.no_grad(): - losses, depth_preds, rgb_preds, depthrgb_preds, tgts = [], [], [], [], [] - for batch in tqdm(data): - loss, depth_alpha, rgb_alpha, depth_rgb_alpha, tgt = model_forward(i_epoch, model, args, criterion, batch) - losses.append(loss.item()) - - depth_pred = depth_alpha.argmax(dim=1).cpu().detach().numpy() - rgb_pred = rgb_alpha.argmax(dim=1).cpu().detach().numpy() - depth_rgb_pred = depth_rgb_alpha.argmax(dim=1).cpu().detach().numpy() - - depth_preds.append(depth_pred) - rgb_preds.append(rgb_pred) - depthrgb_preds.append(depth_rgb_pred) - tgt = tgt.cpu().detach().numpy() - tgts.append(tgt) - - metrics = {"loss": np.mean(losses)} - print(f"Mean loss is: {metrics['loss']}") - - tgts = [l for sl in tgts for l in sl] - depth_preds = [l for sl in depth_preds for l in sl] - rgb_preds = [l for sl in rgb_preds for l in sl] - depthrgb_preds = [l for sl in depthrgb_preds for l in sl] - metrics["spec_acc"] = accuracy_score(tgts, depth_preds) - metrics["rgb_acc"] = accuracy_score(tgts, rgb_preds) - metrics["specrgb_acc"] = accuracy_score(tgts, depthrgb_preds) - return metrics - -def write_weight_histograms(writer, step, model): - for idx, item in enumerate(model.named_parameters()): - name = item[0] - weights = item[1].data - if weights.size(dim = 0) > 2: - try: - writer.add_histogram(name, weights, idx) - except ValueError as e: - continue - -writer = SummaryWriter() - -def train(args): - set_seed(args.seed) - args.savedir = os.path.join(args.savedir, args.name) - os.makedirs(args.savedir, exist_ok=True) - - train_ds = FakeAVCelebDatasetTrain(args) - train_ds = train_ds.load_features_from_tfrec() - - val_ds = FakeAVCelebDatasetVal(args) - val_ds = val_ds.load_features_from_tfrec() - - model = ETMC(args) - optimizer = get_optimizer(model, args) - scheduler = get_scheduler(optimizer, args) - logger = create_logger("%s/logfile.log" % args.savedir, args) - if torch.cuda.is_available(): - model.cuda() - - torch.save(args, os.path.join(args.savedir, "checkpoint.pt")) - start_epoch, global_step, n_no_improve, best_metric = 0, 0, 0, -np.inf - - for i_epoch in range(start_epoch, args.max_epochs): - train_losses = [] - model.train() - optimizer.zero_grad() - - for index, batch in tqdm(enumerate(train_ds)): - loss, depth_out, rgb_out, depthrgb, tgt = model_forward(i_epoch, model, args, ce_loss, batch) - if 
args.gradient_accumulation_steps > 1: - loss = loss / args.gradient_accumulation_steps - - train_losses.append(loss.item()) - loss.backward() - global_step += 1 - if global_step % args.gradient_accumulation_steps == 0: - optimizer.step() - optimizer.zero_grad() - - #Write weight histograms to Tensorboard. - write_weight_histograms(writer, i_epoch, model) - - model.eval() - metrics = model_eval( - np.inf, val_ds, model, args, ce_loss - ) - logger.info("Train Loss: {:.4f}".format(np.mean(train_losses))) - log_metrics("val", metrics, logger) - logger.info( - "{}: Loss: {:.5f} | spec_acc: {:.5f}, rgb_acc: {:.5f}, depth rgb acc: {:.5f}".format( - "val", metrics["loss"], metrics["spec_acc"], metrics["rgb_acc"], metrics["specrgb_acc"] - ) - ) - tuning_metric = metrics["specrgb_acc"] - - scheduler.step(tuning_metric) - is_improvement = tuning_metric > best_metric - if is_improvement: - best_metric = tuning_metric - n_no_improve = 0 - else: - n_no_improve += 1 - - save_checkpoint( - { - "epoch": i_epoch + 1, - "optimizer": optimizer.state_dict(), - "scheduler": scheduler.state_dict(), - "n_no_improve": n_no_improve, - "best_metric": best_metric, - }, - is_improvement, - args.savedir, - ) - - if n_no_improve >= args.patience: - logger.info("No improvement. Breaking out of loop.") - break - writer.close() - # load_checkpoint(model, os.path.join(args.savedir, "model_best.pt")) - model.eval() - test_metrics = model_eval( - np.inf, val_ds, model, args, ce_loss - ) - logger.info( - "{}: Loss: {:.5f} | spec_acc: {:.5f}, rgb_acc: {:.5f}, depth rgb acc: {:.5f}".format( - "Test", test_metrics["loss"], test_metrics["spec_acc"], test_metrics["rgb_acc"], - test_metrics["depthrgb_acc"] - ) - ) - log_metrics(f"Test", test_metrics, logger) - - -def cli_main(): - parser = argparse.ArgumentParser(description="Train Models") - get_args(parser) - args, remaining_args = parser.parse_known_args() - assert remaining_args == [], remaining_args - train(args) - - -if __name__ == "__main__": - import warnings - warnings.filterwarnings("ignore") - cli_main() diff --git a/spaces/at2507/SM_NLP_RecoSys/Data/Mentor_interviews/Asad Norouzi.html b/spaces/at2507/SM_NLP_RecoSys/Data/Mentor_interviews/Asad Norouzi.html deleted file mode 100644 index f83df7e9dc9cddefd27b2077d502adfd57232c41..0000000000000000000000000000000000000000 --- a/spaces/at2507/SM_NLP_RecoSys/Data/Mentor_interviews/Asad Norouzi.html +++ /dev/null @@ -1,134 +0,0 @@ - - - - Asad Norouzi - - - - -
    -

    Asad Norouzi

    - -
    -
    How did you hear about SM?
    • a friend who is already on board; heard it from her
    • passionate about education and motivated to help people switch careers

    Brief background
    • robotics background
    • Singapore then Canada
    • self-driving now!
    • always been doing education
      • part-time prof at Seneca College

    Mentorship exp
    • Entrepreneur First
    • started a company
    • now still in touch with EF and providing mentorship

    What do beginners need and how can you help?
    • really ambitious goals, which can keep them from focusing on meaningful progress; take it step by step
    • thinking out of the box
    • understand their personality
    • fully believe in customized education
    • then help them plan a journey
    -
    -
    Questions about SM:
    • really liked my email with the breakdown, and did his research
    -
    - -
    - - - \ No newline at end of file diff --git a/spaces/atimughal662/InfoFusion/gradio_utils/css.py b/spaces/atimughal662/InfoFusion/gradio_utils/css.py deleted file mode 100644 index 6f3d0dd56bfd4287034afd0b23751e3abd59a143..0000000000000000000000000000000000000000 --- a/spaces/atimughal662/InfoFusion/gradio_utils/css.py +++ /dev/null @@ -1,148 +0,0 @@ -def get_css(kwargs) -> str: - if kwargs['h2ocolors']: - css_code = """footer {visibility: hidden;} - body{background:linear-gradient(#f5f5f5,#e5e5e5);} - body.dark{background:linear-gradient(#000000,#0d0d0d);} - """ - else: - css_code = """footer {visibility: hidden}""" - - css_code += make_css_base() - return css_code - - -def make_css_base() -> str: - return """ - #col_container {margin-left: auto; margin-right: auto; text-align: left;} - - @import url('https://fonts.googleapis.com/css2?family=Source+Sans+Pro:wght@400;600&display=swap'); - - body.dark{#warning {background-color: #555555};} - - #sidebar { - order: 1; - - @media (max-width: 463px) { - order: 2; - } - } - - #col-tabs { - order: 2; - - @media (max-width: 463px) { - order: 1; - } - } - - #small_btn { - margin: 0.6em 0em 0.55em 0; - max-width: 20em; - min-width: 5em !important; - height: 5em; - font-size: 14px !important; - } - - #prompt-form { - border: 1px solid var(--primary-500) !important; - } - - #prompt-form.block { - border-radius: var(--block-radius) !important; - } - - #prompt-form textarea { - border: 1px solid rgb(209, 213, 219); - } - - #prompt-form label > div { - margin-top: 4px; - } - - button.primary:hover { - background-color: var(--primary-600) !important; - transition: .2s; - } - - #prompt-form-area { - margin-bottom: 2.5rem; - } - .chatsmall chatbot {font-size: 10px !important} - - .gradio-container { - max-width: none !important; - } - - div.message { - padding: var(--text-lg) !important; - } - - div.message.user > div.icon-button { - top: unset; - bottom: 0; - } - - div.message.bot > div.icon-button { - top: unset; - bottom: 0; - } - - #prompt-form-row { - position: relative; - } - - #attach-button { - position: absolute; - top: 45px; - right: 20px; - - display: flex; - justify-content: center; - border: 1px solid var(--primary-500) !important; - - @media (max-width: 463px) { - width: 56px; - } - } - - #attach-button > img { - margin-right: 0; - } - - #prompt-form > label > textarea { - padding-right: 104px; - - @media (max-width: 463px) { - min-height: 94px; - padding-right: 70px; - } - } - - #visible-models > label > div.wrap > div.wrap-inner > div.secondary-wrap > div.remove-all { - display: none !important; - } - - #visible-models > label > div.wrap > div.wrap-inner > div.token { - display: none !important; - } - - #visible-models > label > div.wrap > div.wrap-inner > div.secondary-wrap::before { - content: "Select"; - padding: 0 4px; - margin-right: 2px; - } - - #langchain_agents > label > div.wrap > div.wrap-inner > div.secondary-wrap > div.remove-all { - display: none !important; - } - - #langchain_agents > label > div.wrap > div.wrap-inner > div.token { - display: none !important; - } - - #langchain_agents > label > div.wrap > div.wrap-inner > div.secondary-wrap::before { - content: "Select"; - padding: 0 4px; - margin-right: 2px; - } - """ diff --git a/spaces/atsantiago/Monocular_Depth_Filter/utils.py b/spaces/atsantiago/Monocular_Depth_Filter/utils.py deleted file mode 100644 index 61730831aad1d549da95c1ab96e731b71ba5918f..0000000000000000000000000000000000000000 --- a/spaces/atsantiago/Monocular_Depth_Filter/utils.py +++ /dev/null @@ -1,25 
+0,0 @@ -import numpy as np - - -def depth_norm(x, maxDepth): - return maxDepth / x - - -def predict(model, images, minDepth=10, maxDepth=1000, batch_size=2): - # Support multiple RGBs, one RGB image, even grayscale - if len(images.shape) < 3: images = np.stack((images, images, images), axis=2) - if len(images.shape) < 4: images = images.reshape((1, images.shape[0], images.shape[1], images.shape[2])) - # Compute predictions - predictions = model.predict(images, batch_size=batch_size) - # Put in expected range - # print("Max Depth:", np.amax(predictions), maxDepth) - # print("Min Depth:", np.amin(predictions), minDepth) - return np.clip(depth_norm(predictions, maxDepth=maxDepth), minDepth, maxDepth) / maxDepth - - -def load_images(image_files): - loaded_images = [] - for file in image_files: - x = np.clip(file.reshape(480, 640, 3) / 255, 0, 1) - loaded_images.append(x) - return np.stack(loaded_images, axis=0) diff --git a/spaces/awaawawawa/iurf7irfuyytruyyugb/ldmlib/modules/diffusionmodules/util.py b/spaces/awaawawawa/iurf7irfuyytruyyugb/ldmlib/modules/diffusionmodules/util.py deleted file mode 100644 index c1dc1d424015d2c6c92342b85a992f931e5a1dc1..0000000000000000000000000000000000000000 --- a/spaces/awaawawawa/iurf7irfuyytruyyugb/ldmlib/modules/diffusionmodules/util.py +++ /dev/null @@ -1,267 +0,0 @@ -# adopted from -# https://github.com/openai/improved-diffusion/blob/main/improved_diffusion/gaussian_diffusion.py -# and -# https://github.com/lucidrains/denoising-diffusion-pytorch/blob/7706bdfc6f527f58d33f84b7b522e61e6e3164b3/denoising_diffusion_pytorch/denoising_diffusion_pytorch.py -# and -# https://github.com/openai/guided-diffusion/blob/0ba878e517b276c45d1195eb29f6f5f72659a05b/guided_diffusion/nn.py -# -# thanks! - - -import os -import math -import torch -import torch.nn as nn -import numpy as np -from einops import repeat - -from ldmlib.util import instantiate_from_config - - -def make_beta_schedule(schedule, n_timestep, linear_start=1e-4, linear_end=2e-2, cosine_s=8e-3): - if schedule == "linear": - betas = ( - torch.linspace(linear_start ** 0.5, linear_end ** 0.5, n_timestep, dtype=torch.float64) ** 2 - ) - - elif schedule == "cosine": - timesteps = ( - torch.arange(n_timestep + 1, dtype=torch.float64) / n_timestep + cosine_s - ) - alphas = timesteps / (1 + cosine_s) * np.pi / 2 - alphas = torch.cos(alphas).pow(2) - alphas = alphas / alphas[0] - betas = 1 - alphas[1:] / alphas[:-1] - betas = np.clip(betas, a_min=0, a_max=0.999) - - elif schedule == "sqrt_linear": - betas = torch.linspace(linear_start, linear_end, n_timestep, dtype=torch.float64) - elif schedule == "sqrt": - betas = torch.linspace(linear_start, linear_end, n_timestep, dtype=torch.float64) ** 0.5 - else: - raise ValueError(f"schedule '{schedule}' unknown.") - return betas.numpy() - - -def make_ddim_timesteps(ddim_discr_method, num_ddim_timesteps, num_ddpm_timesteps, verbose=True): - if ddim_discr_method == 'uniform': - c = num_ddpm_timesteps // num_ddim_timesteps - ddim_timesteps = np.asarray(list(range(0, num_ddpm_timesteps, c))) - elif ddim_discr_method == 'quad': - ddim_timesteps = ((np.linspace(0, np.sqrt(num_ddpm_timesteps * .8), num_ddim_timesteps)) ** 2).astype(int) - else: - raise NotImplementedError(f'There is no ddim discretization method called "{ddim_discr_method}"') - - # assert ddim_timesteps.shape[0] == num_ddim_timesteps - # add one to get the final alpha values right (the ones from first scale to data during sampling) - steps_out = ddim_timesteps + 1 - if verbose: - print(f'Selected timesteps for ddim 
sampler: {steps_out}') - return steps_out - - -def make_ddim_sampling_parameters(alphacums, ddim_timesteps, eta, verbose=True): - # select alphas for computing the variance schedule - alphas = alphacums[ddim_timesteps] - alphas_prev = np.asarray([alphacums[0]] + alphacums[ddim_timesteps[:-1]].tolist()) - - # according the the formula provided in https://arxiv.org/abs/2010.02502 - sigmas = eta * np.sqrt((1 - alphas_prev) / (1 - alphas) * (1 - alphas / alphas_prev)) - if verbose: - print(f'Selected alphas for ddim sampler: a_t: {alphas}; a_(t-1): {alphas_prev}') - print(f'For the chosen value of eta, which is {eta}, ' - f'this results in the following sigma_t schedule for ddim sampler {sigmas}') - return sigmas, alphas, alphas_prev - - -def betas_for_alpha_bar(num_diffusion_timesteps, alpha_bar, max_beta=0.999): - """ - Create a beta schedule that discretizes the given alpha_t_bar function, - which defines the cumulative product of (1-beta) over time from t = [0,1]. - :param num_diffusion_timesteps: the number of betas to produce. - :param alpha_bar: a lambda that takes an argument t from 0 to 1 and - produces the cumulative product of (1-beta) up to that - part of the diffusion process. - :param max_beta: the maximum beta to use; use values lower than 1 to - prevent singularities. - """ - betas = [] - for i in range(num_diffusion_timesteps): - t1 = i / num_diffusion_timesteps - t2 = (i + 1) / num_diffusion_timesteps - betas.append(min(1 - alpha_bar(t2) / alpha_bar(t1), max_beta)) - return np.array(betas) - - -def extract_into_tensor(a, t, x_shape): - b, *_ = t.shape - out = a.gather(-1, t) - return out.reshape(b, *((1,) * (len(x_shape) - 1))) - - -def checkpoint(func, inputs, params, flag): - """ - Evaluate a function without caching intermediate activations, allowing for - reduced memory at the expense of extra compute in the backward pass. - :param func: the function to evaluate. - :param inputs: the argument sequence to pass to `func`. - :param params: a sequence of parameters `func` depends on but does not - explicitly take as arguments. - :param flag: if False, disable gradient checkpointing. - """ - if flag: - args = tuple(inputs) + tuple(params) - return CheckpointFunction.apply(func, len(inputs), *args) - else: - return func(*inputs) - - -class CheckpointFunction(torch.autograd.Function): - @staticmethod - def forward(ctx, run_function, length, *args): - ctx.run_function = run_function - ctx.input_tensors = list(args[:length]) - ctx.input_params = list(args[length:]) - - with torch.no_grad(): - output_tensors = ctx.run_function(*ctx.input_tensors) - return output_tensors - - @staticmethod - def backward(ctx, *output_grads): - ctx.input_tensors = [x.detach().requires_grad_(True) for x in ctx.input_tensors] - with torch.enable_grad(): - # Fixes a bug where the first op in run_function modifies the - # Tensor storage in place, which is not allowed for detach()'d - # Tensors. - shallow_copies = [x.view_as(x) for x in ctx.input_tensors] - output_tensors = ctx.run_function(*shallow_copies) - input_grads = torch.autograd.grad( - output_tensors, - ctx.input_tensors + ctx.input_params, - output_grads, - allow_unused=True, - ) - del ctx.input_tensors - del ctx.input_params - del output_tensors - return (None, None) + input_grads - - -def timestep_embedding(timesteps, dim, max_period=10000, repeat_only=False): - """ - Create sinusoidal timestep embeddings. - :param timesteps: a 1-D Tensor of N indices, one per batch element. - These may be fractional. - :param dim: the dimension of the output. 
- :param max_period: controls the minimum frequency of the embeddings. - :return: an [N x dim] Tensor of positional embeddings. - """ - if not repeat_only: - half = dim // 2 - freqs = torch.exp( - -math.log(max_period) * torch.arange(start=0, end=half, dtype=torch.float32) / half - ).to(device=timesteps.device) - args = timesteps[:, None].float() * freqs[None] - embedding = torch.cat([torch.cos(args), torch.sin(args)], dim=-1) - if dim % 2: - embedding = torch.cat([embedding, torch.zeros_like(embedding[:, :1])], dim=-1) - else: - embedding = repeat(timesteps, 'b -> b d', d=dim) - return embedding - - -def zero_module(module): - """ - Zero out the parameters of a module and return it. - """ - for p in module.parameters(): - p.detach().zero_() - return module - - -def scale_module(module, scale): - """ - Scale the parameters of a module and return it. - """ - for p in module.parameters(): - p.detach().mul_(scale) - return module - - -def mean_flat(tensor): - """ - Take the mean over all non-batch dimensions. - """ - return tensor.mean(dim=list(range(1, len(tensor.shape)))) - - -def normalization(channels): - """ - Make a standard normalization layer. - :param channels: number of input channels. - :return: an nn.Module for normalization. - """ - return GroupNorm32(32, channels) - - -# PyTorch 1.7 has SiLU, but we support PyTorch 1.5. -class SiLU(nn.Module): - def forward(self, x): - return x * torch.sigmoid(x) - - -class GroupNorm32(nn.GroupNorm): - def forward(self, x): - return super().forward(x.float()).type(x.dtype) - -def conv_nd(dims, *args, **kwargs): - """ - Create a 1D, 2D, or 3D convolution module. - """ - if dims == 1: - return nn.Conv1d(*args, **kwargs) - elif dims == 2: - return nn.Conv2d(*args, **kwargs) - elif dims == 3: - return nn.Conv3d(*args, **kwargs) - raise ValueError(f"unsupported dimensions: {dims}") - - -def linear(*args, **kwargs): - """ - Create a linear module. - """ - return nn.Linear(*args, **kwargs) - - -def avg_pool_nd(dims, *args, **kwargs): - """ - Create a 1D, 2D, or 3D average pooling module. 
- """ - if dims == 1: - return nn.AvgPool1d(*args, **kwargs) - elif dims == 2: - return nn.AvgPool2d(*args, **kwargs) - elif dims == 3: - return nn.AvgPool3d(*args, **kwargs) - raise ValueError(f"unsupported dimensions: {dims}") - - -class HybridConditioner(nn.Module): - - def __init__(self, c_concat_config, c_crossattn_config): - super().__init__() - self.concat_conditioner = instantiate_from_config(c_concat_config) - self.crossattn_conditioner = instantiate_from_config(c_crossattn_config) - - def forward(self, c_concat, c_crossattn): - c_concat = self.concat_conditioner(c_concat) - c_crossattn = self.crossattn_conditioner(c_crossattn) - return {'c_concat': [c_concat], 'c_crossattn': [c_crossattn]} - - -def noise_like(shape, device, repeat=False): - repeat_noise = lambda: torch.randn((1, *shape[1:]), device=device).repeat(shape[0], *((1,) * (len(shape) - 1))) - noise = lambda: torch.randn(shape, device=device) - return repeat_noise() if repeat else noise() diff --git a/spaces/awaawawawa/iurf7irfuyytruyyugb/ui/media/drawingboard.min.js b/spaces/awaawawawa/iurf7irfuyytruyyugb/ui/media/drawingboard.min.js deleted file mode 100644 index 289d40ae9b017d4a7979b02a0a03609634bd9e84..0000000000000000000000000000000000000000 --- a/spaces/awaawawawa/iurf7irfuyytruyyugb/ui/media/drawingboard.min.js +++ /dev/null @@ -1,4 +0,0 @@ -/* drawingboard.js v0.4.6 - https://github.com/Leimi/drawingboard.js -* Copyright (c) 2015 Emmanuel Pelletier -* Licensed MIT */ -!function(){"use strict";function a(a,b){for(;a.length>b;)a.shift()}var b=function(a){var b=a?a:{},c={provider:function(){throw new Error("No provider!")},maxLength:30,onUpdate:function(){}};this.provider="undefined"!=typeof b.provider?b.provider:c.provider,this.maxLength="undefined"!=typeof b.maxLength?b.maxLength:c.maxLength,this.onUpdate="undefined"!=typeof b.onUpdate?b.onUpdate:c.onUpdate,this.initialItem=null,this.clear()};b.prototype.initialize=function(a){this.stack[0]=a,this.initialItem=a},b.prototype.clear=function(){this.stack=[this.initialItem],this.position=0,this.onUpdate()},b.prototype.save=function(){this.provider(function(b){a(this.stack,this.maxLength),this.position=Math.min(this.position,this.stack.length-1),this.stack=this.stack.slice(0,this.position+1),this.stack.push(b),this.position++,this.onUpdate()}.bind(this))},b.prototype.undo=function(a){if(this.canUndo()){var b=this.stack[--this.position];this.onUpdate(),a&&a(b)}},b.prototype.redo=function(a){if(this.canRedo()){var b=this.stack[++this.position];this.onUpdate(),a&&a(b)}},b.prototype.canUndo=function(){return this.position>0},b.prototype.canRedo=function(){return this.positionh;h++){if(g=g[e[h]],g===a)throw"tim: '"+e[h]+"' not found in "+b;if(h===f-1)return g}})}}(),DrawingBoard.Utils.MicroEvent=function(){},DrawingBoard.Utils.MicroEvent.prototype={bind:function(a,b){this._events=this._events||{},this._events[a]=this._events[a]||[],this._events[a].push(b)},unbind:function(a,b){this._events=this._events||{},a in this._events!=!1&&this._events[a].splice(this._events[a].indexOf(b),1)},trigger:function(a){if(this._events=this._events||{},a in this._events!=!1)for(var b=0;b=0;g--)f+=parseInt(a.css(e[g]).replace("px",""),10);return f},DrawingBoard.Utils.boxBorderWidth=function(a,b,c){return DrawingBoard.Utils._boxBorderSize(a,b,c,"width")},DrawingBoard.Utils.boxBorderHeight=function(a,b,c){return DrawingBoard.Utils._boxBorderSize(a,b,c,"height")},DrawingBoard.Utils.isColor=function(a){return 
a&&a.length?/(^#[0-9A-F]{6}$)|(^#[0-9A-F]{3}$)/i.test(a)||-1!==$.inArray(a.substring(0,3),["rgb","hsl"]):!1},DrawingBoard.Utils.RGBToInt=function(a,b,c){var d=0;return d|=(255&a)<<16,d|=(255&b)<<8,d|=255&c},DrawingBoard.Utils.pixelAt=function(a,b,c){var d=4*(c*a.width+b),e=DrawingBoard.Utils.RGBToInt(a.data[d],a.data[d+1],a.data[d+2]);return[d,b,c,e]},DrawingBoard.Utils.compareColors=function(a,b,c){if(0===c)return a===b;var d=a>>16&255,e=b>>16&255,f=a>>8&255,g=b>>8&255,h=255&a,i=255&b;return Math.abs(d-e)<=c&&Math.abs(f-g)<=c&&Math.abs(h-i)<=c},function(){for(var a=["ms","moz","webkit","o"],b=0;b-1?c+='
    ':c='
    '+c,this.$el.addClass("drawing-board").append(c),this.dom={$canvasWrapper:this.$el.find(".drawing-board-canvas-wrapper"),$canvas:this.$el.find(".drawing-board-canvas"),$cursor:this.$el.find(".drawing-board-cursor"),$controls:this.$el.find(".drawing-board-controls")},$.each(["left","right","center"],$.proxy(function(a,b){return this.opts.controlsPosition.indexOf(b)>-1?(this.dom.$controls.attr("data-align",b),!1):void 0},this)),this.canvas=this.dom.$canvas.get(0),this.ctx=this.canvas&&this.canvas.getContext&&this.canvas.getContext("2d")?this.canvas.getContext("2d"):null,this.color=this.opts.color,this.ctx?(this.storage=this._getStorage(),this.initHistory(),this.reset({webStorage:!1,history:!1,background:!1}),this.initControls(),this.resize(),this.reset({webStorage:!1,history:!1,background:!0}),this.restoreWebStorage(),this.initDropEvents(),void this.initDrawEvents()):(this.opts.errorMessage&&this.$el.html(this.opts.errorMessage),!1)},DrawingBoard.Board.defaultOpts={controls:["Color","DrawingMode","Size","Navigation"],controlsPosition:"top left",color:"#000000",size:1,background:"#fff",eraserColor:"background",fillTolerance:100,fillHack:!0,webStorage:"session",droppable:!1,enlargeYourContainer:!1,errorMessage:'

    It seems you use an obsolete browser. Update it to start drawing.

    ',stretchImg:!1},DrawingBoard.Board.prototype={mergeOptions:function(a){return a=$.extend({},DrawingBoard.Board.defaultOpts,a),a.background||"background"!==a.eraserColor||(a.eraserColor="transparent"),a},reset:function(a){a=$.extend({color:this.opts.color,size:this.opts.size,webStorage:!0,history:!0,background:!1},a),this.setMode("pencil"),a.background&&this.resetBackground(this.opts.background,$.proxy(function(){a.history&&this.saveHistory()},this)),a.color&&this.setColor(a.color),a.size&&(this.ctx.lineWidth=a.size),this.ctx.lineCap="round",this.ctx.lineJoin="round",a.webStorage&&this.saveWebStorage(),a.history&&!a.background&&this.saveHistory(),this.blankCanvas=this.getImg(),this.ev.trigger("board:reset",a)},resetBackground:function(a,b){a=a||this.opts.background;var c=DrawingBoard.Utils.isColor(a),d=this.getMode();this.setMode("pencil"),this.ctx.clearRect(0,0,this.ctx.canvas.width,this.ctx.canvas.height),c?(this.ctx.fillStyle=a,this.ctx.fillRect(0,0,this.ctx.canvas.width,this.ctx.canvas.height),this.history.initialize(this.getImg()),b&&b()):a&&this.setImg(a,{callback:$.proxy(function(){this.history.initialize(this.getImg()),b&&b()},this)}),this.setMode(d)},resize:function(){this.dom.$controls.toggleClass("drawing-board-controls-hidden",!this.controls||!this.controls.length);var a,b,c=[this.$el.width(),DrawingBoard.Utils.boxBorderWidth(this.$el),DrawingBoard.Utils.boxBorderWidth(this.dom.$canvasWrapper,!0,!0)],d=[this.$el.height(),DrawingBoard.Utils.boxBorderHeight(this.$el),this.dom.$controls.height(),DrawingBoard.Utils.boxBorderHeight(this.dom.$controls,!1,!0),DrawingBoard.Utils.boxBorderHeight(this.dom.$canvasWrapper,!0,!0)],e=function(a,b){b=b||1;for(var c=a[0],d=1;d0&&q.push(DrawingBoard.Utils.pixelAt(c,p[e]-1,p[f])),p[e]0&&q.push(DrawingBoard.Utils.pixelAt(c,p[e],p[f]-1)),p[f]10&&this.isMouseHovering){this.dom.$cursor.css({width:this.ctx.lineWidth+"px",height:this.ctx.lineWidth+"px"});var a=DrawingBoard.Utils.tpl("translateX({{x}}px) translateY({{y}}px)",{x:this.coords.current.x-this.ctx.lineWidth/2,y:this.coords.current.y-this.ctx.lineWidth/2});this.dom.$cursor.css({transform:a,"-webkit-transform":a,"-ms-transform":a}),this.dom.$cursor.removeClass("drawing-board-utils-hidden")}else this.dom.$cursor.addClass("drawing-board-utils-hidden");if(this.isDrawing){var 
b=this._getMidInputCoords(this.coords.current);this.ctx.beginPath(),this.ctx.moveTo(b.x,b.y),this.ctx.quadraticCurveTo(this.coords.old.x,this.coords.old.y,this.coords.oldMid.x,this.coords.oldMid.y),this.ctx.stroke(),this.coords.old=this.coords.current,this.coords.oldMid=b}window.requestAnimationFrame&&requestAnimationFrame($.proxy(function(){this.draw()},this))},_onInputStart:function(a,b){this.coords.current=this.coords.old=b,this.coords.oldMid=this._getMidInputCoords(b),this.isDrawing=!0,window.requestAnimationFrame||this.draw(),this.ev.trigger("board:startDrawing",{e:a,coords:b}),a.stopPropagation(),a.preventDefault()},_onInputMove:function(a,b){this.coords.current=b,this.ev.trigger("board:drawing",{e:a,coords:b}),window.requestAnimationFrame||this.draw(),a.stopPropagation(),a.preventDefault()},_onInputStop:function(a,b){!this.isDrawing||a.touches&&0!==a.touches.length||(this.isDrawing=!1,this.saveWebStorage(),this.saveHistory(),this.ev.trigger("board:stopDrawing",{e:a,coords:b}),this.ev.trigger("board:userAction"),a.stopPropagation(),a.preventDefault())},_onMouseOver:function(a,b){this.isMouseHovering=!0,this.coords.old=this._getInputCoords(a),this.coords.oldMid=this._getMidInputCoords(this.coords.old),this.ev.trigger("board:mouseOver",{e:a,coords:b})},_onMouseOut:function(a,b){this.isMouseHovering=!1,this.ev.trigger("board:mouseOut",{e:a,coords:b})},_getInputCoords:function(a){a=a.originalEvent?a.originalEvent:a;var b,c,d=this.canvas.getBoundingClientRect(),e=this.dom.$canvas.width(),f=this.dom.$canvas.height();return a.touches&&1==a.touches.length?(b=a.touches[0].pageX,c=a.touches[0].pageY):(b=a.pageX,c=a.pageY),b-=this.dom.$canvas.offset().left,c-=this.dom.$canvas.offset().top,b*=e/d.width,c*=f/d.height,{x:b,y:c}},_getMidInputCoords:function(a){return{x:this.coords.old.x+a.x>>1,y:this.coords.old.y+a.y>>1}}},DrawingBoard.Control=function(a,b){return this.board=a,this.opts=$.extend({},this.defaults,b),this.$el=$(document.createElement("div")).addClass("drawing-board-control"),this.name&&this.$el.addClass("drawing-board-control-"+this.name),this.board.ev.bind("board:reset",$.proxy(this.onBoardReset,this)),this.initialize.apply(this,arguments),this},DrawingBoard.Control.prototype={name:"",defaults:{},initialize:function(){},addToBoard:function(){this.board.addControl(this)},onBoardReset:function(){}},DrawingBoard.Control.extend=function(a,b){var c,d=this;c=a&&a.hasOwnProperty("constructor")?a.constructor:function(){return d.apply(this,arguments)},$.extend(c,d,b);var e=function(){this.constructor=c};return e.prototype=d.prototype,c.prototype=new e,a&&$.extend(c.prototype,a),c.__super__=d.prototype,c},DrawingBoard.Control.Color=DrawingBoard.Control.extend({name:"colors",initialize:function(){this.initTemplate();var a=this;this.$el.on("click",".drawing-board-control-colors-picker",function(b){var c=$(this).attr("data-color");a.board.setColor(c),a.$el.find(".drawing-board-control-colors-current").css("background-color",c).attr("data-color",c),a.board.ev.trigger("color:changed",c),a.$el.find(".drawing-board-control-colors-rainbows").addClass("drawing-board-utils-hidden"),b.preventDefault()}),this.$el.on("click",".drawing-board-control-colors-current",function(b){a.$el.find(".drawing-board-control-colors-rainbows").toggleClass("drawing-board-utils-hidden"),b.preventDefault()}),$("body").on("click",function(b){var 
c=$(b.target),d=c.hasClass("drawing-board-control-colors-current")?c:c.closest(".drawing-board-control-colors-current"),e=a.$el.find(".drawing-board-control-colors-current"),f=a.$el.find(".drawing-board-control-colors-rainbows");d.length&&d.get(0)===e.get(0)||f.hasClass("drawing-board-utils-hidden")||f.addClass("drawing-board-utils-hidden")})},initTemplate:function(){var a='
    {{rainbows}}
    ',b='
    ',c="";$.each([.75,.5,.25],$.proxy(function(a,d){var e=0,f=null;for(c+='
    ',.25==d&&(f=this._rgba(0,0,0,1)),.5==d&&(f=this._rgba(150,150,150,1)),.75==d&&(f=this._rgba(255,255,255,1)),c+=DrawingBoard.Utils.tpl(b,{color:f.toString()});330>=e;)c+=DrawingBoard.Utils.tpl(b,{color:this._hsl2Rgba(this._hsl(e-60,1,d)).toString()}),e+=30;c+="
    "},this)),this.$el.append($(DrawingBoard.Utils.tpl(a,{color:this.board.color,rainbows:c}))),this.$el.find(".drawing-board-control-colors-rainbows").addClass("drawing-board-utils-hidden")},onBoardReset:function(){this.board.setColor(this.$el.find(".drawing-board-control-colors-current").attr("data-color"))},_rgba:function(a,b,c,d){return{r:a,g:b,b:c,a:d,toString:function(){return"rgba("+a+", "+b+", "+c+", "+d+")"}}},_hsl:function(a,b,c){return{h:a,s:b,l:c,toString:function(){return"hsl("+a+", "+100*b+"%, "+100*c+"%)"}}},_hex2Rgba:function(a){var b=parseInt(a.substring(1),16);return this._rgba(b>>16,b>>8&255,255&b,1)},_hsl2Rgba:function(a){function b(a,b,c){return 0>c&&(c+=1),c>1&&(c-=1),1/6>c?a+6*(b-a)*c:.5>c?b:2/3>c?a+(b-a)*(2/3-c)*6:a}var c,d,e,f=a.h/360,g=a.s,h=a.l;if(0===g)c=d=e=h;else{var i=.5>h?h*(1+g):h+g-h*g,j=2*h-i;c=Math.floor(255*b(j,i,f+1/3)),d=Math.floor(255*b(j,i,f)),e=Math.floor(255*b(j,i,f-1/3))}return this._rgba(c,d,e,1)}}),DrawingBoard.Control.DrawingMode=DrawingBoard.Control.extend({name:"drawingmode",defaults:{pencil:!0,eraser:!0,filler:!0},initialize:function(){this.prevMode=this.board.getMode(),$.each(["pencil","eraser","filler"],$.proxy(function(a,b){this.opts[b]&&this.$el.append('')},this)),this.$el.on("click","button[data-mode]",$.proxy(function(a){var b=$(a.currentTarget).attr("data-mode"),c=this.board.getMode();c!==b&&(this.prevMode=c);var d=c===b?this.prevMode:b;this.board.setMode(d),a.preventDefault()},this)),this.board.ev.bind("board:mode",$.proxy(function(a){this.toggleButtons(a)},this)),this.toggleButtons(this.board.getMode())},toggleButtons:function(a){this.$el.find("button[data-mode]").each(function(b,c){var d=$(c);d.toggleClass("active",a===d.attr("data-mode"))})}}),DrawingBoard.Control.Navigation=DrawingBoard.Control.extend({name:"navigation",defaults:{back:!0,forward:!0,reset:!0},initialize:function(){var a="";if(this.opts.back&&(a+=''),this.opts.forward&&(a+=''),this.opts.reset&&(a+=''),this.$el.append(a),this.opts.back){var b=this.$el.find(".drawing-board-control-navigation-back");this.board.ev.bind("historyNavigation",$.proxy(this.updateBack,this,b)),this.$el.on("click",".drawing-board-control-navigation-back",$.proxy(function(a){this.board.goBackInHistory(),a.preventDefault()},this)),this.updateBack(b)}if(this.opts.forward){var c=this.$el.find(".drawing-board-control-navigation-forward");this.board.ev.bind("historyNavigation",$.proxy(this.updateForward,this,c)),this.$el.on("click",".drawing-board-control-navigation-forward",$.proxy(function(a){this.board.goForthInHistory(),a.preventDefault()},this)),this.updateForward(c)}this.opts.reset&&this.$el.on("click",".drawing-board-control-navigation-reset",$.proxy(function(a){this.board.reset({background:!0}),a.preventDefault()},this))},updateBack:function(a){this.board.history.canUndo()?a.removeAttr("disabled"):a.attr("disabled","disabled")},updateForward:function(a){this.board.history.canRedo()?a.removeAttr("disabled"):a.attr("disabled","disabled")}}),DrawingBoard.Control.Size=DrawingBoard.Control.extend({name:"size",defaults:{type:"auto",dropdownValues:[1,3,6,10,20,30,40,50],min:1,max:50},types:["dropdown","range"],initialize:function(){"auto"==this.opts.type&&(this.opts.type=this._iHasRangeInput()?"range":"dropdown");var a=$.inArray(this.opts.type,this.types)>-1?this["_"+this.opts.type+"Template"]():!1;if(!a)return!1;this.val=this.board.opts.size,this.$el.append($(a)),this.$el.attr("data-drawing-board-type",this.opts.type),this.updateView();var 
b=this;"range"==this.opts.type&&this.$el.on("change",".drawing-board-control-size-range-input",function(a){b.val=$(this).val(),b.updateView(),b.board.ev.trigger("size:changed",b.val),a.preventDefault()}),"dropdown"==this.opts.type&&(this.$el.on("click",".drawing-board-control-size-dropdown-current",$.proxy(function(){this.$el.find(".drawing-board-control-size-dropdown").toggleClass("drawing-board-utils-hidden")},this)),this.$el.on("click","[data-size]",function(a){b.val=parseInt($(this).attr("data-size"),0),b.updateView(),b.board.ev.trigger("size:changed",b.val),a.preventDefault()}))},_rangeTemplate:function(){var a='
    ';return DrawingBoard.Utils.tpl(a,{min:this.opts.min,max:this.opts.max,size:this.board.opts.size})},_dropdownTemplate:function(){var a='
      ';return $.each(this.opts.dropdownValues,function(b,c){a+=DrawingBoard.Utils.tpl('
    • ',{size:c})}),a+="
    "},onBoardReset:function(){this.updateView()},updateView:function(){var a=this.val;if(this.board.ctx.lineWidth=a,this.$el.find(".drawing-board-control-size-range-current, .drawing-board-control-size-dropdown-current span").css({width:a+"px",height:a+"px",borderRadius:a+"px",marginLeft:-1*a/2+"px",marginTop:-1*a/2+"px"}),this.$el.find(".drawing-board-control-inner").attr("title",a),"dropdown"==this.opts.type){var b=null;$.each(this.opts.dropdownValues,function(c,d){(null===b||Math.abs(d-a)'),this.$el.on("click",".drawing-board-control-download-button",$.proxy(function(a){this.board.downloadImg(),a.preventDefault()},this))}}); \ No newline at end of file diff --git a/spaces/awacke1/CardEvolution-PlayingBoard/app.py b/spaces/awacke1/CardEvolution-PlayingBoard/app.py deleted file mode 100644 index 6f43650af6bfaf6afeed6393eaa2b22d58d76701..0000000000000000000000000000000000000000 --- a/spaces/awacke1/CardEvolution-PlayingBoard/app.py +++ /dev/null @@ -1,75 +0,0 @@ -import streamlit as st -import os -import csv -import time - -uploaded_images = {'characters': {}, 'terrain': {}} - -def get_image_path(img, name, image_type): - file_path = f"data/uploadedImages/{image_type}/{name}/{img.name}" - os.makedirs(os.path.dirname(file_path), exist_ok=True) - with open(file_path, "wb") as img_file: - img_file.write(img.getbuffer()) - return file_path - -def update_csv_file(uploaded_file, name, image_type): - csv_file_path = "Resources.csv" - with open(csv_file_path, mode='a', newline='') as csv_file: - csv_writer = csv.writer(csv_file) - csv_writer.writerow([name, uploaded_file.name, image_type]) - -def get_uploaded_files_info(): - csv_file_path = "Resources.csv" - with open(csv_file_path, mode='r') as csv_file: - csv_reader = csv.reader(csv_file) - files_info = [] - for row in csv_reader: - files_info.append(row) - return files_info - -def display_images_from_csv(): - files_info = get_uploaded_files_info() - for row in files_info: - if row[2] == 'characters': - img_path = f"data/uploadedImages/{row[2]}/{row[0]}/{row[1]}" - st.sidebar.image(img_path, width=100, caption=row[0]) - else: - img_path = f"data/uploadedImages/{row[2]}/{row[0]}/{row[1]}" - st.image(img_path, width=100, caption=row[0]) - -image_type = st.selectbox('Choose image type:', options=['characters', 'terrain']) -name = st.text_input('Enter a name for the image:') -uploaded_files = st.file_uploader('Upload image(s)', type=['png', 'jpg'], accept_multiple_files=True) - -for uploaded_file in uploaded_files: - if uploaded_file is not None: - # Get actual image file - bytes_data = get_image_path(uploaded_file, name, image_type) - uploaded_images[image_type].setdefault(name, []) - uploaded_images[image_type][name].append(bytes_data) - st.image(bytes_data, use_column_width=True) - update_csv_file(uploaded_file, name, image_type) - -if image_type == 'characters': - if uploaded_images['characters']: - st.sidebar.write('**Characters**') - for name, files in uploaded_images['characters'].items(): - for file in files: - st.sidebar.image(file, width=100, caption=name) -else: - if uploaded_images['terrain']: - st.write('**Terrain**') - row = [] - for name, files in uploaded_images['terrain'].items(): - for file in files: - row.append(file) - if len(row) == 3: - st.image(row, width=100 * 3) - row = [] - if row: - st.image(row, width=100 * len(row)) # Last row, if not complete - -while True: - time.sleep(20) - st.empty() - display_images_from_csv() diff --git "a/spaces/awacke1/CardWriterPro/1_\360\237\223\235_form.py" 
"b/spaces/awacke1/CardWriterPro/1_\360\237\223\235_form.py" deleted file mode 100644 index fecf4769bbc4d860cfafa3406b2c7202c358ecc1..0000000000000000000000000000000000000000 --- "a/spaces/awacke1/CardWriterPro/1_\360\237\223\235_form.py" +++ /dev/null @@ -1,302 +0,0 @@ -from yaml import load -from persist import persist, load_widget_state -import streamlit as st -from io import StringIO -import tempfile -from pathlib import Path -import requests -from huggingface_hub import hf_hub_download, upload_file -import pandas as pd -from huggingface_hub import create_repo -import os -from middleMan import parse_into_jinja_markdown as pj - - - -@st.cache -def get_cached_data(): - languages_df = pd.read_html("https://hf.co/languages")[0] - languages_map = pd.Series(languages_df["Language"].values, index=languages_df["ISO code"]).to_dict() - - license_df = pd.read_html("https://huggingface.co/docs/hub/repositories-licenses")[0] - license_map = pd.Series( - license_df["License identifier (to use in model card)"].values, index=license_df.Fullname - ).to_dict() - - available_metrics = [x['id'] for x in requests.get('https://huggingface.co/api/metrics').json()] - - r = requests.get('https://huggingface.co/api/models-tags-by-type') - tags_data = r.json() - libraries = [x['id'] for x in tags_data['library']] - tasks = [x['id'] for x in tags_data['pipeline_tag']] - return languages_map, license_map, available_metrics, libraries, tasks - - -def card_upload(card_info,repo_id,token): - #commit_message=None, - repo_type = "space" - commit_description=None, - revision=None, - create_pr=None - with tempfile.TemporaryDirectory() as tmpdir: - tmp_path = Path(tmpdir) / "README.md" - tmp_path.write_text(str(card_info)) - url = upload_file( - path_or_fileobj=str(tmp_path), - path_in_repo="README.md", - repo_id=repo_id, - token=token, - repo_type=repo_type, - identical_ok=True, - revision=revision, - ) - return url - -def validate(self, repo_type="model"): - """Validates card against Hugging Face Hub's model card validation logic. - Using this function requires access to the internet, so it is only called - internally by `modelcards.ModelCard.push_to_hub`. - Args: - repo_type (`str`, *optional*): - The type of Hugging Face repo to push to. Defaults to None, which will use - use "model". Other options are "dataset" and "space". - """ - if repo_type is None: - repo_type = "model" - - # TODO - compare against repo types constant in huggingface_hub if we move this object there. - if repo_type not in ["model", "space", "dataset"]: - raise RuntimeError( - "Provided repo_type '{repo_type}' should be one of ['model', 'space'," - " 'dataset']." - ) - - body = { - "repoType": repo_type, - "content": str(self), - } - headers = {"Accept": "text/plain"} - - try: - r = requests.post( - "https://huggingface.co/api/validate-yaml", body, headers=headers - ) - r.raise_for_status() - except requests.exceptions.HTTPError as exc: - if r.status_code == 400: - raise RuntimeError(r.text) - else: - raise exc - - -## Save uploaded [markdown] file to directory to be used by jinja parser function -def save_uploadedfile(uploadedfile): - with open(os.path.join("temp_uploaded_filed_Dir",uploadedfile.name),"wb") as f: - f.write(uploadedfile.getbuffer()) - st.success("Saved File:{} to temp_uploaded_filed_Dir".format(uploadedfile.name)) - return uploadedfile.name - - -def main_page(): - - - if "model_name" not in st.session_state: - # Initialize session state. 
- st.session_state.update({ - "input_model_name": "", - "languages": [], - "license": "", - "library_name": "", - "datasets": "", - "metrics": [], - "task": "", - "tags": "", - "model_description": "Some cool model...", - "the_authors":"", - "Shared_by":"", - "Model_details_text": "", - "Model_developers": "", - "blog_url":"", - "Parent_Model_url":"", - "Parent_Model_name":"", - - "Model_how_to": "", - - "Model_uses": "", - "Direct_Use": "", - "Downstream_Use":"", - "Out-of-Scope_Use":"", - - "Model_Limits_n_Risks": "", - "Recommendations":"", - - "training_Data": "", - "model_preprocessing":"", - "Speeds_Sizes_Times":"", - - - - "Model_Eval": "", - "Testing_Data":"", - "Factors":"", - "Metrics":"", - "Model_Results":"", - - "Model_c02_emitted": "", - "Model_hardware":"", - "hours_used":"", - "Model_cloud_provider":"", - "Model_cloud_region":"", - - "Model_cite": "", - "paper_url": "", - "github_url": "", - "bibtex_citation": "", - "APA_citation":"", - - "Model_examin":"", - "Model_card_contact":"", - "Model_card_authors":"", - "Glossary":"", - "More_info":"", - - "Model_specs":"", - "compute_infrastructure":"", - "technical_specs_software":"", - - "check_box": bool, - "markdown_upload":" ", - "legal_view":bool, - "researcher_view":bool, - "beginner_technical_view":bool, - "markdown_state":"", - }) - ## getting cache for each warnings - languages_map, license_map, available_metrics, libraries, tasks = get_cached_data() - - ## form UI setting - st.header("Model Card Form") - - warning_placeholder = st.empty() - - st.text_input("Model Name", key=persist("model_name")) - st.text_area("Model Description", help="The model description provides basic details about the model. This includes the architecture, version, if it was introduced in a paper, if an original implementation is available, the author, and general information about the model. Any copyright should be attributed here. General information about training procedures, parameters, and important disclaimers can also be mentioned in this section.", key=persist('model_description')) - st.multiselect("Language(s)", list(languages_map), format_func=lambda x: languages_map[x], help="The language(s) associated with this model. If this is not a text-based model, you should specify whatever language that is used in the dataset. For instance, if the dataset's labels are in english, you should select English here.", key=persist("languages")) - st.selectbox("License", [""] + list(license_map.values()), help="The license associated with this model.", key=persist("license")) - st.selectbox("Library Name", [""] + libraries, help="The name of the library this model came from (Ex. pytorch, timm, spacy, keras, etc.). This is usually automatically detected in model repos, so it is not required.", key=persist('library_name')) - st.text_input("Parent Model (URL)", help="If this model has another model as its base, please provide the URL link to the parent model", key=persist("Parent_Model_name")) - st.text_input("Datasets (comma separated)", help="The dataset(s) used to train this model. Use dataset id from https://hf.co/datasets.", key=persist("datasets")) - st.multiselect("Metrics", available_metrics, help="Metrics used in the training/evaluation of this model. Use metric id from https://hf.co/metrics.", key=persist("metrics")) - st.selectbox("Task", [""] + tasks, help="What task does this model aim to solve?", key=persist('task')) - st.text_input("Tags (comma separated)", help="Additional tags to add which will be filterable on https://hf.co/models. (Ex. 
image-classification, vision, resnet)", key=persist("tags")) - st.text_input("Author(s) (comma separated)", help="The authors who developed this model. If you trained this model, the author is you.", key=persist("the_authors")) - st.text_input("Related Research Paper", help="Research paper related to this model.", key=persist("paper_url")) - st.text_input("Related GitHub Repository", help="Link to a GitHub repository used in the development of this model", key=persist("github_url")) - st.text_area("Bibtex Citation", help="Bibtex citations for related work", key=persist("bibtex_citations")) - st.text_input("Carbon Emitted:", help="You can estimate carbon emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700)", key=persist("Model_c02_emitted")) - - - - # warnings setting - languages=st.session_state.languages or None - license=st.session_state.license or None - task = st.session_state.task or None - markdown_upload = st.session_state.markdown_upload - #uploaded_model_card = st.session_state.uploaded_model - # Handle any warnings... - do_warn = False - warning_msg = "Warning: The following fields are required but have not been filled in: " - if not languages: - warning_msg += "\n- Languages" - do_warn = True - if not license: - warning_msg += "\n- License" - do_warn = True - if not task or not markdown_upload: - warning_msg += "\n- Please choose a task or upload a model card" - do_warn = True - if do_warn: - warning_placeholder.error(warning_msg) - - with st.sidebar: - - ###################################################### - ### Uploading a model card from local drive - ###################################################### - st.markdown("## Upload Model Card") - - st.markdown("#### Model Card must be in markdown (.md) format.") - - # Read a single file - uploaded_file = st.file_uploader("Choose a file", type = ['md'], help = 'Please choose a markdown (.md) file type to upload') - if uploaded_file is not None: - - file_details = {"FileName":uploaded_file.name,"FileType":uploaded_file.type} - name_of_uploaded_file = save_uploadedfile(uploaded_file) - - st.session_state.markdown_upload = name_of_uploaded_file ## uploaded model card - - elif st.session_state.task =='fill-mask' or 'translation' or 'token-classification' or ' sentence-similarity' or 'summarization' or 'question-answering' or 'text2text-generation' or 'text-classification' or 'text-generation' or 'conversational': - #st.session_state.markdown_upload = open( - # "language_model_template1.md", "r+" - #).read() - st.session_state.markdown_upload = "language_model_template1.md" ## language model template - - elif st.session_state.task: - - st.session_state.markdown_upload = "current_card.md" ## default non language model template - - ######################################### - ### Uploading model card to HUB - ######################################### - out_markdown =open( st.session_state.markdown_upload, "r+" - ).read() - print_out_final = f"{out_markdown}" - st.markdown("## Export Loaded Model Card to Hub") - with st.form("Upload to 🤗 Hub"): - st.markdown("Use a token with write access from [here](https://hf.co/settings/tokens)") - token = st.text_input("Token", type='password') - repo_id = st.text_input("Repo ID") - submit = st.form_submit_button('Upload to 🤗 Hub', help='The current model card will be uploaded to a branch in the supplied repo ') - - if submit: - if len(repo_id.split('/')) == 2: - repo_url = 
create_repo(repo_id, exist_ok=True, token=token) - new_url = card_upload(pj(),repo_id, token=token) - st.success(f"Pushed the card to the repo [here]({new_url})!") # note: was repo_url - else: - st.error("Repo ID invalid. It should be username/repo-name. For example: nateraw/food") - - - ######################################### - ### Download model card - ######################################### - - - st.markdown("## Download current Model Card") - - if st.session_state.model_name is None or st.session_state.model_name== ' ': - downloaded_file_name = 'current_model_card.md' - else: - downloaded_file_name = st.session_state.model_name+'_'+'model_card.md' - download_status = st.download_button(label = 'Download Model Card', data = pj(), file_name = downloaded_file_name, help = "The current model card will be downloaded as a markdown (.md) file") - if download_status == True: - st.success("Your current model card, successfully downloaded 🤗") - - -def page_switcher(page): - st.session_state.runpage = page - -def main(): - - st.header("About Model Cards") - st.markdown(Path('about.md').read_text(), unsafe_allow_html=True) - btn = st.button('Create a Model Card 📝',on_click=page_switcher,args=(main_page,)) - if btn: - st.experimental_rerun() # rerun is needed to clear the page - -if __name__ == '__main__': - load_widget_state() - if 'runpage' not in st.session_state : - st.session_state.runpage = main - st.session_state.runpage() diff --git a/spaces/awacke1/SpaceBuggyPlaycanvasHTML5/index.html b/spaces/awacke1/SpaceBuggyPlaycanvasHTML5/index.html deleted file mode 100644 index 91c19c96dfc3a6d811de8b121e02f01435e9756e..0000000000000000000000000000000000000000 --- a/spaces/awacke1/SpaceBuggyPlaycanvasHTML5/index.html +++ /dev/null @@ -1,11 +0,0 @@ - - -

    Space Buggy Sim

    -

    User input: WASD

    -

    This WebGL demo shows PlayCanvas running in an HTML5 playable surface that works anywhere your browser does. Check it out on its own here: 🤗 Inference API.

    -

    PlayCanvas project is here

    -
    - -
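    A page like this usually boils down to a single iframe wrapped around the published PlayCanvas build; here is a minimal sketch, assuming a published playcanv.as build URL (the project ID below is a placeholder, not this project's actual link):
    <!-- Minimal embed sketch: replace PLACEHOLDER-ID with the real published build URL. -->
    <iframe src="https://playcanv.as/p/PLACEHOLDER-ID/" width="960" height="540"
            frameborder="0" allowfullscreen
            title="Space Buggy Sim (PlayCanvas WebGL build)"></iframe>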
    - - \ No newline at end of file diff --git a/spaces/awacke1/StreamlitCalendar/app.py b/spaces/awacke1/StreamlitCalendar/app.py deleted file mode 100644 index 8e3f8ac2acc14a052f0fdb455e94acb5c886a905..0000000000000000000000000000000000000000 --- a/spaces/awacke1/StreamlitCalendar/app.py +++ /dev/null @@ -1,33 +0,0 @@ -import streamlit as st -import calendar - -def create_calendar(year, month): - c = calendar.Calendar(firstweekday=calendar.SUNDAY) - cal = c.monthdatescalendar(year, month) - return cal - -def schedule_appointment(year, month, day, hour, minute, duration): - # Placeholder logic to schedule the appointment - # You can add your own implementation here - appointment = f"Scheduled appointment on {month}/{day}/{year} at {hour}:{minute} for {duration} hour(s)." - return appointment - -def main(): - st.title("Appointment Scheduler") - - year = st.sidebar.slider("Year", min_value=2020, max_value=2025, value=2023, step=1) - month = st.sidebar.slider("Month", min_value=1, max_value=12, value=2, step=1) - cal = create_calendar(year, month) - - st.write("Choose a day to schedule an appointment:") - day_options = [day.day for week in cal for day in week if day.month == month] - day = st.selectbox("Day", day_options) - hour = st.selectbox("Hour", [str(i).zfill(2) for i in range(0, 24)]) - minute = st.selectbox("Minute", [str(i).zfill(2) for i in range(0, 60)]) - duration = st.selectbox("Duration (hours)", [0.5, 1]) - - appointment = schedule_appointment(year, month, day, hour, minute, duration) - st.write(appointment) - -if __name__ == "__main__": - main() \ No newline at end of file diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/js/controls/MapControls.js b/spaces/banana-projects/web3d/node_modules/three/examples/js/controls/MapControls.js deleted file mode 100644 index 186a1bf0a434d1e55a76be951ffc191b733b7b42..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/examples/js/controls/MapControls.js +++ /dev/null @@ -1,1155 +0,0 @@ -/** - * @author qiao / https://github.com/qiao - * @author mrdoob / http://mrdoob.com - * @author alteredq / http://alteredqualia.com/ - * @author WestLangley / http://github.com/WestLangley - * @author erich666 / http://erichaines.com - * @author moroine / https://github.com/moroine - */ - -// This set of controls performs orbiting, dollying (zooming), and panning. -// Unlike TrackballControls, it maintains the "up" direction object.up (+Y by default). -// This is very similar to OrbitControls, another set of touch behavior -// -// Orbit - right mouse, or left mouse + ctrl/meta/shiftKey / touch: two-finger rotate -// Zoom - middle mouse, or mousewheel / touch: two-finger spread or squish -// Pan - left mouse, or arrow keys / touch: one-finger move - -THREE.MapControls = function ( object, domElement ) { - - this.object = object; - - this.domElement = ( domElement !== undefined ) ? domElement : document; - - // Set to false to disable this control - this.enabled = true; - - // "target" sets the location of focus, where the object orbits around - this.target = new THREE.Vector3(); - - // How far you can dolly in and out ( PerspectiveCamera only ) - this.minDistance = 0; - this.maxDistance = Infinity; - - // How far you can zoom in and out ( OrthographicCamera only ) - this.minZoom = 0; - this.maxZoom = Infinity; - - // How far you can orbit vertically, upper and lower limits. - // Range is 0 to Math.PI radians. 
- this.minPolarAngle = 0; // radians - this.maxPolarAngle = Math.PI; // radians - - // How far you can orbit horizontally, upper and lower limits. - // If set, must be a sub-interval of the interval [ - Math.PI, Math.PI ]. - this.minAzimuthAngle = - Infinity; // radians - this.maxAzimuthAngle = Infinity; // radians - - // Set to true to enable damping (inertia) - // If damping is enabled, you must call controls.update() in your animation loop - this.enableDamping = false; - this.dampingFactor = 0.25; - - // This option actually enables dollying in and out; left as "zoom" for backwards compatibility. - // Set to false to disable zooming - this.enableZoom = true; - this.zoomSpeed = 1.0; - - // Set to false to disable rotating - this.enableRotate = true; - this.rotateSpeed = 1.0; - - // Set to false to disable panning - this.enablePan = true; - this.panSpeed = 1.0; - this.screenSpacePanning = false; // if true, pan in screen-space - this.keyPanSpeed = 7.0; // pixels moved per arrow key push - - // Set to true to automatically rotate around the target - // If auto-rotate is enabled, you must call controls.update() in your animation loop - this.autoRotate = false; - this.autoRotateSpeed = 2.0; // 30 seconds per round when fps is 60 - - // Set to false to disable use of the keys - this.enableKeys = true; - - // The four arrow keys - this.keys = { LEFT: 37, UP: 38, RIGHT: 39, BOTTOM: 40 }; - - // Mouse buttons - this.mouseButtons = { LEFT: THREE.MOUSE.LEFT, MIDDLE: THREE.MOUSE.MIDDLE, RIGHT: THREE.MOUSE.RIGHT }; - - // for reset - this.target0 = this.target.clone(); - this.position0 = this.object.position.clone(); - this.zoom0 = this.object.zoom; - - // - // public methods - // - - this.getPolarAngle = function () { - - return spherical.phi; - - }; - - this.getAzimuthalAngle = function () { - - return spherical.theta; - - }; - - this.saveState = function () { - - scope.target0.copy( scope.target ); - scope.position0.copy( scope.object.position ); - scope.zoom0 = scope.object.zoom; - - }; - - this.reset = function () { - - scope.target.copy( scope.target0 ); - scope.object.position.copy( scope.position0 ); - scope.object.zoom = scope.zoom0; - - scope.object.updateProjectionMatrix(); - scope.dispatchEvent( changeEvent ); - - scope.update(); - - state = STATE.NONE; - - }; - - // this method is exposed, but perhaps it would be better if we can make it private... 
- this.update = function () { - - var offset = new THREE.Vector3(); - - // so camera.up is the orbit axis - var quat = new THREE.Quaternion().setFromUnitVectors( object.up, new THREE.Vector3( 0, 1, 0 ) ); - var quatInverse = quat.clone().inverse(); - - var lastPosition = new THREE.Vector3(); - var lastQuaternion = new THREE.Quaternion(); - - return function update() { - - var position = scope.object.position; - - offset.copy( position ).sub( scope.target ); - - // rotate offset to "y-axis-is-up" space - offset.applyQuaternion( quat ); - - // angle from z-axis around y-axis - spherical.setFromVector3( offset ); - - if ( scope.autoRotate && state === STATE.NONE ) { - - rotateLeft( getAutoRotationAngle() ); - - } - - spherical.theta += sphericalDelta.theta; - spherical.phi += sphericalDelta.phi; - - // restrict theta to be between desired limits - spherical.theta = Math.max( scope.minAzimuthAngle, Math.min( scope.maxAzimuthAngle, spherical.theta ) ); - - // restrict phi to be between desired limits - spherical.phi = Math.max( scope.minPolarAngle, Math.min( scope.maxPolarAngle, spherical.phi ) ); - - spherical.makeSafe(); - - - spherical.radius *= scale; - - // restrict radius to be between desired limits - spherical.radius = Math.max( scope.minDistance, Math.min( scope.maxDistance, spherical.radius ) ); - - // move target to panned location - scope.target.add( panOffset ); - - offset.setFromSpherical( spherical ); - - // rotate offset back to "camera-up-vector-is-up" space - offset.applyQuaternion( quatInverse ); - - position.copy( scope.target ).add( offset ); - - scope.object.lookAt( scope.target ); - - if ( scope.enableDamping === true ) { - - sphericalDelta.theta *= ( 1 - scope.dampingFactor ); - sphericalDelta.phi *= ( 1 - scope.dampingFactor ); - - panOffset.multiplyScalar( 1 - scope.dampingFactor ); - - } else { - - sphericalDelta.set( 0, 0, 0 ); - - panOffset.set( 0, 0, 0 ); - - } - - scale = 1; - - // update condition is: - // min(camera displacement, camera rotation in radians)^2 > EPS - // using small-angle approximation cos(x/2) = 1 - x^2 / 8 - - if ( zoomChanged || - lastPosition.distanceToSquared( scope.object.position ) > EPS || - 8 * ( 1 - lastQuaternion.dot( scope.object.quaternion ) ) > EPS ) { - - scope.dispatchEvent( changeEvent ); - - lastPosition.copy( scope.object.position ); - lastQuaternion.copy( scope.object.quaternion ); - zoomChanged = false; - - return true; - - } - - return false; - - }; - - }(); - - this.dispose = function () { - - scope.domElement.removeEventListener( 'contextmenu', onContextMenu, false ); - scope.domElement.removeEventListener( 'mousedown', onMouseDown, false ); - scope.domElement.removeEventListener( 'wheel', onMouseWheel, false ); - - scope.domElement.removeEventListener( 'touchstart', onTouchStart, false ); - scope.domElement.removeEventListener( 'touchend', onTouchEnd, false ); - scope.domElement.removeEventListener( 'touchmove', onTouchMove, false ); - - document.removeEventListener( 'mousemove', onMouseMove, false ); - document.removeEventListener( 'mouseup', onMouseUp, false ); - - window.removeEventListener( 'keydown', onKeyDown, false ); - - //scope.dispatchEvent( { type: 'dispose' } ); // should this be added here? 
- - }; - - // - // internals - // - - var scope = this; - - var changeEvent = { type: 'change' }; - var startEvent = { type: 'start' }; - var endEvent = { type: 'end' }; - - var STATE = { - NONE: 0, - ROTATE_UP: 1, - ROTATE_LEFT: 2, - ROTATE: 3, // ROTATE_UP | ROTATE_LEFT - DOLLY: 4, - DOLLY_ROTATE: 7, // ROTATE | DOLLY - PAN: 8, - DOLLY_PAN: 12, // DOLLY | PAN - }; - - var state = STATE.NONE; - - var EPS = 0.000001; - - // current position in spherical coordinates - var spherical = new THREE.Spherical(); - var sphericalDelta = new THREE.Spherical(); - - var scale = 1; - var panOffset = new THREE.Vector3(); - var zoomChanged = false; - - var rotateStart = new THREE.Vector2(); - var rotateStart2 = new THREE.Vector2(); - var rotateEnd = new THREE.Vector2(); - var rotateEnd2 = new THREE.Vector2(); - var rotateDelta = new THREE.Vector2(); - var rotateDelta2 = new THREE.Vector2(); - var rotateDeltaStartFingers = new THREE.Vector2(); - var rotateDeltaEndFingers = new THREE.Vector2(); - - var panStart = new THREE.Vector2(); - var panEnd = new THREE.Vector2(); - var panDelta = new THREE.Vector2(); - - var dollyStart = new THREE.Vector2(); - var dollyEnd = new THREE.Vector2(); - var dollyDelta = new THREE.Vector2(); - - function getAutoRotationAngle() { - - return 2 * Math.PI / 60 / 60 * scope.autoRotateSpeed; - - } - - function getZoomScale() { - - return Math.pow( 0.95, scope.zoomSpeed ); - - } - - function rotateLeft( angle ) { - - sphericalDelta.theta -= angle; - - } - - function rotateUp( angle ) { - - sphericalDelta.phi -= angle; - - } - - var panLeft = function () { - - var v = new THREE.Vector3(); - - return function panLeft( distance, objectMatrix ) { - - v.setFromMatrixColumn( objectMatrix, 0 ); // get X column of objectMatrix - v.multiplyScalar( - distance ); - - panOffset.add( v ); - - }; - - }(); - - var panUp = function () { - - var v = new THREE.Vector3(); - - return function panUp( distance, objectMatrix ) { - - if ( scope.screenSpacePanning === true ) { - - v.setFromMatrixColumn( objectMatrix, 1 ); - - } else { - - v.setFromMatrixColumn( objectMatrix, 0 ); - v.crossVectors( scope.object.up, v ); - - } - - v.multiplyScalar( distance ); - - panOffset.add( v ); - - }; - - }(); - - // deltaX and deltaY are in pixels; right and down are positive - var pan = function () { - - var offset = new THREE.Vector3(); - - return function pan( deltaX, deltaY ) { - - var element = scope.domElement === document ? scope.domElement.body : scope.domElement; - - if ( scope.object.isPerspectiveCamera ) { - - // perspective - var position = scope.object.position; - offset.copy( position ).sub( scope.target ); - var targetDistance = offset.length(); - - // half of the fov is center to top of screen - targetDistance *= Math.tan( ( scope.object.fov / 2 ) * Math.PI / 180.0 ); - - // we use only clientHeight here so aspect ratio does not distort speed - panLeft( 2 * deltaX * targetDistance / element.clientHeight, scope.object.matrix ); - panUp( 2 * deltaY * targetDistance / element.clientHeight, scope.object.matrix ); - - } else if ( scope.object.isOrthographicCamera ) { - - // orthographic - panLeft( deltaX * ( scope.object.right - scope.object.left ) / scope.object.zoom / element.clientWidth, scope.object.matrix ); - panUp( deltaY * ( scope.object.top - scope.object.bottom ) / scope.object.zoom / element.clientHeight, scope.object.matrix ); - - } else { - - // camera neither orthographic nor perspective - console.warn( 'WARNING: MapControls.js encountered an unknown camera type - pan disabled.' 
); - scope.enablePan = false; - - } - - }; - - }(); - - function dollyIn( dollyScale ) { - - if ( scope.object.isPerspectiveCamera ) { - - scale /= dollyScale; - - } else if ( scope.object.isOrthographicCamera ) { - - scope.object.zoom = Math.max( scope.minZoom, Math.min( scope.maxZoom, scope.object.zoom * dollyScale ) ); - scope.object.updateProjectionMatrix(); - zoomChanged = true; - - } else { - - console.warn( 'WARNING: MapControls.js encountered an unknown camera type - dolly/zoom disabled.' ); - scope.enableZoom = false; - - } - - } - - function dollyOut( dollyScale ) { - - if ( scope.object.isPerspectiveCamera ) { - - scale *= dollyScale; - - } else if ( scope.object.isOrthographicCamera ) { - - scope.object.zoom = Math.max( scope.minZoom, Math.min( scope.maxZoom, scope.object.zoom / dollyScale ) ); - scope.object.updateProjectionMatrix(); - zoomChanged = true; - - } else { - - console.warn( 'WARNING: MapControls.js encountered an unknown camera type - dolly/zoom disabled.' ); - scope.enableZoom = false; - - } - - } - - // - // event callbacks - update the object state - // - - function handleMouseDownRotate( event ) { - - //console.log( 'handleMouseDownRotate' ); - - rotateStart.set( event.clientX, event.clientY ); - - } - - function handleMouseDownDolly( event ) { - - //console.log( 'handleMouseDownDolly' ); - - dollyStart.set( event.clientX, event.clientY ); - - } - - function handleMouseDownPan( event ) { - - //console.log( 'handleMouseDownPan' ); - - panStart.set( event.clientX, event.clientY ); - - } - - function handleMouseMoveRotate( event ) { - - //console.log( 'handleMouseMoveRotate' ); - - rotateEnd.set( event.clientX, event.clientY ); - - rotateDelta.subVectors( rotateEnd, rotateStart ).multiplyScalar( scope.rotateSpeed ); - - var element = scope.domElement === document ? 
scope.domElement.body : scope.domElement; - - rotateLeft( 2 * Math.PI * rotateDelta.x / element.clientHeight ); // yes, height - - rotateUp( 2 * Math.PI * rotateDelta.y / element.clientHeight ); - - rotateStart.copy( rotateEnd ); - - scope.update(); - - } - - function handleMouseMoveDolly( event ) { - - //console.log( 'handleMouseMoveDolly' ); - - dollyEnd.set( event.clientX, event.clientY ); - - dollyDelta.subVectors( dollyEnd, dollyStart ); - - if ( dollyDelta.y > 0 ) { - - dollyIn( getZoomScale() ); - - } else if ( dollyDelta.y < 0 ) { - - dollyOut( getZoomScale() ); - - } - - dollyStart.copy( dollyEnd ); - - scope.update(); - - } - - function handleMouseMovePan( event ) { - - //console.log( 'handleMouseMovePan' ); - - panEnd.set( event.clientX, event.clientY ); - - panDelta.subVectors( panEnd, panStart ).multiplyScalar( scope.panSpeed ); - - pan( panDelta.x, panDelta.y ); - - panStart.copy( panEnd ); - - scope.update(); - - } - - function handleMouseUp( event ) { - - // console.log( 'handleMouseUp' ); - - } - - function handleMouseWheel( event ) { - - // console.log( 'handleMouseWheel' ); - - if ( event.deltaY < 0 ) { - - dollyOut( getZoomScale() ); - - } else if ( event.deltaY > 0 ) { - - dollyIn( getZoomScale() ); - - } - - scope.update(); - - } - - function handleKeyDown( event ) { - - //console.log( 'handleKeyDown' ); - - switch ( event.keyCode ) { - - case scope.keys.UP: - pan( 0, scope.keyPanSpeed ); - scope.update(); - break; - - case scope.keys.BOTTOM: - pan( 0, - scope.keyPanSpeed ); - scope.update(); - break; - - case scope.keys.LEFT: - pan( scope.keyPanSpeed, 0 ); - scope.update(); - break; - - case scope.keys.RIGHT: - pan( - scope.keyPanSpeed, 0 ); - scope.update(); - break; - - } - - } - - function handleTouchStartRotate( event ) { - - // console.log( 'handleTouchStartRotate' ); - - // First finger - rotateStart.set( event.touches[ 0 ].pageX, event.touches[ 0 ].pageY ); - - // Second finger - rotateStart2.set( event.touches[ 1 ].pageX, event.touches[ 1 ].pageY ); - - } - - function handleTouchStartDolly( event ) { - - if ( scope.enableZoom ) { - - // console.log( 'handleTouchStartDolly' ); - - var dx = event.touches[ 0 ].pageX - event.touches[ 1 ].pageX; - var dy = event.touches[ 0 ].pageY - event.touches[ 1 ].pageY; - - var distance = Math.sqrt( dx * dx + dy * dy ); - - dollyStart.set( 0, distance ); - - } - - } - - function handleTouchStartPan( event ) { - - if ( scope.enablePan ) { - - // console.log( 'handleTouchStartPan' ); - - panStart.set( event.touches[ 0 ].pageX, event.touches[ 0 ].pageY ); - - } - - } - - function handleTouchMoveRotate( event ) { - - if ( scope.enableRotate === false ) return; - if ( ( state & STATE.ROTATE ) === 0 ) return; - - // First finger - rotateEnd.set( event.touches[ 0 ].pageX, event.touches[ 0 ].pageY ); - - // Second finger - rotateEnd2.set( event.touches[ 1 ].pageX, event.touches[ 1 ].pageY ); - - rotateDelta.subVectors( rotateEnd, rotateStart ); - rotateDelta2.subVectors( rotateEnd2, rotateStart2 ); - rotateDeltaStartFingers.subVectors( rotateStart2, rotateStart ); - rotateDeltaEndFingers.subVectors( rotateEnd2, rotateEnd ); - - if ( isRotateUp() ) { - - var element = scope.domElement === document ? 
scope.domElement.body : scope.domElement; - - // rotating up and down along whole screen attempts to go 360, but limited to 180 - rotateUp( 2 * Math.PI * rotateDelta.y / element.clientHeight ); - - // Start rotateUp ==> disable all movement to prevent flickering - state = STATE.ROTATE_UP; - - } else if ( ( state & STATE.ROTATE_LEFT ) !== 0 ) { - - rotateLeft( ( rotateDeltaStartFingers.angle() - rotateDeltaEndFingers.angle() ) * scope.rotateSpeed ); - - } - - rotateStart.copy( rotateEnd ); - rotateStart2.copy( rotateEnd2 ); - - } - - function isRotateUp() { - - // At start, does the two fingers are aligned horizontally - if ( ! isHorizontal( rotateDeltaStartFingers ) ) { - - return false; - - } - - // At end, does the two fingers are aligned horizontally - if ( ! isHorizontal( rotateDeltaEndFingers ) ) { - - return false; - - } - - // does the first finger moved vertically between start and end - if ( ! isVertical( rotateDelta ) ) { - - return false; - - } - - // does the second finger moved vertically between start and end - if ( ! isVertical( rotateDelta2 ) ) { - - return false; - - } - - // Does the two finger moved in the same direction (prevent moving one finger vertically up while the other goes down) - return rotateDelta.dot( rotateDelta2 ) > 0; - - } - - var isHorizontal = function () { - - var precision = Math.sin( Math.PI / 6 ); - - return function isHorizontal( vector ) { - - return Math.abs( Math.sin( vector.angle() ) ) < precision; - - }; - - }(); - - var isVertical = function () { - - var precision = Math.cos( Math.PI / 2 - Math.PI / 6 ); - - return function isVertical( vector ) { - - return Math.abs( Math.cos( vector.angle() ) ) < precision; - - }; - - }(); - - function handleTouchMoveDolly( event ) { - - if ( scope.enableZoom === false ) return; - if ( ( state & STATE.DOLLY ) === 0 ) return; - - // console.log( 'handleTouchMoveDolly' ); - - var dx = event.touches[ 0 ].pageX - event.touches[ 1 ].pageX; - var dy = event.touches[ 0 ].pageY - event.touches[ 1 ].pageY; - - var distance = Math.sqrt( dx * dx + dy * dy ); - - dollyEnd.set( 0, distance ); - - dollyDelta.set( 0, Math.pow( dollyEnd.y / dollyStart.y, scope.zoomSpeed ) ); - - dollyIn( dollyDelta.y ); - - dollyStart.copy( dollyEnd ); - - } - - function handleTouchMovePan( event ) { - - if ( scope.enablePan === false ) return; - if ( ( state & STATE.PAN ) === 0 ) return; - - // console.log( 'handleTouchMovePan' ); - - panEnd.set( event.touches[ 0 ].pageX, event.touches[ 0 ].pageY ); - - panDelta.subVectors( panEnd, panStart ).multiplyScalar( scope.panSpeed ); - - pan( panDelta.x, panDelta.y ); - - panStart.copy( panEnd ); - - } - - function handleTouchEnd( event ) { - - //console.log( 'handleTouchEnd' ); - - } - - // - // event handlers - FSM: listen for events and reset state - // - - function onMouseDown( event ) { - - if ( scope.enabled === false ) return; - - event.preventDefault(); - - switch ( event.button ) { - - case scope.mouseButtons.LEFT: - - if ( event.ctrlKey || event.metaKey || event.shiftKey ) { - - if ( scope.enableRotate === false ) return; - - handleMouseDownRotate( event ); - - state = STATE.ROTATE; - - } else { - - if ( scope.enablePan === false ) return; - - handleMouseDownPan( event ); - - state = STATE.PAN; - - } - - break; - - case scope.mouseButtons.MIDDLE: - - if ( scope.enableZoom === false ) return; - - handleMouseDownDolly( event ); - - state = STATE.DOLLY; - - break; - - case scope.mouseButtons.RIGHT: - - if ( scope.enableRotate === false ) return; - - handleMouseDownRotate( event ); - - state 
= STATE.ROTATE; - - break; - - } - - if ( state !== STATE.NONE ) { - - document.addEventListener( 'mousemove', onMouseMove, false ); - document.addEventListener( 'mouseup', onMouseUp, false ); - - scope.dispatchEvent( startEvent ); - - } - - } - - function onMouseMove( event ) { - - if ( scope.enabled === false ) return; - - event.preventDefault(); - - switch ( state ) { - - case STATE.ROTATE: - - if ( scope.enableRotate === false ) return; - - handleMouseMoveRotate( event ); - - break; - - case STATE.DOLLY: - - if ( scope.enableZoom === false ) return; - - handleMouseMoveDolly( event ); - - break; - - case STATE.PAN: - - if ( scope.enablePan === false ) return; - - handleMouseMovePan( event ); - - break; - - } - - } - - function onMouseUp( event ) { - - if ( scope.enabled === false ) return; - - handleMouseUp( event ); - - document.removeEventListener( 'mousemove', onMouseMove, false ); - document.removeEventListener( 'mouseup', onMouseUp, false ); - - scope.dispatchEvent( endEvent ); - - state = STATE.NONE; - - } - - function onMouseWheel( event ) { - - if ( scope.enabled === false || scope.enableZoom === false || ( state !== STATE.NONE && state !== STATE.ROTATE ) ) return; - - event.preventDefault(); - event.stopPropagation(); - - scope.dispatchEvent( startEvent ); - - handleMouseWheel( event ); - - scope.dispatchEvent( endEvent ); - - } - - function onKeyDown( event ) { - - if ( scope.enabled === false || scope.enableKeys === false || scope.enablePan === false ) return; - - handleKeyDown( event ); - - } - - function onTouchStart( event ) { - - if ( scope.enabled === false ) return; - - event.preventDefault(); - - switch ( event.touches.length ) { - - case 1: // one-fingered touch: pan - - if ( scope.enablePan === false ) return; - - handleTouchStartPan( event ); - - state = STATE.PAN; - - break; - - case 2: // two-fingered touch: rotate-dolly - - if ( scope.enableZoom === false && scope.enableRotate === false ) return; - - handleTouchStartRotate( event ); - handleTouchStartDolly( event ); - - state = STATE.DOLLY_ROTATE; - - break; - - default: - - state = STATE.NONE; - - } - - if ( state !== STATE.NONE ) { - - scope.dispatchEvent( startEvent ); - - } - - } - - function onTouchMove( event ) { - - if ( scope.enabled === false ) return; - - event.preventDefault(); - event.stopPropagation(); - - switch ( event.touches.length ) { - - case 1: // one-fingered touch: pan - - if ( scope.enablePan === false ) return; - if ( state !== STATE.PAN ) return; // is this needed? - - handleTouchMovePan( event ); - - scope.update(); - - break; - - case 2: // two-fingered touch: rotate-dolly - - if ( scope.enableZoom === false && scope.enableRotate === false ) return; - if ( ( state & STATE.DOLLY_ROTATE ) === 0 ) return; // is this needed? 
- - handleTouchMoveRotate( event ); - handleTouchMoveDolly( event ); - - scope.update(); - - break; - - default: - - state = STATE.NONE; - - } - - } - - function onTouchEnd( event ) { - - if ( scope.enabled === false ) return; - - handleTouchEnd( event ); - - scope.dispatchEvent( endEvent ); - - state = STATE.NONE; - - } - - function onContextMenu( event ) { - - if ( scope.enabled === false ) return; - - event.preventDefault(); - - } - - // - - scope.domElement.addEventListener( 'contextmenu', onContextMenu, false ); - - scope.domElement.addEventListener( 'mousedown', onMouseDown, false ); - scope.domElement.addEventListener( 'wheel', onMouseWheel, false ); - - scope.domElement.addEventListener( 'touchstart', onTouchStart, false ); - scope.domElement.addEventListener( 'touchend', onTouchEnd, false ); - scope.domElement.addEventListener( 'touchmove', onTouchMove, false ); - - window.addEventListener( 'keydown', onKeyDown, false ); - - // force an update at start - - this.update(); - -}; - -THREE.MapControls.prototype = Object.create( THREE.EventDispatcher.prototype ); -THREE.MapControls.prototype.constructor = THREE.MapControls; - -Object.defineProperties( THREE.MapControls.prototype, { - - center: { - - get: function () { - - console.warn( 'THREE.MapControls: .center has been renamed to .target' ); - return this.target; - - } - - }, - - // backward compatibility - - noZoom: { - - get: function () { - - console.warn( 'THREE.MapControls: .noZoom has been deprecated. Use .enableZoom instead.' ); - return ! this.enableZoom; - - }, - - set: function ( value ) { - - console.warn( 'THREE.MapControls: .noZoom has been deprecated. Use .enableZoom instead.' ); - this.enableZoom = ! value; - - } - - }, - - noRotate: { - - get: function () { - - console.warn( 'THREE.MapControls: .noRotate has been deprecated. Use .enableRotate instead.' ); - return ! this.enableRotate; - - }, - - set: function ( value ) { - - console.warn( 'THREE.MapControls: .noRotate has been deprecated. Use .enableRotate instead.' ); - this.enableRotate = ! value; - - } - - }, - - noPan: { - - get: function () { - - console.warn( 'THREE.MapControls: .noPan has been deprecated. Use .enablePan instead.' ); - return ! this.enablePan; - - }, - - set: function ( value ) { - - console.warn( 'THREE.MapControls: .noPan has been deprecated. Use .enablePan instead.' ); - this.enablePan = ! value; - - } - - }, - - noKeys: { - - get: function () { - - console.warn( 'THREE.MapControls: .noKeys has been deprecated. Use .enableKeys instead.' ); - return ! this.enableKeys; - - }, - - set: function ( value ) { - - console.warn( 'THREE.MapControls: .noKeys has been deprecated. Use .enableKeys instead.' ); - this.enableKeys = ! value; - - } - - }, - - staticMoving: { - - get: function () { - - console.warn( 'THREE.MapControls: .staticMoving has been deprecated. Use .enableDamping instead.' ); - return ! this.enableDamping; - - }, - - set: function ( value ) { - - console.warn( 'THREE.MapControls: .staticMoving has been deprecated. Use .enableDamping instead.' ); - this.enableDamping = ! value; - - } - - }, - - dynamicDampingFactor: { - - get: function () { - - console.warn( 'THREE.MapControls: .dynamicDampingFactor has been renamed. Use .dampingFactor instead.' ); - return this.dampingFactor; - - }, - - set: function ( value ) { - - console.warn( 'THREE.MapControls: .dynamicDampingFactor has been renamed. Use .dampingFactor instead.' 
); - this.dampingFactor = value; - - } - - } - -} ); diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/jsm/pmrem/PMREMCubeUVPacker.js b/spaces/banana-projects/web3d/node_modules/three/examples/jsm/pmrem/PMREMCubeUVPacker.js deleted file mode 100644 index 3fa44199a14c7025f8af59b08ceb38cd1a7908cd..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/examples/jsm/pmrem/PMREMCubeUVPacker.js +++ /dev/null @@ -1,253 +0,0 @@ -/** - * @author Prashant Sharma / spidersharma03 - * @author Ben Houston / bhouston, https://clara.io - * - * This class takes the cube lods(corresponding to different roughness values), and creates a single cubeUV - * Texture. The format for a given roughness set of faces is simply:: - * +X+Y+Z - * -X-Y-Z - * For every roughness a mip map chain is also saved, which is essential to remove the texture artifacts due to - * minification. - * Right now for every face a PlaneMesh is drawn, which leads to a lot of geometry draw calls, but can be replaced - * later by drawing a single buffer and by sending the appropriate faceIndex via vertex attributes. - * The arrangement of the faces is fixed, as assuming this arrangement, the sampling function has been written. - */ - -import { - BackSide, - CubeUVReflectionMapping, - LinearFilter, - LinearToneMapping, - Mesh, - NoBlending, - OrthographicCamera, - PlaneBufferGeometry, - RGBEEncoding, - RGBM16Encoding, - Scene, - ShaderMaterial, - Vector2, - Vector3, - WebGLRenderTarget -} from "../../../build/three.module.js"; - -var PMREMCubeUVPacker = ( function () { - - var camera = new OrthographicCamera(); - var scene = new Scene(); - var shader = getShader(); - - var PMREMCubeUVPacker = function ( cubeTextureLods ) { - - this.cubeLods = cubeTextureLods; - var size = cubeTextureLods[ 0 ].width * 4; - - var sourceTexture = cubeTextureLods[ 0 ].texture; - var params = { - format: sourceTexture.format, - magFilter: sourceTexture.magFilter, - minFilter: sourceTexture.minFilter, - type: sourceTexture.type, - generateMipmaps: sourceTexture.generateMipmaps, - anisotropy: sourceTexture.anisotropy, - encoding: ( sourceTexture.encoding === RGBEEncoding ) ? RGBM16Encoding : sourceTexture.encoding - }; - - if ( params.encoding === RGBM16Encoding ) { - - params.magFilter = LinearFilter; - params.minFilter = LinearFilter; - - } - - this.CubeUVRenderTarget = new WebGLRenderTarget( size, size, params ); - this.CubeUVRenderTarget.texture.name = "PMREMCubeUVPacker.cubeUv"; - this.CubeUVRenderTarget.texture.mapping = CubeUVReflectionMapping; - - this.objects = []; - - var geometry = new PlaneBufferGeometry( 1, 1 ); - - var faceOffsets = []; - faceOffsets.push( new Vector2( 0, 0 ) ); - faceOffsets.push( new Vector2( 1, 0 ) ); - faceOffsets.push( new Vector2( 2, 0 ) ); - faceOffsets.push( new Vector2( 0, 1 ) ); - faceOffsets.push( new Vector2( 1, 1 ) ); - faceOffsets.push( new Vector2( 2, 1 ) ); - - var textureResolution = size; - size = cubeTextureLods[ 0 ].width; - - var offset2 = 0; - var c = 4.0; - this.numLods = Math.log( cubeTextureLods[ 0 ].width ) / Math.log( 2 ) - 2; // IE11 doesn't support Math.log2 - for ( var i = 0; i < this.numLods; i ++ ) { - - var offset1 = ( textureResolution - textureResolution / c ) * 0.5; - if ( size > 16 ) c *= 2; - var nMips = size > 16 ? 
6 : 1; - var mipOffsetX = 0; - var mipOffsetY = 0; - var mipSize = size; - - for ( var j = 0; j < nMips; j ++ ) { - - // Mip Maps - for ( var k = 0; k < 6; k ++ ) { - - // 6 Cube Faces - var material = shader.clone(); - material.uniforms[ 'envMap' ].value = this.cubeLods[ i ].texture; - material.envMap = this.cubeLods[ i ].texture; - material.uniforms[ 'faceIndex' ].value = k; - material.uniforms[ 'mapSize' ].value = mipSize; - - var planeMesh = new Mesh( geometry, material ); - planeMesh.position.x = faceOffsets[ k ].x * mipSize - offset1 + mipOffsetX; - planeMesh.position.y = faceOffsets[ k ].y * mipSize - offset1 + offset2 + mipOffsetY; - planeMesh.material.side = BackSide; - planeMesh.scale.setScalar( mipSize ); - this.objects.push( planeMesh ); - - } - mipOffsetY += 1.75 * mipSize; - mipOffsetX += 1.25 * mipSize; - mipSize /= 2; - - } - offset2 += 2 * size; - if ( size > 16 ) size /= 2; - - } - - }; - - PMREMCubeUVPacker.prototype = { - - constructor: PMREMCubeUVPacker, - - update: function ( renderer ) { - - var size = this.cubeLods[ 0 ].width * 4; - // top and bottom are swapped for some reason? - camera.left = - size * 0.5; - camera.right = size * 0.5; - camera.top = - size * 0.5; - camera.bottom = size * 0.5; - camera.near = 0; - camera.far = 1; - camera.updateProjectionMatrix(); - - for ( var i = 0; i < this.objects.length; i ++ ) { - - scene.add( this.objects[ i ] ); - - } - - var gammaInput = renderer.gammaInput; - var gammaOutput = renderer.gammaOutput; - var toneMapping = renderer.toneMapping; - var toneMappingExposure = renderer.toneMappingExposure; - var currentRenderTarget = renderer.getRenderTarget(); - - renderer.gammaInput = false; - renderer.gammaOutput = false; - renderer.toneMapping = LinearToneMapping; - renderer.toneMappingExposure = 1.0; - renderer.setRenderTarget( this.CubeUVRenderTarget ); - renderer.render( scene, camera ); - - renderer.setRenderTarget( currentRenderTarget ); - renderer.toneMapping = toneMapping; - renderer.toneMappingExposure = toneMappingExposure; - renderer.gammaInput = gammaInput; - renderer.gammaOutput = gammaOutput; - - for ( var i = 0; i < this.objects.length; i ++ ) { - - scene.remove( this.objects[ i ] ); - - } - - }, - - dispose: function () { - - for ( var i = 0, l = this.objects.length; i < l; i ++ ) { - - this.objects[ i ].material.dispose(); - - } - - this.objects[ 0 ].geometry.dispose(); - - } - - }; - - function getShader() { - - var shaderMaterial = new ShaderMaterial( { - - uniforms: { - "faceIndex": { value: 0 }, - "mapSize": { value: 0 }, - "envMap": { value: null }, - "testColor": { value: new Vector3( 1, 1, 1 ) } - }, - - vertexShader: - "precision highp float;\ - varying vec2 vUv;\ - void main() {\ - vUv = uv;\ - gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );\ - }", - - fragmentShader: - "precision highp float;\ - varying vec2 vUv;\ - uniform samplerCube envMap;\ - uniform float mapSize;\ - uniform vec3 testColor;\ - uniform int faceIndex;\ - \ - void main() {\ - vec3 sampleDirection;\ - vec2 uv = vUv;\ - uv = uv * 2.0 - 1.0;\ - uv.y *= -1.0;\ - if(faceIndex == 0) {\ - sampleDirection = normalize(vec3(1.0, uv.y, -uv.x));\ - } else if(faceIndex == 1) {\ - sampleDirection = normalize(vec3(uv.x, 1.0, uv.y));\ - } else if(faceIndex == 2) {\ - sampleDirection = normalize(vec3(uv.x, uv.y, 1.0));\ - } else if(faceIndex == 3) {\ - sampleDirection = normalize(vec3(-1.0, uv.y, uv.x));\ - } else if(faceIndex == 4) {\ - sampleDirection = normalize(vec3(uv.x, -1.0, -uv.y));\ - } else {\ - sampleDirection = 
normalize(vec3(-uv.x, uv.y, -1.0));\ - }\ - vec4 color = envMapTexelToLinear( textureCube( envMap, sampleDirection ) );\ - gl_FragColor = linearToOutputTexel( color );\ - }", - - blending: NoBlending - - } ); - - shaderMaterial.type = 'PMREMCubeUVPacker'; - - return shaderMaterial; - - } - - - return PMREMCubeUVPacker; - -} )(); - -export { PMREMCubeUVPacker }; diff --git a/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220327005808.py b/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220327005808.py deleted file mode 100644 index 064650f6f979099bd0716a12c0a67eea333f2b6d..0000000000000000000000000000000000000000 --- a/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220327005808.py +++ /dev/null @@ -1,71 +0,0 @@ -import os -#os.system("pip install gfpgan") - -#os.system("pip freeze") -#os.system("wget https://github.com/TencentARC/GFPGAN/releases/download/v0.2.0/GFPGANCleanv1-NoCE-C2.pth -P .") -import random -import gradio as gr -from PIL import Image -import torch -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/a/ab/Abraham_Lincoln_O-77_matte_collodion_print.jpg/1024px-Abraham_Lincoln_O-77_matte_collodion_print.jpg', 'lincoln.jpg') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/5/50/Albert_Einstein_%28Nobel%29.png', 'einstein.png') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/9/9d/Thomas_Edison2.jpg/1024px-Thomas_Edison2.jpg', 'edison.jpg') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/a/a9/Henry_Ford_1888.jpg/1024px-Henry_Ford_1888.jpg', 'Henry.jpg') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/0/06/Frida_Kahlo%2C_by_Guillermo_Kahlo.jpg/800px-Frida_Kahlo%2C_by_Guillermo_Kahlo.jpg', 'Frida.jpg') - - - - -import cv2 -import glob -import numpy as np -from basicsr.utils import imwrite -from gfpgan import GFPGANer - -import warnings -warnings.warn('The unoptimized RealESRGAN is very slow on CPU. We do not use it. ' - 'If you really want to use it, please modify the corresponding codes.') -bg_upsampler = None - - - -# set up GFPGAN restorer -restorer = GFPGANer( - model_path='experiments/pretrained_models/GFPGANv1.3.pth', - upscale=2, - arch='clean', - channel_multiplier=2, - bg_upsampler=bg_upsampler) - - -def inference(img): - input_img = cv2.imread(img, cv2.IMREAD_COLOR) - cropped_faces, restored_faces, restored_img = restorer.enhance( - input_img, has_aligned=False, only_center_face=False, paste_back=True) - - return Image.fromarray(restored_img[0])[:,:,::-1] - -# return Image.fromarray(restored_faces[0])[:,:,::-1] -# return Image.fromarray(restored_faces[0][:,:,::-1]) - - -title = "GFP-GAN" -description = "Gradio demo for GFP-GAN: Towards Real-World Blind Face Restoration with Generative Facial Prior. To use it, simply upload your image, or click one of the examples to load them. Read more at the links below. Please click submit only once" -article = "

    Towards Real-World Blind Face Restoration with Generative Facial Prior | Github Repo

    visitor badge
    " -gr.Interface( - inference, - [gr.inputs.Image(type="filepath", label="Input")], - gr.outputs.Image(type="pil", label="Output"), - title=title, - description=description, - article=article, - examples=[ - ['lincoln.jpg'], - ['einstein.png'], - ['edison.jpg'], - ['Henry.jpg'], - ['Frida.jpg'] - ] - ).launch(enable_queue=True,cache_examples=True) - - diff --git a/spaces/beihai/GFPGAN-V1.3-whole-image/basicsr/utils/logger.py b/spaces/beihai/GFPGAN-V1.3-whole-image/basicsr/utils/logger.py deleted file mode 100644 index 73553dc664781a061737e94880ea1c6788c09043..0000000000000000000000000000000000000000 --- a/spaces/beihai/GFPGAN-V1.3-whole-image/basicsr/utils/logger.py +++ /dev/null @@ -1,213 +0,0 @@ -import datetime -import logging -import time - -from .dist_util import get_dist_info, master_only - -initialized_logger = {} - - -class AvgTimer(): - - def __init__(self, window=200): - self.window = window # average window - self.current_time = 0 - self.total_time = 0 - self.count = 0 - self.avg_time = 0 - self.start() - - def start(self): - self.start_time = self.tic = time.time() - - def record(self): - self.count += 1 - self.toc = time.time() - self.current_time = self.toc - self.tic - self.total_time += self.current_time - # calculate average time - self.avg_time = self.total_time / self.count - - # reset - if self.count > self.window: - self.count = 0 - self.total_time = 0 - - self.tic = time.time() - - def get_current_time(self): - return self.current_time - - def get_avg_time(self): - return self.avg_time - - -class MessageLogger(): - """Message logger for printing. - - Args: - opt (dict): Config. It contains the following keys: - name (str): Exp name. - logger (dict): Contains 'print_freq' (str) for logger interval. - train (dict): Contains 'total_iter' (int) for total iters. - use_tb_logger (bool): Use tensorboard logger. - start_iter (int): Start iter. Default: 1. - tb_logger (obj:`tb_logger`): Tensorboard logger. Default: None. - """ - - def __init__(self, opt, start_iter=1, tb_logger=None): - self.exp_name = opt['name'] - self.interval = opt['logger']['print_freq'] - self.start_iter = start_iter - self.max_iters = opt['train']['total_iter'] - self.use_tb_logger = opt['logger']['use_tb_logger'] - self.tb_logger = tb_logger - self.start_time = time.time() - self.logger = get_root_logger() - - def reset_start_time(self): - self.start_time = time.time() - - @master_only - def __call__(self, log_vars): - """Format logging message. - - Args: - log_vars (dict): It contains the following keys: - epoch (int): Epoch number. - iter (int): Current iter. - lrs (list): List for learning rates. - - time (float): Iter time. - data_time (float): Data time for each iter. 
- """ - # epoch, iter, learning rates - epoch = log_vars.pop('epoch') - current_iter = log_vars.pop('iter') - lrs = log_vars.pop('lrs') - - message = (f'[{self.exp_name[:5]}..][epoch:{epoch:3d}, iter:{current_iter:8,d}, lr:(') - for v in lrs: - message += f'{v:.3e},' - message += ')] ' - - # time and estimated time - if 'time' in log_vars.keys(): - iter_time = log_vars.pop('time') - data_time = log_vars.pop('data_time') - - total_time = time.time() - self.start_time - time_sec_avg = total_time / (current_iter - self.start_iter + 1) - eta_sec = time_sec_avg * (self.max_iters - current_iter - 1) - eta_str = str(datetime.timedelta(seconds=int(eta_sec))) - message += f'[eta: {eta_str}, ' - message += f'time (data): {iter_time:.3f} ({data_time:.3f})] ' - - # other items, especially losses - for k, v in log_vars.items(): - message += f'{k}: {v:.4e} ' - # tensorboard logger - if self.use_tb_logger and 'debug' not in self.exp_name: - if k.startswith('l_'): - self.tb_logger.add_scalar(f'losses/{k}', v, current_iter) - else: - self.tb_logger.add_scalar(k, v, current_iter) - self.logger.info(message) - - -@master_only -def init_tb_logger(log_dir): - from torch.utils.tensorboard import SummaryWriter - tb_logger = SummaryWriter(log_dir=log_dir) - return tb_logger - - -@master_only -def init_wandb_logger(opt): - """We now only use wandb to sync tensorboard log.""" - import wandb - logger = get_root_logger() - - project = opt['logger']['wandb']['project'] - resume_id = opt['logger']['wandb'].get('resume_id') - if resume_id: - wandb_id = resume_id - resume = 'allow' - logger.warning(f'Resume wandb logger with id={wandb_id}.') - else: - wandb_id = wandb.util.generate_id() - resume = 'never' - - wandb.init(id=wandb_id, resume=resume, name=opt['name'], config=opt, project=project, sync_tensorboard=True) - - logger.info(f'Use wandb logger with id={wandb_id}; project={project}.') - - -def get_root_logger(logger_name='basicsr', log_level=logging.INFO, log_file=None): - """Get the root logger. - - The logger will be initialized if it has not been initialized. By default a - StreamHandler will be added. If `log_file` is specified, a FileHandler will - also be added. - - Args: - logger_name (str): root logger name. Default: 'basicsr'. - log_file (str | None): The log filename. If specified, a FileHandler - will be added to the root logger. - log_level (int): The root logger level. Note that only the process of - rank 0 is affected, while other processes will set the level to - "Error" and be silent most of the time. - - Returns: - logging.Logger: The root logger. - """ - logger = logging.getLogger(logger_name) - # if the logger has been initialized, just return it - if logger_name in initialized_logger: - return logger - - format_str = '%(asctime)s %(levelname)s: %(message)s' - stream_handler = logging.StreamHandler() - stream_handler.setFormatter(logging.Formatter(format_str)) - logger.addHandler(stream_handler) - logger.propagate = False - rank, _ = get_dist_info() - if rank != 0: - logger.setLevel('ERROR') - elif log_file is not None: - logger.setLevel(log_level) - # add file handler - file_handler = logging.FileHandler(log_file, 'w') - file_handler.setFormatter(logging.Formatter(format_str)) - file_handler.setLevel(log_level) - logger.addHandler(file_handler) - initialized_logger[logger_name] = True - return logger - - -def get_env_info(): - """Get environment information. - - Currently, only log the software version. 
- """ - import torch - import torchvision - - from basicsr.version import __version__ - msg = r""" - ____ _ _____ ____ - / __ ) ____ _ _____ (_)_____/ ___/ / __ \ - / __ |/ __ `// ___// // ___/\__ \ / /_/ / - / /_/ // /_/ /(__ )/ // /__ ___/ // _, _/ - /_____/ \__,_//____//_/ \___//____//_/ |_| - ______ __ __ __ __ - / ____/____ ____ ____/ / / / __ __ _____ / /__ / / - / / __ / __ \ / __ \ / __ / / / / / / // ___// //_/ / / - / /_/ // /_/ // /_/ // /_/ / / /___/ /_/ // /__ / /< /_/ - \____/ \____/ \____/ \____/ /_____/\____/ \___//_/|_| (_) - """ - msg += ('\nVersion Information: ' - f'\n\tBasicSR: {__version__}' - f'\n\tPyTorch: {torch.__version__}' - f'\n\tTorchVision: {torchvision.__version__}') - return msg diff --git a/spaces/bharat-raghunathan/song-lyrics-classifier/app.py b/spaces/bharat-raghunathan/song-lyrics-classifier/app.py deleted file mode 100644 index 8582897cb7db07b8983d119f0f25a75f19184c8d..0000000000000000000000000000000000000000 --- a/spaces/bharat-raghunathan/song-lyrics-classifier/app.py +++ /dev/null @@ -1,30 +0,0 @@ -import gradio as gr -import numpy as np -import torch -from transformers import AutoModelForSequenceClassification, AutoTokenizer -from examples import yellow, stairway, numb, puppets, firework - -def lyrics_categories(input_text): - spotify_model = "juliensimon/autonlp-song-lyrics-18753417" - model = AutoModelForSequenceClassification.from_pretrained(spotify_model) - tokenizer = AutoTokenizer.from_pretrained(spotify_model) - labels = model.config.id2label - inputs = tokenizer(input_text, return_tensors="pt") - outputs = model(**inputs) - predictions = torch.nn.functional.softmax(outputs.logits, dim=-1) - predictions = predictions.detach().numpy()[0] - index_sorted = np.argsort(predictions)[::-1] - clean_outputs = {labels[idx]:str(predictions[idx]) for idx in index_sorted} - print(clean_outputs) - return clean_outputs - -description = "With lyrics, find the top 5 genres this song belongs to! (Powered by Spotify)" - -iface = gr.Interface(fn=lyrics_categories, - inputs=gr.inputs.Textbox(lines=20, placeholder="Enter song lyrics here...", label="Song Lyrics"), - outputs=gr.outputs.Label(num_top_classes=5, label="Genres/Categories"), - examples=[stairway, numb, puppets, firework, yellow], - article=description, - title="Song Genre Predictor", - ) -iface.launch() diff --git a/spaces/bigjoker/stable-diffusion-webui/extensions-builtin/SwinIR/preload.py b/spaces/bigjoker/stable-diffusion-webui/extensions-builtin/SwinIR/preload.py deleted file mode 100644 index e912c6402bc80faa797cf2e95183101fb9a10286..0000000000000000000000000000000000000000 --- a/spaces/bigjoker/stable-diffusion-webui/extensions-builtin/SwinIR/preload.py +++ /dev/null @@ -1,6 +0,0 @@ -import os -from modules import paths - - -def preload(parser): - parser.add_argument("--swinir-models-path", type=str, help="Path to directory with SwinIR model file(s).", default=os.path.join(paths.models_path, 'SwinIR')) diff --git a/spaces/bingbing520/ChatGPT/Dockerfile b/spaces/bingbing520/ChatGPT/Dockerfile deleted file mode 100644 index 335c2dba28ba8c365de9306858462a59dea25f28..0000000000000000000000000000000000000000 --- a/spaces/bingbing520/ChatGPT/Dockerfile +++ /dev/null @@ -1,15 +0,0 @@ -FROM python:3.9 as builder -RUN apt-get update && apt-get install -y build-essential -COPY requirements.txt . -COPY requirements_advanced.txt . 
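# Multi-stage build: this builder stage installs the Python dependencies with --user
# (into /root/.local); the runtime stage below copies /root/.local from the builder
# instead of reinstalling, so build tools stay out of the final image.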
-RUN pip install --user -r requirements.txt -# RUN pip install --user -r requirements_advanced.txt - -FROM python:3.9 -MAINTAINER iskoldt -COPY --from=builder /root/.local /root/.local -ENV PATH=/root/.local/bin:$PATH -COPY . /app -WORKDIR /app -ENV dockerrun yes -CMD ["python3", "-u", "ChuanhuChatbot.py", "2>&1", "|", "tee", "/var/log/application.log"] diff --git a/spaces/bioriAsaeru/text-to-voice/Benefits of Using Free Mototools 6.2 Download for Motorola Unlocking.md b/spaces/bioriAsaeru/text-to-voice/Benefits of Using Free Mototools 6.2 Download for Motorola Unlocking.md deleted file mode 100644 index a5973514b434defa9f053ba0a7aa47a8b49a2d7f..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Benefits of Using Free Mototools 6.2 Download for Motorola Unlocking.md +++ /dev/null @@ -1,11 +0,0 @@ -
    -

Note: Motorola Phone Tools does not work with Android devices.

Clunky, crash-prone and not very well maintained (at least last time we checked), the Motorola Mobile Suite can do a lot of things. Unfortunately, many of the things we tried to do slowed down the PC or simply crashed the system. Motorola Mobile Phone Tools (or MPT, now known as Motorola Media Link) promises users the ability to sync and back up contact information (useless if you use Google Accounts), copy and store music and media files, and provides options for phone firmware updates and more. We also found MML to be more responsive and less prone to crashes when run on Windows XP as opposed to Windows 7. If you're looking for something that's a little better but less user-friendly, we'd suggest checking out Droid Explorer, but only if you're using a modern Android-powered device.

Features of Motorola Mobile Phone Tools

  1. Motorola Firmware Updates.
  2. Motorola Mobile Phone Tools provides backup and restore functions.
  3. Provides the ability to create custom ringtones.
  4. Support for Microsoft Outlook synchronization.
  5. Synchronize calendars with Microsoft Outlook.
  6. Synchronizes data with your mobile phone.
  7. Synchronizes music, video and more.
  8. Synchronize your Motorola phone with the Windows address book.
  9. Transfer pictures and multimedia to your phone.
Compatibility and License

Motorola Mobile Phone Tools is provided under a freeware license on Windows from Mobile Phone Tools, with no restrictions on usage. Download and installation of this PC software is free, and MML 1.5.19 is the latest version last time we checked.

    -

Motorola Mobile Phone Tools can be used on a computer running Windows 11 or Windows 10. Previous versions of the operating system shouldn't be a problem, with Windows 8, Windows 7 and Windows Vista having been tested. Windows XP is supported. It runs on both 32-bit and 64-bit systems, with no dedicated 64-bit download provided.

Filed under: Motorola Mobile Phone Tools Download, Free Mobile Phone Tools

We have tested Motorola Mobile Phone Tools MML 1.5.19 against malware with several different programs. We certify that this program is clean of viruses, malware and trojans.

Free Download for Windows 46.12 MB - Tested clean
  • Cost: Free (freeware)

    -

    free mototools 6.2 download


    DOWNLOAD ····· https://urloso.com/2uyRrt



    -

    Communication channels can also be selected freely. Thanks to our communication server, MOVITOOLS® MotionStudio allows you to configure different communication media and up to four simultaneous communication channels. The server also allows the centralized maintenance of data and use of modern remote maintenance technology.

    -

    I started MPSOFTWARE back in 1998. I must have been around 15 years old. I did not have a computer fast enough to run the cool games back than. Instead I got fascinated by the internet and began to develop freeware programs that could help other people creating cool websites for the emerging internet. Over the past 15 years I have created programs like phpDesigner and htmlGate with downloads in more than 100 countries.

    -

    To use Ext JS, you first need to download it from sencha.com. (I used version 3.2.1, but you should grab the most recent version.) Note that a free, open source version of Ext JS is available for open source projects, non-profit organizations and educational use. For other uses you may need to purchase a license. See sencha.com/products/license.php for more information.

    -

    I set several validation rules for the fields such as specifying the minimum and maximum length allowed, deferring the field validation until form submission, and creating validation functions for URLs, e-mail addresses, and other types of data. You can see the details of this validation in the code download.

    -

    aaccfb2cb3
    -
    -
    \ No newline at end of file diff --git a/spaces/bioriAsaeru/text-to-voice/Humko Deewana Kar Gaye Movie Download [BETTER] In Hindi 1080p Experience the Passion and Drama of Akshay Kumar and Katrina Kaif.md b/spaces/bioriAsaeru/text-to-voice/Humko Deewana Kar Gaye Movie Download [BETTER] In Hindi 1080p Experience the Passion and Drama of Akshay Kumar and Katrina Kaif.md deleted file mode 100644 index fbd7fd50c44bd24cb238f2c1fcd73a64b5f3e9a3..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Humko Deewana Kar Gaye Movie Download [BETTER] In Hindi 1080p Experience the Passion and Drama of Akshay Kumar and Katrina Kaif.md +++ /dev/null @@ -1,6 +0,0 @@ -

    iGO primo 2.4 Win CE download torrent


    DOWNLOAD ••• https://urloso.com/2uyQy9



    -
    - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/birkancelik18/chatbot/README.md b/spaces/birkancelik18/chatbot/README.md deleted file mode 100644 index a430cc201ff9f5eea3b2056d6c9d782f852936a8..0000000000000000000000000000000000000000 --- a/spaces/birkancelik18/chatbot/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Chatbot -emoji: 👀 -colorFrom: pink -colorTo: gray -sdk: gradio -sdk_version: 3.28.3 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/layers/csrc/box_iou_rotated/box_iou_rotated.h b/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/layers/csrc/box_iou_rotated/box_iou_rotated.h deleted file mode 100644 index 3bf383b8ed9b358b5313d433a9682c294dfb77e4..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/layers/csrc/box_iou_rotated/box_iou_rotated.h +++ /dev/null @@ -1,35 +0,0 @@ -// Copyright (c) Facebook, Inc. and its affiliates. -#pragma once -#include - -namespace detectron2 { - -at::Tensor box_iou_rotated_cpu( - const at::Tensor& boxes1, - const at::Tensor& boxes2); - -#if defined(WITH_CUDA) || defined(WITH_HIP) -at::Tensor box_iou_rotated_cuda( - const at::Tensor& boxes1, - const at::Tensor& boxes2); -#endif - -// Interface for Python -// inline is needed to prevent multiple function definitions when this header is -// included by different cpps -inline at::Tensor box_iou_rotated( - const at::Tensor& boxes1, - const at::Tensor& boxes2) { - assert(boxes1.device().is_cuda() == boxes2.device().is_cuda()); - if (boxes1.device().is_cuda()) { -#if defined(WITH_CUDA) || defined(WITH_HIP) - return box_iou_rotated_cuda(boxes1.contiguous(), boxes2.contiguous()); -#else - AT_ERROR("Detectron2 is not compiled with GPU support!"); -#endif - } - - return box_iou_rotated_cpu(boxes1.contiguous(), boxes2.contiguous()); -} - -} // namespace detectron2 diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/densepose/modeling/roi_heads/__init__.py b/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/densepose/modeling/roi_heads/__init__.py deleted file mode 100644 index 8403589f23ec2ffa8afafcd566ca0b0b7b2671a7..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/densepose/modeling/roi_heads/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
- -from .v1convx import DensePoseV1ConvXHead -from .deeplab import DensePoseDeepLabHead -from .registry import ROI_DENSEPOSE_HEAD_REGISTRY -from .roi_head import Decoder, DensePoseROIHeads diff --git a/spaces/cccccch/VITS-fast-fine-tuning-DingZhen/cmd_inference.py b/spaces/cccccch/VITS-fast-fine-tuning-DingZhen/cmd_inference.py deleted file mode 100644 index cfaee189e3905d5e6f0fc6c85f36fbc978cb1508..0000000000000000000000000000000000000000 --- a/spaces/cccccch/VITS-fast-fine-tuning-DingZhen/cmd_inference.py +++ /dev/null @@ -1,106 +0,0 @@ -"""该模块用于生成VITS文件 -使用方法 - -python cmd_inference.py -m 模型路径 -c 配置文件路径 -o 输出文件路径 -l 输入的语言 -t 输入文本 -s 合成目标说话人名称 - -可选参数 --ns 感情变化程度 --nsw 音素发音长度 --ls 整体语速 --on 输出文件的名称 - -""" - -from pathlib import Path -import utils -from models import SynthesizerTrn -import torch -from torch import no_grad, LongTensor -import librosa -from text import text_to_sequence, _clean_text -import commons -import scipy.io.wavfile as wavf -import os - -device = "cuda:0" if torch.cuda.is_available() else "cpu" - -language_marks = { - "Japanese": "", - "日本語": "[JA]", - "简体中文": "[ZH]", - "English": "[EN]", - "Mix": "", -} - - -def get_text(text, hps, is_symbol): - text_norm = text_to_sequence(text, hps.symbols, [] if is_symbol else hps.data.text_cleaners) - if hps.data.add_blank: - text_norm = commons.intersperse(text_norm, 0) - text_norm = LongTensor(text_norm) - return text_norm - - - -if __name__ == "__main__": - import argparse - - parser = argparse.ArgumentParser(description='vits inference') - #必须参数 - parser.add_argument('-m', '--model_path', type=str, default="logs/44k/G_0.pth", help='模型路径') - parser.add_argument('-c', '--config_path', type=str, default="configs/config.json", help='配置文件路径') - parser.add_argument('-o', '--output_path', type=str, default="output/vits", help='输出文件路径') - parser.add_argument('-l', '--language', type=str, default="日本語", help='输入的语言') - parser.add_argument('-t', '--text', type=str, help='输入文本') - parser.add_argument('-s', '--spk', type=str, help='合成目标说话人名称') - #可选参数 - parser.add_argument('-on', '--output_name', type=str, default="output", help='输出文件的名称') - parser.add_argument('-ns', '--noise_scale', type=float,default= .667,help='感情变化程度') - parser.add_argument('-nsw', '--noise_scale_w', type=float,default=0.6, help='音素发音长度') - parser.add_argument('-ls', '--length_scale', type=float,default=1, help='整体语速') - - args = parser.parse_args() - - model_path = args.model_path - config_path = args.config_path - output_dir = Path(args.output_path) - output_dir.mkdir(parents=True, exist_ok=True) - - language = args.language - text = args.text - spk = args.spk - noise_scale = args.noise_scale - noise_scale_w = args.noise_scale_w - length = args.length_scale - output_name = args.output_name - - hps = utils.get_hparams_from_file(config_path) - net_g = SynthesizerTrn( - len(hps.symbols), - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - n_speakers=hps.data.n_speakers, - **hps.model).to(device) - _ = net_g.eval() - _ = utils.load_checkpoint(model_path, net_g, None) - - speaker_ids = hps.speakers - - - if language is not None: - text = language_marks[language] + text + language_marks[language] - speaker_id = speaker_ids[spk] - stn_tst = get_text(text, hps, False) - with no_grad(): - x_tst = stn_tst.unsqueeze(0).to(device) - x_tst_lengths = LongTensor([stn_tst.size(0)]).to(device) - sid = LongTensor([speaker_id]).to(device) - audio = net_g.infer(x_tst, x_tst_lengths, sid=sid, noise_scale=noise_scale, noise_scale_w=noise_scale_w, - 
length_scale=1.0 / length)[0][0, 0].data.cpu().float().numpy() - del stn_tst, x_tst, x_tst_lengths, sid - - wavf.write(str(output_dir)+"/"+output_name+".wav",hps.data.sampling_rate,audio) - - - - \ No newline at end of file diff --git a/spaces/ceckenrode/Memory-Chat-Story-Generator-ChatGPT/app.py b/spaces/ceckenrode/Memory-Chat-Story-Generator-ChatGPT/app.py deleted file mode 100644 index 6f5e8fc60239f281eb4b9dbde9ce606028c1a02a..0000000000000000000000000000000000000000 --- a/spaces/ceckenrode/Memory-Chat-Story-Generator-ChatGPT/app.py +++ /dev/null @@ -1,148 +0,0 @@ -import gradio as gr -import os -import json -import requests - -#Streaming endpoint -API_URL = "https://api.openai.com/v1/chat/completions" #os.getenv("API_URL") + "/generate_stream" -OPENAI_API_KEY= os.environ["HF_TOKEN"] # Add a token to this space . Then copy it to the repository secret in this spaces settings panel. os.environ reads from there. -# Keys for Open AI ChatGPT API usage are created from here: https://platform.openai.com/account/api-keys - -def predict(inputs, top_p, temperature, chat_counter, chatbot=[], history=[]): #repetition_penalty, top_k - - # 1. Set up a payload - payload = { - "model": "gpt-3.5-turbo", - "messages": [{"role": "user", "content": f"{inputs}"}], - "temperature" : 1.0, - "top_p":1.0, - "n" : 1, - "stream": True, - "presence_penalty":0, - "frequency_penalty":0, - } - - # 2. Define your headers and add a key from https://platform.openai.com/account/api-keys - headers = { - "Content-Type": "application/json", - "Authorization": f"Bearer {OPENAI_API_KEY}" - } - - # 3. Create a chat counter loop that feeds [Predict next best anything based on last input and attention with memory defined by introspective attention over time] - print(f"chat_counter - {chat_counter}") - if chat_counter != 0 : - messages=[] - for data in chatbot: - temp1 = {} - temp1["role"] = "user" - temp1["content"] = data[0] - temp2 = {} - temp2["role"] = "assistant" - temp2["content"] = data[1] - messages.append(temp1) - messages.append(temp2) - temp3 = {} - temp3["role"] = "user" - temp3["content"] = inputs - messages.append(temp3) - #messages - payload = { - "model": "gpt-3.5-turbo", - "messages": messages, #[{"role": "user", "content": f"{inputs}"}], - "temperature" : temperature, #1.0, - "top_p": top_p, #1.0, - "n" : 1, - "stream": True, - "presence_penalty":0, - "frequency_penalty":0, - } - chat_counter+=1 - - # 4. POST it to OPENAI API - history.append(inputs) - print(f"payload is - {payload}") - # make a POST request to the API endpoint using the requests.post method, passing in stream=True - response = requests.post(API_URL, headers=headers, json=payload, stream=True) - #response = requests.post(API_URL, headers=headers, json=payload, stream=True) - token_counter = 0 - partial_words = "" - - # 5. 
Iterate through response lines and structure readable response - # TODO - make this parse out markdown so we can have similar interface - counter=0 - for chunk in response.iter_lines(): - #Skipping first chunk - if counter == 0: - counter+=1 - continue - #counter+=1 - # check whether each line is non-empty - if chunk.decode() : - chunk = chunk.decode() - # decode each line as response data is in bytes - if len(chunk) > 12 and "content" in json.loads(chunk[6:])['choices'][0]['delta']: - #if len(json.loads(chunk.decode()[6:])['choices'][0]["delta"]) == 0: - # break - partial_words = partial_words + json.loads(chunk[6:])['choices'][0]["delta"]["content"] - if token_counter == 0: - history.append(" " + partial_words) - else: - history[-1] = partial_words - chat = [(history[i], history[i + 1]) for i in range(0, len(history) - 1, 2) ] # convert to tuples of list - token_counter+=1 - yield chat, history, chat_counter # resembles {chatbot: chat, state: history} - - -def reset_textbox(): - return gr.update(value='') - -title = """

    Memory Chat Story Generator ChatGPT

    """ -description = """ - -## ChatGPT Datasets 📚 -- WebText -- Common Crawl -- BooksCorpus -- English Wikipedia -- Toronto Books Corpus -- OpenWebText - -## ChatGPT Datasets - Details 📚 -- **WebText:** A dataset of web pages crawled from domains on the Alexa top 5,000 list. This dataset was used to pretrain GPT-2. - - [WebText: A Large-Scale Unsupervised Text Corpus by Radford et al.](https://paperswithcode.com/dataset/webtext) -- **Common Crawl:** A dataset of web pages from a variety of domains, which is updated regularly. This dataset was used to pretrain GPT-3. - - [Language Models are Few-Shot Learners](https://paperswithcode.com/dataset/common-crawl) by Brown et al. -- **BooksCorpus:** A dataset of over 11,000 books from a variety of genres. - - [Scalable Methods for 8 Billion Token Language Modeling](https://paperswithcode.com/dataset/bookcorpus) by Zhu et al. -- **English Wikipedia:** A dump of the English-language Wikipedia as of 2018, with articles from 2001-2017. - - [Improving Language Understanding by Generative Pre-Training](https://huggingface.co/spaces/awacke1/WikipediaUltimateAISearch?logs=build) Space for Wikipedia Search -- **Toronto Books Corpus:** A dataset of over 7,000 books from a variety of genres, collected by the University of Toronto. - - [Massively Multilingual Sentence Embeddings for Zero-Shot Cross-Lingual Transfer and Beyond](https://paperswithcode.com/dataset/bookcorpus) by Schwenk and Douze. -- **OpenWebText:** A dataset of web pages that were filtered to remove content that was likely to be low-quality or spammy. This dataset was used to pretrain GPT-3. - - [Language Models are Few-Shot Learners](https://paperswithcode.com/dataset/openwebtext) by Brown et al. - - """ - -# 6. Use Gradio to pull it all together -with gr.Blocks(css = """#col_container {width: 1000px; margin-left: auto; margin-right: auto;} - #chatbot {height: 520px; overflow: auto;}""") as demo: - gr.HTML(title) - gr.HTML('''
    Duplicate SpaceDuplicate the Space and run securely with your OpenAI API Key
    ''') - with gr.Column(elem_id = "col_container"): - chatbot = gr.Chatbot(elem_id='chatbot') #c - inputs = gr.Textbox(placeholder= "Hi there!", label= "Type an input and press Enter") #t - state = gr.State([]) #s - b1 = gr.Button() - - with gr.Accordion("Parameters", open=False): - top_p = gr.Slider( minimum=-0, maximum=1.0, value=1.0, step=0.05, interactive=True, label="Top-p (nucleus sampling)",) - temperature = gr.Slider( minimum=-0, maximum=5.0, value=1.0, step=0.1, interactive=True, label="Temperature",) - chat_counter = gr.Number(value=0, visible=False, precision=0) - - inputs.submit( predict, [inputs, top_p, temperature,chat_counter, chatbot, state], [chatbot, state, chat_counter],) - b1.click( predict, [inputs, top_p, temperature, chat_counter, chatbot, state], [chatbot, state, chat_counter],) - b1.click(reset_textbox, [], [inputs]) - inputs.submit(reset_textbox, [], [inputs]) - - gr.Markdown(description) - demo.queue().launch(debug=True) diff --git a/spaces/chasemcdo/hf_localai/pkg/utils/uri.go b/spaces/chasemcdo/hf_localai/pkg/utils/uri.go deleted file mode 100644 index 95527457ac7485ff496709186a89c5d435e7b72a..0000000000000000000000000000000000000000 --- a/spaces/chasemcdo/hf_localai/pkg/utils/uri.go +++ /dev/null @@ -1,59 +0,0 @@ -package utils - -import ( - "fmt" - "io/ioutil" - "net/http" - "strings" -) - -const ( - githubURI = "github:" -) - -func GetURI(url string, f func(url string, i []byte) error) error { - if strings.HasPrefix(url, githubURI) { - parts := strings.Split(url, ":") - repoParts := strings.Split(parts[1], "@") - branch := "main" - - if len(repoParts) > 1 { - branch = repoParts[1] - } - - repoPath := strings.Split(repoParts[0], "/") - org := repoPath[0] - project := repoPath[1] - projectPath := strings.Join(repoPath[2:], "/") - - url = fmt.Sprintf("https://raw.githubusercontent.com/%s/%s/%s/%s", org, project, branch, projectPath) - } - - if strings.HasPrefix(url, "file://") { - rawURL := strings.TrimPrefix(url, "file://") - // Read the response body - body, err := ioutil.ReadFile(rawURL) - if err != nil { - return err - } - - // Unmarshal YAML data into a struct - return f(url, body) - } - - // Send a GET request to the URL - response, err := http.Get(url) - if err != nil { - return err - } - defer response.Body.Close() - - // Read the response body - body, err := ioutil.ReadAll(response.Body) - if err != nil { - return err - } - - // Unmarshal YAML data into a struct - return f(url, body) -} diff --git a/spaces/chendl/compositional_test/transformers/examples/legacy/seq2seq/run_eval_search.py b/spaces/chendl/compositional_test/transformers/examples/legacy/seq2seq/run_eval_search.py deleted file mode 100644 index 9b5debfb2795eeace43c95153a04df33f5011c2b..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/transformers/examples/legacy/seq2seq/run_eval_search.py +++ /dev/null @@ -1,158 +0,0 @@ -#!/usr/bin/env python -# Copyright 2020 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -import argparse -import itertools -import operator -import sys -from collections import OrderedDict - -from run_eval import datetime_now, run_generate - -from utils import ROUGE_KEYS - - -# A table of supported tasks and the list of scores in the order of importance to be sorted by. -# To add a new task, simply list the score names that `run_eval.run_generate()` returns -task_score_names = { - "translation": ["bleu"], - "summarization": ROUGE_KEYS, -} - - -def parse_search_arg(search): - groups = search.split() - entries = dict((g.split("=") for g in groups)) - entry_names = list(entries.keys()) - sets = [[f"--{k} {v}" for v in vs.split(":")] for k, vs in entries.items()] - matrix = [list(x) for x in itertools.product(*sets)] - return matrix, entry_names - - -def run_search(): - """ - Run parametric search over the desired hparam space with help of ``run_eval.py``. - - All the arguments except ``--search`` are passed to ``run_eval.py`` as is. The values inside of "--search" are parsed, reformatted and fed to ``run_eval.py`` as additional args. - - The format for the ``--search`` value is a simple string with hparams and colon separated values to try, e.g.: - ``` - --search "num_beams=5:10 length_penalty=0.8:1.0:1.2 early_stopping=true:false" - ``` - which will generate ``12`` ``(2*3*2)`` searches for a product of each hparam. For example the example that was just used will invoke ``run_eval.py`` repeatedly with: - - ``` - --num_beams 5 --length_penalty 0.8 --early_stopping true - --num_beams 5 --length_penalty 0.8 --early_stopping false - [...] - --num_beams 10 --length_penalty 1.2 --early_stopping false - ``` - - On completion, this function prints a markdown table of the results sorted by the best BLEU score and the winning arguments. - - - """ - prog = sys.argv[0] - - parser = argparse.ArgumentParser( - usage=( - "\n\nImportant: this script accepts all arguments `run_eval.py` accepts and then a few extra, therefore" - " refer to `run_eval.py -h` for the complete list." - ) - ) - parser.add_argument( - "--search", - type=str, - required=False, - help='param space to search, e.g. "num_beams=5:10 length_penalty=0.8:1.0:1.2"', - ) - parser.add_argument( - "--bs", type=int, default=8, required=False, help="initial batch size (may get reduced if it's too big)" - ) - parser.add_argument("--task", type=str, help="used for task_specific_params + metrics") - parser.add_argument( - "--info", - nargs="?", - type=str, - const=datetime_now(), - help=( - "add custom notes to be printed before the results table. If no value is passed, the current datetime" - " string will be used." 
- ), - ) - args, args_main = parser.parse_known_args() - # we share some of the args - args_main.extend(["--task", args.task]) - args_normal = [prog] + args_main - - # to support variations like translation_en_to_de" - task = "translation" if "translation" in args.task else "summarization" - - matrix, col_names = parse_search_arg(args.search) - col_names[0:0] = task_score_names[task] # score cols first - col_widths = {col: len(str(col)) for col in col_names} - results = [] - for r in matrix: - hparams = dict((x.replace("--", "").split() for x in r)) - args_exp = " ".join(r).split() - args_exp.extend(["--bs", str(args.bs)]) # in case we need to reduce its size due to CUDA OOM - sys.argv = args_normal + args_exp - - # XXX: need to trap CUDA OOM and lower args.bs if that happens and retry - - scores = run_generate(verbose=False) - # make sure scores are first in the table - result = OrderedDict() - for score in task_score_names[task]: - result[score] = scores[score] - result.update(hparams) - results.append(result) - - # find widest entries - for k, v in result.items(): - l = len(str(v)) - if l > col_widths[k]: - col_widths[k] = l - - results_sorted = sorted(results, key=operator.itemgetter(*task_score_names[task]), reverse=True) - print(" | ".join([f"{col:{col_widths[col]}}" for col in col_names])) - print(" | ".join([f"{'-'*col_widths[col]}" for col in col_names])) - for row in results_sorted: - print(" | ".join([f"{row[col]:{col_widths[col]}}" for col in col_names])) - - best = results_sorted[0] - for score in task_score_names[task]: - del best[score] - best_args = [f"--{k} {v}" for k, v in best.items()] - dyn_args = ["--bs", str(args.bs)] - if args.info: - print(f"\nInfo: {args.info}") - print("\nBest score args:") - print(" ".join(args_main + best_args + dyn_args)) - - return results_sorted - - -if __name__ == "__main__": - # Usage: - # [normal-run_eval_search.py cmd plus] \ - # --search="num_beams=1:5:10 length_penalty=0.8:1:1.2 early_stopping=true:false" - # - # Example: - # PYTHONPATH="src:examples/seq2seq" python examples/seq2seq/run_eval_search.py $MODEL_NAME \ - # $DATA_DIR/val.source $SAVE_DIR/test_translations.txt --reference_path $DATA_DIR/val.target \ - # --score_path $SAVE_DIR/test_bleu.json --bs $BS --task translation \ - # --search="num_beams=1:5:10 length_penalty=0.8:1:1.2 early_stopping=true:false" - run_search() diff --git a/spaces/chendl/compositional_test/transformers/examples/research_projects/codeparrot/scripts/initialize_model.py b/spaces/chendl/compositional_test/transformers/examples/research_projects/codeparrot/scripts/initialize_model.py deleted file mode 100644 index 6bf028688f12627b23f5fb2236ad403d7c9e6442..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/transformers/examples/research_projects/codeparrot/scripts/initialize_model.py +++ /dev/null @@ -1,27 +0,0 @@ -from arguments import InitializationArguments - -from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer, HfArgumentParser - - -# Configuration -parser = HfArgumentParser(InitializationArguments) -args = parser.parse_args() - -# Load codeparrot tokenizer trained for Python code tokenization -tokenizer = AutoTokenizer.from_pretrained(args.tokenizer_name) - -# Config: "scale_attn_by_layer_idx" and "reorder_and_upcast_attn" are Mistral stability tweaks -config_kwargs = { - "vocab_size": len(tokenizer), - "scale_attn_by_inverse_layer_idx": True, - "reorder_and_upcast_attn": True, -} - -# Load model config (GPT-2 large in this case) -config = 
AutoConfig.from_pretrained(args.config_name, **config_kwargs) - -# Initialize new model with config -model = AutoModelForCausalLM.from_config(config) - -# Save model to the hub -model.save_pretrained(args.model_name, push_to_hub=args.push_to_hub) diff --git a/spaces/chenxiYan/ChatHaruhi-OpenAI/README.md b/spaces/chenxiYan/ChatHaruhi-OpenAI/README.md deleted file mode 100644 index bdb42a10257bf11a94079be78c222248c3d596ff..0000000000000000000000000000000000000000 --- a/spaces/chenxiYan/ChatHaruhi-OpenAI/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Haruhi -emoji: 💻 -colorFrom: green -colorTo: pink -sdk: gradio -sdk_version: 3.41.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/aiohttp/multipart.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/aiohttp/multipart.py deleted file mode 100644 index 73801f459aa274ca6aae7bf28a2c5bb3bf075d11..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/aiohttp/multipart.py +++ /dev/null @@ -1,961 +0,0 @@ -import base64 -import binascii -import json -import re -import uuid -import warnings -import zlib -from collections import deque -from types import TracebackType -from typing import ( - TYPE_CHECKING, - Any, - AsyncIterator, - Deque, - Dict, - Iterator, - List, - Mapping, - Optional, - Sequence, - Tuple, - Type, - Union, - cast, -) -from urllib.parse import parse_qsl, unquote, urlencode - -from multidict import CIMultiDict, CIMultiDictProxy, MultiMapping - -from .hdrs import ( - CONTENT_DISPOSITION, - CONTENT_ENCODING, - CONTENT_LENGTH, - CONTENT_TRANSFER_ENCODING, - CONTENT_TYPE, -) -from .helpers import CHAR, TOKEN, parse_mimetype, reify -from .http import HeadersParser -from .payload import ( - JsonPayload, - LookupError, - Order, - Payload, - StringPayload, - get_payload, - payload_type, -) -from .streams import StreamReader - -__all__ = ( - "MultipartReader", - "MultipartWriter", - "BodyPartReader", - "BadContentDispositionHeader", - "BadContentDispositionParam", - "parse_content_disposition", - "content_disposition_filename", -) - - -if TYPE_CHECKING: # pragma: no cover - from .client_reqrep import ClientResponse - - -class BadContentDispositionHeader(RuntimeWarning): - pass - - -class BadContentDispositionParam(RuntimeWarning): - pass - - -def parse_content_disposition( - header: Optional[str], -) -> Tuple[Optional[str], Dict[str, str]]: - def is_token(string: str) -> bool: - return bool(string) and TOKEN >= set(string) - - def is_quoted(string: str) -> bool: - return string[0] == string[-1] == '"' - - def is_rfc5987(string: str) -> bool: - return is_token(string) and string.count("'") == 2 - - def is_extended_param(string: str) -> bool: - return string.endswith("*") - - def is_continuous_param(string: str) -> bool: - pos = string.find("*") + 1 - if not pos: - return False - substring = string[pos:-1] if string.endswith("*") else string[pos:] - return substring.isdigit() - - def unescape(text: str, *, chars: str = "".join(map(re.escape, CHAR))) -> str: - return re.sub(f"\\\\([{chars}])", "\\1", text) - - if not header: - return None, {} - - disptype, *parts = header.split(";") - if not is_token(disptype): - warnings.warn(BadContentDispositionHeader(header)) - return None, {} - - params: Dict[str, str] = {} - while parts: - item = parts.pop(0) - - if "=" not in item: - 
warnings.warn(BadContentDispositionHeader(header)) - return None, {} - - key, value = item.split("=", 1) - key = key.lower().strip() - value = value.lstrip() - - if key in params: - warnings.warn(BadContentDispositionHeader(header)) - return None, {} - - if not is_token(key): - warnings.warn(BadContentDispositionParam(item)) - continue - - elif is_continuous_param(key): - if is_quoted(value): - value = unescape(value[1:-1]) - elif not is_token(value): - warnings.warn(BadContentDispositionParam(item)) - continue - - elif is_extended_param(key): - if is_rfc5987(value): - encoding, _, value = value.split("'", 2) - encoding = encoding or "utf-8" - else: - warnings.warn(BadContentDispositionParam(item)) - continue - - try: - value = unquote(value, encoding, "strict") - except UnicodeDecodeError: # pragma: nocover - warnings.warn(BadContentDispositionParam(item)) - continue - - else: - failed = True - if is_quoted(value): - failed = False - value = unescape(value[1:-1].lstrip("\\/")) - elif is_token(value): - failed = False - elif parts: - # maybe just ; in filename, in any case this is just - # one case fix, for proper fix we need to redesign parser - _value = f"{value};{parts[0]}" - if is_quoted(_value): - parts.pop(0) - value = unescape(_value[1:-1].lstrip("\\/")) - failed = False - - if failed: - warnings.warn(BadContentDispositionHeader(header)) - return None, {} - - params[key] = value - - return disptype.lower(), params - - -def content_disposition_filename( - params: Mapping[str, str], name: str = "filename" -) -> Optional[str]: - name_suf = "%s*" % name - if not params: - return None - elif name_suf in params: - return params[name_suf] - elif name in params: - return params[name] - else: - parts = [] - fnparams = sorted( - (key, value) for key, value in params.items() if key.startswith(name_suf) - ) - for num, (key, value) in enumerate(fnparams): - _, tail = key.split("*", 1) - if tail.endswith("*"): - tail = tail[:-1] - if tail == str(num): - parts.append(value) - else: - break - if not parts: - return None - value = "".join(parts) - if "'" in value: - encoding, _, value = value.split("'", 2) - encoding = encoding or "utf-8" - return unquote(value, encoding, "strict") - return value - - -class MultipartResponseWrapper: - """Wrapper around the MultipartReader. - - It takes care about - underlying connection and close it when it needs in. - """ - - def __init__( - self, - resp: "ClientResponse", - stream: "MultipartReader", - ) -> None: - self.resp = resp - self.stream = stream - - def __aiter__(self) -> "MultipartResponseWrapper": - return self - - async def __anext__( - self, - ) -> Union["MultipartReader", "BodyPartReader"]: - part = await self.next() - if part is None: - raise StopAsyncIteration - return part - - def at_eof(self) -> bool: - """Returns True when all response data had been read.""" - return self.resp.content.at_eof() - - async def next( - self, - ) -> Optional[Union["MultipartReader", "BodyPartReader"]]: - """Emits next multipart reader object.""" - item = await self.stream.next() - if self.stream.at_eof(): - await self.release() - return item - - async def release(self) -> None: - """Release the connection gracefully. - - All remaining content is read to the void. 
- """ - await self.resp.release() - - -class BodyPartReader: - """Multipart reader for single body part.""" - - chunk_size = 8192 - - def __init__( - self, boundary: bytes, headers: "CIMultiDictProxy[str]", content: StreamReader - ) -> None: - self.headers = headers - self._boundary = boundary - self._content = content - self._at_eof = False - length = self.headers.get(CONTENT_LENGTH, None) - self._length = int(length) if length is not None else None - self._read_bytes = 0 - # TODO: typeing.Deque is not supported by Python 3.5 - self._unread: Deque[bytes] = deque() - self._prev_chunk: Optional[bytes] = None - self._content_eof = 0 - self._cache: Dict[str, Any] = {} - - def __aiter__(self) -> AsyncIterator["BodyPartReader"]: - return self # type: ignore[return-value] - - async def __anext__(self) -> bytes: - part = await self.next() - if part is None: - raise StopAsyncIteration - return part - - async def next(self) -> Optional[bytes]: - item = await self.read() - if not item: - return None - return item - - async def read(self, *, decode: bool = False) -> bytes: - """Reads body part data. - - decode: Decodes data following by encoding - method from Content-Encoding header. If it missed - data remains untouched - """ - if self._at_eof: - return b"" - data = bytearray() - while not self._at_eof: - data.extend(await self.read_chunk(self.chunk_size)) - if decode: - return self.decode(data) - return data - - async def read_chunk(self, size: int = chunk_size) -> bytes: - """Reads body part content chunk of the specified size. - - size: chunk size - """ - if self._at_eof: - return b"" - if self._length: - chunk = await self._read_chunk_from_length(size) - else: - chunk = await self._read_chunk_from_stream(size) - - self._read_bytes += len(chunk) - if self._read_bytes == self._length: - self._at_eof = True - if self._at_eof: - clrf = await self._content.readline() - assert ( - b"\r\n" == clrf - ), "reader did not read all the data or it is malformed" - return chunk - - async def _read_chunk_from_length(self, size: int) -> bytes: - # Reads body part content chunk of the specified size. - # The body part must has Content-Length header with proper value. - assert self._length is not None, "Content-Length required for chunked read" - chunk_size = min(size, self._length - self._read_bytes) - chunk = await self._content.read(chunk_size) - return chunk - - async def _read_chunk_from_stream(self, size: int) -> bytes: - # Reads content chunk of body part with unknown length. - # The Content-Length header for body part is not necessary. 
- assert ( - size >= len(self._boundary) + 2 - ), "Chunk size must be greater or equal than boundary length + 2" - first_chunk = self._prev_chunk is None - if first_chunk: - self._prev_chunk = await self._content.read(size) - - chunk = await self._content.read(size) - self._content_eof += int(self._content.at_eof()) - assert self._content_eof < 3, "Reading after EOF" - assert self._prev_chunk is not None - window = self._prev_chunk + chunk - sub = b"\r\n" + self._boundary - if first_chunk: - idx = window.find(sub) - else: - idx = window.find(sub, max(0, len(self._prev_chunk) - len(sub))) - if idx >= 0: - # pushing boundary back to content - with warnings.catch_warnings(): - warnings.filterwarnings("ignore", category=DeprecationWarning) - self._content.unread_data(window[idx:]) - if size > idx: - self._prev_chunk = self._prev_chunk[:idx] - chunk = window[len(self._prev_chunk) : idx] - if not chunk: - self._at_eof = True - result = self._prev_chunk - self._prev_chunk = chunk - return result - - async def readline(self) -> bytes: - """Reads body part by line by line.""" - if self._at_eof: - return b"" - - if self._unread: - line = self._unread.popleft() - else: - line = await self._content.readline() - - if line.startswith(self._boundary): - # the very last boundary may not come with \r\n, - # so set single rules for everyone - sline = line.rstrip(b"\r\n") - boundary = self._boundary - last_boundary = self._boundary + b"--" - # ensure that we read exactly the boundary, not something alike - if sline == boundary or sline == last_boundary: - self._at_eof = True - self._unread.append(line) - return b"" - else: - next_line = await self._content.readline() - if next_line.startswith(self._boundary): - line = line[:-2] # strip CRLF but only once - self._unread.append(next_line) - - return line - - async def release(self) -> None: - """Like read(), but reads all the data to the void.""" - if self._at_eof: - return - while not self._at_eof: - await self.read_chunk(self.chunk_size) - - async def text(self, *, encoding: Optional[str] = None) -> str: - """Like read(), but assumes that body part contains text data.""" - data = await self.read(decode=True) - # see https://www.w3.org/TR/html5/forms.html#multipart/form-data-encoding-algorithm # NOQA - # and https://dvcs.w3.org/hg/xhr/raw-file/tip/Overview.html#dom-xmlhttprequest-send # NOQA - encoding = encoding or self.get_charset(default="utf-8") - return data.decode(encoding) - - async def json(self, *, encoding: Optional[str] = None) -> Optional[Dict[str, Any]]: - """Like read(), but assumes that body parts contains JSON data.""" - data = await self.read(decode=True) - if not data: - return None - encoding = encoding or self.get_charset(default="utf-8") - return cast(Dict[str, Any], json.loads(data.decode(encoding))) - - async def form(self, *, encoding: Optional[str] = None) -> List[Tuple[str, str]]: - """Like read(), but assumes that body parts contain form urlencoded data.""" - data = await self.read(decode=True) - if not data: - return [] - if encoding is not None: - real_encoding = encoding - else: - real_encoding = self.get_charset(default="utf-8") - return parse_qsl( - data.rstrip().decode(real_encoding), - keep_blank_values=True, - encoding=real_encoding, - ) - - def at_eof(self) -> bool: - """Returns True if the boundary was reached or False otherwise.""" - return self._at_eof - - def decode(self, data: bytes) -> bytes: - """Decodes data. - - Decoding is done according the specified Content-Encoding - or Content-Transfer-Encoding headers value. 
- """ - if CONTENT_TRANSFER_ENCODING in self.headers: - data = self._decode_content_transfer(data) - if CONTENT_ENCODING in self.headers: - return self._decode_content(data) - return data - - def _decode_content(self, data: bytes) -> bytes: - encoding = self.headers.get(CONTENT_ENCODING, "").lower() - - if encoding == "deflate": - return zlib.decompress(data, -zlib.MAX_WBITS) - elif encoding == "gzip": - return zlib.decompress(data, 16 + zlib.MAX_WBITS) - elif encoding == "identity": - return data - else: - raise RuntimeError(f"unknown content encoding: {encoding}") - - def _decode_content_transfer(self, data: bytes) -> bytes: - encoding = self.headers.get(CONTENT_TRANSFER_ENCODING, "").lower() - - if encoding == "base64": - return base64.b64decode(data) - elif encoding == "quoted-printable": - return binascii.a2b_qp(data) - elif encoding in ("binary", "8bit", "7bit"): - return data - else: - raise RuntimeError( - "unknown content transfer encoding: {}" "".format(encoding) - ) - - def get_charset(self, default: str) -> str: - """Returns charset parameter from Content-Type header or default.""" - ctype = self.headers.get(CONTENT_TYPE, "") - mimetype = parse_mimetype(ctype) - return mimetype.parameters.get("charset", default) - - @reify - def name(self) -> Optional[str]: - """Returns name specified in Content-Disposition header. - - If the header is missing or malformed, returns None. - """ - _, params = parse_content_disposition(self.headers.get(CONTENT_DISPOSITION)) - return content_disposition_filename(params, "name") - - @reify - def filename(self) -> Optional[str]: - """Returns filename specified in Content-Disposition header. - - Returns None if the header is missing or malformed. - """ - _, params = parse_content_disposition(self.headers.get(CONTENT_DISPOSITION)) - return content_disposition_filename(params, "filename") - - -@payload_type(BodyPartReader, order=Order.try_first) -class BodyPartReaderPayload(Payload): - def __init__(self, value: BodyPartReader, *args: Any, **kwargs: Any) -> None: - super().__init__(value, *args, **kwargs) - - params: Dict[str, str] = {} - if value.name is not None: - params["name"] = value.name - if value.filename is not None: - params["filename"] = value.filename - - if params: - self.set_content_disposition("attachment", True, **params) - - async def write(self, writer: Any) -> None: - field = self._value - chunk = await field.read_chunk(size=2**16) - while chunk: - await writer.write(field.decode(chunk)) - chunk = await field.read_chunk(size=2**16) - - -class MultipartReader: - """Multipart body reader.""" - - #: Response wrapper, used when multipart readers constructs from response. - response_wrapper_cls = MultipartResponseWrapper - #: Multipart reader class, used to handle multipart/* body parts. - #: None points to type(self) - multipart_reader_cls = None - #: Body part reader class for non multipart/* content types. 
- part_reader_cls = BodyPartReader - - def __init__(self, headers: Mapping[str, str], content: StreamReader) -> None: - self.headers = headers - self._boundary = ("--" + self._get_boundary()).encode() - self._content = content - self._last_part: Optional[Union["MultipartReader", BodyPartReader]] = None - self._at_eof = False - self._at_bof = True - self._unread: List[bytes] = [] - - def __aiter__( - self, - ) -> AsyncIterator["BodyPartReader"]: - return self # type: ignore[return-value] - - async def __anext__( - self, - ) -> Optional[Union["MultipartReader", BodyPartReader]]: - part = await self.next() - if part is None: - raise StopAsyncIteration - return part - - @classmethod - def from_response( - cls, - response: "ClientResponse", - ) -> MultipartResponseWrapper: - """Constructs reader instance from HTTP response. - - :param response: :class:`~aiohttp.client.ClientResponse` instance - """ - obj = cls.response_wrapper_cls( - response, cls(response.headers, response.content) - ) - return obj - - def at_eof(self) -> bool: - """Returns True if the final boundary was reached, false otherwise.""" - return self._at_eof - - async def next( - self, - ) -> Optional[Union["MultipartReader", BodyPartReader]]: - """Emits the next multipart body part.""" - # So, if we're at BOF, we need to skip till the boundary. - if self._at_eof: - return None - await self._maybe_release_last_part() - if self._at_bof: - await self._read_until_first_boundary() - self._at_bof = False - else: - await self._read_boundary() - if self._at_eof: # we just read the last boundary, nothing to do there - return None - self._last_part = await self.fetch_next_part() - return self._last_part - - async def release(self) -> None: - """Reads all the body parts to the void till the final boundary.""" - while not self._at_eof: - item = await self.next() - if item is None: - break - await item.release() - - async def fetch_next_part( - self, - ) -> Union["MultipartReader", BodyPartReader]: - """Returns the next body part reader.""" - headers = await self._read_headers() - return self._get_part_reader(headers) - - def _get_part_reader( - self, - headers: "CIMultiDictProxy[str]", - ) -> Union["MultipartReader", BodyPartReader]: - """Dispatches the response by the `Content-Type` header. - - Returns a suitable reader instance. 
- - :param dict headers: Response headers - """ - ctype = headers.get(CONTENT_TYPE, "") - mimetype = parse_mimetype(ctype) - - if mimetype.type == "multipart": - if self.multipart_reader_cls is None: - return type(self)(headers, self._content) - return self.multipart_reader_cls(headers, self._content) - else: - return self.part_reader_cls(self._boundary, headers, self._content) - - def _get_boundary(self) -> str: - mimetype = parse_mimetype(self.headers[CONTENT_TYPE]) - - assert mimetype.type == "multipart", "multipart/* content type expected" - - if "boundary" not in mimetype.parameters: - raise ValueError( - "boundary missed for Content-Type: %s" % self.headers[CONTENT_TYPE] - ) - - boundary = mimetype.parameters["boundary"] - if len(boundary) > 70: - raise ValueError("boundary %r is too long (70 chars max)" % boundary) - - return boundary - - async def _readline(self) -> bytes: - if self._unread: - return self._unread.pop() - return await self._content.readline() - - async def _read_until_first_boundary(self) -> None: - while True: - chunk = await self._readline() - if chunk == b"": - raise ValueError( - "Could not find starting boundary %r" % (self._boundary) - ) - chunk = chunk.rstrip() - if chunk == self._boundary: - return - elif chunk == self._boundary + b"--": - self._at_eof = True - return - - async def _read_boundary(self) -> None: - chunk = (await self._readline()).rstrip() - if chunk == self._boundary: - pass - elif chunk == self._boundary + b"--": - self._at_eof = True - epilogue = await self._readline() - next_line = await self._readline() - - # the epilogue is expected and then either the end of input or the - # parent multipart boundary, if the parent boundary is found then - # it should be marked as unread and handed to the parent for - # processing - if next_line[:2] == b"--": - self._unread.append(next_line) - # otherwise the request is likely missing an epilogue and both - # lines should be passed to the parent for processing - # (this handles the old behavior gracefully) - else: - self._unread.extend([next_line, epilogue]) - else: - raise ValueError(f"Invalid boundary {chunk!r}, expected {self._boundary!r}") - - async def _read_headers(self) -> "CIMultiDictProxy[str]": - lines = [b""] - while True: - chunk = await self._content.readline() - chunk = chunk.strip() - lines.append(chunk) - if not chunk: - break - parser = HeadersParser() - headers, raw_headers = parser.parse_headers(lines) - return headers - - async def _maybe_release_last_part(self) -> None: - """Ensures that the last read body part is read completely.""" - if self._last_part is not None: - if not self._last_part.at_eof(): - await self._last_part.release() - self._unread.extend(self._last_part._unread) - self._last_part = None - - -_Part = Tuple[Payload, str, str] - - -class MultipartWriter(Payload): - """Multipart body writer.""" - - def __init__(self, subtype: str = "mixed", boundary: Optional[str] = None) -> None: - boundary = boundary if boundary is not None else uuid.uuid4().hex - # The underlying Payload API demands a str (utf-8), not bytes, - # so we need to ensure we don't lose anything during conversion. - # As a result, require the boundary to be ASCII only. - # In both situations. 
- - try: - self._boundary = boundary.encode("ascii") - except UnicodeEncodeError: - raise ValueError("boundary should contain ASCII only chars") from None - ctype = f"multipart/{subtype}; boundary={self._boundary_value}" - - super().__init__(None, content_type=ctype) - - self._parts: List[_Part] = [] - - def __enter__(self) -> "MultipartWriter": - return self - - def __exit__( - self, - exc_type: Optional[Type[BaseException]], - exc_val: Optional[BaseException], - exc_tb: Optional[TracebackType], - ) -> None: - pass - - def __iter__(self) -> Iterator[_Part]: - return iter(self._parts) - - def __len__(self) -> int: - return len(self._parts) - - def __bool__(self) -> bool: - return True - - _valid_tchar_regex = re.compile(rb"\A[!#$%&'*+\-.^_`|~\w]+\Z") - _invalid_qdtext_char_regex = re.compile(rb"[\x00-\x08\x0A-\x1F\x7F]") - - @property - def _boundary_value(self) -> str: - """Wrap boundary parameter value in quotes, if necessary. - - Reads self.boundary and returns a unicode sting. - """ - # Refer to RFCs 7231, 7230, 5234. - # - # parameter = token "=" ( token / quoted-string ) - # token = 1*tchar - # quoted-string = DQUOTE *( qdtext / quoted-pair ) DQUOTE - # qdtext = HTAB / SP / %x21 / %x23-5B / %x5D-7E / obs-text - # obs-text = %x80-FF - # quoted-pair = "\" ( HTAB / SP / VCHAR / obs-text ) - # tchar = "!" / "#" / "$" / "%" / "&" / "'" / "*" - # / "+" / "-" / "." / "^" / "_" / "`" / "|" / "~" - # / DIGIT / ALPHA - # ; any VCHAR, except delimiters - # VCHAR = %x21-7E - value = self._boundary - if re.match(self._valid_tchar_regex, value): - return value.decode("ascii") # cannot fail - - if re.search(self._invalid_qdtext_char_regex, value): - raise ValueError("boundary value contains invalid characters") - - # escape %x5C and %x22 - quoted_value_content = value.replace(b"\\", b"\\\\") - quoted_value_content = quoted_value_content.replace(b'"', b'\\"') - - return '"' + quoted_value_content.decode("ascii") + '"' - - @property - def boundary(self) -> str: - return self._boundary.decode("ascii") - - def append(self, obj: Any, headers: Optional[MultiMapping[str]] = None) -> Payload: - if headers is None: - headers = CIMultiDict() - - if isinstance(obj, Payload): - obj.headers.update(headers) - return self.append_payload(obj) - else: - try: - payload = get_payload(obj, headers=headers) - except LookupError: - raise TypeError("Cannot create payload from %r" % obj) - else: - return self.append_payload(payload) - - def append_payload(self, payload: Payload) -> Payload: - """Adds a new body part to multipart writer.""" - # compression - encoding: Optional[str] = payload.headers.get( - CONTENT_ENCODING, - "", - ).lower() - if encoding and encoding not in ("deflate", "gzip", "identity"): - raise RuntimeError(f"unknown content encoding: {encoding}") - if encoding == "identity": - encoding = None - - # te encoding - te_encoding: Optional[str] = payload.headers.get( - CONTENT_TRANSFER_ENCODING, - "", - ).lower() - if te_encoding not in ("", "base64", "quoted-printable", "binary"): - raise RuntimeError( - "unknown content transfer encoding: {}" "".format(te_encoding) - ) - if te_encoding == "binary": - te_encoding = None - - # size - size = payload.size - if size is not None and not (encoding or te_encoding): - payload.headers[CONTENT_LENGTH] = str(size) - - self._parts.append((payload, encoding, te_encoding)) # type: ignore[arg-type] - return payload - - def append_json( - self, obj: Any, headers: Optional[MultiMapping[str]] = None - ) -> Payload: - """Helper to append JSON part.""" - if headers is None: - 
headers = CIMultiDict() - - return self.append_payload(JsonPayload(obj, headers=headers)) - - def append_form( - self, - obj: Union[Sequence[Tuple[str, str]], Mapping[str, str]], - headers: Optional[MultiMapping[str]] = None, - ) -> Payload: - """Helper to append form urlencoded part.""" - assert isinstance(obj, (Sequence, Mapping)) - - if headers is None: - headers = CIMultiDict() - - if isinstance(obj, Mapping): - obj = list(obj.items()) - data = urlencode(obj, doseq=True) - - return self.append_payload( - StringPayload( - data, headers=headers, content_type="application/x-www-form-urlencoded" - ) - ) - - @property - def size(self) -> Optional[int]: - """Size of the payload.""" - total = 0 - for part, encoding, te_encoding in self._parts: - if encoding or te_encoding or part.size is None: - return None - - total += int( - 2 - + len(self._boundary) - + 2 - + part.size # b'--'+self._boundary+b'\r\n' - + len(part._binary_headers) - + 2 # b'\r\n' - ) - - total += 2 + len(self._boundary) + 4 # b'--'+self._boundary+b'--\r\n' - return total - - async def write(self, writer: Any, close_boundary: bool = True) -> None: - """Write body.""" - for part, encoding, te_encoding in self._parts: - await writer.write(b"--" + self._boundary + b"\r\n") - await writer.write(part._binary_headers) - - if encoding or te_encoding: - w = MultipartPayloadWriter(writer) - if encoding: - w.enable_compression(encoding) - if te_encoding: - w.enable_encoding(te_encoding) - await part.write(w) # type: ignore[arg-type] - await w.write_eof() - else: - await part.write(writer) - - await writer.write(b"\r\n") - - if close_boundary: - await writer.write(b"--" + self._boundary + b"--\r\n") - - -class MultipartPayloadWriter: - def __init__(self, writer: Any) -> None: - self._writer = writer - self._encoding: Optional[str] = None - self._compress: Any = None - self._encoding_buffer: Optional[bytearray] = None - - def enable_encoding(self, encoding: str) -> None: - if encoding == "base64": - self._encoding = encoding - self._encoding_buffer = bytearray() - elif encoding == "quoted-printable": - self._encoding = "quoted-printable" - - def enable_compression( - self, encoding: str = "deflate", strategy: int = zlib.Z_DEFAULT_STRATEGY - ) -> None: - zlib_mode = 16 + zlib.MAX_WBITS if encoding == "gzip" else -zlib.MAX_WBITS - self._compress = zlib.compressobj(wbits=zlib_mode, strategy=strategy) - - async def write_eof(self) -> None: - if self._compress is not None: - chunk = self._compress.flush() - if chunk: - self._compress = None - await self.write(chunk) - - if self._encoding == "base64": - if self._encoding_buffer: - await self._writer.write(base64.b64encode(self._encoding_buffer)) - - async def write(self, chunk: bytes) -> None: - if self._compress is not None: - if chunk: - chunk = self._compress.compress(chunk) - if not chunk: - return - - if self._encoding == "base64": - buf = self._encoding_buffer - assert buf is not None - buf.extend(chunk) - - if buf: - div, mod = divmod(len(buf), 3) - enc_chunk, self._encoding_buffer = (buf[: div * 3], buf[div * 3 :]) - if enc_chunk: - b64chunk = base64.b64encode(enc_chunk) - await self._writer.write(b64chunk) - elif self._encoding == "quoted-printable": - await self._writer.write(binascii.b2a_qp(chunk)) - else: - await self._writer.write(chunk) diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/attr/_cmp.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/attr/_cmp.py deleted file mode 100644 index 
d9cbe22cde35ff08abb0f1261f2173091490e02f..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/attr/_cmp.py +++ /dev/null @@ -1,155 +0,0 @@ -# SPDX-License-Identifier: MIT - - -import functools -import types - -from ._make import _make_ne - - -_operation_names = {"eq": "==", "lt": "<", "le": "<=", "gt": ">", "ge": ">="} - - -def cmp_using( - eq=None, - lt=None, - le=None, - gt=None, - ge=None, - require_same_type=True, - class_name="Comparable", -): - """ - Create a class that can be passed into `attrs.field`'s ``eq``, ``order``, - and ``cmp`` arguments to customize field comparison. - - The resulting class will have a full set of ordering methods if at least - one of ``{lt, le, gt, ge}`` and ``eq`` are provided. - - :param Optional[callable] eq: `callable` used to evaluate equality of two - objects. - :param Optional[callable] lt: `callable` used to evaluate whether one - object is less than another object. - :param Optional[callable] le: `callable` used to evaluate whether one - object is less than or equal to another object. - :param Optional[callable] gt: `callable` used to evaluate whether one - object is greater than another object. - :param Optional[callable] ge: `callable` used to evaluate whether one - object is greater than or equal to another object. - - :param bool require_same_type: When `True`, equality and ordering methods - will return `NotImplemented` if objects are not of the same type. - - :param Optional[str] class_name: Name of class. Defaults to 'Comparable'. - - See `comparison` for more details. - - .. versionadded:: 21.1.0 - """ - - body = { - "__slots__": ["value"], - "__init__": _make_init(), - "_requirements": [], - "_is_comparable_to": _is_comparable_to, - } - - # Add operations. - num_order_functions = 0 - has_eq_function = False - - if eq is not None: - has_eq_function = True - body["__eq__"] = _make_operator("eq", eq) - body["__ne__"] = _make_ne() - - if lt is not None: - num_order_functions += 1 - body["__lt__"] = _make_operator("lt", lt) - - if le is not None: - num_order_functions += 1 - body["__le__"] = _make_operator("le", le) - - if gt is not None: - num_order_functions += 1 - body["__gt__"] = _make_operator("gt", gt) - - if ge is not None: - num_order_functions += 1 - body["__ge__"] = _make_operator("ge", ge) - - type_ = types.new_class( - class_name, (object,), {}, lambda ns: ns.update(body) - ) - - # Add same type requirement. - if require_same_type: - type_._requirements.append(_check_same_type) - - # Add total ordering if at least one operation was defined. - if 0 < num_order_functions < 4: - if not has_eq_function: - # functools.total_ordering requires __eq__ to be defined, - # so raise early error here to keep a nice stack. - raise ValueError( - "eq must be define is order to complete ordering from " - "lt, le, gt, ge." - ) - type_ = functools.total_ordering(type_) - - return type_ - - -def _make_init(): - """ - Create __init__ method. - """ - - def __init__(self, value): - """ - Initialize object with *value*. - """ - self.value = value - - return __init__ - - -def _make_operator(name, func): - """ - Create operator method. - """ - - def method(self, other): - if not self._is_comparable_to(other): - return NotImplemented - - result = func(self.value, other.value) - if result is NotImplemented: - return NotImplemented - - return result - - method.__name__ = f"__{name}__" - method.__doc__ = ( - f"Return a {_operation_names[name]} b. Computed by attrs." 
- ) - - return method - - -def _is_comparable_to(self, other): - """ - Check whether `other` is comparable to `self`. - """ - for func in self._requirements: - if not func(self, other): - return False - return True - - -def _check_same_type(self, other): - """ - Return True if *self* and *other* are of the same type, False otherwise. - """ - return other.value.__class__ is self.value.__class__ diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/cffi/backend_ctypes.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/cffi/backend_ctypes.py deleted file mode 100644 index e7956a79cfb1c3d28a2ad22a40b261ae7dbbbb5f..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/cffi/backend_ctypes.py +++ /dev/null @@ -1,1121 +0,0 @@ -import ctypes, ctypes.util, operator, sys -from . import model - -if sys.version_info < (3,): - bytechr = chr -else: - unicode = str - long = int - xrange = range - bytechr = lambda num: bytes([num]) - -class CTypesType(type): - pass - -class CTypesData(object): - __metaclass__ = CTypesType - __slots__ = ['__weakref__'] - __name__ = '' - - def __init__(self, *args): - raise TypeError("cannot instantiate %r" % (self.__class__,)) - - @classmethod - def _newp(cls, init): - raise TypeError("expected a pointer or array ctype, got '%s'" - % (cls._get_c_name(),)) - - @staticmethod - def _to_ctypes(value): - raise TypeError - - @classmethod - def _arg_to_ctypes(cls, *value): - try: - ctype = cls._ctype - except AttributeError: - raise TypeError("cannot create an instance of %r" % (cls,)) - if value: - res = cls._to_ctypes(*value) - if not isinstance(res, ctype): - res = cls._ctype(res) - else: - res = cls._ctype() - return res - - @classmethod - def _create_ctype_obj(cls, init): - if init is None: - return cls._arg_to_ctypes() - else: - return cls._arg_to_ctypes(init) - - @staticmethod - def _from_ctypes(ctypes_value): - raise TypeError - - @classmethod - def _get_c_name(cls, replace_with=''): - return cls._reftypename.replace(' &', replace_with) - - @classmethod - def _fix_class(cls): - cls.__name__ = 'CData<%s>' % (cls._get_c_name(),) - cls.__qualname__ = 'CData<%s>' % (cls._get_c_name(),) - cls.__module__ = 'ffi' - - def _get_own_repr(self): - raise NotImplementedError - - def _addr_repr(self, address): - if address == 0: - return 'NULL' - else: - if address < 0: - address += 1 << (8*ctypes.sizeof(ctypes.c_void_p)) - return '0x%x' % address - - def __repr__(self, c_name=None): - own = self._get_own_repr() - return '' % (c_name or self._get_c_name(), own) - - def _convert_to_address(self, BClass): - if BClass is None: - raise TypeError("cannot convert %r to an address" % ( - self._get_c_name(),)) - else: - raise TypeError("cannot convert %r to %r" % ( - self._get_c_name(), BClass._get_c_name())) - - @classmethod - def _get_size(cls): - return ctypes.sizeof(cls._ctype) - - def _get_size_of_instance(self): - return ctypes.sizeof(self._ctype) - - @classmethod - def _cast_from(cls, source): - raise TypeError("cannot cast to %r" % (cls._get_c_name(),)) - - def _cast_to_integer(self): - return self._convert_to_address(None) - - @classmethod - def _alignment(cls): - return ctypes.alignment(cls._ctype) - - def __iter__(self): - raise TypeError("cdata %r does not support iteration" % ( - self._get_c_name()),) - - def _make_cmp(name): - cmpfunc = getattr(operator, name) - def cmp(self, other): - v_is_ptr = not isinstance(self, CTypesGenericPrimitive) - w_is_ptr = 
(isinstance(other, CTypesData) and - not isinstance(other, CTypesGenericPrimitive)) - if v_is_ptr and w_is_ptr: - return cmpfunc(self._convert_to_address(None), - other._convert_to_address(None)) - elif v_is_ptr or w_is_ptr: - return NotImplemented - else: - if isinstance(self, CTypesGenericPrimitive): - self = self._value - if isinstance(other, CTypesGenericPrimitive): - other = other._value - return cmpfunc(self, other) - cmp.func_name = name - return cmp - - __eq__ = _make_cmp('__eq__') - __ne__ = _make_cmp('__ne__') - __lt__ = _make_cmp('__lt__') - __le__ = _make_cmp('__le__') - __gt__ = _make_cmp('__gt__') - __ge__ = _make_cmp('__ge__') - - def __hash__(self): - return hash(self._convert_to_address(None)) - - def _to_string(self, maxlen): - raise TypeError("string(): %r" % (self,)) - - -class CTypesGenericPrimitive(CTypesData): - __slots__ = [] - - def __hash__(self): - return hash(self._value) - - def _get_own_repr(self): - return repr(self._from_ctypes(self._value)) - - -class CTypesGenericArray(CTypesData): - __slots__ = [] - - @classmethod - def _newp(cls, init): - return cls(init) - - def __iter__(self): - for i in xrange(len(self)): - yield self[i] - - def _get_own_repr(self): - return self._addr_repr(ctypes.addressof(self._blob)) - - -class CTypesGenericPtr(CTypesData): - __slots__ = ['_address', '_as_ctype_ptr'] - _automatic_casts = False - kind = "pointer" - - @classmethod - def _newp(cls, init): - return cls(init) - - @classmethod - def _cast_from(cls, source): - if source is None: - address = 0 - elif isinstance(source, CTypesData): - address = source._cast_to_integer() - elif isinstance(source, (int, long)): - address = source - else: - raise TypeError("bad type for cast to %r: %r" % - (cls, type(source).__name__)) - return cls._new_pointer_at(address) - - @classmethod - def _new_pointer_at(cls, address): - self = cls.__new__(cls) - self._address = address - self._as_ctype_ptr = ctypes.cast(address, cls._ctype) - return self - - def _get_own_repr(self): - try: - return self._addr_repr(self._address) - except AttributeError: - return '???' 
- - def _cast_to_integer(self): - return self._address - - def __nonzero__(self): - return bool(self._address) - __bool__ = __nonzero__ - - @classmethod - def _to_ctypes(cls, value): - if not isinstance(value, CTypesData): - raise TypeError("unexpected %s object" % type(value).__name__) - address = value._convert_to_address(cls) - return ctypes.cast(address, cls._ctype) - - @classmethod - def _from_ctypes(cls, ctypes_ptr): - address = ctypes.cast(ctypes_ptr, ctypes.c_void_p).value or 0 - return cls._new_pointer_at(address) - - @classmethod - def _initialize(cls, ctypes_ptr, value): - if value: - ctypes_ptr.contents = cls._to_ctypes(value).contents - - def _convert_to_address(self, BClass): - if (BClass in (self.__class__, None) or BClass._automatic_casts - or self._automatic_casts): - return self._address - else: - return CTypesData._convert_to_address(self, BClass) - - -class CTypesBaseStructOrUnion(CTypesData): - __slots__ = ['_blob'] - - @classmethod - def _create_ctype_obj(cls, init): - # may be overridden - raise TypeError("cannot instantiate opaque type %s" % (cls,)) - - def _get_own_repr(self): - return self._addr_repr(ctypes.addressof(self._blob)) - - @classmethod - def _offsetof(cls, fieldname): - return getattr(cls._ctype, fieldname).offset - - def _convert_to_address(self, BClass): - if getattr(BClass, '_BItem', None) is self.__class__: - return ctypes.addressof(self._blob) - else: - return CTypesData._convert_to_address(self, BClass) - - @classmethod - def _from_ctypes(cls, ctypes_struct_or_union): - self = cls.__new__(cls) - self._blob = ctypes_struct_or_union - return self - - @classmethod - def _to_ctypes(cls, value): - return value._blob - - def __repr__(self, c_name=None): - return CTypesData.__repr__(self, c_name or self._get_c_name(' &')) - - -class CTypesBackend(object): - - PRIMITIVE_TYPES = { - 'char': ctypes.c_char, - 'short': ctypes.c_short, - 'int': ctypes.c_int, - 'long': ctypes.c_long, - 'long long': ctypes.c_longlong, - 'signed char': ctypes.c_byte, - 'unsigned char': ctypes.c_ubyte, - 'unsigned short': ctypes.c_ushort, - 'unsigned int': ctypes.c_uint, - 'unsigned long': ctypes.c_ulong, - 'unsigned long long': ctypes.c_ulonglong, - 'float': ctypes.c_float, - 'double': ctypes.c_double, - '_Bool': ctypes.c_bool, - } - - for _name in ['unsigned long long', 'unsigned long', - 'unsigned int', 'unsigned short', 'unsigned char']: - _size = ctypes.sizeof(PRIMITIVE_TYPES[_name]) - PRIMITIVE_TYPES['uint%d_t' % (8*_size)] = PRIMITIVE_TYPES[_name] - if _size == ctypes.sizeof(ctypes.c_void_p): - PRIMITIVE_TYPES['uintptr_t'] = PRIMITIVE_TYPES[_name] - if _size == ctypes.sizeof(ctypes.c_size_t): - PRIMITIVE_TYPES['size_t'] = PRIMITIVE_TYPES[_name] - - for _name in ['long long', 'long', 'int', 'short', 'signed char']: - _size = ctypes.sizeof(PRIMITIVE_TYPES[_name]) - PRIMITIVE_TYPES['int%d_t' % (8*_size)] = PRIMITIVE_TYPES[_name] - if _size == ctypes.sizeof(ctypes.c_void_p): - PRIMITIVE_TYPES['intptr_t'] = PRIMITIVE_TYPES[_name] - PRIMITIVE_TYPES['ptrdiff_t'] = PRIMITIVE_TYPES[_name] - if _size == ctypes.sizeof(ctypes.c_size_t): - PRIMITIVE_TYPES['ssize_t'] = PRIMITIVE_TYPES[_name] - - - def __init__(self): - self.RTLD_LAZY = 0 # not supported anyway by ctypes - self.RTLD_NOW = 0 - self.RTLD_GLOBAL = ctypes.RTLD_GLOBAL - self.RTLD_LOCAL = ctypes.RTLD_LOCAL - - def set_ffi(self, ffi): - self.ffi = ffi - - def _get_types(self): - return CTypesData, CTypesType - - def load_library(self, path, flags=0): - cdll = ctypes.CDLL(path, flags) - return CTypesLibrary(self, cdll) - - def 
new_void_type(self): - class CTypesVoid(CTypesData): - __slots__ = [] - _reftypename = 'void &' - @staticmethod - def _from_ctypes(novalue): - return None - @staticmethod - def _to_ctypes(novalue): - if novalue is not None: - raise TypeError("None expected, got %s object" % - (type(novalue).__name__,)) - return None - CTypesVoid._fix_class() - return CTypesVoid - - def new_primitive_type(self, name): - if name == 'wchar_t': - raise NotImplementedError(name) - ctype = self.PRIMITIVE_TYPES[name] - if name == 'char': - kind = 'char' - elif name in ('float', 'double'): - kind = 'float' - else: - if name in ('signed char', 'unsigned char'): - kind = 'byte' - elif name == '_Bool': - kind = 'bool' - else: - kind = 'int' - is_signed = (ctype(-1).value == -1) - # - def _cast_source_to_int(source): - if isinstance(source, (int, long, float)): - source = int(source) - elif isinstance(source, CTypesData): - source = source._cast_to_integer() - elif isinstance(source, bytes): - source = ord(source) - elif source is None: - source = 0 - else: - raise TypeError("bad type for cast to %r: %r" % - (CTypesPrimitive, type(source).__name__)) - return source - # - kind1 = kind - class CTypesPrimitive(CTypesGenericPrimitive): - __slots__ = ['_value'] - _ctype = ctype - _reftypename = '%s &' % name - kind = kind1 - - def __init__(self, value): - self._value = value - - @staticmethod - def _create_ctype_obj(init): - if init is None: - return ctype() - return ctype(CTypesPrimitive._to_ctypes(init)) - - if kind == 'int' or kind == 'byte': - @classmethod - def _cast_from(cls, source): - source = _cast_source_to_int(source) - source = ctype(source).value # cast within range - return cls(source) - def __int__(self): - return self._value - - if kind == 'bool': - @classmethod - def _cast_from(cls, source): - if not isinstance(source, (int, long, float)): - source = _cast_source_to_int(source) - return cls(bool(source)) - def __int__(self): - return int(self._value) - - if kind == 'char': - @classmethod - def _cast_from(cls, source): - source = _cast_source_to_int(source) - source = bytechr(source & 0xFF) - return cls(source) - def __int__(self): - return ord(self._value) - - if kind == 'float': - @classmethod - def _cast_from(cls, source): - if isinstance(source, float): - pass - elif isinstance(source, CTypesGenericPrimitive): - if hasattr(source, '__float__'): - source = float(source) - else: - source = int(source) - else: - source = _cast_source_to_int(source) - source = ctype(source).value # fix precision - return cls(source) - def __int__(self): - return int(self._value) - def __float__(self): - return self._value - - _cast_to_integer = __int__ - - if kind == 'int' or kind == 'byte' or kind == 'bool': - @staticmethod - def _to_ctypes(x): - if not isinstance(x, (int, long)): - if isinstance(x, CTypesData): - x = int(x) - else: - raise TypeError("integer expected, got %s" % - type(x).__name__) - if ctype(x).value != x: - if not is_signed and x < 0: - raise OverflowError("%s: negative integer" % name) - else: - raise OverflowError("%s: integer out of bounds" - % name) - return x - - if kind == 'char': - @staticmethod - def _to_ctypes(x): - if isinstance(x, bytes) and len(x) == 1: - return x - if isinstance(x, CTypesPrimitive): # > - return x._value - raise TypeError("character expected, got %s" % - type(x).__name__) - def __nonzero__(self): - return ord(self._value) != 0 - else: - def __nonzero__(self): - return self._value != 0 - __bool__ = __nonzero__ - - if kind == 'float': - @staticmethod - def _to_ctypes(x): - if 
not isinstance(x, (int, long, float, CTypesData)): - raise TypeError("float expected, got %s" % - type(x).__name__) - return ctype(x).value - - @staticmethod - def _from_ctypes(value): - return getattr(value, 'value', value) - - @staticmethod - def _initialize(blob, init): - blob.value = CTypesPrimitive._to_ctypes(init) - - if kind == 'char': - def _to_string(self, maxlen): - return self._value - if kind == 'byte': - def _to_string(self, maxlen): - return chr(self._value & 0xff) - # - CTypesPrimitive._fix_class() - return CTypesPrimitive - - def new_pointer_type(self, BItem): - getbtype = self.ffi._get_cached_btype - if BItem is getbtype(model.PrimitiveType('char')): - kind = 'charp' - elif BItem in (getbtype(model.PrimitiveType('signed char')), - getbtype(model.PrimitiveType('unsigned char'))): - kind = 'bytep' - elif BItem is getbtype(model.void_type): - kind = 'voidp' - else: - kind = 'generic' - # - class CTypesPtr(CTypesGenericPtr): - __slots__ = ['_own'] - if kind == 'charp': - __slots__ += ['__as_strbuf'] - _BItem = BItem - if hasattr(BItem, '_ctype'): - _ctype = ctypes.POINTER(BItem._ctype) - _bitem_size = ctypes.sizeof(BItem._ctype) - else: - _ctype = ctypes.c_void_p - if issubclass(BItem, CTypesGenericArray): - _reftypename = BItem._get_c_name('(* &)') - else: - _reftypename = BItem._get_c_name(' * &') - - def __init__(self, init): - ctypeobj = BItem._create_ctype_obj(init) - if kind == 'charp': - self.__as_strbuf = ctypes.create_string_buffer( - ctypeobj.value + b'\x00') - self._as_ctype_ptr = ctypes.cast( - self.__as_strbuf, self._ctype) - else: - self._as_ctype_ptr = ctypes.pointer(ctypeobj) - self._address = ctypes.cast(self._as_ctype_ptr, - ctypes.c_void_p).value - self._own = True - - def __add__(self, other): - if isinstance(other, (int, long)): - return self._new_pointer_at(self._address + - other * self._bitem_size) - else: - return NotImplemented - - def __sub__(self, other): - if isinstance(other, (int, long)): - return self._new_pointer_at(self._address - - other * self._bitem_size) - elif type(self) is type(other): - return (self._address - other._address) // self._bitem_size - else: - return NotImplemented - - def __getitem__(self, index): - if getattr(self, '_own', False) and index != 0: - raise IndexError - return BItem._from_ctypes(self._as_ctype_ptr[index]) - - def __setitem__(self, index, value): - self._as_ctype_ptr[index] = BItem._to_ctypes(value) - - if kind == 'charp' or kind == 'voidp': - @classmethod - def _arg_to_ctypes(cls, *value): - if value and isinstance(value[0], bytes): - return ctypes.c_char_p(value[0]) - else: - return super(CTypesPtr, cls)._arg_to_ctypes(*value) - - if kind == 'charp' or kind == 'bytep': - def _to_string(self, maxlen): - if maxlen < 0: - maxlen = sys.maxsize - p = ctypes.cast(self._as_ctype_ptr, - ctypes.POINTER(ctypes.c_char)) - n = 0 - while n < maxlen and p[n] != b'\x00': - n += 1 - return b''.join([p[i] for i in range(n)]) - - def _get_own_repr(self): - if getattr(self, '_own', False): - return 'owning %d bytes' % ( - ctypes.sizeof(self._as_ctype_ptr.contents),) - return super(CTypesPtr, self)._get_own_repr() - # - if (BItem is self.ffi._get_cached_btype(model.void_type) or - BItem is self.ffi._get_cached_btype(model.PrimitiveType('char'))): - CTypesPtr._automatic_casts = True - # - CTypesPtr._fix_class() - return CTypesPtr - - def new_array_type(self, CTypesPtr, length): - if length is None: - brackets = ' &[]' - else: - brackets = ' &[%d]' % length - BItem = CTypesPtr._BItem - getbtype = self.ffi._get_cached_btype - if 
BItem is getbtype(model.PrimitiveType('char')): - kind = 'char' - elif BItem in (getbtype(model.PrimitiveType('signed char')), - getbtype(model.PrimitiveType('unsigned char'))): - kind = 'byte' - else: - kind = 'generic' - # - class CTypesArray(CTypesGenericArray): - __slots__ = ['_blob', '_own'] - if length is not None: - _ctype = BItem._ctype * length - else: - __slots__.append('_ctype') - _reftypename = BItem._get_c_name(brackets) - _declared_length = length - _CTPtr = CTypesPtr - - def __init__(self, init): - if length is None: - if isinstance(init, (int, long)): - len1 = init - init = None - elif kind == 'char' and isinstance(init, bytes): - len1 = len(init) + 1 # extra null - else: - init = tuple(init) - len1 = len(init) - self._ctype = BItem._ctype * len1 - self._blob = self._ctype() - self._own = True - if init is not None: - self._initialize(self._blob, init) - - @staticmethod - def _initialize(blob, init): - if isinstance(init, bytes): - init = [init[i:i+1] for i in range(len(init))] - else: - if isinstance(init, CTypesGenericArray): - if (len(init) != len(blob) or - not isinstance(init, CTypesArray)): - raise TypeError("length/type mismatch: %s" % (init,)) - init = tuple(init) - if len(init) > len(blob): - raise IndexError("too many initializers") - addr = ctypes.cast(blob, ctypes.c_void_p).value - PTR = ctypes.POINTER(BItem._ctype) - itemsize = ctypes.sizeof(BItem._ctype) - for i, value in enumerate(init): - p = ctypes.cast(addr + i * itemsize, PTR) - BItem._initialize(p.contents, value) - - def __len__(self): - return len(self._blob) - - def __getitem__(self, index): - if not (0 <= index < len(self._blob)): - raise IndexError - return BItem._from_ctypes(self._blob[index]) - - def __setitem__(self, index, value): - if not (0 <= index < len(self._blob)): - raise IndexError - self._blob[index] = BItem._to_ctypes(value) - - if kind == 'char' or kind == 'byte': - def _to_string(self, maxlen): - if maxlen < 0: - maxlen = len(self._blob) - p = ctypes.cast(self._blob, - ctypes.POINTER(ctypes.c_char)) - n = 0 - while n < maxlen and p[n] != b'\x00': - n += 1 - return b''.join([p[i] for i in range(n)]) - - def _get_own_repr(self): - if getattr(self, '_own', False): - return 'owning %d bytes' % (ctypes.sizeof(self._blob),) - return super(CTypesArray, self)._get_own_repr() - - def _convert_to_address(self, BClass): - if BClass in (CTypesPtr, None) or BClass._automatic_casts: - return ctypes.addressof(self._blob) - else: - return CTypesData._convert_to_address(self, BClass) - - @staticmethod - def _from_ctypes(ctypes_array): - self = CTypesArray.__new__(CTypesArray) - self._blob = ctypes_array - return self - - @staticmethod - def _arg_to_ctypes(value): - return CTypesPtr._arg_to_ctypes(value) - - def __add__(self, other): - if isinstance(other, (int, long)): - return CTypesPtr._new_pointer_at( - ctypes.addressof(self._blob) + - other * ctypes.sizeof(BItem._ctype)) - else: - return NotImplemented - - @classmethod - def _cast_from(cls, source): - raise NotImplementedError("casting to %r" % ( - cls._get_c_name(),)) - # - CTypesArray._fix_class() - return CTypesArray - - def _new_struct_or_union(self, kind, name, base_ctypes_class): - # - class struct_or_union(base_ctypes_class): - pass - struct_or_union.__name__ = '%s_%s' % (kind, name) - kind1 = kind - # - class CTypesStructOrUnion(CTypesBaseStructOrUnion): - __slots__ = ['_blob'] - _ctype = struct_or_union - _reftypename = '%s &' % (name,) - _kind = kind = kind1 - # - CTypesStructOrUnion._fix_class() - return CTypesStructOrUnion - - def 
new_struct_type(self, name): - return self._new_struct_or_union('struct', name, ctypes.Structure) - - def new_union_type(self, name): - return self._new_struct_or_union('union', name, ctypes.Union) - - def complete_struct_or_union(self, CTypesStructOrUnion, fields, tp, - totalsize=-1, totalalignment=-1, sflags=0, - pack=0): - if totalsize >= 0 or totalalignment >= 0: - raise NotImplementedError("the ctypes backend of CFFI does not support " - "structures completed by verify(); please " - "compile and install the _cffi_backend module.") - struct_or_union = CTypesStructOrUnion._ctype - fnames = [fname for (fname, BField, bitsize) in fields] - btypes = [BField for (fname, BField, bitsize) in fields] - bitfields = [bitsize for (fname, BField, bitsize) in fields] - # - bfield_types = {} - cfields = [] - for (fname, BField, bitsize) in fields: - if bitsize < 0: - cfields.append((fname, BField._ctype)) - bfield_types[fname] = BField - else: - cfields.append((fname, BField._ctype, bitsize)) - bfield_types[fname] = Ellipsis - if sflags & 8: - struct_or_union._pack_ = 1 - elif pack: - struct_or_union._pack_ = pack - struct_or_union._fields_ = cfields - CTypesStructOrUnion._bfield_types = bfield_types - # - @staticmethod - def _create_ctype_obj(init): - result = struct_or_union() - if init is not None: - initialize(result, init) - return result - CTypesStructOrUnion._create_ctype_obj = _create_ctype_obj - # - def initialize(blob, init): - if is_union: - if len(init) > 1: - raise ValueError("union initializer: %d items given, but " - "only one supported (use a dict if needed)" - % (len(init),)) - if not isinstance(init, dict): - if isinstance(init, (bytes, unicode)): - raise TypeError("union initializer: got a str") - init = tuple(init) - if len(init) > len(fnames): - raise ValueError("too many values for %s initializer" % - CTypesStructOrUnion._get_c_name()) - init = dict(zip(fnames, init)) - addr = ctypes.addressof(blob) - for fname, value in init.items(): - BField, bitsize = name2fieldtype[fname] - assert bitsize < 0, \ - "not implemented: initializer with bit fields" - offset = CTypesStructOrUnion._offsetof(fname) - PTR = ctypes.POINTER(BField._ctype) - p = ctypes.cast(addr + offset, PTR) - BField._initialize(p.contents, value) - is_union = CTypesStructOrUnion._kind == 'union' - name2fieldtype = dict(zip(fnames, zip(btypes, bitfields))) - # - for fname, BField, bitsize in fields: - if fname == '': - raise NotImplementedError("nested anonymous structs/unions") - if hasattr(CTypesStructOrUnion, fname): - raise ValueError("the field name %r conflicts in " - "the ctypes backend" % fname) - if bitsize < 0: - def getter(self, fname=fname, BField=BField, - offset=CTypesStructOrUnion._offsetof(fname), - PTR=ctypes.POINTER(BField._ctype)): - addr = ctypes.addressof(self._blob) - p = ctypes.cast(addr + offset, PTR) - return BField._from_ctypes(p.contents) - def setter(self, value, fname=fname, BField=BField): - setattr(self._blob, fname, BField._to_ctypes(value)) - # - if issubclass(BField, CTypesGenericArray): - setter = None - if BField._declared_length == 0: - def getter(self, fname=fname, BFieldPtr=BField._CTPtr, - offset=CTypesStructOrUnion._offsetof(fname), - PTR=ctypes.POINTER(BField._ctype)): - addr = ctypes.addressof(self._blob) - p = ctypes.cast(addr + offset, PTR) - return BFieldPtr._from_ctypes(p) - # - else: - def getter(self, fname=fname, BField=BField): - return BField._from_ctypes(getattr(self._blob, fname)) - def setter(self, value, fname=fname, BField=BField): - # xxx obscure workaround - 
value = BField._to_ctypes(value) - oldvalue = getattr(self._blob, fname) - setattr(self._blob, fname, value) - if value != getattr(self._blob, fname): - setattr(self._blob, fname, oldvalue) - raise OverflowError("value too large for bitfield") - setattr(CTypesStructOrUnion, fname, property(getter, setter)) - # - CTypesPtr = self.ffi._get_cached_btype(model.PointerType(tp)) - for fname in fnames: - if hasattr(CTypesPtr, fname): - raise ValueError("the field name %r conflicts in " - "the ctypes backend" % fname) - def getter(self, fname=fname): - return getattr(self[0], fname) - def setter(self, value, fname=fname): - setattr(self[0], fname, value) - setattr(CTypesPtr, fname, property(getter, setter)) - - def new_function_type(self, BArgs, BResult, has_varargs): - nameargs = [BArg._get_c_name() for BArg in BArgs] - if has_varargs: - nameargs.append('...') - nameargs = ', '.join(nameargs) - # - class CTypesFunctionPtr(CTypesGenericPtr): - __slots__ = ['_own_callback', '_name'] - _ctype = ctypes.CFUNCTYPE(getattr(BResult, '_ctype', None), - *[BArg._ctype for BArg in BArgs], - use_errno=True) - _reftypename = BResult._get_c_name('(* &)(%s)' % (nameargs,)) - - def __init__(self, init, error=None): - # create a callback to the Python callable init() - import traceback - assert not has_varargs, "varargs not supported for callbacks" - if getattr(BResult, '_ctype', None) is not None: - error = BResult._from_ctypes( - BResult._create_ctype_obj(error)) - else: - error = None - def callback(*args): - args2 = [] - for arg, BArg in zip(args, BArgs): - args2.append(BArg._from_ctypes(arg)) - try: - res2 = init(*args2) - res2 = BResult._to_ctypes(res2) - except: - traceback.print_exc() - res2 = error - if issubclass(BResult, CTypesGenericPtr): - if res2: - res2 = ctypes.cast(res2, ctypes.c_void_p).value - # .value: http://bugs.python.org/issue1574593 - else: - res2 = None - #print repr(res2) - return res2 - if issubclass(BResult, CTypesGenericPtr): - # The only pointers callbacks can return are void*s: - # http://bugs.python.org/issue5710 - callback_ctype = ctypes.CFUNCTYPE( - ctypes.c_void_p, - *[BArg._ctype for BArg in BArgs], - use_errno=True) - else: - callback_ctype = CTypesFunctionPtr._ctype - self._as_ctype_ptr = callback_ctype(callback) - self._address = ctypes.cast(self._as_ctype_ptr, - ctypes.c_void_p).value - self._own_callback = init - - @staticmethod - def _initialize(ctypes_ptr, value): - if value: - raise NotImplementedError("ctypes backend: not supported: " - "initializers for function pointers") - - def __repr__(self): - c_name = getattr(self, '_name', None) - if c_name: - i = self._reftypename.index('(* &)') - if self._reftypename[i-1] not in ' )*': - c_name = ' ' + c_name - c_name = self._reftypename.replace('(* &)', c_name) - return CTypesData.__repr__(self, c_name) - - def _get_own_repr(self): - if getattr(self, '_own_callback', None) is not None: - return 'calling %r' % (self._own_callback,) - return super(CTypesFunctionPtr, self)._get_own_repr() - - def __call__(self, *args): - if has_varargs: - assert len(args) >= len(BArgs) - extraargs = args[len(BArgs):] - args = args[:len(BArgs)] - else: - assert len(args) == len(BArgs) - ctypes_args = [] - for arg, BArg in zip(args, BArgs): - ctypes_args.append(BArg._arg_to_ctypes(arg)) - if has_varargs: - for i, arg in enumerate(extraargs): - if arg is None: - ctypes_args.append(ctypes.c_void_p(0)) # NULL - continue - if not isinstance(arg, CTypesData): - raise TypeError( - "argument %d passed in the variadic part " - "needs to be a cdata object 
(got %s)" % - (1 + len(BArgs) + i, type(arg).__name__)) - ctypes_args.append(arg._arg_to_ctypes(arg)) - result = self._as_ctype_ptr(*ctypes_args) - return BResult._from_ctypes(result) - # - CTypesFunctionPtr._fix_class() - return CTypesFunctionPtr - - def new_enum_type(self, name, enumerators, enumvalues, CTypesInt): - assert isinstance(name, str) - reverse_mapping = dict(zip(reversed(enumvalues), - reversed(enumerators))) - # - class CTypesEnum(CTypesInt): - __slots__ = [] - _reftypename = '%s &' % name - - def _get_own_repr(self): - value = self._value - try: - return '%d: %s' % (value, reverse_mapping[value]) - except KeyError: - return str(value) - - def _to_string(self, maxlen): - value = self._value - try: - return reverse_mapping[value] - except KeyError: - return str(value) - # - CTypesEnum._fix_class() - return CTypesEnum - - def get_errno(self): - return ctypes.get_errno() - - def set_errno(self, value): - ctypes.set_errno(value) - - def string(self, b, maxlen=-1): - return b._to_string(maxlen) - - def buffer(self, bptr, size=-1): - raise NotImplementedError("buffer() with ctypes backend") - - def sizeof(self, cdata_or_BType): - if isinstance(cdata_or_BType, CTypesData): - return cdata_or_BType._get_size_of_instance() - else: - assert issubclass(cdata_or_BType, CTypesData) - return cdata_or_BType._get_size() - - def alignof(self, BType): - assert issubclass(BType, CTypesData) - return BType._alignment() - - def newp(self, BType, source): - if not issubclass(BType, CTypesData): - raise TypeError - return BType._newp(source) - - def cast(self, BType, source): - return BType._cast_from(source) - - def callback(self, BType, source, error, onerror): - assert onerror is None # XXX not implemented - return BType(source, error) - - _weakref_cache_ref = None - - def gcp(self, cdata, destructor, size=0): - if self._weakref_cache_ref is None: - import weakref - class MyRef(weakref.ref): - def __eq__(self, other): - myref = self() - return self is other or ( - myref is not None and myref is other()) - def __ne__(self, other): - return not (self == other) - def __hash__(self): - try: - return self._hash - except AttributeError: - self._hash = hash(self()) - return self._hash - self._weakref_cache_ref = {}, MyRef - weak_cache, MyRef = self._weakref_cache_ref - - if destructor is None: - try: - del weak_cache[MyRef(cdata)] - except KeyError: - raise TypeError("Can remove destructor only on a object " - "previously returned by ffi.gc()") - return None - - def remove(k): - cdata, destructor = weak_cache.pop(k, (None, None)) - if destructor is not None: - destructor(cdata) - - new_cdata = self.cast(self.typeof(cdata), cdata) - assert new_cdata is not cdata - weak_cache[MyRef(new_cdata, remove)] = (cdata, destructor) - return new_cdata - - typeof = type - - def getcname(self, BType, replace_with): - return BType._get_c_name(replace_with) - - def typeoffsetof(self, BType, fieldname, num=0): - if isinstance(fieldname, str): - if num == 0 and issubclass(BType, CTypesGenericPtr): - BType = BType._BItem - if not issubclass(BType, CTypesBaseStructOrUnion): - raise TypeError("expected a struct or union ctype") - BField = BType._bfield_types[fieldname] - if BField is Ellipsis: - raise TypeError("not supported for bitfields") - return (BField, BType._offsetof(fieldname)) - elif isinstance(fieldname, (int, long)): - if issubclass(BType, CTypesGenericArray): - BType = BType._CTPtr - if not issubclass(BType, CTypesGenericPtr): - raise TypeError("expected an array or ptr ctype") - BItem = BType._BItem - offset 
= BItem._get_size() * fieldname - if offset > sys.maxsize: - raise OverflowError - return (BItem, offset) - else: - raise TypeError(type(fieldname)) - - def rawaddressof(self, BTypePtr, cdata, offset=None): - if isinstance(cdata, CTypesBaseStructOrUnion): - ptr = ctypes.pointer(type(cdata)._to_ctypes(cdata)) - elif isinstance(cdata, CTypesGenericPtr): - if offset is None or not issubclass(type(cdata)._BItem, - CTypesBaseStructOrUnion): - raise TypeError("unexpected cdata type") - ptr = type(cdata)._to_ctypes(cdata) - elif isinstance(cdata, CTypesGenericArray): - ptr = type(cdata)._to_ctypes(cdata) - else: - raise TypeError("expected a ") - if offset: - ptr = ctypes.cast( - ctypes.c_void_p( - ctypes.cast(ptr, ctypes.c_void_p).value + offset), - type(ptr)) - return BTypePtr._from_ctypes(ptr) - - -class CTypesLibrary(object): - - def __init__(self, backend, cdll): - self.backend = backend - self.cdll = cdll - - def load_function(self, BType, name): - c_func = getattr(self.cdll, name) - funcobj = BType._from_ctypes(c_func) - funcobj._name = name - return funcobj - - def read_variable(self, BType, name): - try: - ctypes_obj = BType._ctype.in_dll(self.cdll, name) - except AttributeError as e: - raise NotImplementedError(e) - return BType._from_ctypes(ctypes_obj) - - def write_variable(self, BType, name, value): - new_ctypes_obj = BType._to_ctypes(value) - ctypes_obj = BType._ctype.in_dll(self.cdll, name) - ctypes.memmove(ctypes.addressof(ctypes_obj), - ctypes.addressof(new_ctypes_obj), - ctypes.sizeof(BType._ctype)) diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/misc/roundTools.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/misc/roundTools.py deleted file mode 100644 index 48a47c07c8575895f894a24065046bc308a69b97..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/misc/roundTools.py +++ /dev/null @@ -1,109 +0,0 @@ -""" -Various round-to-integer helpers. -""" - -import math -import functools -import logging - -log = logging.getLogger(__name__) - -__all__ = [ - "noRound", - "otRound", - "maybeRound", - "roundFunc", -] - - -def noRound(value): - return value - - -def otRound(value): - """Round float value to nearest integer towards ``+Infinity``. - - The OpenType spec (in the section on `"normalization" of OpenType Font Variations `_) - defines the required method for converting floating point values to - fixed-point. In particular it specifies the following rounding strategy: - - for fractional values of 0.5 and higher, take the next higher integer; - for other fractional values, truncate. - - This function rounds the floating-point value according to this strategy - in preparation for conversion to fixed-point. - - Args: - value (float): The input floating-point value. - - Returns - float: The rounded value. 
- """ - # See this thread for how we ended up with this implementation: - # https://github.com/fonttools/fonttools/issues/1248#issuecomment-383198166 - return int(math.floor(value + 0.5)) - - -def maybeRound(v, tolerance, round=otRound): - rounded = round(v) - return rounded if abs(rounded - v) <= tolerance else v - - -def roundFunc(tolerance, round=otRound): - if tolerance < 0: - raise ValueError("Rounding tolerance must be positive") - - if tolerance == 0: - return noRound - - if tolerance >= 0.5: - return round - - return functools.partial(maybeRound, tolerance=tolerance, round=round) - - -def nearestMultipleShortestRepr(value: float, factor: float) -> str: - """Round to nearest multiple of factor and return shortest decimal representation. - - This chooses the float that is closer to a multiple of the given factor while - having the shortest decimal representation (the least number of fractional decimal - digits). - - For example, given the following: - - >>> nearestMultipleShortestRepr(-0.61883544921875, 1.0/(1<<14)) - '-0.61884' - - Useful when you need to serialize or print a fixed-point number (or multiples - thereof, such as F2Dot14 fractions of 180 degrees in COLRv1 PaintRotate) in - a human-readable form. - - Args: - value (value): The value to be rounded and serialized. - factor (float): The value which the result is a close multiple of. - - Returns: - str: A compact string representation of the value. - """ - if not value: - return "0.0" - - value = otRound(value / factor) * factor - eps = 0.5 * factor - lo = value - eps - hi = value + eps - # If the range of valid choices spans an integer, return the integer. - if int(lo) != int(hi): - return str(float(round(value))) - - fmt = "%.8f" - lo = fmt % lo - hi = fmt % hi - assert len(lo) == len(hi) and lo != hi - for i in range(len(lo)): - if lo[i] != hi[i]: - break - period = lo.find(".") - assert period < i - fmt = "%%.%df" % (i - period) - return fmt % value diff --git a/spaces/cihyFjudo/fairness-paper-search/Free _HOT_ Porn Cartoon Pictures.md b/spaces/cihyFjudo/fairness-paper-search/Free _HOT_ Porn Cartoon Pictures.md deleted file mode 100644 index f82cd7ec96d386606e7457b05e3c7d8823e76043..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Free _HOT_ Porn Cartoon Pictures.md +++ /dev/null @@ -1,23 +0,0 @@ -
    -

    You are looking for free animated Gifs, animated images and animations? Then you have come to the right place! Our huge animated pictures archive currently comprises 149790 images in 2102 categories. It was of great importance to us that all images are clearly arranged for you in the different categories.

    -

    Every day at HeisLadyBoy.com you'll find amazing fresh galleries featuring sexy and cute asian ladyboys. There are ladyboy pics, divide into categories for easy surf! Hot ladyboy pictures from thailand, bangkok, indonesia, extreme ladyboy porn scenes. Posing, softcore, hardcore, BDSM and fetish ladyboy pics! Enjoy and bookmark US!

    -

    Free Porn Cartoon Pictures


    Download Zip ✔✔✔ https://tinurli.com/2uwiRk



    -

    I am talking about dozens of exclusive high-definition porn videos you will struggle to get your lusty eyes off. Besides the usual, there are human vs. animal cartoon characters steaming hot scenes. In addition to that, there is silly yet enjoying role-play porn you might want to look up. Just play with the search terms a little bit.

    -

    Are you looking to experience realistic penetration, POV, or the freedom to play around with awesome animation porn features like jiggle dynamics or expressions? Then Yiffalicious is the site for you.

    -

    Welcome to Mega Boobs Cartoons. Free pictures of the biggest collection of cartoon sex photos and videos, parody cartoon characters, busty porn comics, big tits hentai, shemale comic, 3D porn and interracial porn comics. Please check it out and come back later for updates.
    Visit my blog for easy comic reading

    -

    Welcome to Cuckold Cartoons tgp! On our pages you'll find exclusive and quality handpicked images of interracial cuckold stories.
    Every gallery full of explicit hardcore actions generated in dirty and lewd imagination of interracial comics artists. All hidden desires come true here!Cuckold CartoonsCuckold ComicsPRESS CTRL-D AND BOOKMARK US
    Cuckold Cartoon PornPRESS CTRL-D AND BOOKMARK US

    Top Friendly Sites

    • comixporn.net
    • Cartoon Fucking
    • porn-cartoons.net
    • Porn Cartoon Pics
    • Sexy Cartoon Porn
    • Cartoon XXX
    • 3D Sex
    • 3D Sex
    • Comic Book Porn
    • Porn Comix
    • Free 3D Porn
    • 3D Cartoon Porn
    • Cartoon Pron
    • Cartoon Sex Pics
    • 3D Porn Comics
    • Cuckold Comics
    • Interracial Toons
    • Hentai Manga Porn
    • Cartoon Sex Comics
    • Best Cartoon Porn
    • Cartoon comic porn
    • Cuckold Comics
    • John Persons Comics
    • Cartoon Porn Pics
    • Cartoon Tits
    • Cartoon Girl
    • Free Porn Comics
    • Toon Fuck
    • xxxcomicporn.net
    • Hot Cartoons
    • Toon Porn
    • JKR Comix
    • John Persons Comics
    • Jab Comix
    • XXX Comics
    • Comic Porn
    • Taboo Cartoons
    • Black Cock Comics
    • Anime Porn Pics
    • Interracial Comics
    • Comic Porn
    • dirtycomics.net
    • Golden Toons

    Cuckold Porn CartoonsArchived Links
    • Click > 3D toons Kitty and Jenny Summers lick hu...
    • Click > Black chicks hitting on white guys in Jo...
    • Click > John Persons art compilation with wet pu...
    • Click > I swear I felt I unloaded all my insides...
    • Click > Where are many black cocks fuck two poor...
    • Click > Princess and big black cock interracial ...
    • Click > I just want to apologize again
    • Click > Horny cartoon sex with redhead girl
    • Click > John Persons galleries. We should carry ...
    • Click > Jab comix. Farm lessons, ay papi, my hot...
    • Click > Free jab comix. Jake cumming in mother's...
    • Click > Shy girl went through emotional storms o...
    • Click > Life is tough, dear? Yeah, tell me about...
    • Click > I've brought some boys home
    • Click > Comic xxx Art: High-Detail Mouths and Dr...
    • Click > John Persons the PIT comics. Youthful co...
    • Click > JAB porno Comic Art: Nerdy, Pin-Up and R...
    • Click > Jab comix. Farm lessons, ay papi, my hot...
    • Click > So yummy whore enjoys hardcore cartoon p...
    • Click > Hot interracial cartoon sex in the showe...
    • Click > Cartoon fucking. Oh yeah feels great, st...
    • Click > Tsunade check how big Sakura's boobs gro...
    • Click > Flashing Digital Comics: Accurate Design...
    • Click > Epic Comic xxx Book Style: A Blend of Co...
    • Click > What is this dear? I'm sunburned, aren't...
    • Click > We've traveled so far to come here
    • Click > Comic sex Artstyle: A Journey Through Ac...
    • Click > See the most wonderful boob cartoon with...
    • Click > Kinky sluts starve for hardcore John Per...
    • Click > That's what we said the time before that...
    • Click > I want more in the porn comics, I want i...
    • Click > Enjoy her dripping pussy in a stunning j...
    • Click > Handsome black man likes feeling small m...
    • Click > Jab porn cartoons with the wet dreams ab...
    • Click > Comic sex Styled Adventure: A Bottom Sho...
    PRESS CTRL-D AND BOOKMARK US

    Disclaimer:
    Cuckold Cartoons has a zero-tolerance policy against ILLEGAL pornography.
    All galleries and links are provided by 3rd parties. We take no responsibility for content on any website we link to, please use your own discretion while surfing.
    Copyright 2011 www.CartoonWoman.net

    -

    spp 1958 vintage barbie doll case dick lee beauty world pornstars dressed as police video fucking sister's friend free
    bbw porn movie.
    teen driver laws in ohio film porno extrem gratos jwow nude pics
    free really young nude hallery busty polish brides.
    texas jail diversion adults slave tongues her pussy masti adult
    hotel near blackpool pleasure beach emily osment bikini pictures.

    -

    father son 3some slutload ford escort power booster eats
    bulls cum from wifes pussy montana physiological
    treatment sex offenders older silver daddies gay man.
    hot adult free ladies phone talk numbers black hardcore assfucking big boobs blojob pornhub eva
    green pics nude nuns fatties porn movies.
    nc state university towers porn video naked hot blonde lesbians adult hidden cam videos big tit sluts bukkake nureyev bisexual.

    -

    -

    boys eat cum bisexual sexual affects of muscular dystrophy green thumb colorado springs pre-teen asian girl
    sex.
    easy comfort double electricbattery breast pump
    british ass fuck madona's sex pics
    free xxx girls.
    busty dusty stash tastey cum good
    quiz sex long porn movie trailer.
    real 3d hardcoe porn erotic sex stoires gc lingerie models old and young cumshot
    compilations.
    college readhead gang bang free vid men fucking in bed lifes a
    bitch and then you die so fuck marissa miller nude.
    bang porn wife vegas adult massage parlors wid nude elisa bridges nude pictures.

    -

    petey the porno puppet porn stamina tricks how to get pornstar size penis nude massage arcadia ca free girlfriends
    and wives porn videos.
    female sex inhancers xxx skinny porn women want comic strip rubric need for speed boobs.

    -

    its time to kick ass and chew wack sex rfo foto gay gratis pollas sex photos of
    amateur porno couples.
    free asian cum sites dallas swingers 2010 jelsoft enterprises ltd zzz fucking latinas vintage pc game.

    -

    nudist teen camp pictures for free transvestite
    nightclubs new york city busty beauty in bath shredder c-380
    cut strip asian hardcore office.
    vintage england cherilea lead soldier mounted knight mature housewife gallery
    pics dancing naked giants porn drink through a tube jaime lee curtis pussy.

    -

    u s mid amateur teen sexpot vids 1960s sexual revolution fat
    girl gets fucked free reaming shemale sex.
    sweaty armpit fetish vintage anal porn reporting sexual harrassment to an employer assement scale
    of interracial relationships reality tv stars turned porn stars.

    -

    hot soccer players naked camelstyle drunk girl sex escort juan pr san reporter
    suck in van.
    free erotica comics strips movies hairy mature granny
    branding irons bdsm free
    full length tranny porn videos.
    futanari sluts breast and cervical program in arizona courtney
    cx nude free full version porn movies.
    homemade teen fucked ashleys candy naked vid ayesha tyler nude busty naked fittness babes.

    -

    horny family sex katy perry fucks user provided mature women videos costa picture rica sex steve ridgeway virgin.
    big boob porn star fucking hair rollers setting cum minus 33 709 mens bottoms torture tit the dog shoved his dick into
    his young ass.
    milf women porn breast cyst caffeine blowjob recordings
    online sounds free free nude danica patrick photos naked tube
    video free.
    support for bisexuals seeking to change paris hilton suck the dick hustlers young
    girls double dong lesbian movies topless blonde bikini.

    -

    boy is spanked over her knee fat mature pictures free xxx indain interaccial gangbang
    vintage hairy pussy free mpegs.
    drug statistics teen use light bondage and bdsm stories anne hathaway havoc nude picture lingerie swim funny gamesbiz adult.

    -

    somain pussy how to store pumped breast milk jwa homemade young hairy teens masturbating movies flower
    sex video.
    vintage missoni gown tempting orgasm dws free full gay pr www my fucking wife porn.
    mature milf interracial blow job computer generated online adult games
    twe bank briana sex tape twistys christmas teen trivia.

    -

    anak porno spanking porn galleries vintage lego diesel big black dicks free movie pictures of virgin goverment
    cell phones.
    katie morgan masturbates abortion chance of breast cancer hypothesis curve
    big hips thumbs tgp gay cowwboys nude british virgin islands broadband.

    aaccfb2cb3
    -
    -
    \ No newline at end of file diff --git a/spaces/cihyFjudo/fairness-paper-search/Monster High Ghoulfriends Forever Epub Download Software.md b/spaces/cihyFjudo/fairness-paper-search/Monster High Ghoulfriends Forever Epub Download Software.md deleted file mode 100644 index 36cba937cf23599dee7c373aa9c97e61a4d4fcc0..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Monster High Ghoulfriends Forever Epub Download Software.md +++ /dev/null @@ -1,6 +0,0 @@ -

    monster high ghoulfriends forever epub download software


    Download Zip ····· https://tinurli.com/2uwjJP



    -
    - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/cihyFjudo/fairness-paper-search/Pacific Girls 716 Chiho.zip !!INSTALL!!.md b/spaces/cihyFjudo/fairness-paper-search/Pacific Girls 716 Chiho.zip !!INSTALL!!.md deleted file mode 100644 index a8b85065746a84d9d8bc407ac67c68f183b682dd..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Pacific Girls 716 Chiho.zip !!INSTALL!!.md +++ /dev/null @@ -1,6 +0,0 @@ -
    -

    Best Girl 7
    Salt Art Online: AlkalinizationPrev: BG6 (2019)Next: BG8 (2021)poster edited by a Reddit userHostShaKing807DateAll dates in EDT (UTC-4)
    2020 June - July 21Participants512 (Main bracket)LanguageEnglishWebsiteReddit
    + Thread links
    AnimeBracket
    + [Nomination page]
    + [Voting]
    + [Character list] (excluding girls denied by character limit)
    + Full bracket
    Imgur
    + [Elimination results]Final positionsChampion Kaguya Shinomiya (1)Runner-up Mai Sakurajima (6)Third place Ai Hayasaka (4)Fourth place Holo (10)Quarter-finalsTsubasa_sama's rating:
    5th: Chika Fujiwara (3)
    6th: Megumin (2)
    7th: Aqua (5)
    8th: Emilia (9)
    StatisticsBiggest blowout91.67%
    Mai (6) > Emi (MP100) (507) (Round 1)
    3257-296Closest match3 votes (50.03%)
    Shiro (70) > Mugi (51) (Round 3)
    2693-2690Highest voted match20,578 votes
    Kaguya (1) > Mai (6) (Final)
    11,617 - 8961Biggest upsetUpset Index: 4.27
    Rikka (135) > Kei S. (7) (Round 3)
    2292 - 2252

    Winning probability: 2.27%
    Sakura M. (84) > Chitanda (45)
    2580-2348

    -

    Pacific Girls 716 Chiho.zip


    Download > https://tinurli.com/2uwj4z



    -

    The tournament saw the emergence of two very hyped heroines from rival romance shows: Kaguya Shinomiya from Kaguya-sama, and Mai Sakurajima from Bunny Girl Senpai. Both girls reached the final as many people expected, which ended with #1 seed Kaguya triumphing over #6 seed Mai, becoming the first manga character and top seeded contestant to win the competition. The third place was won by Ai Hayasaka, another Kaguya-sama girl, after she defeated Holo in a consolation match that was held on a different site.

    aaccfb2cb3
    -
    -
    \ No newline at end of file diff --git a/spaces/cihyFjudo/fairness-paper-search/She was rumored to have a new romance with Tommaso Buti another millionaire from Florence Italy[1]..md b/spaces/cihyFjudo/fairness-paper-search/She was rumored to have a new romance with Tommaso Buti another millionaire from Florence Italy[1]..md deleted file mode 100644 index d268ee88df61b1b88cbafb33a9255bfd8e9d3c06..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/She was rumored to have a new romance with Tommaso Buti another millionaire from Florence Italy[1]..md +++ /dev/null @@ -1,6 +0,0 @@ -

    video claudia galanti y sebastian escalada


    Download Ziphttps://tinurli.com/2uwizK



    -
    - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/cihyFjudo/fairness-paper-search/Truberbrook PC A Journey to the Eponymous Village of Trberbrook.md b/spaces/cihyFjudo/fairness-paper-search/Truberbrook PC A Journey to the Eponymous Village of Trberbrook.md deleted file mode 100644 index 45796f4190bbdfe4337a2b0cdbdb3f1676148106..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Truberbrook PC A Journey to the Eponymous Village of Trberbrook.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Truberbrook PC


    Download File >>>>> https://tinurli.com/2uwi1B



    -
    - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/cihyFjudo/fairness-paper-search/[PC] BotuPlay - The Extra Disc For RapeLay (hentai) - ENG.zipgolkesl.md b/spaces/cihyFjudo/fairness-paper-search/[PC] BotuPlay - The Extra Disc For RapeLay (hentai) - ENG.zipgolkesl.md deleted file mode 100644 index c94f6a723d68ea86508268c4e6755b9f6e5a25a2..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/[PC] BotuPlay - The Extra Disc For RapeLay (hentai) - ENG.zipgolkesl.md +++ /dev/null @@ -1,6 +0,0 @@ -

    [PC] BotuPlay - The Extra Disc For RapeLay (hentai) - ENG.zipgolkesl


    Downloadhttps://tinurli.com/2uwiEL



    -
    - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/cncn102/bingo1/src/components/header.tsx b/spaces/cncn102/bingo1/src/components/header.tsx deleted file mode 100644 index dc298b722154d1ac6d7a7e148204605562d6cc58..0000000000000000000000000000000000000000 --- a/spaces/cncn102/bingo1/src/components/header.tsx +++ /dev/null @@ -1,12 +0,0 @@ -import * as React from 'react' -import { UserMenu } from './user-menu' - -export async function Header() { - return ( -
    -
    - -
    -
    - ) -} diff --git a/spaces/congsaPfin/Manga-OCR/logs/Assoluto Racing APK 2.11.1 Hile The Ultimate Guide to Modifying Your Car and Drifting Like a Pro.md b/spaces/congsaPfin/Manga-OCR/logs/Assoluto Racing APK 2.11.1 Hile The Ultimate Guide to Modifying Your Car and Drifting Like a Pro.md deleted file mode 100644 index 965060d2ceefd50269225b9a32908439dbcb714e..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Assoluto Racing APK 2.11.1 Hile The Ultimate Guide to Modifying Your Car and Drifting Like a Pro.md +++ /dev/null @@ -1,88 +0,0 @@ -
    -

    Assoluto Racing APK 2.11.1 Hile: A Realistic Racing Game for Android

    -

    If you are a fan of racing games and want to experience a realistic driving sensation on your mobile device, you should try Assoluto Racing APK 2.11.1 hile. This is a modified version of the original Assoluto Racing game that gives you unlimited money and coins to buy and upgrade any car you want.

    -

    assoluto racing apk 2.11.1 hile


    Download File ✔✔✔ https://urlca.com/2uOcB8



    -

    What is Assoluto Racing APK 2.11.1 hile?

    -

    A free-to-play mobile racing game with a realistic feel

    -

Assoluto Racing is a free-to-play mobile racing game developed by Infinity Vector Ltd for Android devices. It features beautiful graphics, officially licensed cars from top manufacturers, a realistic physics engine, and various game modes and tracks to challenge your driving skills.

    -

    A modified version of the original game with unlimited money and coins

    -

    Assoluto Racing APK 2.11.1 hile is a modified version of the original game that gives you unlimited money and coins to buy and upgrade any car you want. You can also unlock all the cars, tracks, and modes without spending real money or watching ads.

    -

    What are the features of Assoluto Racing APK 2.11.1 hile?

    -

    Officially licensed cars from top manufacturers

    -

    Assoluto Racing APK 2.11.1 hile offers you a selection of over 30 cars from European, American, or JDM makers such as McLaren, Toyota, Nissan, BMW, Mercedes-Benz, Porsche, Mitsubishi, and more. You can drive iconic cars like the GTR, Lancer Evolution, or M3 and take them to the limit on the track.

    -

    Customizable car performance and appearance

    -

    Assoluto Racing APK 2.11.1 hile allows you to customize your car performance and appearance to suit your style and preference. You can upgrade your car with new parts such as engine, turbo, suspension, brakes, tires, and more. You can also tune your car with parameters such as camber, toe, ride height, gear ratio, and more. You can also change your car color, wheels, decals, and license plate.

    -

    Realistic physics engine and driving sensation

    -

    Assoluto Racing APK 2.11.1 hile uses a realistic physics engine that simulates the behavior of real cars on different surfaces and conditions. You can feel the weight, traction, grip, and aerodynamics of your car as you accelerate, brake, steer, and drift. You can also choose from different camera angles and control options to get the best driving sensation.

    -

    Various game modes and tracks to challenge your skills

    -

    Assoluto Racing APK 2.11.1 hile offers you various game modes and tracks to challenge your skills and have fun. You can play the driving school mode to learn the basics of driving and racing. You can play the single-player mode to compete in events such as time attack, slalom, drift trial, and more. You can play the online mode to race against other players from around the world and earn rewards and rank. You can also play the custom mode to create your own races with your own rules and settings.

    -


    -

    How to download and install Assoluto Racing APK 2.11.1 hile?

    -

    Download the APK file from a trusted source

    -

    To download Assoluto Racing APK 2.11.1 hile, you need to find a trusted source that provides the latest version of the file. You can search for it on Google or use a link from a reputable website or blog. For example, you can use this link to download the file.

    -

    Enable unknown sources on your device settings

    -

    To install Assoluto Racing APK 2.11.1 hile, you need to enable unknown sources on your device settings. This will allow you to install apps that are not from the Google Play Store. To do this, go to your device settings > security > unknown sources > enable.

    -

    Install the APK file and launch the game

    -

    To install Assoluto Racing APK 2.11.1 hile, you need to locate the downloaded file on your device storage and tap on it to start the installation process. Follow the instructions on the screen and wait for the installation to finish. Then, launch the game and enjoy.

    -

    What are some tips and tricks for playing Assoluto Racing APK 2.11.1 hile?

    -

    Complete the driving school and single-player events to learn the basics

    -

    If you are new to Assoluto Racing APK 2.11.1 hile, you should complete the driving school mode first to learn the basics of driving and racing. This will help you get familiar with the controls, physics, and features of the game. You should also complete the single-player events to earn money and coins, unlock new cars and tracks, and improve your skills.

    -

    Adjust your controls and car assist options to suit your preference

    -

    If you want to have a better driving experience in Assoluto Racing APK 2.11.1 hile, you should adjust your controls and car assist options to suit your preference. You can choose from different control options such as tilt, touch, or steering wheel. You can also choose from different car assist options such as ABS, traction control, stability control, or manual transmission.

    -

    Upgrade your car with new parts and tune it to optimize its performance

    -

    If you want to have a faster and more powerful car in Assoluto Racing APK 2.11.1 hile, you should upgrade your car with new parts and tune it to optimize its performance. You can buy new parts with money or coins or win them from events or online races. You can also tune your car with parameters such as camber, toe, ride height, gear ratio, and more.

    -

    Race online against other players and earn rewards and rank

    -

    If you want to have more fun and challenge in Assoluto Racing APK 2.11.1 hile, you should race online against other players and earn rewards and rank. You can join online races with different modes such as sprint, circuit, drift battle, or elimination. You can also create or join a club to race with your friends or other players. You can earn rewards such as money, coins, parts, or cars from online races. You can also increase your rank and reputation by winning races and completing challenges.

    -

    Conclusion

    -

    Assoluto Racing APK 2.11.1 hile is a fun and realistic racing game for Android devices. You can enjoy a variety of cars, tracks, and modes with unlimited money and coins. You can download it for free from a reliable source and install it easily on your device. If you are looking for a mobile racing game that offers you a realistic driving sensation and a lot of customization options, you should give Assoluto Racing APK 2.11.1 hile a try.

    -

    FAQs

    -

    Is Assoluto Racing APK 2.11.1 hile safe to use?

    -

    Assoluto Racing APK 2.11.1 hile is safe to use as long as you download it from a trusted source and scan it with an antivirus program before installing it. However, you should be aware that using a modified version of the game may violate the terms and conditions of the original game and may result in your account being banned or suspended.

    -

    What are the system requirements for Assoluto Racing APK 2.11.1 hile?

    -

    The system requirements for Assoluto Racing APK 2.11.1 hile are the same as the original game. You need an Android device with at least 4.0 OS version, 1 GB of RAM, and 500 MB of free storage space.

    -

    How can I get more gold coins in Assoluto Racing APK 2.11.1 hile?

    -

    You can get more gold coins in Assoluto Racing APK 2.11.1 hile by completing events, online races, challenges, or achievements. You can also get more gold coins by watching ads or buying them with real money.

    -

    How can I drift in Assoluto Racing APK 2.11.1 hile?

    -

    You can drift in Assoluto Racing APK 2.11.1 hile by using the handbrake button or the tilt control option. You can also drift by adjusting your car settings such as suspension, tires, or differential.

    -

    How can I contact the developers of Assoluto Racing APK 2.11.1 hile?

    -

    You can contact the developers of Assoluto Racing APK 2.11.1 hile by visiting their official website or their social media pages such as Facebook, Twitter, Instagram, or YouTube.

    401be4b1e0
    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Bus Simulator Indonesia Skin How to Find and Download the Latest Skins.md b/spaces/congsaPfin/Manga-OCR/logs/Bus Simulator Indonesia Skin How to Find and Download the Latest Skins.md deleted file mode 100644 index 76c01cf4ffc153d3b95c8bb0bdc00ab9aa4b586b..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Bus Simulator Indonesia Skin How to Find and Download the Latest Skins.md +++ /dev/null @@ -1,135 +0,0 @@ - -

    How to Create and Apply Bus Simulator Indonesia Skin

    -

Have you ever dreamed of driving a bus in Indonesia? If so, you might want to try Bus Simulator Indonesia (BUSSID), a fun and realistic game that lets you experience what it's like to be a bus driver in Indonesia. You can choose from different types of buses, drive through authentic Indonesian cities and places, honk your horn with cool and fun sounds, and even design your own livery for your bus.

    -

    What is livery? It is the term used for the paint scheme or decoration of a vehicle, especially a bus. In BUSSID, you can create and apply your own custom bus skin, which is a graphic file that changes the appearance of your bus. You can use your imagination and creativity to make your bus look unique and awesome.

    -

    bus simulator indonesia skin


    Download ••• https://urlca.com/2uOfnN



    -

    In this article, we will show you how to create and apply bus skin for your bus in BUSSID. It is not difficult, but you will need some requirements and follow some steps. Don't worry, we will guide you through the process step by step. Let's get started!

    -

    Requirements for Creating Bus Skin

    -

    To create your own bus skin, you will need the following things:

    -
      -
    • A device with Android OS and BUSSID game installed. You can download the game from the Google Play Store or the official website. The game is free to play, but you can also purchase some premium features and items if you want.
    • -
    • A photo editing app such as PicsArt or Eraser. These apps are also free to download and use, and they have many tools and features that can help you create your bus skin design. You can also use other photo editing apps, but make sure they can save your file as a PNG format with a transparent background.
    • -
    • A template skin file for the bus model you want to customize. You can find these files on the official website or on some social media groups such as Facebook or Telegram. These files are usually in ZIP or RAR format, so you will need to extract them first before using them.
    • -
    -

    Once you have these requirements, you are ready to create your bus skin.
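One note on the template files mentioned above: they usually come as ZIP archives (RAR files need a separate extractor). If you prefer to unpack a template on a computer instead of on your phone, a small Python sketch like the one below would do it; the file name bussid_template.zip and the output folder are only placeholders for whatever you actually downloaded.

```python
# Minimal sketch: unpack a downloaded BUSSID template archive on a computer.
# "bussid_template.zip" and "template_out" are example names, not real files.
import zipfile
from pathlib import Path

archive = Path("bussid_template.zip")   # the ZIP archive you downloaded
out_dir = Path("template_out")          # folder where the template will be extracted
out_dir.mkdir(exist_ok=True)

with zipfile.ZipFile(archive) as zf:
    zf.extractall(out_dir)              # extract everything in the archive

# List the extracted PNG templates so you know which file to open in your editor.
for png in out_dir.rglob("*.png"):
    print("template found:", png)
```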

    -


    -

    Steps for Creating Bus Skin

    -

    Here are the steps for creating your bus skin:

    -
      -
    1. Download and open the template skin file in the photo editing app. You will see a blank image with some outlines and markings that indicate the parts of the bus. These are the areas where you can design your livery.
    2. -
    3. Use the tools in the app to design your own livery on the template. You can use colors, shapes, texts, stickers, filters, effects, and anything else you want to make your bus skin look amazing. Be creative and original with your theme and color scheme. You can also use images and graphics from other sources, but make sure they are high-quality and not copyrighted.
    4. -
    5. Save your design as a PNG file with a transparent background. This is important because it will make your bus skin look smooth and realistic in the game. You can name your file anything you want, but make sure it has a .png extension.
    6. -
    -

    Congratulations, you have created your bus skin! Now, let's see how to apply it to your bus in the game.
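Before you move the file over, it does not hurt to double-check what you exported. The short sketch below is one way to do that on a computer with Python and the Pillow library (pip install pillow); template.png and my_skin.png are placeholder names for your own files, and the check only confirms that the skin keeps a transparent (alpha) background and the same pixel size as the template.

```python
# Rough sanity check for a finished skin file. Assumes Pillow is installed and
# that "template.png" / "my_skin.png" are stand-ins for your actual file names.
from PIL import Image

template = Image.open("template.png")   # the blank template you started from
skin = Image.open("my_skin.png")        # the livery you just exported

# The skin should keep the template's pixel dimensions, or it will look stretched in game.
if skin.size != template.size:
    print(f"Size mismatch: skin is {skin.size}, template is {template.size}")

# A transparent background needs an alpha channel, so the image mode should be RGBA.
if skin.mode != "RGBA":
    print(f"Warning: image mode is {skin.mode}; re-export the PNG with transparency (RGBA)")
else:
    print("Alpha channel present - the transparent background should carry over in game")
```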

    -

    Requirements for Applying Bus Skin

    -

    To apply your bus skin, you will need the following things:

    -
      -
    • A device with Android OS and BUSSID game installed. You should have the same device that you used to create your bus skin, or at least one that has the same version of the game.
    • -
    • A file manager app such as ZArchiver or ES File Explorer. These apps are also free to download and use, and they can help you access and manage your files on your device.
    • -
    • Your custom bus skin file in PNG format. You should have this file on your device storage or on an external storage such as a SD card or a USB drive.
    • -
    -

    Once you have these requirements, you are ready to apply your bus skin.

    -

    Steps for Applying Bus Skin

    -

    Here are the steps for applying your bus skin:

    -
      -
    1. Open the file manager app and locate your bus skin file. You can use the search function or browse through the folders to find it.
    2. -
    3. Copy or move your bus skin file to the BUSSID folder in your device storage. This folder is usually located in Internal Storage > Android > data > com.maleo.bussimulatorid > files > BUSSID. If you don't see this folder, you may need to create it manually or enable the show hidden files option in the app settings.
    4. -
    5. Open the BUSSID game and go to the garage menu. This is where you can select and customize your buses.
    6. -
    7. Select the bus model that matches your bus skin file and tap on the livery icon. This is a small icon that looks like a paintbrush on the bottom right corner of the screen.
    8. -
9. Choose your custom bus skin from the list and apply it to your bus. You should see a preview of how your bus looks with your livery.
    10. -
    -

    That's it, you have applied your bus skin! Now, you can enjoy driving your bus with your own livery in the game. You can also change or remove your bus skin anytime you want by following the same steps.
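If you end up with a whole collection of skins, copying them from a computer can be quicker than moving files one by one in a phone file manager. The sketch below assumes the phone's storage shows up as a normal drive over USB file transfer; the D:/Phone/... mount point and the finished_skins folder are made-up examples, while the BUSSID sub-folder is the same one used in the steps above.

```python
# Sketch: copy every exported skin PNG into the game's BUSSID folder in one go.
# The mount point is a placeholder; adjust it to wherever your phone appears.
import shutil
from pathlib import Path

skins = Path("finished_skins")                  # local folder with your exported .png skins
phone = Path("D:/Phone/Internal storage")       # example mount point, not a real path
bussid_dir = phone / "Android" / "data" / "com.maleo.bussimulatorid" / "files" / "BUSSID"

bussid_dir.mkdir(parents=True, exist_ok=True)   # create the folder if it does not exist yet

for png in skins.glob("*.png"):
    shutil.copy2(png, bussid_dir / png.name)    # copy2 keeps the original file untouched
    print("copied", png.name)
```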

    -

    Tips and Tricks for Creating and Applying Bus Skin

    -

    Here are some tips and tricks that can help you create and apply bus skin better:

    -
      -
    • Use high-quality images and graphics for your bus skin design. This will make your bus skin look more realistic and detailed in the game. You can use online sources such as Google Images or Pixabay to find free and royalty-free images and graphics that suit your theme.
    • -
    • Be creative and original with your livery theme and color scheme. You can use any theme or color scheme that you like, as long as it does not violate the game rules or offend anyone. You can also get inspiration from real-life buses, famous brands, celebrities, movies, cartoons, games, etc.
    • -
    • Check the preview of your bus skin in the game before applying it. This will help you see how your bus skin looks like on different angles and lighting conditions. You can also take screenshots or videos of your bus skin and share them with other players online.
    • -
    • Share your bus skin with other players online or download more skins from the official website or social media groups. You can show off your creativity and talent by sharing your bus skin with other players online. You can also download more skins from the official website or from some social media groups such as Facebook or Telegram. You can find many amazing and beautiful skins made by other players from all over the world.
    • -
    -

    These tips and tricks can help you create and apply bus skin more easily and effectively. You can also experiment with different tools, techniques, and styles to make your bus skin more unique and awesome.

    -

    Conclusion

    -

    In conclusion, creating and applying bus skin for BUSSID is a fun and rewarding activity that can enhance your gaming experience. You can express your personality and creativity by designing your own livery for your bus. You can also enjoy driving your bus with your own livery in the game. You can also share your bus skin with other players online or download more skins from the official website or social media groups.

    -

    We hope this article has helped you learn how to create and apply bus skin for BUSSID. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading and happy gaming!

    -

    FAQs

    -

    Q1: What are the best photo editing apps for creating bus skin?

    -

    A1: There is no definitive answer to this question, as different apps may have different features and functions that suit different users' preferences and needs. However, some of the most popular and recommended apps for creating bus skin are PicsArt, Eraser, Photoshop Express, Snapseed, etc. These apps are easy to use, have many tools and options, and can save your file as a PNG format with a transparent background.

    -

    Q2: How can I use my own 3D model for my bus skin?

    -

    A2: Unfortunately, you cannot use your own 3D model for your bus skin in BUSSID. The game only supports the official 3D models that are provided by the developers or by some modders. You can only customize the appearance of these 3D models by creating and applying bus skin.

    -

    Q3: How can I join an online multiplayer convoy with my custom bus skin?

    -

    A3: To join an online multiplayer convoy with your custom bus skin, you need to do the following things:

    -
      -
    1. Create a room or join an existing room in the online multiplayer mode of the game.
    2. -
    3. Select the same server, map, time, weather, traffic, etc. as the other players in the room.
    4. -
    5. Select the same bus model as the other players in the room.
    6. -
    7. Select your custom bus skin from the livery list.
    8. -
    9. Start driving with the other players in the room.
    10. -
    -

    Note that not all rooms or servers may support custom bus skins. Some rooms or servers may have restrictions or rules regarding custom bus skins. You should check with the room owner or server admin before joining an online multiplayer convoy with your custom bus skin.

    -

    Q4: How can I remove or change my bus skin in the game?

    -

    A4: To remove or change your bus skin in the game, you need to do the following things:

    -
      -
    1. Go to the garage menu and select the bus model that has your bus skin applied.
    2. -
    3. Tap on the livery icon and choose another bus skin from the list or the default one.
    4. -
    5. Apply the new bus skin to your bus or leave it as it is.
    6. -
    -

    To remove your bus skin file from your device storage, you can use the file manager app and delete it from the BUSSID folder.

    -

    Q5: Where can I find more information and resources about BUSSID and bus skin?

    -

    A5: You can find more information and resources about BUSSID and bus skin on the following sources:

    -
      -
    • The official website of BUSSID, where you can download the game, get updates, news, tips, tutorials, etc.
    • -
    • The official YouTube channel of BUSSID, where you can watch videos of gameplay, features, events, etc.
    • -
    • The official Instagram account of BUSSID, where you can see photos and stories of the game, the developers, the players, etc.
    • -
    • The official Facebook page of BUSSID, where you can join the community of fans, share your feedback, suggestions, questions, etc.
    • -
    • The official Telegram group of BUSSID, where you can chat with other players, get support, share your bus skin, etc.
    • -
    -

    These sources can help you learn more about BUSSID and bus skin and enjoy the game more.

    197e85843d
    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Cover Fire The Ultimate Offline Shooter Game for Android.md b/spaces/congsaPfin/Manga-OCR/logs/Cover Fire The Ultimate Offline Shooter Game for Android.md deleted file mode 100644 index f69aca583a8df3e5bff7f5c6d2dfaf4f9b14fc9f..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Cover Fire The Ultimate Offline Shooter Game for Android.md +++ /dev/null @@ -1,127 +0,0 @@ -
    -
    - - -' in styler.to_html() - - -def test_rowspan_w3(): - # GH 38533 - df = DataFrame(data=[[1, 2]], index=[["l0", "l0"], ["l1a", "l1b"]]) - styler = Styler(df, uuid="_", cell_ids=False) - assert '' in styler.to_html() - - -def test_styles(styler): - styler.set_uuid("abc") - styler.set_table_styles([{"selector": "td", "props": "color: red;"}]) - result = styler.to_html(doctype_html=True) - expected = dedent( - """\ - - - - - - - -
    -

    Cover Fire: The Ultimate Offline Shooting Game

    -

Do you love shooting games but hate online lag and interruptions? Do you want to experience realistic action and graphics on your mobile device? Do you want to join a resistance movement against a tyrannical corporation?

    -

    cover fire pdalife


    Download Zip >>> https://urlca.com/2uO9fy



    -

    If you answered yes to any of these questions, then you should try Cover Fire, one of the best shooting games you'll ever play on a mobile device. Cover Fire is an offline action game that lets you control a team of elite soldiers who fight against Tetracorp, a greedy corporation that wants to control everything. You can choose from different modes and missions, customize your weapons and characters, use strategy and tactics, and enjoy stunning 3D graphics.

    -

    In this article, we'll show you how to play Cover Fire, give you some tips and tricks for mastering it, and tell you why you should join the resistance today.

    -

    How to Play Cover Fire

    -

    Cover Fire is easy to play but hard to master. Here are the basic steps you need to follow to start playing Cover Fire.

    -

    Download and Install Cover Fire

    -

    The first thing you need to do is to download and install Cover Fire on your mobile device. You can find Cover Fire on Google Play or Pdalife, depending on your device's operating system. Cover Fire is free to download and play, but it contains some in-app purchases that you can buy with real money if you want to enhance your gaming experience.

    -

    To install Cover Fire on your device, you need to have at least 400 MB of free space and a stable internet connection. Once you download the game, you can launch it and follow the instructions on the screen. You can also adjust the settings and preferences according to your liking.
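
    As a rough illustration of that free-space check, here is a small Kotlin sketch that verifies a folder has at least the 400 MB the article mentions before you start the download. The download path is an assumption and may differ on your device.

    ```kotlin
    import java.io.File

    // Minimal sketch: check that a storage location has at least 400 MB free
    // before downloading. The path is illustrative, not a guaranteed location.
    fun hasEnoughSpace(path: String, requiredBytes: Long = 400L * 1024 * 1024): Boolean =
        File(path).usableSpace >= requiredBytes

    fun main() {
        val target = "/storage/emulated/0/Download"
        println("Enough space for Cover Fire: ${hasEnoughSpace(target)}")
    }
    ```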

    -

    Choose Your Mode and Mission

    -

    Once you have installed Cover Fire, you can choose from different modes and missions that suit your mood and skill level. Cover Fire has three main modes: Campaign, Sniper Ops, and Zombie Event. Each mode has its own storyline, objectives, rewards, and challenges.

    -

    The Campaign mode is the main mode of Cover Fire, where you join the resistance and fight against Tetracorp in various locations and scenarios. You can choose from different chapters and episodes, each with a different difficulty level and number of missions. You can also unlock new characters and weapons as you progress through the campaign.

    -

    The Sniper Ops mode is a special mode where you play as a sniper and take out your enemies from a distance. You can use different rifles and scopes, as well as special items like drones and grenades. You can also earn coins and medals by completing missions and achieving objectives.

    -


    -

    The Zombie Event mode is a seasonal mode where you face hordes of zombies in a post-apocalyptic world. You can use different weapons and items, as well as team up with other players online. You can also earn rewards and prizes by surviving the zombie onslaught.

    -

    To choose your mode and mission, you just need to tap on the mode icon on the main menu and select the mission you want to play. You can also see the details and requirements of each mission before you start playing.

    -

    Control Your Character and Shoot Your Enemies

    -

    The last step is to control your character and shoot your enemies in Cover Fire. The game has simple and intuitive controls that allow you to aim, shoot, reload, switch weapons, use cover, and more. You can also use special abilities and items that give you an edge in combat.

    -

    To control your character in Cover Fire, you just need to use your fingers on the screen. You can swipe left or right to move between covers, tap on the enemy to aim and shoot, swipe down to reload, tap on the weapon icon to switch weapons, tap on the ability icon to use your special ability, tap on the item icon to use an item, and more. You can also adjust the sensitivity and layout of the controls in the settings menu.
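
    To make the control scheme easier to picture, here is an illustrative Kotlin sketch that maps the gestures above to actions. The gesture and action names are made up for the example; they are not Cover Fire's real input API.

    ```kotlin
    // Illustrative only: a simple mapping from the touch gestures described above
    // to in-game actions. Names are assumptions, not the game's actual code.
    enum class Gesture { SWIPE_LEFT, SWIPE_RIGHT, SWIPE_DOWN, TAP_ENEMY, TAP_WEAPON_ICON, TAP_ABILITY_ICON, TAP_ITEM_ICON }

    fun handleGesture(gesture: Gesture): String = when (gesture) {
        Gesture.SWIPE_LEFT, Gesture.SWIPE_RIGHT -> "Move to the next cover"
        Gesture.SWIPE_DOWN -> "Reload the current weapon"
        Gesture.TAP_ENEMY -> "Aim and shoot at the enemy"
        Gesture.TAP_WEAPON_ICON -> "Switch weapons"
        Gesture.TAP_ABILITY_ICON -> "Use the special ability"
        Gesture.TAP_ITEM_ICON -> "Use an item (medkit, grenade, drone)"
    }

    fun main() {
        println(handleGesture(Gesture.SWIPE_DOWN)) // prints "Reload the current weapon"
    }
    ```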

    -

    To shoot your enemies in Cover Fire, you need to be accurate and fast. You can use different types of weapons, such as pistols, rifles, shotguns, snipers, machine guns, rocket launchers, etc. Each weapon has its own stats, such as damage, range, accuracy, fire rate, etc. You can also upgrade your weapons by spending coins or gold.
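
    As a rough model of those weapon stats and upgrades, here is an illustrative Kotlin sketch. The stat values, the coin cost, and the upgrade bonuses are assumptions for the example, not numbers taken from the game.

    ```kotlin
    // Illustrative model of the attributes the article lists (damage, range,
    // accuracy, fire rate) and a coin-based upgrade. All numbers are assumptions.
    data class Weapon(
        val name: String,
        val damage: Int,
        val range: Int,
        val accuracy: Int,
        val fireRate: Int,
        val upgradeLevel: Int = 0
    )

    fun upgrade(weapon: Weapon, coins: Int, cost: Int = 500): Pair<Weapon, Int> {
        if (coins < cost) return weapon to coins // not enough coins: nothing changes
        val improved = weapon.copy(
            damage = weapon.damage + 5,
            accuracy = weapon.accuracy + 2,
            upgradeLevel = weapon.upgradeLevel + 1
        )
        return improved to (coins - cost)
    }

    fun main() {
        val rifle = Weapon("Assault Rifle", damage = 40, range = 60, accuracy = 70, fireRate = 80)
        val (upgraded, coinsLeft) = upgrade(rifle, coins = 1200)
        println("$upgraded, coins left: $coinsLeft")
    }
    ```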

    -

    You also need to be aware of your surroundings and use cover wisely. You can hide behind walls, barrels, crates, cars, etc., to avoid enemy fire. You can also move between covers to flank your enemies or surprise them. However, be careful not to expose yourself too much or stay in one place for too long, as some covers can be destroyed or enemies can throw grenades at you.

    -

    Finally, you need to use your special abilities and items strategically. Each character has a unique special ability that can turn the tide of battle. For example, Lynx can slow down time and aim better; Siegfried can deploy a shield that protects him from bullets; O'Neal can unleash a barrage of rockets; etc. You can also use items like medkits, grenades, drones, etc., that can help you heal yourself or damage your enemies.

    -

    Tips and Tricks for Cover Fire

    -

    Cover Fire is a fun and addictive game that will keep you entertained for hours. However, if you want to become a pro at it and complete all the missions with ease, you need to follow some tips and tricks that will improve your skills and performance. Here are some of them:

    -

    Upgrade Your Weapons and Characters

    -

    One of the most important things you need to do in Cover Fire is to upgrade your weapons and characters regularly. Upgrading your weapons will increase their stats and make them more effective in combat. Upgrading your characters will unlock new abilities and items that will give you an edge in battle.

    -

    To upgrade your weapons and characters, you need to earn and spend currency in Cover Fire. There are two types of currency in the game: coins and gold. Coins are the basic currency that you can earn by completing missions, achieving objectives, watching ads, etc. Gold is the premium currency that you can buy with real money or earn by completing special tasks, such as daily missions, achievements, etc.

    -

    You can use coins and gold to upgrade your weapons and characters in the armory and the barracks, respectively. You can also use cards to upgrade your characters, which you can obtain by opening crates or buying them with gold. Upgrading your weapons and characters will require more coins, gold, and cards as you progress through the game, so make sure to save up and spend wisely.
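
    To picture how those costs can climb as you progress, here is a tiny Kotlin sketch of a geometric cost curve. The base cost and growth factor are assumptions for illustration, not the game's real economy values.

    ```kotlin
    import kotlin.math.pow
    import kotlin.math.roundToInt

    // Illustrative sketch: upgrade cost that grows with each level.
    // Base cost and growth factor are assumptions, not in-game values.
    fun upgradeCost(level: Int, baseCost: Int = 200, growth: Double = 1.5): Int =
        (baseCost * growth.pow(level)).roundToInt()

    fun main() {
        (0..5).forEach { level ->
            println("Level $level -> ${upgradeCost(level)} coins")
        }
    }
    ```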

    -

    Upgrading your weapons and characters will not only make them stronger, but also more versatile and adaptable. You can customize your weapons by changing their skins, scopes, magazines, barrels, etc. You can also customize your characters by changing their outfits, helmets, vests, etc. You can also equip different items and abilities to your characters, such as medkits, grenades, drones, etc.

    -

    Use Strategy and Tactics

    -

    Another thing you need to do in Cover Fire is to use strategy and tactics to overcome your enemies and complete your missions. Cover Fire is not just a mindless shooting game where you can blast your way through everything. You need to plan your moves and actions carefully and use the environment and cover to your advantage.

    -

    To use strategy and tactics in Cover Fire, you need to be aware of your surroundings and the situation. You need to know where your enemies are, what type of weapons they have, how many of them are there, etc. You also need to know where the cover is, what type of cover it is, how durable it is, etc. You also need to know what your objectives are, how much time you have, what rewards you can get, etc.

    -

    You also need to use different approaches and methods depending on the mode and mission you are playing. For example, in the Campaign mode, you may need to be more stealthy and cautious, as you are outnumbered and outgunned by Tetracorp. In the Sniper Ops mode, you may need to be more precise and patient, as you have limited ammo and targets. In the Zombie Event mode, you may need to be more aggressive and fast, as you have unlimited ammo but endless zombies.

    -

    You also need to use different techniques and skills depending on the type of enemies and situations you face. For example, you may need to aim for the head or weak spots of some enemies to deal more damage or kill them instantly. You may also need to move between covers or dodge enemy fire by swiping on the screen. You may also need to use grenades or drones to clear out groups of enemies or distract them.

    -

    Join the Resistance and Have Fun

    -

    The last thing you need to do in Cover Fire is to join the resistance and have fun. Cover Fire is not just a game, but a story of courage and heroism against oppression and injustice. You are not just a soldier, but a leader of a rebellion that fights for freedom and peace.

    -

    To join the resistance in Cover Fire, you need to follow the storyline of the Campaign mode and complete all the chapters and episodes. You will meet different characters who will join your team and help you in your missions. You will also face different enemies who will try to stop you at all costs. You will also discover the secrets and motives behind Tetracorp's actions and plans.

    -

    To have fun in Cover Fire, you need to interact with other players and characters in the game. You can chat with other players online through the chat feature or join a clan with them. You can also compete with other players in the leaderboards or challenge them in duels. You can also enjoy the offline action and realistic graphics of Cover Fire without any internet connection or interruptions.

    -

    Conclusion

    -

    Cover Fire is one of the best shooting games on mobile that offers offline action, realistic graphics, diverse modes and missions, customizable weapons and characters, strategic gameplay, and an engaging storyline. If you love shooting games but hate online lagging and interruptions, then Cover Fire is the perfect game for you.

    -

    So what are you waiting for? Download Cover Fire today from Google Play or Pdalife and join the resistance against Tetracorp. You won't regret it!

    -

    FAQs

    -

    Here are some frequently asked questions about Cover Fire that you may find helpful:

    | Question | Answer |
    | --- | --- |
    | Is Cover Fire an online or offline game? | Cover Fire is an offline game that you can play without any internet connection or interruptions. However, some features and modes may require an internet connection, such as the Zombie Event mode, the chat feature, the leaderboards, etc. |
    | How can I get more coins and gold in Cover Fire? | You can get more coins and gold in Cover Fire by completing missions, achieving objectives, watching ads, etc. You can also buy them with real money through in-app purchases if you want to support the developers and enhance your gaming experience. |
    | How can I unlock new weapons and characters in Cover Fire? | You can unlock new weapons and characters in Cover Fire by progressing through the Campaign mode and completing different chapters and episodes. You can also unlock them by spending coins or gold in the armory and the barracks, respectively. |
    | How can I join a clan or chat with other players in Cover Fire? | You can join a clan or chat with other players in Cover Fire by tapping on the clan or chat icon on the main menu. You will need an internet connection to access these features. You can also create your own clan or invite your friends to join your clan. |
    | How can I contact the developers or report a bug in Cover Fire? | You can contact the developers or report a bug in Cover Fire by tapping on the settings icon on the main menu and then tapping on the support icon. You can also email them at support@generagames.com or visit their website at https://www.generagames.com/. |

    I hope you enjoyed this article and learned something new about Cover Fire. If you have any questions or feedback, feel free to leave a comment below. Thank you for reading and happy shooting!

    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download Mafia City Wars Mod APK and Experience the Most Realistic Gangster Simulation.md b/spaces/congsaPfin/Manga-OCR/logs/Download Mafia City Wars Mod APK and Experience the Most Realistic Gangster Simulation.md deleted file mode 100644 index bd376179471be94cf13d13ba08d6c480e340e355..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Download Mafia City Wars Mod APK and Experience the Most Realistic Gangster Simulation.md +++ /dev/null @@ -1,112 +0,0 @@ - -

    Mafia City Wars Mod APK: How to Download and Play the Ultimate Crime Simulator

    -

    Do you love crime movies and games? Do you want to experience the thrill of living in a city ruled by gangs, violence, and corruption? If yes, then you should try Mafia City Wars, a realistic and immersive crime simulator game for Android devices. In this game, you can choose your role, join a gang, complete missions, fight with other players, and become the most powerful crime lord in the city.

    -

    But what if you want to enjoy the game without any limitations or restrictions? What if you want to have unlimited money, weapons, items, and resources? Well, there is a way to do that. You can download and install Mafia City Wars Mod APK, a modified version of the game that gives you access to all the features and benefits of the game for free. In this article, we will show you how to download and install Mafia City Wars Mod APK, how to play the game, and some tips and tricks to help you succeed in the game.

    -

    mafia city wars mod apk


    Download File ►►► https://urlca.com/2uOf5r



    -

    What is Mafia City Wars?

    -

    Mafia City Wars is a 3D open-world crime simulator game developed by Naxeex Studio. The game is inspired by popular crime movies and games like The Godfather, Scarface, Grand Theft Auto, and more. The game lets you explore a huge city full of opportunities and dangers. You can drive cars, bikes, boats, helicopters, and tanks. You can use guns, knives, grenades, rockets, and other weapons. You can rob banks, shops, casinos, and other places. You can recruit gang members, bribe cops, extort businesses, and more.

    -

    The game also has a multiplayer mode where you can compete with other players from all over the world. You can join or create clans, chat with other players, trade items, form alliances, or declare wars. You can also participate in events, tournaments, raids, and battles for rewards and glory.

    -

    Features of Mafia City Wars

    -

    Some of the main features of Mafia City Wars are:

    -
      -
    • A huge open-world city with realistic graphics and physics
    • A variety of roles and gangs to choose from
    • A lot of missions and activities to complete
    • A wide range of vehicles and weapons to use
    • A dynamic day-night cycle and weather system
    • A multiplayer mode with online chat and clan system
    • A mod version with unlimited money, resources, items, and more
    -

    How to download and install Mafia City Wars Mod APK

    -

    If you want to download and install Mafia City Wars Mod APK on your Android device, you need to follow these steps:

    -
      -
    1. Go to [this link](^1^) and download the APK file of Mafia City Wars Mod.
    2. Go to your device settings and enable the installation of apps from unknown sources.
    3. Locate the downloaded APK file on your device storage and tap on it.
    4. Follow the instructions on the screen to install the app.
    5. Launch the app and enjoy the game.
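
    If you want an extra safety check before you install the file, you can compare the downloaded APK's checksum against one published by a source you trust. Here is a minimal Kotlin sketch; the file path and name are assumptions for illustration.

    ```kotlin
    import java.io.File
    import java.security.MessageDigest

    // Minimal sketch: compute the SHA-256 checksum of a downloaded APK so it can
    // be compared with a published value. The path below is illustrative only.
    fun sha256Of(file: File): String {
        val digest = MessageDigest.getInstance("SHA-256")
        file.inputStream().use { input ->
            val buffer = ByteArray(8192)
            var read = input.read(buffer)
            while (read != -1) {
                digest.update(buffer, 0, read)
                read = input.read(buffer)
            }
        }
        return digest.digest().joinToString("") { "%02x".format(it) }
    }

    fun main() {
        val apk = File("/storage/emulated/0/Download/mafia-city-wars-mod.apk")
        if (apk.exists()) println("SHA-256: ${sha256Of(apk)}") else println("APK not found.")
    }
    ```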

    How to play Mafia City Wars

    -

    Now that you have downloaded and installed Mafia City Wars Mod APK, you are ready to play the game. Here are some basic steps to help you get started:

    -

    Choose your role and gang

    -

    The first thing you need to do is to choose your role and gang in the game. You can choose from four roles: Boss, Hitman, Driver, or Hacker. Each role has its own advantages and disadvantages, as well as different skills and abilities. For example, the Boss can recruit more gang members, the Hitman can use more weapons, the Driver can drive faster and better, and the Hacker can hack into systems and devices.

    -

    -

    You also need to choose your gang from four options: Italian Mafia, Russian Mafia, Yakuza, or Cartel. Each gang has its own territory, reputation, and enemies in the city. For example, the Italian Mafia controls the downtown area, the Russian Mafia controls the industrial zone, the Yakuza controls the Chinatown, and the Cartel controls the slums.

    -

    Complete missions and earn money

    -

    The next thing you need to do is to complete missions and earn money in the game. You can find missions from various sources, such as your gang leader, your contacts, your phone, or the map. Missions can range from simple tasks like delivering packages or stealing cars, to complex operations like robbing banks or assassinating targets. Completing missions will reward you with money, experience points, items, and reputation.

    -

    You can use money to buy vehicles, weapons, clothes, properties, and other things in the game. You can also use money to upgrade your skills and abilities, as well as bribe cops or other people. Money is essential for your survival and success in the game.

    -

    Upgrade your skills and weapons

    -

    Another thing you need to do is to upgrade your skills and weapons in the game. You can upgrade your skills by spending experience points that you earn from completing missions or killing enemies. You can upgrade your weapons by buying new ones or modifying them with attachments and accessories. Upgrading your skills and weapons will make you more powerful and efficient in the game.

    -

    Some of the skills you can upgrade are: health, stamina, accuracy, speed, stealth, hacking, driving, charisma, and leadership. Some of the weapons you can use are: pistols, shotguns, rifles, snipers, machine guns, rocket launchers, grenades, knives, bats, and more.

    Fight with other players and gangs

    -

    The last thing you need to do is to fight with other players and gangs in the game. You can fight with other players in the multiplayer mode, where you can join or create clans, chat with other players, trade items, form alliances, or declare wars. You can also participate in events, tournaments, raids, and battles for rewards and glory.

    -

    You can also fight with other gangs in the city, who will try to attack you or your territory. You can defend your turf or invade theirs, using your vehicles, weapons, and gang members. Fighting with other gangs will affect your reputation and influence in the city.

    -

    Tips and tricks for Mafia City Wars

    -

    Here are some tips and tricks to help you play Mafia City Wars better:

    -

    Use stealth and strategy

    -

    One of the most important skills in the game is stealth. You can use stealth to avoid detection, escape from enemies, or sneak up on them. You can use cover, shadows, disguises, silencers, and other tools to enhance your stealth. You can also use strategy to plan your moves, choose your targets, and execute your missions. You can use the map, the phone, the contacts, and the radar to gather information and coordinate your actions.

    -

    Collect resources and items

    -

    Another important skill in the game is collecting resources and items. You can collect resources like money, ammo, health kits, armor, and more by looting places, killing enemies, or completing missions. You can also collect items like weapons, vehicles, clothes, properties, and more by buying them, finding them, or earning them. Collecting resources and items will help you survive and progress in the game.

    -

    Join a clan and cooperate with others

    -

    One of the most fun aspects of the game is joining a clan and cooperating with others. You can join or create a clan in the multiplayer mode, where you can chat with other players, trade items, form alliances, or declare wars. You can also cooperate with other players in missions, events, raids, and battles. Joining a clan and cooperating with others will make the game more enjoyable and rewarding.

    -

    Use mod features wisely

    -

    One of the most tempting aspects of the game is using mod features wisely. You can use mod features like unlimited money, resources, items, and more to enhance your gameplay and experience. However, you should be careful not to abuse or overuse these features, as they may ruin the balance and challenge of the game. You should also be aware of the risks and consequences of using mod features, such as bans or errors. Use mod features wisely and responsibly.

    -

    Conclusion

    -

    Mafia City Wars is a great game for anyone who loves crime movies and games. It is a realistic and immersive crime simulator game that lets you choose your role, join a gang, complete missions, fight with other players, and become the most powerful crime lord in the city. You can also download and install Mafia City Wars Mod APK to enjoy the game without any limitations or restrictions.

    -

    We hope this article has helped you learn how to download and play Mafia City Wars Mod APK. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading!

    -

    FAQs

    -

    Here are some frequently asked questions about Mafia City Wars Mod APK:

    -
      -
    1. Is Mafia City Wars Mod APK safe to download?

       Yes, Mafia City Wars Mod APK is safe to download as long as you use a trusted source like [this link]. However, you should always scan any file you download with an antivirus software before installing it on your device.

    2. Is Mafia City Wars Mod APK compatible with my device?

       Mafia City Wars Mod APK is compatible with most Android devices that have Android 4.4 or higher installed. However, some devices may have compatibility issues due to different specifications or settings. If you encounter any problems while playing the game on your device, you can try adjusting the graphics settings or contacting the developer for support.

    3. How do I update Mafia City Wars Mod APK?

       To update Mafia City Wars Mod APK, you need to download and install the latest version of the APK file from [this link]. You do not need to uninstall the previous version of the app before installing the new one. However, you should always back up your data before updating any app to avoid losing any progress or information.

    4. How do I uninstall Mafia City Wars Mod APK?

       To uninstall Mafia City Wars Mod APK, you need to go to your device settings and find the app in the list of installed apps. Then, you need to tap on the app and select the uninstall option. You can also uninstall the app by long-pressing the app icon on your home screen and dragging it to the trash bin.

    5. Where can I find more games like Mafia City Wars Mod APK?

       If you like Mafia City Wars Mod APK, you may also enjoy other games like Grand Theft Auto, Gangstar Vegas, Crime City, and more. You can find these games on the Google Play Store or other online platforms. You can also search for mod versions of these games if you want to have more features and benefits.

    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download Vampire The Masquerade - Bloodhunt and Join the Supernatural War.md b/spaces/congsaPfin/Manga-OCR/logs/Download Vampire The Masquerade - Bloodhunt and Join the Supernatural War.md deleted file mode 100644 index de911cee2fd3cbd89e39361772c15bc53905d8c9..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Download Vampire The Masquerade - Bloodhunt and Join the Supernatural War.md +++ /dev/null @@ -1,152 +0,0 @@ -
    -

    Vampire: The Masquerade - Bloodhunt: Everything You Need to Know

    -

    If you are a fan of vampires, battle royales, or both, you might be interested in Vampire: The Masquerade - Bloodhunt, a new free-to-play game that combines these two elements in a thrilling and immersive way. In this article, we will tell you everything you need to know about Bloodhunt, from what it is, how to play it, where to download it, what are the reviews of it, and more. Let's dive in!

    -

    vampire the masquerade bloodhunt download


    Download ►►►►► https://urlca.com/2uO9K2



    -

    What is Bloodhunt?

    -

    Bloodhunt is a free-to-play battle royale game developed and published by Swedish developer Sharkmob. It is based on the tabletop role-playing game Vampire: The Masquerade, and is part of the larger World of Darkness series. The game was released on 27 April 2022 for both Windows and PlayStation 5.

    -

    A free-to-play battle royale game set in the World of Darkness

    -

    Bloodhunt is set in Prague, a city consumed by a ruthless war between vampire factions. You play as one of these vampires, who have to fight against other vampires, hunters, and soldiers in a third-person action shooter. You can either hunt solo or team up with your friends in squads of three. The last vampire or team standing wins the match.

    -

    A faithful adaptation of the Vampire: The Masquerade lore and mechanics

    -

    Bloodhunt draws from the rich and dark lore of the World of Darkness universe, where vampires hide in plain sight and struggle to maintain their humanity and their secrets. You can choose from four different clans, each with their own history, culture, and abilities: Brujah, Toreador, Nosferatu, and Ventrue. You also have to follow the Masquerade, a code of conduct that forbids vampires from revealing their true nature to humans. If you break the Masquerade, you will become more visible to your enemies and risk being hunted down by the Entity, a secret society that wants to wipe out all vampires.

    -

    A fast-paced and action-packed gameplay with supernatural powers and weapons

    -

    Bloodhunt offers a unique gameplay experience that combines stealth, strategy, and combat. You can use your supernatural powers to defy gravity, move faster, heal yourself, or unleash devastating attacks on your enemies. You can also use various weapons, such as guns, melee weapons, or explosives, to suit your playstyle. Moreover, you can feed on human blood to gain more power and health, but be careful not to lose control or harm innocent people.

    -

    How to play Bloodhunt?

    -

    Before you jump into a match of Bloodhunt, you have to choose your clan and archetype. These will determine your appearance, abilities, and playstyle.

    -

    Choose your clan and archetype to define your playstyle and abilities

    -

    There are four clans available in Bloodhunt: Brujah, Toreador, Nosferatu, and Ventrue. Each clan has two archetypes that have different skills and passives. Here is a brief overview of each clan and archetype:

    -

    -
      -
    • Brujah: The rebels and fighters of the vampire society
      • Brawler: A melee-focused archetype that can deal high damage and stun enemies with their fists. Their passive skill allows them to gain more health from feeding.
      • Warrior: A ranged-focused archetype that can use firearms more effectively and reload faster. Their passive skill allows them to deal more damage to enemies with low health.
    • Toreador: The artists and seducers of the vampire society
      • Muse: A support-focused archetype that can heal themselves and their allies with their blood. Their passive skill allows them to gain more experience from feeding.
      • Siren: A stealth-focused archetype that can charm and manipulate enemies with their voice. Their passive skill allows them to move faster and quieter.
    • Nosferatu: The outcasts and spies of the vampire society
      • Prowler: A mobility-focused archetype that can climb walls and leap across buildings. Their passive skill allows them to regenerate health while out of combat.
      • Saboteur: A trap-focused archetype that can deploy mines and grenades to damage and disorient enemies. Their passive skill allows them to hack cameras and drones to reveal enemy locations.
    • Ventrue: The leaders and aristocrats of the vampire society
      • Vanquisher: A tank-focused archetype that can absorb damage and shield themselves and their allies with their blood. Their passive skill allows them to gain more armor from feeding.
      • Executor: A crowd-control-focused archetype that can stun and knock back enemies with their blood. Their passive skill allows them to deal more damage to enemies with high health.
    -

    You can customize your character's appearance, clothing, and accessories to suit your preferences. You can also unlock more options by leveling up your clan or by purchasing them with real money.

    -

    Feed on blood to grow in power and avoid frenzy

    -

    Blood is essential for your survival and strength in Bloodhunt. You can feed on human NPCs that roam around the city, but be careful not to kill them or feed on the same person twice, as this will break the Masquerade and attract unwanted attention. You can also feed on enemy vampires, but this will expose you to their clan's curse, which will affect your abilities negatively for a short time.

    -

    Feeding on blood will fill up your blood meter, which will allow you to use your skills more often and heal yourself faster. However, if your blood meter becomes too full, you will enter a state of frenzy, which will make you lose control of your character and attack anyone nearby, friend or foe. To avoid frenzy, you have to manage your blood meter carefully and use your skills wisely.
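
    As an illustrative model of that mechanic, here is a small Kotlin sketch of a blood meter with a frenzy threshold: feeding fills the meter, skills drain it, and crossing the threshold triggers frenzy. All of the numbers are assumptions for the example, not values taken from Bloodhunt.

    ```kotlin
    // Illustrative model only: feeding fills the meter, skills drain it, and a
    // threshold marks frenzy. Thresholds and costs are assumptions.
    class BloodMeter(private val frenzyThreshold: Int = 90, private val max: Int = 100) {
        var level: Int = 50
            private set

        val inFrenzy: Boolean
            get() = level >= frenzyThreshold

        fun feed(amount: Int) {
            level = (level + amount).coerceAtMost(max)
        }

        fun useSkill(cost: Int): Boolean {
            if (level < cost) return false
            level -= cost
            return true
        }
    }

    fun main() {
        val meter = BloodMeter()
        meter.feed(30)
        println("Blood: ${meter.level}, frenzy: ${meter.inFrenzy}") // below the threshold
        meter.feed(20)
        println("Blood: ${meter.level}, frenzy: ${meter.inFrenzy}") // meter full, frenzy triggered
        meter.useSkill(25)
        println("Blood: ${meter.level}, frenzy: ${meter.inFrenzy}") // drained back under the threshold
    }
    ```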

    -

    Hunt your enemies and survive the night in Prague

    -

    Once you have chosen your clan and archetype, you are ready to enter a match of Bloodhunt. You will start by parachuting from a helicopter into one of the four districts of Prague: Old Town, Castle Hill, New Town, or Industrial Zone. Each district has its own layout, landmarks, loot, and hazards. You have to explore the city, scavenge for weapons and items, and hunt down your enemies while avoiding the Entity's forces.

    -

    The match will last for about 15 minutes, during which the playable area will shrink as a red mist closes in. You have to stay within the safe zone or risk taking damage from the mist. The last vampire or team alive wins the match and earns rewards based on their performance.

    -

    Where to download Bloodhunt?

    Available on Steam and PlayStation 5

    -

    If you are interested in playing Bloodhunt, you can download it for free on Steam or PlayStation 5. The game is currently in early access, which means that it is still in development and may have bugs, glitches, or missing features. However, the developers are constantly working on improving the game and adding new content, such as clans, modes, maps, and cosmetics.

    -

    To download Bloodhunt on Steam, you will need to have a Steam account and the Steam client installed on your PC. You can create a free account and download the client from the official Steam website. Once you have done that, you can search for Bloodhunt on the Steam store or click on this link to go directly to the game's page. There, you can click on the "Play Game" button to start downloading and installing the game.

    -

    To download Bloodhunt on PlayStation 5, you will need to have a PlayStation Network account and a PlayStation Plus subscription. You can create a free account and sign up for PlayStation Plus from the official PlayStation website. Once you have done that, you can search for Bloodhunt on the PlayStation Store or click on this link to go directly to the game's page. There, you can click on the "Download" button to start downloading and installing the game.

    -

    System requirements and performance modes

    -

    Before you download Bloodhunt, you should make sure that your PC or PS5 meets the minimum or recommended system requirements for the game. This will ensure that you have a smooth and enjoyable gameplay experience. Here are the system requirements for Bloodhunt according to the official website and Steam page:

    - - - -
    | | Minimum (PC) | Recommended (PC) | PS5 |
    | --- | --- | --- | --- |
    | OS | Windows 10 64-bit | Windows 10 64-bit | PlayStation 5 |
    | CPU | Intel i5-7400 / AMD Ryzen 1300X or better | Intel i7-8700K / AMD Ryzen 5 3600X or better | AMD Zen 2-based CPU with 8 cores at 3.5GHz |
    | Memory | 8 GB RAM | 16 GB RAM | 16 GB GDDR6 RAM |
    | GPU | Nvidia GTX 970 / AMD Radeon RX 580 or better | Nvidia GTX 1080 / AMD Radeon RX Vega 64 or better | AMD RDNA 2-based GPU with 36 CUs at up to 2.23GHz |
    | Disk | HDD | SSD | Custom SSD |

    Bloodhunt also offers different performance modes for PC and PS5 players to choose from. These modes allow you to adjust the graphics quality and frame rate of the game according to your preference. Here are the performance modes for Bloodhunt according to the official website and IGN:

    - - - -
    | PC | PS5 |
    | --- | --- |
    | You can choose from four graphics presets: Low, Medium, High, and Ultra. You can also customize the graphics settings individually, such as resolution, anti-aliasing, shadows, textures, etc. You can also enable or disable vertical sync (V-Sync) and dynamic resolution scaling (DRS). The frame rate is uncapped by default, but you can limit it to 30 FPS, 60 FPS, or 120 FPS. | You can choose from two graphics modes: Performance Mode and Quality Mode. Performance Mode prioritizes frame rate over graphics quality, aiming for up to 120 FPS at dynamic resolution scaling (DRS). Quality Mode prioritizes graphics quality over frame rate, aiming for up to 60 FPS at native resolution. |
    -

    Crossplay and cross-progression features

    -

    Bloodhunt supports crossplay between PC and PS5 players, which means that you can play with or against players from different platforms in the same match. However, there are some limitations and conditions for crossplay in Bloodhunt. Here are some of them according to the official website and GamesRadar+:

    -
      -
    • PC players have crossplay enabled by default and cannot disable it.
    • PS5 players can opt in or out of crossplay via the in-game settings menu.
    • All game modes have crossplay enabled.
    • You cannot tell which platform other players are on.
    • You cannot group up or communicate with friends cross-platform in Elysium (the social hub of the game). You can only do so in the main menu or in a match.
    • You can add friends cross-platform via the in-game friend system.
    -

    Bloodhunt also supports cross-progression between PC and PS5 players, which means that you can access your account, progress, and cosmetics on both platforms. However, there are some limitations and conditions for cross-progression in Bloodhunt. Here are some of them according to the official website and GamesRadar+:

    -
      -
    • You have to link your Steam account and your PlayStation Network account to your Sharkmob account to enable cross-progression.
    • You can only link one Steam account and one PlayStation Network account to your Sharkmob account.
    • You cannot unlink your accounts once they are linked.
    • You cannot transfer your progress or cosmetics between different Sharkmob accounts.
    • Some items or features may not be available on both platforms due to technical or legal reasons.
    -

    What are the reviews of Bloodhunt?

    -

    Bloodhunt has received mostly positive feedback from critics and players since its release. The game currently has a "Mostly Positive" rating on Steam based on over 10,000 user reviews, and a "Generally Favorable" rating on Metacritic based on 11 critic reviews. Here are some of the highlights of the game's strengths and weaknesses according to the reviews:

    -

    Highlights of the game's strengths

    -
      -
    • The game has stunning graphics, sound, and music that create a captivating atmosphere and immersion.
    • The game has a unique and original premise that blends vampires and battle royales in a creative way.
    • The game has a faithful and respectful adaptation of the Vampire: The Masquerade lore and mechanics that appeals to fans of the franchise.
    • The game has a fast-paced and action-packed gameplay that offers a lot of variety, strategy, and fun.
    • The game has a diverse and customizable character system that allows players to express their personality and playstyle.
    • The game has a smooth and responsive performance that runs well on both PC and PS5.
    • The game has a friendly and supportive community that welcomes new players and provides feedback to the developers.
    -

    Highlights of the game's weaknesses

    -
      -
    • The game has some bugs, glitches, and crashes that affect the gameplay experience and stability.
    • The game has some balance issues that make some clans, archetypes, or skills more powerful or weaker than others.
    • The game has some matchmaking issues that make it hard to find matches or join friends cross-platform.
    • The game has some content issues that make it repetitive or boring after a while.
    • The game has some monetization issues that make it pay-to-win or unfair for free-to-play players.
    • The game has some communication issues that make it hard to coordinate with teammates or chat with other players.
    -

    Conclusion and FAQs

    -

    Bloodhunt is a free-to-play battle royale game that lets you play as a vampire in a war-torn Prague. The game offers a unique gameplay experience that combines stealth, strategy, and combat with supernatural powers and weapons. You can choose from four different clans and eight different archetypes to define your playstyle and abilities. You can also customize your character's appearance, clothing, and accessories. You can download the game for free on Steam or PlayStation 5, as long as you meet the system requirements. The game supports crossplay and cross-progression between PC and PS5 players. The game has received mostly positive reviews from critics and players, who praised its graphics, premise, gameplay, character system, performance, and community. However, the game also has some flaws, such as bugs, balance issues, matchmaking issues, content issues, monetization issues, and communication issues. The developers are working on fixing these issues and adding more content to the game in the future.

    -

    If you have any questions about Bloodhunt, you might find the answers in these FAQs:

    -

    Q: Is Bloodhunt free?

    -

    A: Yes, Bloodhunt is free-to-play. You can download it for free on Steam or PlayStation 5. However, the game also has optional in-game purchases that allow you to buy cosmetics or currency with real money.

    -

    Q: Is Bloodhunt online only?

    -

    A: Yes, Bloodhunt is online only. You need an internet connection and an online account to play the game. You cannot play the game offline or solo.

    -

    Q: Is Bloodhunt single-player or multiplayer?

    -

    A: Bloodhunt is multiplayer only. You can play with or against other players in matches of up to 45 players. You can either play solo or in squads of three. You cannot play with bots or AI opponents.

    -

    Q: Is Bloodhunt canon?

    -

    A: Yes, Bloodhunt is canon. The game is set in the same universe and timeline as the tabletop role-playing game Vampire: The Masquerade and its other adaptations, such as video games, novels, comics, etc. The game follows the lore and rules of the World of Darkness setting and respects its established characters and events.

    -

    Q: Is Bloodhunt scary?

    -

    A: Bloodhunt is not a horror game, but it does have some elements that might be scary or disturbing for some players. The game has a dark and mature theme that deals with violence, blood, gore, death, and supernatural creatures. The game also has some jump scares, loud noises, and intense moments that might startle or frighten some players. The game has a PEGI 18 rating and an ESRB M rating for these reasons.

    -

    Q: Is Bloodhunt fun?

    -

    A: Bloodhunt is fun if you enjoy vampires, battle royales, or both. The game offers a unique and original gameplay experience that combines stealth, strategy, and combat with supernatural powers and weapons. The game also has a stunning graphics, sound, and music that create a captivating atmosphere and immersion. The game also has a diverse and customizable character system that allows you to express your personality and playstyle. The game also has a friendly and supportive community that welcomes new players and provides feedback to the developers. However, the game also has some flaws, such as bugs, balance issues, matchmaking issues, content issues, monetization issues, and communication issues. The developers are working on fixing these issues and adding more content to the game in the future.

    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Enjoy Fast and Fun Cricket Matches with Cricket League APK 1.9.0.md b/spaces/congsaPfin/Manga-OCR/logs/Enjoy Fast and Fun Cricket Matches with Cricket League APK 1.9.0.md deleted file mode 100644 index 28a7de9e8b52f53457f5e3a25dec11f62e05666c..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Enjoy Fast and Fun Cricket Matches with Cricket League APK 1.9.0.md +++ /dev/null @@ -1,111 +0,0 @@ - -

    Cricket League APK 1.9.0: A Free Online Cricket Game for Android Users

    -

    If you are a cricket fan and looking for a fun and exciting way to enjoy your favorite sport on your mobile device, then you should definitely check out Cricket League APK 1.9.0. This is a free online cricket game that lets you play quick two over matches against your friends or players around the world in just a few minutes. You can also create your own team and compete in various leagues to become the ultimate champion.

    -

    cricket league apk 1.9.0


    Download File ››››› https://urlca.com/2uO7HW



    -

    Introduction

    -

    Cricket is one of the most popular sports in the world, especially in countries like India, Pakistan, Australia, England, and South Africa. Millions of people watch and play cricket every day, either on TV, online, or in stadiums. However, not everyone has the time or opportunity to play cricket in real life, especially during these challenging times of pandemic and lockdowns.

    -

    That's why online cricket games are a great alternative for cricket lovers who want to experience the thrill and excitement of cricket anytime and anywhere. Online cricket games allow you to play with real players from different countries, test your skills and strategies, and have fun with your friends and family.

    -

    One of the best online cricket games that you can download and play on your Android device is Cricket League APK 1.9.0. This is a fast, fun, exciting, and authentic 3D real-time multiplayer cricket game that will keep you hooked for hours.

    -


    -

    What is Cricket League APK 1.9.0?

    -

    Cricket League APK 1.9.0 is an online cricket game developed by Miniclip, a leading company in the gaming industry that has created many popular games such as 8 Ball Pool, Soccer Stars, Agar.io, and more.

    -

    Cricket League APK 1.9.0 is a game that lets you bat, bowl, and field your way to the top of the league in this realistic and immersive cricket game. You can choose from different modes such as Quick Match, Tournament, or League, and play with different teams such as India, Australia, England, Pakistan, South Africa, New Zealand, Sri Lanka, Bangladesh, West Indies, Afghanistan, Ireland, Zimbabwe, Nepal, Scotland, UAE, Canada, USA, Oman, Namibia.

    -

    Cricket League APK 1.9.0 is a game that is easy to learn but hard to master. You can customize your players with different outfits, bats, balls, helmets, gloves, pads, shoes, etc., and upgrade their skills with coins that you earn by winning matches.

    -

    Why should you download Cricket League APK 1.9.0?

    -

    There are many reasons why you should download Cricket League APK 1.9.0 on your Android device right now:

    -
      -
      • It is a free online cricket game that does not require any registration or subscription.
      • It is a game that has stunning 3D graphics and animations that make you feel like you are playing in a real stadium.

      Features of Cricket League APK 1.9.0

      -

      Cricket League APK 1.9.0 is a game that has many amazing features that make it one of the best online cricket games for Android users. Here are some of the features that you can enjoy in this game:

      -

      3D Multiplayer Cricket Sports Game

      -

      Cricket League APK 1.9.0 is a game that lets you play cricket with real players from all over the world in real-time. You can join or create a match and invite your friends or random players to join you. You can also chat with your opponents and teammates during the match and send them emojis and stickers.

      -

      Easy to Learn Batting and Bowling

      -

      Cricket League APK 1.9.0 is a game that has simple and intuitive controls that make it easy for anyone to learn how to bat and bowl. You can swipe on the screen to hit the ball or to deliver the ball. You can also adjust the direction, speed, and spin of the ball or the bat with a simple tap.

      -

      Win Matches to Get Coins and Build Your Dream Team

      -

      Cricket League APK 1.9.0 is a game that rewards you with coins for every match that you win. You can use these coins to buy new players, equipment, and skills for your team. You can also upgrade your players' attributes such as power, accuracy, stamina, speed, etc., to make them more effective on the field.

      -

      Play with Your Friends and Family

      -

      Cricket League APK 1.9.0 is a game that lets you play with your friends and family in a fun and friendly way. You can create your own private matches and invite your friends or family members to join you. You can also chat with them during the match and share your scores and achievements on social media.

      -

      Create Your Team and Top the Leagues

      -

      Cricket League APK 1.9.0 is a game that lets you create your own team and compete in various leagues to become the ultimate champion. You can choose from different leagues such as Rookie, Amateur, Professional, Elite, Legend, etc., and play against other teams of your level. You can also track your progress and performance on the leaderboard and see how you rank among other players.

      -

      How to Download and Install Cricket League APK 1.9.0

      -

      If you are interested in playing Cricket League APK 1.9.0 on your Android device, then you need to follow these simple steps to download and install it:

      -

      Step 1: Go to the official website of Cricket League APK 1.9.0 or click on this link

      -

      The first step is to go to the official website of Cricket League APK 1.9.0 or click on this link to access the download page of the game.

      -

      Step 2: Tap on the download button and wait for the file to be downloaded

      -

      The next step is to tap on the download button on the download page and wait for the file to be downloaded on your device.

      -

      Step 3: Go to your device settings and enable unknown sources installation

      -

      The third step is to go to your device settings and enable unknown sources installation. This will allow you to install apps from sources other than Google Play Store.

      -

      Step 4: Locate the downloaded file and tap on it to start the installation process

      -

      The fourth step is to locate the downloaded file on your device and tap on it to start the installation process.

      -

      Step 5: Follow the on-screen instructions and enjoy the game

      -

      The final step is to follow the on-screen instructions and complete the installation process.

      -

      Congratulations! You have successfully installed Cricket League APK 1.9.0 on your Android device.
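      If you also have a PC with the Android platform tools set up and USB debugging enabled on your phone, you can sideload the same file from the command line instead of tapping through the installer. This is only a minimal sketch, not part of the game's official instructions; it assumes `adb` is on your PATH and that the file name below is replaced with the APK you actually downloaded:

```python
import subprocess
import sys

# Hypothetical file name -- point this at the APK you downloaded.
APK_PATH = "cricket-league-1.9.0.apk"

def sideload(apk_path: str) -> None:
    """Install an APK over adb; -r replaces an existing install without losing data."""
    result = subprocess.run(
        ["adb", "install", "-r", apk_path],
        capture_output=True, text=True,
    )
    # adb prints "Success" on stdout when the install completes.
    if "Success" in result.stdout:
        print(f"Installed {apk_path}")
    else:
        print("Install failed:", result.stdout, result.stderr, file=sys.stderr)

if __name__ == "__main__":
    sideload(APK_PATH)
```

      The manual steps above remain the simplest route if you do not use a computer.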

      -

      Conclusion

      -

      Cricket League APK 1.9.0 is a free online cricket game that lets you play quick two-over matches against your friends or players around the world in just a few minutes. You can also create your own team and compete in various leagues to become the ultimate champion.

      -

      This game has stunning 3D graphics, realistic physics, easy controls, multiple modes, teams, players, equipment, skills, etc., that make it one of the best online cricket games for Android users.

      -

      If you are a cricket fan and looking for a fun and exciting way to enjoy your favorite sport on your mobile device, then you should definitely download Cricket League APK 1.9.0 and give it a try. You will not regret it.

      -

      FAQs

      -

      Here are some of the frequently asked questions about Cricket League APK 1.9.0:

      1. Is Cricket League APK 1.9.0 safe to download and install?

        Yes, Cricket League APK 1.9.0 is safe to download and install on your Android device. It does not contain any viruses, malware, or spyware that can harm your device or data.

      2. Is Cricket League APK 1.9.0 compatible with all Android devices?

        Cricket League APK 1.9.0 is compatible with most Android devices running Android 4.4 or higher. However, some older or low-end devices may experience lag or performance issues while playing the game.

      3. How much space does Cricket League APK 1.9.0 require on my device?

        Cricket League APK 1.9.0 requires about 100 MB of free space on your device to download and install the game.

      4. Can I play Cricket League APK 1.9.0 offline?

        No, Cricket League APK 1.9.0 is an online game that requires a stable internet connection to play.

      5. Can I play Cricket League APK 1.9.0 with other players from different countries?

        Yes, Cricket League APK 1.9.0 is a global game that lets you play with other players from different countries in real-time.

      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Enjoy Hay Day Online for Free - No Download Required.md b/spaces/congsaPfin/Manga-OCR/logs/Enjoy Hay Day Online for Free - No Download Required.md deleted file mode 100644 index 8444eb78eebca84ae3870657d6bcb94ba1f1c09a..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Enjoy Hay Day Online for Free - No Download Required.md +++ /dev/null @@ -1,209 +0,0 @@ -
      -

      How to Play Hay Day Without Downloading It

      -

      Hay Day is one of the most popular and successful farming simulation games in the world. It has millions of players and fans who enjoy growing crops, raising animals, trading goods, and building their own farm. But what if you want to play Hay Day without downloading it? Is there a way to play Hay Day online for free? And what are some alternatives to Hay Day that you can try? In this article, we will answer these questions and more. Read on to find out how you can enjoy Hay Day without downloading it.

      -

      hay day without download


      Download ○○○ https://urlca.com/2uO98F



      -

      What is Hay Day and Why is it Popular?

      -

      Hay Day is a game developed by Supercell, a Finnish company that also created other hit games like Clash of Clans, Clash Royale, and Brawl Stars. Hay Day was released in 2012 for iOS devices and in 2013 for Android devices. Since then, it has been downloaded over 100 million times and has received positive reviews from critics and players alike.

      -

      Hay Day is a farming simulation game with many features and activities

      -

      In Hay Day, you inherit a farm from your uncle, who can no longer take care of it. Your goal is to turn this farm into a thriving business by planting crops, harvesting them, making products, selling them, and expanding your land. You can also raise animals like chickens, cows, pigs, horses, and more. You can feed them, collect their products, and even pet them. You can also fish in the lake, repair the town, explore the valley, join a neighborhood, participate in events, and much more. There is always something new and fun to do in Hay Day.

      -

      Hay Day has a large and active community of players and fans

      -

      One of the reasons why Hay Day is so popular is because it has a large and active community of players and fans. You can interact with other players by trading goods with them, helping them with their orders, chatting with them, competing with them in derbies, or visiting their farms. You can also follow Hay Day on social media platforms like Facebook, Twitter, Instagram, YouTube, or Reddit. There you can find news, updates, tips, tricks, contests, fan art, videos, memes, and more. You can also share your own feedback, opinions, suggestions, or questions with the developers or other players.

      -

      How to Play Hay Day Online for Free on Yandex Games

      -

      If you want to play Hay Day without downloading it, one option is to play it online for free on Yandex Games. Yandex Games is a platform that offers many browser-based games that you can play on your computer or mobile device without installing anything. One of these games is called Hay Day Farm.

      -

      Yandex Games is a platform that offers many browser-based games

      -

      Yandex Games is a service provided by Yandex, a Russian company that operates various internet products and services. Yandex Games hosts a catalog of browser-based games that run directly in your browser, so you can play them on your computer or mobile device without installing anything.

      How to access and play Hay Day Farm on Yandex Games

      -

      To access and play Hay Day Farm on Yandex Games, you need to have a Yandex account. You can create one for free by visiting the Yandex website and clicking on the "Create account" button. You can also sign in with your Google, Facebook, or Twitter account. Once you have an account, you can go to the Yandex Games website and search for Hay Day Farm. Alternatively, you can use this link: https://games.yandex.com/games/hay-day-farm. Then, you can click on the "Play" button and start playing Hay Day Farm on your browser.

      -

      What are the advantages and disadvantages of playing Hay Day Farm online

      -

      Playing Hay Day Farm online has some advantages and disadvantages compared to playing Hay Day on your device. Here are some of them:

| Advantages | Disadvantages |
| --- | --- |
| You don't need to download or install anything. | You need a stable internet connection. |
| You can play on any device that supports a browser. | You can't play offline or without a browser. |
| You can save your progress and data on the cloud. | You can't sync your progress and data with the original Hay Day game. |
| You can enjoy most of the features and activities of Hay Day. | You might encounter some bugs, glitches, or errors. |
| You can play for free without any ads or in-app purchases. | You might miss some updates, events, or content from the original Hay Day game. |

      How to Play Hay Day on PC or Mobile Devices

      -

      If you prefer to play Hay Day on your PC or mobile devices, you need to download and install it first. You can find Hay Day on the App Store for iOS devices, on the Google Play Store for Android devices, or on the Amazon Appstore for Kindle devices. You can also play Hay Day on your PC using an emulator like BlueStacks or Nox Player. Here are the steps to follow:

      How to download and install Hay Day on PC or mobile devices

      -

      To download and install Hay Day on your PC or mobile devices, follow these steps:

      1. Go to the App Store, Google Play Store, Amazon Appstore, or the emulator's app store and search for Hay Day.
      2. Tap or click on the Hay Day icon and then tap or click on the "Install" or "Get" button.
      3. Wait for the download and installation to complete. You might need to grant some permissions or accept some terms and conditions.
      4. Once the installation is done, tap or click on the Hay Day icon to launch the game. If you installed it inside an emulator, the adb sketch after these steps is a quick way to confirm the install went through.
      5. Follow the instructions on the screen to set up your account, choose your language, and start playing.
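      As mentioned in the steps above, if you installed Hay Day inside an Android emulator (or on a phone with USB debugging enabled), you can confirm the install from a PC. This is a rough sketch, assuming `adb` is on your PATH; the package name is an assumption here, so check it on your own device and adjust if needed:

```python
import subprocess

# Assumed package name for Hay Day -- verify it on your device or emulator.
PACKAGE = "com.supercell.hayday"

def is_installed(package: str) -> bool:
    """Return True if the package shows up in the device's installed-package list."""
    out = subprocess.run(
        ["adb", "shell", "pm", "list", "packages"],
        capture_output=True, text=True, check=True,
    ).stdout
    # pm prints one line per app, e.g. "package:com.supercell.hayday"
    return any(line.strip() == f"package:{package}" for line in out.splitlines())

if __name__ == "__main__":
    print("Hay Day installed:", is_installed(PACKAGE))
```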

      How to sync your progress and data across different devices

      -

      If you want to sync your progress and data across different devices, you need to connect your Hay Day account to a Facebook account. This way, you can save your farm on the cloud and access it from any device that has Hay Day installed. To do this, follow these steps:

      -


      1. Open Hay Day on your device and tap or click on the gear icon in the top left corner of the screen.
      2. Tap or click on the "Settings" option and then tap or click on the "Facebook" button.
      3. Log in with your Facebook account and grant Hay Day permission to access it.
      4. You will see a confirmation message that says "You are now connected to Facebook". Tap or click on "OK".
      5. Now you can sync your farm across different devices by logging in with the same Facebook account on each device.

      What are the benefits and drawbacks of playing Hay Day on PC or mobile devices

      -

      Playing Hay Day on your PC or mobile devices has some benefits and drawbacks compared to playing it online. Here are some of them:

| Benefits | Drawbacks |
| --- | --- |
| You can play offline or without a browser. | You need to download and install the game. |
| You can sync your progress and data with Facebook. | You need a Facebook account to do so. |
| You can enjoy all the updates, events, and content from the original Hay Day game. | You might encounter some ads or in-app purchases. |
| You can play with better graphics, sound, and performance. | You might need a powerful device or emulator to do so. |

      How to Find Alternatives to Hay Day

      If you are looking for some alternatives to Hay Day, you might want to try other games similar to Hay Day. There are many games like Hay Day that offer different themes, features, and experiences. You might find some of them more appealing, challenging, or fun than Hay Day.

      -

      Why you might want to try other games similar to Hay Day

      -

      There are several reasons why you might want to try other games similar to Hay Day. Some of them are:

      • You want to explore new genres, settings, or stories.
      • You want to experience different gameplay mechanics, strategies, or challenges.
      • You want to discover new features, activities, or content.
      • You want to compare different games and find your favorite one.
      • You want to take a break from Hay Day and try something new.

      How to find and compare different games like Hay Day

      -

      To find and compare different games like Hay Day, you can use various methods and sources. Some of them are:

      • Use search engines like Google or Bing to look for keywords like "games like Hay Day", "farming simulation games", or "best farming games".
      • Use online platforms like Steam, App Store, Google Play Store, or Amazon Appstore to browse, filter, or sort games by categories, tags, ratings, reviews, or popularity.
      • Use online forums like Reddit, Quora, or GameFAQs to ask for recommendations, opinions, or suggestions from other players or experts.
      • Use online articles, blogs, videos, podcasts, or magazines that review, rank, or feature games like Hay Day or farming simulation games.
      • Use online tools like Similar Games Finder (https://www.similargamesfinder.com/) or Games Finder (https://gameslikefinder.com/) that help you find and compare games based on your preferences and criteria.

      Some examples of games like Hay Day and their features

      -

      Here are some examples of games like Hay Day and their features. Note that these are not the only games like Hay Day and that you might find other games that suit your taste better.

| Game | Features |
| --- | --- |
| FarmVille 2: Country Escape | A sequel to the popular FarmVille game that lets you build your own farm and join a co-op with other players. You can grow crops, raise animals, craft goods, trade with friends, and explore new areas. You can play offline or online and sync your progress across devices. You can enjoy regular updates, events, and quests. |
| Farm Frenzy 4 | A time management game that challenges you to run a farm business in different locations. You can grow crops, feed animals, produce goods, sell them at the market, and upgrade your facilities. You can play 90 levels with varying objectives and difficulties. You can enjoy colorful graphics, funny animations, and catchy music. |
| Township | A game that combines farming and city building elements. You can grow crops, process them at factories, sell goods at the market, and build your own town. You can also interact with townspeople, complete orders, join a co-op, and visit other players' towns. You can also explore mines, islands, zoos, and more. |
| Farm Story 2 | A game that lets you create your own farm story with various characters and animals. You can grow crops, raise pets and livestock, make products, decorate your farm, and discover hidden treasures. You can also play mini-games like fishing or mining. You can also connect with other players through social features. |
| Stardew Valley | A game that lets you escape to a rural life in a charming pixelated world. You can inherit a farm from your grandfather and turn it into your dream farm. You can also explore the town, meet and befriend the locals, get married, have children, and fight monsters. You can also customize your character, farm, home, and tools. |

      Conclusion

      -

      In conclusion, Hay Day is a fun and addictive farming simulation game that has many features and activities to enjoy. However, if you want to play Hay Day without downloading it, you can try playing it online for free on Yandex Games.

      -

      Whatever option you choose, we hope you have fun playing Hay Day or its alternatives. Hay Day is a great game that can keep you entertained, relaxed, and creative for hours. It can also help you learn more about farming, business, and community. If you have any questions, feedback, or suggestions about Hay Day or this article, please feel free to share them with us in the comments section below. We would love to hear from you.

      -

      FAQs

      -

      Here are some frequently asked questions about Hay Day and its alternatives:

      1. How can I get more coins and diamonds in Hay Day?

        You can get more coins and diamonds in Hay Day by completing orders, achievements, events, quests, or derbies. You can also watch ads, spin the wheel of fortune, open mystery boxes, or mine ores. You can also buy them with real money or exchange them with other players.

      2. How can I join a neighborhood in Hay Day?

        You can join a neighborhood in Hay Day by tapping or clicking on the house icon in the bottom right corner of the screen. Then, you can search for a neighborhood by name, tag, level, language, or type. You can also create your own neighborhood by tapping or clicking on the "Create" button.

      3. How can I play Hay Day with my friends?

        You can play Hay Day with your friends by connecting your game to Facebook. Then, you can see your friends' farms on the map and visit them, trade with them, help them, chat with them, or compete with them. You can also invite your friends to join your neighborhood or co-op.

      4. What are some tips and tricks for playing Hay Day?

        Some tips and tricks for playing Hay Day are:

        • Plant crops that take longer to grow at night or when you are away from the game.
        • Use Tom the delivery boy to buy rare or expensive items for cheap.
        • Use the roadside shop to sell your goods at higher prices than the market.
        • Use the newspaper to find good deals from other players or advertise your own goods.
        • Use the town visitors to earn more coins and reputation points.
        • Use the valley to earn more rewards and tokens.

      5. What are some other games like Hay Day that are not mentioned in this article?

        Some other games like Hay Day that are not mentioned in this article are:

        • Farmville 3: A farming simulation game that lets you build your own farm and animal sanctuary.
        • Farm Together: A farming simulation game that lets you grow crops, raise animals, decorate your farm, and play with other players online.
        • Farming Simulator 22: A farming simulation game that lets you operate various vehicles and machines, cultivate crops, breed animals, and manage your farm business.
        • Harvest Moon: One World: A farming simulation game that lets you explore a vast world, grow crops, raise animals, befriend villagers, and find love.
        • Gardenscapes: A casual game that lets you restore a beautiful garden by solving match-3 puzzles and completing tasks.

        \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/How to Use APK Mirror to Download and Update Xiaomi Apps.md b/spaces/congsaPfin/Manga-OCR/logs/How to Use APK Mirror to Download and Update Xiaomi Apps.md deleted file mode 100644 index 2293f54cfb5529e5b86be20c884a4299b8fa7144..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/How to Use APK Mirror to Download and Update Xiaomi Apps.md +++ /dev/null @@ -1,179 +0,0 @@ - -

        What is APK Mirror and Why You Should Use It for Your Xiaomi Device

        -

        If you own a Xiaomi device, you may have noticed that some apps take a long time to get updated or are not available in your region. This can be frustrating, especially if you want to enjoy the latest features and improvements of your favorite apps.

        -

        Fortunately, there is a solution for this problem: APK Mirror. APK Mirror is a website that hosts thousands of Android apps in their original APK format, which means you can download and install them directly on your device without going through Google Play Store.

        -

        apk mirror xiaomi


        Download Zip ——— https://urlca.com/2uO9lC



        -

        APK Mirror can help you get the newest versions of apps before they roll out to your device, as well as apps that Google Play lists as unavailable in your region or incompatible with your device. You can also find older versions of apps in case you want to downgrade or avoid bugs.

        -

        In this article, we will show you how to use APK Mirror for your Xiaomi device, how to download and install APKs from it, how to update your apps from it, how to troubleshoot common issues with it, and how to stay safe and secure when using it.

        -

        How to Download and Install APKs from APK Mirror on Your Xiaomi Device

        -

        Downloading and installing APKs from APK Mirror is easy and straightforward, but you need to follow some steps to make sure everything works smoothly.

        -

        Step 1: Enable installation from unknown sources on your device settings

        -

        Before you can install any APK file on your device, you need to allow installation from unknown sources, which means sources other than Google Play Store. To do this, follow these steps:

        -
        • Go to Settings > Apps > Manage apps > More settings > Install apps from unknown sources.
        • Toggle on the switch next to Allow from this source.
        • If prompted, tap OK to confirm.

        Note that this setting may vary depending on your device model and Android version, so you may need to look for it in different places.

        -

        Step 2: Visit APK Mirror website and search for the app you want to download

        -

        Next, you need to visit the APK Mirror website and find the app you want to download. You can use the search bar or browse by categories to find the app. You can also use filters to sort the results by date, popularity, rating, or device compatibility.

        -

        Once you find the app you want, tap on it to see more details, such as the app description, screenshots, ratings, reviews, and version history. You can also see the APK file size, signature, and permissions.

        -

        apk mirror xiaomi miui system launcher
        -apk mirror xiaomi home
        -apk mirror xiaomi miui gallery
        -apk mirror xiaomi miui camera
        -apk mirror xiaomi miui music player
        -apk mirror xiaomi miui security
        -apk mirror xiaomi miui browser
        -apk mirror xiaomi miui calculator
        -apk mirror xiaomi miui weather
        -apk mirror xiaomi miui clock
        -apk mirror xiaomi miui compass
        -apk mirror xiaomi miui contacts and dialer
        -apk mirror xiaomi miui file manager
        -apk mirror xiaomi miui notes
        -apk mirror xiaomi miui recorder
        -apk mirror xiaomi miui scanner
        -apk mirror xiaomi miui screen recorder
        -apk mirror xiaomi miui settings
        -apk mirror xiaomi miui themes
        -apk mirror xiaomi miui video player
        -apk mirror xiaomi pocophone launcher
        -apk mirror xiaomi mint browser
        -apk mirror xiaomi mint keyboard
        -apk mirror xiaomi mint launcher
        -apk mirror xiaomi getapps
        -apk mirror xiaomi shareme
        -apk mirror xiaomi quick apps
        -apk mirror xiaomi app vault
        -apk mirror xiaomi cleaner lite
        -apk mirror xiaomi feedback
        -apk mirror xiaomi game turbo
        -apk mirror xiaomi health
        -apk mirror xiaomi joyose
        -apk mirror xiaomi mimoji ar camera
        -apk mirror xiaomi mipay wallet
        -apk mirror xiaomi mixplorer silver file manager
        -apk mirror xiaomi poco m3 wallpapers 4k hd backgrounds pro
        -apk mirror xiaomi redmi note 10 pro wallpapers 4k hd backgrounds pro
        -apk mirror xiaomi smart home
        -apk mirror xiaomi wallpaper carousel.

        -

        Step 3: Choose the version that is compatible with your device and download the APK file

        -

        After you select the app you want, you need to choose the version that is compatible with your device. APK Mirror offers different versions of the same app for different devices, Android versions, architectures, and DPIs. You can check these details on your device settings or use an app like CPU-Z to find out.
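        If you would rather not install an extra app like CPU-Z, the same details can be read over adb from a PC. This is just a convenience sketch, assuming the Android platform tools are installed and USB debugging is enabled on your device:

```python
import subprocess

def adb_shell(*args: str) -> str:
    """Run an adb shell command and return its trimmed output."""
    return subprocess.run(
        ["adb", "shell", *args],
        capture_output=True, text=True, check=True,
    ).stdout.strip()

if __name__ == "__main__":
    # Android version, e.g. "13"
    print("Android version:", adb_shell("getprop", "ro.build.version.release"))
    # Primary CPU architecture, e.g. "arm64-v8a"
    print("Architecture:", adb_shell("getprop", "ro.product.cpu.abi"))
    # Screen density, e.g. "Physical density: 440"
    print("Density:", adb_shell("wm", "density"))
```

        Match these values against the variants listed on the APK Mirror download page.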

        -

        To choose the right version, look for the one that matches your device specifications and has a green check mark next to it. This means that the APK file is verified by APK Mirror and safe to install. Avoid downloading versions that have a red exclamation mark or a yellow warning sign next to them, as they may not work properly or contain malware.

        -

        Once you choose the version you want, tap on the Download APK button and wait for the download to start. You may see some ads or pop-ups before the download begins, so be careful not to click on anything suspicious.

        -

        Step 4: Locate the downloaded file on your device and tap on it to install it

        -

        Finally, you need to locate the downloaded file on your device and tap on it to install it. You can use a file manager app like Mi File Manager or Google Files to find the file. It is usually stored in the Downloads folder or in a folder named after the app.

        -

        Once you find the file, tap on it to open it. You may see a prompt asking you to confirm the installation or grant permissions to the app. Tap on Install or Allow as needed and wait for the installation to finish. You may also see a warning message saying that installing this app may harm your device. This is normal and you can ignore it as long as you trust the source of the APK file.

        -

        When the installation is done, you can open the app and enjoy its features. You can also delete the APK file from your device if you want to save some space.

        -

        How to Update Your Apps from APK Mirror on Your Xiaomi Device

        -

        If you want to keep your apps updated with the latest versions from APK Mirror, you need to follow some steps as well. Here is how you can do it:

        -

        Step 1: Check for updates on APK Mirror website or app

        -

        The first thing you need to do is check if there are any updates available for your apps on APK Mirror. You can do this by visiting the website and looking for a notification icon next to your apps. You can also use the APK Mirror app, which is an unofficial client that lets you browse, download, and update apps from APK Mirror more easily.

        -

        If you see any updates available for your apps, tap on them to see more details and download them.

        -

        Step 2: Download the latest version of the app you want to update

        -

        Next, you need to download the latest version of the app you want to update from APK Mirror. You can do this by following the same steps as in the previous section. Make sure you choose the version that is compatible with your device and has a green check mark next to it.

        -

        Once you download the APK file, you can proceed to the next step.

        -

        Step 3: Uninstall the old version of the app from your device

        -

        Before you can install the new version of the app, you need to uninstall the old version from your device. This is because APK Mirror does not offer incremental updates, which means you cannot install a new version over an old one. You need to remove the old one first and then install the new one.

        -

        To uninstall the old version of the app, follow these steps:

        -
        • Go to Settings > Apps > Manage apps > More settings > Uninstall apps.
        • Find the app you want to uninstall and tap on it.
        • Tap on Uninstall and confirm.

        Note that uninstalling an app may delete its data and settings, so make sure you back up any important information before doing so.

        -

        Step 4: Install the new version of the app from the downloaded file

        -

        Finally, you need to install the new version of the app from the downloaded file. You can do this by following the same steps as in the previous section. Locate the file on your device and tap on it to install it. Grant any permissions or access requests as needed and wait for the installation to finish.

        -

        When the installation is done, you can open the app and enjoy its updated features. You can also delete the APK file from your device if you want to save some space.

        -

        How to Troubleshoot Common Issues with APK Mirror on Your Xiaomi Device

        -

        Sometimes, you may encounter some issues when using APK Mirror on your Xiaomi device. These issues may include the app not installing or crashing after installation, the app not working properly or showing errors, or the app being incompatible with your device or region. Here are some ways to troubleshoot these common issues:

        -

        Issue 1: The app does not install or crashes after installation

        -

        If you have trouble installing an app from APK Mirror or if it crashes after installation, here are some possible solutions:

        -

        Solution 1: Make sure you have enough storage space on your device and clear cache and data of the app

        -

        One of the reasons why an app may not install or crash is because you do not have enough storage space on your device. To check your storage space, go to Settings > Storage and see how much free space you have. If you have less than 10% of free space, you may need to delete some files or apps to make room for new ones.

        -

        Another reason why an app may not install or crash is because its cache or data is corrupted or outdated. To clear cache and data of an app, follow these steps:

        -
        • Go to Settings > Apps > Manage apps > More settings > Clear cache and data.
        • Find the app you want to clear and tap on it.
        • Tap on Clear cache and Clear data and confirm.

        Note that clearing data may delete any information or settings associated with the app, so make sure you back up any important data before doing so.
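        The same reset can be done over adb from a PC, which helps when an app crashes before its settings screen even opens. This is a minimal sketch, assuming adb is set up; the package name below is a placeholder, so replace it with the app you are actually troubleshooting:

```python
import subprocess

# Placeholder package name -- replace with the app you want to reset.
PACKAGE = "com.example.someapp"

def clear_app_data(package: str) -> None:
    """Wipe the app's cache and data, the same effect as Clear data in Settings."""
    result = subprocess.run(
        ["adb", "shell", "pm", "clear", package],
        capture_output=True, text=True,
    )
    # pm prints "Success" when the data was cleared, or an error message otherwise.
    print(result.stdout.strip() or result.stderr.strip())

if __name__ == "__main__":
    clear_app_data(PACKAGE)
```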

        -

        Solution 2: Try installing a different version of the app or a different APK file from another source

        -

        Sometimes, the version of the app you downloaded from APK Mirror may not be compatible with your device or may have some bugs or errors. In this case, you can try installing a different version of the app from APK Mirror or a different APK file from another source.

        -

        To install a different version of the app from APK Mirror, follow these steps:

        -
        • Go to the app page on APK Mirror and scroll down to the version history section.
        • Find a version that is compatible with your device and has a green check mark next to it.
        • Tap on the Download APK button and follow the same steps as in the previous section to install it.

        To install a different APK file from another source, follow these steps:

        -
        • Find a reputable source that offers APK files for Android apps, such as APKPure, Aptoide, or Uptodown.
        • Search for the app you want to download and choose the version that is compatible with your device.
        • Download the APK file and follow the same steps as in the previous section to install it.

        Note that downloading APK files from other sources may be risky, as they may contain malware or viruses. Make sure you scan the files with a reliable antivirus or malware scanner before installing them.

        -

        Solution 3: Contact the app developer or APK Mirror support for help

        -

        If none of the above solutions work, you may need to contact the app developer or APK Mirror support for help. They may be able to provide you with more information or solutions for your issue.

        -

        To contact the app developer, follow these steps:

        -
        • Go to the app page on Google Play Store or APK Mirror and look for the contact details of the developer.
        • Send them an email or a message explaining your issue and providing any relevant details, such as your device model, Android version, app version, and error messages.
        • Wait for their response and follow their instructions.

        To contact APK Mirror support, follow these steps:

        -
        • Go to the APK Mirror website and tap on the menu icon on the top left corner.
        • Tap on Contact us and fill out the form with your name, email, subject, and message.
        • Describe your issue and provide any relevant details, such as your device model, Android version, app version, and error messages.
        • Tap on Submit and wait for their response.

        How to Stay Safe and Secure When Using APK Mirror on Your Xiaomi Device

        -

        Using APK Mirror on your Xiaomi device can be very beneficial, but it also comes with some risks. You need to be careful and vigilant when downloading and installing APK files from unknown sources, as they may contain malware or viruses that can harm your device or compromise your privacy. Here are some tips to stay safe and secure when using APK Mirror on your Xiaomi device:

        -

        Tip 1: Only download APK files from trusted sources like APK Mirror and avoid third-party links or ads

        -

        One of the most important things you need to do is only download APK files from trusted sources like APK Mirror and avoid clicking on any third-party links or ads that may appear on the website or app. These links or ads may lead you to malicious websites or downloads that can infect your device with malware or viruses.

        -

        To avoid third-party links or ads, follow these tips:

        -
        • Use an ad blocker or a browser that blocks ads by default, such as Brave or Firefox Focus.
        • Look for the official logo and domain name of APK Mirror on the website or app. The domain name should be apkmirror.com or apkmirror.app. If you see any other domain name, such as apkmirror.net or apkmirror.xyz, do not trust it.
        • Check the URL of the download link before tapping on it. It should start with https://www.apkmirror.com/ or https://www.apkmirror.app/. If you see any other URL, such as http://apkmirror.download/ or https://apkmirror.co/, do not trust it.

        Tip 2: Scan the APK files with a reputable antivirus or malware scanner before installing them

        -

        Another thing you need to do is scan the APK files with a reputable antivirus or malware scanner before installing them on your device. This will help you detect and remove any potential threats that may be hidden in the files.

        -

        To scan the APK files with an antivirus or malware scanner, follow these steps:

        -
        • Download and install a reputable antivirus or malware scanner for Android, such as Bitdefender, Avast, AVG, Norton, or McAfee.
        • Open the antivirus or malware scanner app and scan the APK file you downloaded from APK Mirror. You can do this by tapping on the Scan option and selecting the file from your device.
        • Wait for the scan to finish and see if there are any threats or issues detected. If there are, delete the file and do not install it. If there are not, proceed to the next step.
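        On a PC you can add one more check on top of the antivirus scan: verifying that the APK's signature is intact with the `apksigner` tool that ships with the Android SDK build-tools. This is a minimal sketch, assuming `apksigner` is on your PATH and that the placeholder file name is replaced with your actual download:

```python
import subprocess

# Placeholder file name -- point this at the APK you downloaded.
APK_PATH = "downloaded-app.apk"

def verify_signature(apk_path: str) -> None:
    """Print the signing certificates if the APK's signature verifies cleanly."""
    result = subprocess.run(
        ["apksigner", "verify", "--print-certs", apk_path],
        capture_output=True, text=True,
    )
    if result.returncode == 0:
        print("Signature OK")
        print(result.stdout)
    else:
        print("Verification failed:")
        print(result.stderr or result.stdout)

if __name__ == "__main__":
    verify_signature(APK_PATH)
```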

        Tip 3: Review the permissions and access requests of the apps you install and deny any unnecessary or suspicious ones

        -

        The last thing you need to do is review the permissions and access requests of the apps you install from APK Mirror and deny any unnecessary or suspicious ones. Permissions and access requests are what the apps need to function properly on your device, such as accessing your camera, microphone, contacts, location, etc.

        -

        However, some apps may ask for more permissions or access than they need, which can compromise your privacy or security. For example, a flashlight app does not need to access your contacts or location. To review the permissions and access requests of the apps you install, follow these steps:

        -
        • Go to Settings > Apps > Manage apps > More settings > App permissions.
        • Find the app you installed from APK Mirror and tap on it.
        • See what permissions and access requests it has and toggle them on or off as needed. You can also tap on each permission or access request to see more details and options.

        Note that denying some permissions or access requests may affect the functionality of the app, so only do so if you are sure you do not need them.
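        You can also list an APK's requested permissions before installing it at all, using the `aapt` tool from the Android SDK build-tools. This is a rough sketch, assuming `aapt` is on your PATH; the file name is a placeholder:

```python
import subprocess

# Placeholder file name -- replace with the APK you are about to install.
APK_PATH = "downloaded-app.apk"

def list_permissions(apk_path: str) -> list[str]:
    """Return the permission names declared in the APK's manifest."""
    out = subprocess.run(
        ["aapt", "dump", "permissions", apk_path],
        capture_output=True, text=True, check=True,
    ).stdout
    perms = []
    for line in out.splitlines():
        # aapt prints lines like: uses-permission: name='android.permission.CAMERA'
        if "uses-permission" in line and "name='" in line:
            perms.append(line.split("name='")[1].split("'")[0])
    return perms

if __name__ == "__main__":
    for perm in list_permissions(APK_PATH):
        print(perm)
```

        If the list looks out of proportion to what the app does, that is a good reason to skip the install entirely.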

        -

        Conclusion and FAQs

        -

        In conclusion, APK Mirror is a great website that can help you get the latest updates and features for your Android apps on your Xiaomi device. It can also help you access apps that are not available in your region or compatible with your device. However, you need to be careful and vigilant when using APK Mirror, as it may also pose some risks to your device or privacy.

        -

        To use APK Mirror safely and securely, you need to follow some steps, such as enabling installation from unknown sources, choosing the right version of the app, scanning the APK file with an antivirus or malware scanner, reviewing the permissions and access requests of the app, and uninstalling the old version of the app before installing the new one.

        -

        By following these steps, you can enjoy the benefits of APK Mirror without compromising your security. You can also troubleshoot any issues that may arise when using APK Mirror by following some solutions, such as clearing cache and data of the app, installing a different version of the app or a different APK file from another source, or contacting the app developer or APK Mirror support for help.

        -

        If you have any questions about APK Mirror and Xiaomi devices, here are some FAQs that may answer them:

        -

        FAQs

        -
        1. Is APK Mirror safe?

          APK Mirror is generally safe to use, as it verifies and tests all the APK files it hosts before making them available for download. However, there is still a possibility that some malicious files may slip through its security checks. Therefore, you should always scan the APK files with an antivirus or malware scanner before installing them and avoid clicking on any third-party links or ads that may appear on the website or app.

        2. Is APK Mirror legal?

          APK Mirror is legal to use in most countries, as it does not host any pirated or cracked apps. It only hosts free apps that are available on Google Play Store or other official sources. However, some countries may have different laws regarding downloading and installing apps from unknown sources. Therefore, you should check your local laws before using APK Mirror.

        3. Does APK Mirror have an app?

          APK Mirror does not have an official app, but it has an unofficial client called APKMirror Installer (Official), which lets you browse, download, and update apps from APK Mirror more easily. You can download it from Google Play Store or from APK Mirror itself.

        4. Does APK Mirror work on other Android devices?

          APK Mirror works on most Android devices that run Android 5.0 Lollipop or higher. However, some devices may have specific requirements or limitations that may affect the compatibility of some apps. Therefore, you should always check the device specifications and compatibility of the apps before downloading and installing them from APK Mirror.

        5. Does APK Mirror require root access?

          APK Mirror does not require root access to use, as it does not modify or alter any system files or settings. It only installs apps as normal APK files that can be removed or updated easily. However, some apps that you download from APK Mirror may require root access to function properly, such as apps that tweak or customize your device. Therefore, you should always check the app description and requirements before installing it from APK Mirror.

        I hope this article has helped you understand what APK Mirror is and how to use it for your Xiaomi device. If you have any feedback or suggestions, please let me know in the comments below. Thank you for reading and happy downloading!

        \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Temple Run Oz APK - Discover the Secrets of Oz in this Amazing Game.md b/spaces/congsaPfin/Manga-OCR/logs/Temple Run Oz APK - Discover the Secrets of Oz in this Amazing Game.md deleted file mode 100644 index b81f9899b0b4e664d818651437b51b9fdfc4fcdf..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Temple Run Oz APK - Discover the Secrets of Oz in this Amazing Game.md +++ /dev/null @@ -1,148 +0,0 @@ - -

        Temple Run Oz APK Download: How to Play the Most Thrilling Running Game on Your Android Device

        -

        Introduction

        -

        If you are a fan of endless runner games, you must have heard of Temple Run, one of the most popular and addictive games in the genre. But did you know that there is a spin-off game based on the movie Oz the Great and Powerful? It's called Temple Run Oz, and it's a brand-new running experience that takes you to the magical land of Oz.

        -

        temple run oz apk download


        Download Ziphttps://urlca.com/2uO5Jz



        -

        In this article, we will tell you everything you need to know about Temple Run Oz, including what it is, why you should download it, how to download and install it on your Android device, and how to play it. So, if you are ready to embark on an exhilarating adventure with Oz and his friends, read on!

        -

        What is Temple Run Oz?

        -

        Temple Run Oz is a game developed by Disney and Imangi Studios, the creators of Temple Run and Temple Run 2. It is inspired by both the original Temple Run game and the film Oz the Great and Powerful, which is a prequel to The Wizard of Oz.

        -

        In Temple Run Oz, you play as Oz, a circus magician who finds himself in the land of Oz after a hot air balloon accident. There, he meets China Girl, a living porcelain doll, and Finley, a flying monkey. Together, they have to outrun the wicked witch's flying baboons and other dangers as they explore different locations in Oz.

        -


        -

        Why should you download Temple Run Oz APK?

        -

        Temple Run Oz is not just another Temple Run game. It has many features that make it stand out from the rest. Here are some of them:

        -
          -
        • Stunning environments inspired by the film: You can run through the Emerald City, the Dark Forest, the Whimsie Woods, and more. Each location has its own unique scenery, obstacles, and challenges.
        • -
        • Fly in a hot air balloon: You can switch from running to flying in a hot air balloon at certain points in the game. This adds more variety and fun to the gameplay.
        • -
        • Run as China Girl and see Oz in different costumes: You can unlock China Girl as a playable character and see Oz in various outfits from the film. You can also customize your characters with different hats and accessories.
        • -
        • Compete in weekly challenges and leaderboards: You can test your skills and compete with your friends and other players around the world in weekly challenges and leaderboards. You can also earn coins and gems as rewards.
        • -
        -

        Temple Run Oz is a game that will keep you entertained for hours. It has amazing graphics, sound effects, music, and gameplay. It is also easy to play but hard to master. You will never get bored of running in Oz!

        -

        How to download and install Temple Run Oz APK on your Android device

        -

        If you want to play Temple Run Oz on your Android device, you will need to download and install its APK file. An APK file is an application package file that contains all the files needed to run an app on an Android device. You can download Temple Run Oz APK from various sources online, but make sure you choose a trusted one.
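        Whichever source you pick, one quick sanity check on the file you eventually download is to compare its SHA-256 checksum with the one shown on the download page (some sites, such as APK Mirror, publish file hashes; whether your chosen source does is something to confirm yourself). A minimal Python sketch, with placeholder values:

```python
import hashlib

# Placeholder values -- use your real file name and the hash published on the download page.
APK_PATH = "temple-run-oz.apk"
EXPECTED_SHA256 = "paste-the-published-checksum-here"

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in 1 MB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    actual = sha256_of(APK_PATH)
    print("SHA-256:", actual)
    print("Matches published checksum:", actual == EXPECTED_SHA256.lower())
```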

        -

        Here are the steps to download and install Temple Run Oz APK on your Android device:

        -

        Step 1: Download the APK file from a trusted source

        -

        The first thing you need to do is to find a reliable website that offers Temple Run Oz APK for download. You can use a search engine like Google or Bing to look for one, or you can visit some of the following websites that we recommend:

        -
        • APKPure: This is one of the most popular and trusted sources for downloading APK files. It has a large collection of apps and games, including Temple Run Oz. It also provides detailed information about each app, such as its version, size, rating, and screenshots.
        • APKMirror: This is another reputable website that hosts APK files for various apps and games. It has a simple and user-friendly interface that allows you to search and download Temple Run Oz APK easily. It also updates its content regularly and ensures that the APK files are safe and virus-free.
        • APKMonk: This is a website that offers APK files for free download. It has a wide range of apps and games, including Temple Run Oz. It also provides information about the app's developer, category, and permissions.

        Once you have chosen a website, you can follow these steps to download Temple Run Oz APK:

        -
        1. Go to the website and search for Temple Run Oz in the search bar.
        2. Select the app from the search results and click on the download button.
        3. Wait for the download to complete and save the APK file to your device's storage.

        Step 2: Enable unknown sources on your device settings

        -

        Before you can install Temple Run Oz APK on your device, you need to enable unknown sources on your device settings. This will allow you to install apps from sources other than the Google Play Store. To do this, follow these steps:

        -
        1. Go to your device's settings and tap on security or privacy.
        2. Find the option that says unknown sources or install unknown apps and toggle it on.
        3. A warning message may appear, asking you to confirm your action. Tap on OK or allow to proceed.
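        If your device is connected to a computer with adb, you can also check this setting from the command line. Note that the setting key used below (install_non_market_apps) only exists on older Android versions; on Android 8.0 and later the global toggle was replaced by a per-app "install unknown apps" permission, so treat this as an illustrative sketch rather than a universal method.

import subprocess

# Query the legacy "unknown sources" toggle over adb (older Android versions only).
# A value of "1" means installing apps from outside the Play Store is allowed.
result = subprocess.run(
    ["adb", "shell", "settings", "get", "secure", "install_non_market_apps"],
    capture_output=True, text=True, check=True,
)
print("install_non_market_apps =", result.stdout.strip())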

        Step 3: Install the APK file and launch the game

        -

        Now that you have downloaded the APK file and enabled unknown sources, you can install Temple Run Oz on your device. To do this, follow these steps:

        -
        1. Locate the APK file on your device's storage and tap on it.
        2. A prompt will appear, asking you to install the app. Tap on install and wait for the installation to finish.
        3. Once the installation is done, you can tap on open to launch the game or find it in your app drawer.
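        If you would rather install the file from a computer, a rough alternative is to use adb. This sketch assumes adb is installed, USB debugging is enabled on your phone, and the file name matches the APK you saved earlier.

import subprocess

APK_PATH = "temple-run-oz.apk"  # example name; use the file you downloaded in Step 1

# "adb install -r" installs the APK and replaces an existing copy if one is present,
# which also makes it handy for manual updates later on.
subprocess.run(["adb", "install", "-r", APK_PATH], check=True)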

        Congratulations! You have successfully installed Temple Run Oz on your Android device. You can now enjoy running in Oz with your favorite characters!

        -

        How to play Temple Run Oz on your Android device

        -

        Temple Run Oz is a fun and easy game to play. All you need is your finger to swipe left, right, up, or down on the screen to control your character. Here are some tips on how to play Temple Run Oz on your Android device:

        -

        Choose your character and costume

        -

        When you start the game, you can choose between Oz or China Girl as your character. You can also unlock different costumes for them by collecting coins and gems in the game. Some of the costumes are based on the film, such as Oz's magician outfit or China Girl's blue dress. You can also customize your characters with hats and accessories that you can buy with coins and gems.

        -

        Run, jump, slide, and fly across stunning environments

        -

        The main objective of Temple Run Oz is to run as far as you can without getting caught by the flying baboons or falling off the edge. You can swipe left or right to turn at corners, swipe up to jump over obstacles, and swipe down to slide under them. You can also tilt your device to move left or right on the path.

        -

        You will encounter different environments in Temple Run Oz, such as the Emerald City, the Dark Forest, the Whimsie Woods, and more. Each environment has its own unique features and challenges. For example, in the Emerald City, you can run on yellow brick roads and see colorful buildings. In the Dark Forest, you can run through spooky trees and encounter winged monkeys.

        -

        Sometimes, you will see a hot air balloon icon on the path. If you swipe up when you reach it, you will enter a flying mode, where you can control the hot air balloon by tilting your device. You can collect coins and gems in the air, but watch out for obstacles and enemies.

        -

        Collect coins, gems, and power-ups

        -

        As you run, you will see coins and gems on the path. You can collect them by running over them or using a magnet power-up. Coins and gems are useful for buying costumes, hats, accessories, and power-ups in the game. You can also use gems to revive yourself if you die.

        -

        Power-ups are special items that give you an advantage in the game. You can activate them by tapping on the screen when you see a power-up icon on the path. Some of the power-ups are:

        -
        • Coin Bonus: This gives you extra coins for a short time.
        • Gem Bonus: This gives you extra gems for a short time.
        • Magnet: This attracts all coins and gems to you for a short time.
        • Shield: This protects you from obstacles and enemies for a short time.
        • Boost: This speeds you up and makes you invincible for a short time.

        Avoid obstacles and enemies

        -

        While running, you will encounter various obstacles and enemies that will try to stop you. You need to avoid them by jumping, sliding, or turning. Some of the obstacles and enemies are:

        -
        • Barricades: These are wooden or metal barriers that block your way. You can jump over them or slide under them.
        • Gaps: These are holes or cliffs that you need to jump over or fly across.
        • Fireballs: These are balls of fire that fly towards you. You can dodge them by moving left or right.
        • Baboons: These are the flying baboons that chase you and try to catch you. You can outrun them by using a boost power-up or flying in a hot air balloon.
        • Winged Monkeys: These are flying monkeys that attack you from above. You can avoid them by moving left or right or using a shield power-up.

        Complete challenges and achievements

        -

        To make the game more interesting and rewarding, you can complete challenges and achievements in Temple Run Oz. Challenges are tasks that you need to do in a single run, such as collecting a certain number of coins or gems, running a certain distance, or using a certain power-up. Achievements are goals that you need to accomplish over time, such as unlocking all costumes, running in all environments, or completing all challenges. You can earn coins and gems as rewards for completing challenges and achievements.

        -

        Conclusion

        -

        Temple Run Oz is a fantastic game that combines the best elements of Temple Run and Oz the Great and Powerful. It is a game that will keep you hooked with its stunning graphics, sound effects, music, and gameplay. It is also a game that will challenge your reflexes, skills, and strategy. If you love endless runner games, you should definitely download Temple Run Oz APK and play it on your Android device.

        -

        FAQs

        -

        Here are some frequently asked questions about Temple Run Oz:

        -
        1. Is Temple Run Oz free to play?

          Yes, Temple Run Oz is free to play. However, it contains in-app purchases that allow you to buy more coins and gems with real money. You can disable in-app purchases in your device settings if you don't want to use them.

        2. Is Temple Run Oz safe to download?

          Yes, Temple Run Oz is safe to download as long as you download it from a trusted source. We recommend using one of the websites that we mentioned above, such as APKPure, APKMirror, or APKMonk. These websites scan the APK files for viruses and malware before uploading them.

        3. How do I update Temple Run Oz?

          If you download Temple Run Oz from the Google Play Store, it will update automatically when there is a new version available. If you download it from an APK website, you will need to check the website regularly for updates and download the latest version manually.

        4. How do I uninstall Temple Run Oz?

          If you want to uninstall Temple Run Oz from your device, you can follow these steps (a command-line alternative using adb is sketched after this FAQ section):

          1. Go to your device's settings and tap on apps or applications.
          2. Find Temple Run Oz in the list of apps and tap on it.
          3. Tap on uninstall and confirm your action.

        5. How do I contact the developers of Temple Run Oz?

          If you have any questions, feedback, or issues with Temple Run Oz, you can contact the developers of the game by using one of the following methods:

          • Email: You can send an email to support@imangistudios.com and they will get back to you as soon as possible.
          • Facebook: You can visit their Facebook page at https://www.facebook.com/TempleRun and leave a message or comment.
          • Twitter: You can follow them on Twitter at https://twitter.com/TempleRun and tweet them your query or suggestion.
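        For completeness, here is a rough adb-based way to remove the game from a computer instead of tapping through the settings. The package name used at the end is a hypothetical placeholder; list the installed packages first and use the real id you find on your device.

import subprocess

# List installed packages and look for the game's package id.
# The filter string "temple" is just a convenient guess.
packages = subprocess.run(
    ["adb", "shell", "pm", "list", "packages"],
    capture_output=True, text=True, check=True,
).stdout
candidates = [line for line in packages.splitlines() if "temple" in line.lower()]
print("Possible matches:", candidates)

# Hypothetical package name -- replace it with the id you found above.
PACKAGE = "com.example.templerunoz"
subprocess.run(["adb", "uninstall", PACKAGE], check=True)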

        I hope you enjoyed this article and found it helpful. If you did, please share it with your friends and family who might also be interested in Temple Run Oz. And don't forget to download Temple Run Oz APK and play it on your Android device. It's a game that you won't regret playing!

        -
        -
        \ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/assets/CSGO Xoracle hack for mac working 2018 with aimbot features Download now!.md b/spaces/contluForse/HuggingGPT/assets/CSGO Xoracle hack for mac working 2018 with aimbot features Download now!.md deleted file mode 100644 index 08134eb7be4d21b73aed3d7f1e65d9a6c651a40d..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/CSGO Xoracle hack for mac working 2018 with aimbot features Download now!.md +++ /dev/null @@ -1,6 +0,0 @@ -

        CSGO Xoracle hack for mac working 2018 it got aimbot new 01.04.2018 – NEW MacOSX


        DOWNLOAD ->>> https://ssurll.com/2uzvSR



        -
        - aaccfb2cb3
        -
        -
        -

        diff --git a/spaces/cooelf/Multimodal-CoT/timm/models/layers/involution.py b/spaces/cooelf/Multimodal-CoT/timm/models/layers/involution.py deleted file mode 100644 index ccdeefcbe96cabb9285e08408a447ce8a89435db..0000000000000000000000000000000000000000 --- a/spaces/cooelf/Multimodal-CoT/timm/models/layers/involution.py +++ /dev/null @@ -1,50 +0,0 @@ -""" PyTorch Involution Layer - -Official impl: https://github.com/d-li14/involution/blob/main/cls/mmcls/models/utils/involution_naive.py -Paper: `Involution: Inverting the Inherence of Convolution for Visual Recognition` - https://arxiv.org/abs/2103.06255 -""" -import torch.nn as nn -from .conv_bn_act import ConvBnAct -from .create_conv2d import create_conv2d - - -class Involution(nn.Module): - - def __init__( - self, - channels, - kernel_size=3, - stride=1, - group_size=16, - rd_ratio=4, - norm_layer=nn.BatchNorm2d, - act_layer=nn.ReLU, - ): - super(Involution, self).__init__() - self.kernel_size = kernel_size - self.stride = stride - self.channels = channels - self.group_size = group_size - self.groups = self.channels // self.group_size - self.conv1 = ConvBnAct( - in_channels=channels, - out_channels=channels // rd_ratio, - kernel_size=1, - norm_layer=norm_layer, - act_layer=act_layer) - self.conv2 = self.conv = create_conv2d( - in_channels=channels // rd_ratio, - out_channels=kernel_size**2 * self.groups, - kernel_size=1, - stride=1) - self.avgpool = nn.AvgPool2d(stride, stride) if stride == 2 else nn.Identity() - self.unfold = nn.Unfold(kernel_size, 1, (kernel_size-1)//2, stride) - - def forward(self, x): - weight = self.conv2(self.conv1(self.avgpool(x))) - B, C, H, W = weight.shape - KK = int(self.kernel_size ** 2) - weight = weight.view(B, self.groups, KK, H, W).unsqueeze(2) - out = self.unfold(x).view(B, self.groups, self.group_size, KK, H, W) - out = (weight * out).sum(dim=3).view(B, self.channels, H, W) - return out diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/ops/deprecated_wrappers.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/ops/deprecated_wrappers.py deleted file mode 100644 index a2e593df9ee57637038683d7a1efaa347b2b69e7..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/ops/deprecated_wrappers.py +++ /dev/null @@ -1,43 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# This file is for backward compatibility. -# Module wrappers for empty tensor have been moved to mmcv.cnn.bricks. -import warnings - -from ..cnn.bricks.wrappers import Conv2d, ConvTranspose2d, Linear, MaxPool2d - - -class Conv2d_deprecated(Conv2d): - - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - warnings.warn( - 'Importing Conv2d wrapper from "mmcv.ops" will be deprecated in' - ' the future. Please import them from "mmcv.cnn" instead') - - -class ConvTranspose2d_deprecated(ConvTranspose2d): - - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - warnings.warn( - 'Importing ConvTranspose2d wrapper from "mmcv.ops" will be ' - 'deprecated in the future. Please import them from "mmcv.cnn" ' - 'instead') - - -class MaxPool2d_deprecated(MaxPool2d): - - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - warnings.warn( - 'Importing MaxPool2d wrapper from "mmcv.ops" will be deprecated in' - ' the future. 
Please import them from "mmcv.cnn" instead') - - -class Linear_deprecated(Linear): - - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - warnings.warn( - 'Importing Linear wrapper from "mmcv.ops" will be deprecated in' - ' the future. Please import them from "mmcv.cnn" instead') diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/parallel/scatter_gather.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/parallel/scatter_gather.py deleted file mode 100644 index 900ff88566f8f14830590459dc4fd16d4b382e47..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/parallel/scatter_gather.py +++ /dev/null @@ -1,59 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from torch.nn.parallel._functions import Scatter as OrigScatter - -from ._functions import Scatter -from .data_container import DataContainer - - -def scatter(inputs, target_gpus, dim=0): - """Scatter inputs to target gpus. - - The only difference from original :func:`scatter` is to add support for - :type:`~mmcv.parallel.DataContainer`. - """ - - def scatter_map(obj): - if isinstance(obj, torch.Tensor): - if target_gpus != [-1]: - return OrigScatter.apply(target_gpus, None, dim, obj) - else: - # for CPU inference we use self-implemented scatter - return Scatter.forward(target_gpus, obj) - if isinstance(obj, DataContainer): - if obj.cpu_only: - return obj.data - else: - return Scatter.forward(target_gpus, obj.data) - if isinstance(obj, tuple) and len(obj) > 0: - return list(zip(*map(scatter_map, obj))) - if isinstance(obj, list) and len(obj) > 0: - out = list(map(list, zip(*map(scatter_map, obj)))) - return out - if isinstance(obj, dict) and len(obj) > 0: - out = list(map(type(obj), zip(*map(scatter_map, obj.items())))) - return out - return [obj for targets in target_gpus] - - # After scatter_map is called, a scatter_map cell will exist. This cell - # has a reference to the actual function scatter_map, which has references - # to a closure that has a reference to the scatter_map cell (because the - # fn is recursive). 
To avoid this reference cycle, we set the function to - # None, clearing the cell - try: - return scatter_map(inputs) - finally: - scatter_map = None - - -def scatter_kwargs(inputs, kwargs, target_gpus, dim=0): - """Scatter with support for kwargs dictionary.""" - inputs = scatter(inputs, target_gpus, dim) if inputs else [] - kwargs = scatter(kwargs, target_gpus, dim) if kwargs else [] - if len(inputs) < len(kwargs): - inputs.extend([() for _ in range(len(kwargs) - len(inputs))]) - elif len(kwargs) < len(inputs): - kwargs.extend([{} for _ in range(len(inputs) - len(kwargs))]) - inputs = tuple(inputs) - kwargs = tuple(kwargs) - return inputs, kwargs diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmseg/models/decode_heads/ann_head.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmseg/models/decode_heads/ann_head.py deleted file mode 100644 index 958c88e0ca4b9acdaf146b836462b9a101b2cdad..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmseg/models/decode_heads/ann_head.py +++ /dev/null @@ -1,245 +0,0 @@ -import torch -import torch.nn as nn -from annotator.mmpkg.mmcv.cnn import ConvModule - -from ..builder import HEADS -from ..utils import SelfAttentionBlock as _SelfAttentionBlock -from .decode_head import BaseDecodeHead - - -class PPMConcat(nn.ModuleList): - """Pyramid Pooling Module that only concat the features of each layer. - - Args: - pool_scales (tuple[int]): Pooling scales used in Pooling Pyramid - Module. - """ - - def __init__(self, pool_scales=(1, 3, 6, 8)): - super(PPMConcat, self).__init__( - [nn.AdaptiveAvgPool2d(pool_scale) for pool_scale in pool_scales]) - - def forward(self, feats): - """Forward function.""" - ppm_outs = [] - for ppm in self: - ppm_out = ppm(feats) - ppm_outs.append(ppm_out.view(*feats.shape[:2], -1)) - concat_outs = torch.cat(ppm_outs, dim=2) - return concat_outs - - -class SelfAttentionBlock(_SelfAttentionBlock): - """Make a ANN used SelfAttentionBlock. - - Args: - low_in_channels (int): Input channels of lower level feature, - which is the key feature for self-attention. - high_in_channels (int): Input channels of higher level feature, - which is the query feature for self-attention. - channels (int): Output channels of key/query transform. - out_channels (int): Output channels. - share_key_query (bool): Whether share projection weight between key - and query projection. - query_scale (int): The scale of query feature map. - key_pool_scales (tuple[int]): Pooling scales used in Pooling Pyramid - Module of key feature. - conv_cfg (dict|None): Config of conv layers. - norm_cfg (dict|None): Config of norm layers. - act_cfg (dict|None): Config of activation layers. 
- """ - - def __init__(self, low_in_channels, high_in_channels, channels, - out_channels, share_key_query, query_scale, key_pool_scales, - conv_cfg, norm_cfg, act_cfg): - key_psp = PPMConcat(key_pool_scales) - if query_scale > 1: - query_downsample = nn.MaxPool2d(kernel_size=query_scale) - else: - query_downsample = None - super(SelfAttentionBlock, self).__init__( - key_in_channels=low_in_channels, - query_in_channels=high_in_channels, - channels=channels, - out_channels=out_channels, - share_key_query=share_key_query, - query_downsample=query_downsample, - key_downsample=key_psp, - key_query_num_convs=1, - key_query_norm=True, - value_out_num_convs=1, - value_out_norm=False, - matmul_norm=True, - with_out=True, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - - -class AFNB(nn.Module): - """Asymmetric Fusion Non-local Block(AFNB) - - Args: - low_in_channels (int): Input channels of lower level feature, - which is the key feature for self-attention. - high_in_channels (int): Input channels of higher level feature, - which is the query feature for self-attention. - channels (int): Output channels of key/query transform. - out_channels (int): Output channels. - and query projection. - query_scales (tuple[int]): The scales of query feature map. - Default: (1,) - key_pool_scales (tuple[int]): Pooling scales used in Pooling Pyramid - Module of key feature. - conv_cfg (dict|None): Config of conv layers. - norm_cfg (dict|None): Config of norm layers. - act_cfg (dict|None): Config of activation layers. - """ - - def __init__(self, low_in_channels, high_in_channels, channels, - out_channels, query_scales, key_pool_scales, conv_cfg, - norm_cfg, act_cfg): - super(AFNB, self).__init__() - self.stages = nn.ModuleList() - for query_scale in query_scales: - self.stages.append( - SelfAttentionBlock( - low_in_channels=low_in_channels, - high_in_channels=high_in_channels, - channels=channels, - out_channels=out_channels, - share_key_query=False, - query_scale=query_scale, - key_pool_scales=key_pool_scales, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg)) - self.bottleneck = ConvModule( - out_channels + high_in_channels, - out_channels, - 1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=None) - - def forward(self, low_feats, high_feats): - """Forward function.""" - priors = [stage(high_feats, low_feats) for stage in self.stages] - context = torch.stack(priors, dim=0).sum(dim=0) - output = self.bottleneck(torch.cat([context, high_feats], 1)) - return output - - -class APNB(nn.Module): - """Asymmetric Pyramid Non-local Block (APNB) - - Args: - in_channels (int): Input channels of key/query feature, - which is the key feature for self-attention. - channels (int): Output channels of key/query transform. - out_channels (int): Output channels. - query_scales (tuple[int]): The scales of query feature map. - Default: (1,) - key_pool_scales (tuple[int]): Pooling scales used in Pooling Pyramid - Module of key feature. - conv_cfg (dict|None): Config of conv layers. - norm_cfg (dict|None): Config of norm layers. - act_cfg (dict|None): Config of activation layers. 
- """ - - def __init__(self, in_channels, channels, out_channels, query_scales, - key_pool_scales, conv_cfg, norm_cfg, act_cfg): - super(APNB, self).__init__() - self.stages = nn.ModuleList() - for query_scale in query_scales: - self.stages.append( - SelfAttentionBlock( - low_in_channels=in_channels, - high_in_channels=in_channels, - channels=channels, - out_channels=out_channels, - share_key_query=True, - query_scale=query_scale, - key_pool_scales=key_pool_scales, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg)) - self.bottleneck = ConvModule( - 2 * in_channels, - out_channels, - 1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - - def forward(self, feats): - """Forward function.""" - priors = [stage(feats, feats) for stage in self.stages] - context = torch.stack(priors, dim=0).sum(dim=0) - output = self.bottleneck(torch.cat([context, feats], 1)) - return output - - -@HEADS.register_module() -class ANNHead(BaseDecodeHead): - """Asymmetric Non-local Neural Networks for Semantic Segmentation. - - This head is the implementation of `ANNNet - `_. - - Args: - project_channels (int): Projection channels for Nonlocal. - query_scales (tuple[int]): The scales of query feature map. - Default: (1,) - key_pool_scales (tuple[int]): The pooling scales of key feature map. - Default: (1, 3, 6, 8). - """ - - def __init__(self, - project_channels, - query_scales=(1, ), - key_pool_scales=(1, 3, 6, 8), - **kwargs): - super(ANNHead, self).__init__( - input_transform='multiple_select', **kwargs) - assert len(self.in_channels) == 2 - low_in_channels, high_in_channels = self.in_channels - self.project_channels = project_channels - self.fusion = AFNB( - low_in_channels=low_in_channels, - high_in_channels=high_in_channels, - out_channels=high_in_channels, - channels=project_channels, - query_scales=query_scales, - key_pool_scales=key_pool_scales, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - self.bottleneck = ConvModule( - high_in_channels, - self.channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - self.context = APNB( - in_channels=self.channels, - out_channels=self.channels, - channels=project_channels, - query_scales=query_scales, - key_pool_scales=key_pool_scales, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - def forward(self, inputs): - """Forward function.""" - low_feats, high_feats = self._transform_inputs(inputs) - output = self.fusion(low_feats, high_feats) - output = self.dropout(output) - output = self.bottleneck(output) - output = self.context(output) - output = self.cls_seg(output) - - return output diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/layers/shape_spec.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/layers/shape_spec.py deleted file mode 100644 index 8dac3c59b96576710656abebe9b5eac25868abbb..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/layers/shape_spec.py +++ /dev/null @@ -1,18 +0,0 @@ -# -*- coding: utf-8 -*- -# Copyright (c) Facebook, Inc. and its affiliates. -from dataclasses import dataclass -from typing import Optional - - -@dataclass -class ShapeSpec: - """ - A simple structure that contains basic shape specification about a tensor. 
- It is often used as the auxiliary inputs/outputs of models, - to complement the lack of shape inference ability among pytorch modules. - """ - - channels: Optional[int] = None - height: Optional[int] = None - width: Optional[int] = None - stride: Optional[int] = None diff --git a/spaces/coutant/multilingual-sentence-similarity/README.md b/spaces/coutant/multilingual-sentence-similarity/README.md deleted file mode 100644 index 0047e27ad37036154d1e9f338e0ea0199d7f25c4..0000000000000000000000000000000000000000 --- a/spaces/coutant/multilingual-sentence-similarity/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Multilingual Sentence Similarity -emoji: 🐠 -colorFrom: indigo -colorTo: green -sdk: gradio -sdk_version: 3.16.1 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/cymic/Waifu_Diffusion_Webui/scripts/loopback.py b/spaces/cymic/Waifu_Diffusion_Webui/scripts/loopback.py deleted file mode 100644 index 2fb41e4b38817379e7e3733b5ff7c64cf0eab9b4..0000000000000000000000000000000000000000 --- a/spaces/cymic/Waifu_Diffusion_Webui/scripts/loopback.py +++ /dev/null @@ -1,83 +0,0 @@ -import numpy as np -from tqdm import trange - -import modules.scripts as scripts -import gradio as gr - -from modules import processing, shared, sd_samplers, images -from modules.processing import Processed -from modules.sd_samplers import samplers -from modules.shared import opts, cmd_opts, state - -class Script(scripts.Script): - def title(self): - return "Loopback" - - def show(self, is_img2img): - return is_img2img - - def ui(self, is_img2img): - loops = gr.Slider(minimum=1, maximum=32, step=1, label='Loops', value=4) - denoising_strength_change_factor = gr.Slider(minimum=0.9, maximum=1.1, step=0.01, label='Denoising strength change factor', value=1) - - return [loops, denoising_strength_change_factor] - - def run(self, p, loops, denoising_strength_change_factor): - processing.fix_seed(p) - batch_count = p.n_iter - p.extra_generation_params = { - "Denoising strength change factor": denoising_strength_change_factor, - } - - p.batch_size = 1 - p.n_iter = 1 - - output_images, info = None, None - initial_seed = None - initial_info = None - - grids = [] - all_images = [] - state.job_count = loops * batch_count - - initial_color_corrections = [processing.setup_color_correction(p.init_images[0])] - - for n in range(batch_count): - history = [] - - for i in range(loops): - p.n_iter = 1 - p.batch_size = 1 - p.do_not_save_grid = True - - if opts.img2img_color_correction: - p.color_corrections = initial_color_corrections - - state.job = f"Iteration {i + 1}/{loops}, batch {n + 1}/{batch_count}" - - processed = processing.process_images(p) - - if initial_seed is None: - initial_seed = processed.seed - initial_info = processed.info - - init_img = processed.images[0] - - p.init_images = [init_img] - p.seed = processed.seed + 1 - p.denoising_strength = min(max(p.denoising_strength * denoising_strength_change_factor, 0.1), 1) - history.append(processed.images[0]) - - grid = images.image_grid(history, rows=1) - if opts.grid_save: - images.save_image(grid, p.outpath_grids, "grid", initial_seed, p.prompt, opts.grid_format, info=info, short_filename=not opts.grid_extended_filename, grid=True, p=p) - - grids.append(grid) - all_images += history - - if opts.return_grid: - all_images = grids + all_images - - processed = Processed(p, all_images, initial_seed, initial_info) - - return processed diff --git 
a/spaces/datagpt/pdf2gpt/app.py b/spaces/datagpt/pdf2gpt/app.py deleted file mode 100644 index 7d9c135d8d47583be1f1333706edefeb878ecab7..0000000000000000000000000000000000000000 --- a/spaces/datagpt/pdf2gpt/app.py +++ /dev/null @@ -1,46 +0,0 @@ -import gradio as gr -import PyPDF2 -from langchain.embeddings.openai import OpenAIEmbeddings -from langchain.vectorstores.faiss import FAISS -from langchain.text_splitter import RecursiveCharacterTextSplitter -from langchain import OpenAI, VectorDBQA - -import os - -def pdf_to_text(pdf_file, contraseña, query): - os.environ["OPENAI_API_KEY"] = contraseña - - # Open the PDF file in binary mode - with open(pdf_file.name, 'rb') as pdf_file: - # Create a PDF reader object - pdf_reader = PyPDF2.PdfReader(pdf_file) - - # Create an empty string to store the text - text = "" - - # Loop through each page of the PDF - for page_num in range(len(pdf_reader.pages)): - # Get the page object - page = pdf_reader.pages[page_num] - # Extract the texst from the page and add it to the text variable - text += page.extract_text() - #embedding step - text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=0) - texts = text_splitter.split_text(text) - - embeddings = OpenAIEmbeddings(openai_api_key=contraseña) - #vector store - vectorstore = FAISS.from_texts(texts, embeddings) - - #inference - qa = VectorDBQA.from_chain_type(llm=OpenAI(), chain_type="stuff", vectorstore=vectorstore) - return qa.run(query) - -# Define the Gradio interface -pdf_input = gr.inputs.File(label="PDF File") -query_input = gr.inputs.Textbox(label="Query") -outputs = gr.outputs.Textbox(label="Chatbot Response") -interface = gr.Interface(fn=pdf_to_text, inputs=[pdf_input, gr.Textbox(lines=1, placeholder="Enter your API-key here...", label="API-Key:", type="password"), query_input], outputs=outputs) - -# Run the interface -interface.launch(debug = True) \ No newline at end of file diff --git a/spaces/dawood/Kanye-AI/vdecoder/hifigan/models.py b/spaces/dawood/Kanye-AI/vdecoder/hifigan/models.py deleted file mode 100644 index 9747301f350bb269e62601017fe4633ce271b27e..0000000000000000000000000000000000000000 --- a/spaces/dawood/Kanye-AI/vdecoder/hifigan/models.py +++ /dev/null @@ -1,503 +0,0 @@ -import os -import json -from .env import AttrDict -import numpy as np -import torch -import torch.nn.functional as F -import torch.nn as nn -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from .utils import init_weights, get_padding - -LRELU_SLOPE = 0.1 - - -def load_model(model_path, device='cuda'): - config_file = os.path.join(os.path.split(model_path)[0], 'config.json') - with open(config_file) as f: - data = f.read() - - global h - json_config = json.loads(data) - h = AttrDict(json_config) - - generator = Generator(h).to(device) - - cp_dict = torch.load(model_path) - generator.load_state_dict(cp_dict['generator']) - generator.eval() - generator.remove_weight_norm() - del cp_dict - return generator, h - - -class ResBlock1(torch.nn.Module): - def __init__(self, h, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.h = h - self.convs1 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))), - weight_norm(Conv1d(channels, channels, kernel_size, 
1, dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]))) - ]) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))) - ]) - self.convs2.apply(init_weights) - - def forward(self, x): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - xt = c2(xt) - x = xt + x - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, h, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.h = h - self.convs = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))) - ]) - self.convs.apply(init_weights) - - def forward(self, x): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - xt = c(xt) - x = xt + x - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -def padDiff(x): - return F.pad(F.pad(x, (0,0,-1,1), 'constant', 0) - x, (0,0,0,-1), 'constant', 0) - -class SineGen(torch.nn.Module): - """ Definition of sine generator - SineGen(samp_rate, harmonic_num = 0, - sine_amp = 0.1, noise_std = 0.003, - voiced_threshold = 0, - flag_for_pulse=False) - samp_rate: sampling rate in Hz - harmonic_num: number of harmonic overtones (default 0) - sine_amp: amplitude of sine-wavefrom (default 0.1) - noise_std: std of Gaussian noise (default 0.003) - voiced_thoreshold: F0 threshold for U/V classification (default 0) - flag_for_pulse: this SinGen is used inside PulseGen (default False) - Note: when flag_for_pulse is True, the first time step of a voiced - segment is always sin(np.pi) or cos(0) - """ - - def __init__(self, samp_rate, harmonic_num=0, - sine_amp=0.1, noise_std=0.003, - voiced_threshold=0, - flag_for_pulse=False): - super(SineGen, self).__init__() - self.sine_amp = sine_amp - self.noise_std = noise_std - self.harmonic_num = harmonic_num - self.dim = self.harmonic_num + 1 - self.sampling_rate = samp_rate - self.voiced_threshold = voiced_threshold - self.flag_for_pulse = flag_for_pulse - - def _f02uv(self, f0): - # generate uv signal - uv = (f0 > self.voiced_threshold).type(torch.float32) - return uv - - def _f02sine(self, f0_values): - """ f0_values: (batchsize, length, dim) - where dim indicates fundamental tone and overtones - """ - # convert to F0 in rad. 
The interger part n can be ignored - # because 2 * np.pi * n doesn't affect phase - rad_values = (f0_values / self.sampling_rate) % 1 - - # initial phase noise (no noise for fundamental component) - rand_ini = torch.rand(f0_values.shape[0], f0_values.shape[2], \ - device=f0_values.device) - rand_ini[:, 0] = 0 - rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini - - # instantanouse phase sine[t] = sin(2*pi \sum_i=1 ^{t} rad) - if not self.flag_for_pulse: - # for normal case - - # To prevent torch.cumsum numerical overflow, - # it is necessary to add -1 whenever \sum_k=1^n rad_value_k > 1. - # Buffer tmp_over_one_idx indicates the time step to add -1. - # This will not change F0 of sine because (x-1) * 2*pi = x * 2*pi - tmp_over_one = torch.cumsum(rad_values, 1) % 1 - tmp_over_one_idx = (padDiff(tmp_over_one)) < 0 - cumsum_shift = torch.zeros_like(rad_values) - cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0 - - sines = torch.sin(torch.cumsum(rad_values + cumsum_shift, dim=1) - * 2 * np.pi) - else: - # If necessary, make sure that the first time step of every - # voiced segments is sin(pi) or cos(0) - # This is used for pulse-train generation - - # identify the last time step in unvoiced segments - uv = self._f02uv(f0_values) - uv_1 = torch.roll(uv, shifts=-1, dims=1) - uv_1[:, -1, :] = 1 - u_loc = (uv < 1) * (uv_1 > 0) - - # get the instantanouse phase - tmp_cumsum = torch.cumsum(rad_values, dim=1) - # different batch needs to be processed differently - for idx in range(f0_values.shape[0]): - temp_sum = tmp_cumsum[idx, u_loc[idx, :, 0], :] - temp_sum[1:, :] = temp_sum[1:, :] - temp_sum[0:-1, :] - # stores the accumulation of i.phase within - # each voiced segments - tmp_cumsum[idx, :, :] = 0 - tmp_cumsum[idx, u_loc[idx, :, 0], :] = temp_sum - - # rad_values - tmp_cumsum: remove the accumulation of i.phase - # within the previous voiced segment. - i_phase = torch.cumsum(rad_values - tmp_cumsum, dim=1) - - # get the sines - sines = torch.cos(i_phase * 2 * np.pi) - return sines - - def forward(self, f0): - """ sine_tensor, uv = forward(f0) - input F0: tensor(batchsize=1, length, dim=1) - f0 for unvoiced steps should be 0 - output sine_tensor: tensor(batchsize=1, length, dim) - output uv: tensor(batchsize=1, length, 1) - """ - with torch.no_grad(): - f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, - device=f0.device) - # fundamental component - fn = torch.multiply(f0, torch.FloatTensor([[range(1, self.harmonic_num + 2)]]).to(f0.device)) - - # generate sine waveforms - sine_waves = self._f02sine(fn) * self.sine_amp - - # generate uv signal - # uv = torch.ones(f0.shape) - # uv = uv * (f0 > self.voiced_threshold) - uv = self._f02uv(f0) - - # noise: for unvoiced should be similar to sine_amp - # std = self.sine_amp/3 -> max value ~ self.sine_amp - # . 
for voiced regions is self.noise_std - noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3 - noise = noise_amp * torch.randn_like(sine_waves) - - # first: set the unvoiced part to 0 by uv - # then: additive noise - sine_waves = sine_waves * uv + noise - return sine_waves, uv, noise - - -class SourceModuleHnNSF(torch.nn.Module): - """ SourceModule for hn-nsf - SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1, - add_noise_std=0.003, voiced_threshod=0) - sampling_rate: sampling_rate in Hz - harmonic_num: number of harmonic above F0 (default: 0) - sine_amp: amplitude of sine source signal (default: 0.1) - add_noise_std: std of additive Gaussian noise (default: 0.003) - note that amplitude of noise in unvoiced is decided - by sine_amp - voiced_threshold: threhold to set U/V given F0 (default: 0) - Sine_source, noise_source = SourceModuleHnNSF(F0_sampled) - F0_sampled (batchsize, length, 1) - Sine_source (batchsize, length, 1) - noise_source (batchsize, length 1) - uv (batchsize, length, 1) - """ - - def __init__(self, sampling_rate, harmonic_num=0, sine_amp=0.1, - add_noise_std=0.003, voiced_threshod=0): - super(SourceModuleHnNSF, self).__init__() - - self.sine_amp = sine_amp - self.noise_std = add_noise_std - - # to produce sine waveforms - self.l_sin_gen = SineGen(sampling_rate, harmonic_num, - sine_amp, add_noise_std, voiced_threshod) - - # to merge source harmonics into a single excitation - self.l_linear = torch.nn.Linear(harmonic_num + 1, 1) - self.l_tanh = torch.nn.Tanh() - - def forward(self, x): - """ - Sine_source, noise_source = SourceModuleHnNSF(F0_sampled) - F0_sampled (batchsize, length, 1) - Sine_source (batchsize, length, 1) - noise_source (batchsize, length 1) - """ - # source for harmonic branch - sine_wavs, uv, _ = self.l_sin_gen(x) - sine_merge = self.l_tanh(self.l_linear(sine_wavs)) - - # source for noise branch, in the same shape as uv - noise = torch.randn_like(uv) * self.sine_amp / 3 - return sine_merge, noise, uv - - -class Generator(torch.nn.Module): - def __init__(self, h): - super(Generator, self).__init__() - self.h = h - - self.num_kernels = len(h["resblock_kernel_sizes"]) - self.num_upsamples = len(h["upsample_rates"]) - self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(h["upsample_rates"])) - self.m_source = SourceModuleHnNSF( - sampling_rate=h["sampling_rate"], - harmonic_num=8) - self.noise_convs = nn.ModuleList() - self.conv_pre = weight_norm(Conv1d(h["inter_channels"], h["upsample_initial_channel"], 7, 1, padding=3)) - resblock = ResBlock1 if h["resblock"] == '1' else ResBlock2 - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(h["upsample_rates"], h["upsample_kernel_sizes"])): - c_cur = h["upsample_initial_channel"] // (2 ** (i + 1)) - self.ups.append(weight_norm( - ConvTranspose1d(h["upsample_initial_channel"] // (2 ** i), h["upsample_initial_channel"] // (2 ** (i + 1)), - k, u, padding=(k - u) // 2))) - if i + 1 < len(h["upsample_rates"]): # - stride_f0 = np.prod(h["upsample_rates"][i + 1:]) - self.noise_convs.append(Conv1d( - 1, c_cur, kernel_size=stride_f0 * 2, stride=stride_f0, padding=stride_f0 // 2)) - else: - self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1)) - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = h["upsample_initial_channel"] // (2 ** (i + 1)) - for j, (k, d) in enumerate(zip(h["resblock_kernel_sizes"], h["resblock_dilation_sizes"])): - self.resblocks.append(resblock(h, ch, k, d)) - - self.conv_post = weight_norm(Conv1d(ch, 1, 7, 1, padding=3)) - 
self.ups.apply(init_weights) - self.conv_post.apply(init_weights) - self.cond = nn.Conv1d(h['gin_channels'], h['upsample_initial_channel'], 1) - - def forward(self, x, f0, g=None): - # print(1,x.shape,f0.shape,f0[:, None].shape) - f0 = self.f0_upsamp(f0[:, None]).transpose(1, 2) # bs,n,t - # print(2,f0.shape) - har_source, noi_source, uv = self.m_source(f0) - har_source = har_source.transpose(1, 2) - x = self.conv_pre(x) - x = x + self.cond(g) - # print(124,x.shape,har_source.shape) - for i in range(self.num_upsamples): - x = F.leaky_relu(x, LRELU_SLOPE) - # print(3,x.shape) - x = self.ups[i](x) - x_source = self.noise_convs[i](har_source) - # print(4,x_source.shape,har_source.shape,x.shape) - x = x + x_source - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - print('Removing weight norm...') - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - remove_weight_norm(self.conv_pre) - remove_weight_norm(self.conv_post) - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(2, 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, periods=None): - super(MultiPeriodDiscriminator, self).__init__() - self.periods = periods if periods is not None else [2, 3, 5, 7, 11] - self.discriminators = nn.ModuleList() - for period in self.periods: - self.discriminators.append(DiscriminatorP(period)) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - fmap_rs.append(fmap_r) - y_d_gs.append(y_d_g) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 128, 15, 1, padding=7)), - norm_f(Conv1d(128, 128, 41, 2, groups=4, padding=20)), - norm_f(Conv1d(128, 256, 41, 2, groups=16, padding=20)), - norm_f(Conv1d(256, 512, 
41, 4, groups=16, padding=20)), - norm_f(Conv1d(512, 1024, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 1, groups=16, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiScaleDiscriminator(torch.nn.Module): - def __init__(self): - super(MultiScaleDiscriminator, self).__init__() - self.discriminators = nn.ModuleList([ - DiscriminatorS(use_spectral_norm=True), - DiscriminatorS(), - DiscriminatorS(), - ]) - self.meanpools = nn.ModuleList([ - AvgPool1d(4, 2, padding=2), - AvgPool1d(4, 2, padding=2) - ]) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - if i != 0: - y = self.meanpools[i - 1](y) - y_hat = self.meanpools[i - 1](y_hat) - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - fmap_rs.append(fmap_r) - y_d_gs.append(y_d_g) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -def feature_loss(fmap_r, fmap_g): - loss = 0 - for dr, dg in zip(fmap_r, fmap_g): - for rl, gl in zip(dr, dg): - loss += torch.mean(torch.abs(rl - gl)) - - return loss * 2 - - -def discriminator_loss(disc_real_outputs, disc_generated_outputs): - loss = 0 - r_losses = [] - g_losses = [] - for dr, dg in zip(disc_real_outputs, disc_generated_outputs): - r_loss = torch.mean((1 - dr) ** 2) - g_loss = torch.mean(dg ** 2) - loss += (r_loss + g_loss) - r_losses.append(r_loss.item()) - g_losses.append(g_loss.item()) - - return loss, r_losses, g_losses - - -def generator_loss(disc_outputs): - loss = 0 - gen_losses = [] - for dg in disc_outputs: - l = torch.mean((1 - dg) ** 2) - gen_losses.append(l) - loss += l - - return loss, gen_losses diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/jinja2/loaders.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/jinja2/loaders.py deleted file mode 100644 index d2f98093cde425fad2c4bbf2a07e383fce5e4a38..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/jinja2/loaders.py +++ /dev/null @@ -1,661 +0,0 @@ -"""API and implementations for loading templates from different data -sources. -""" -import importlib.util -import os -import posixpath -import sys -import typing as t -import weakref -import zipimport -from collections import abc -from hashlib import sha1 -from importlib import import_module -from types import ModuleType - -from .exceptions import TemplateNotFound -from .utils import internalcode -from .utils import open_if_exists - -if t.TYPE_CHECKING: - from .environment import Environment - from .environment import Template - - -def split_template_path(template: str) -> t.List[str]: - """Split a path into segments and perform a sanity check. If it detects - '..' in the path it will raise a `TemplateNotFound` error. - """ - pieces = [] - for piece in template.split("/"): - if ( - os.path.sep in piece - or (os.path.altsep and os.path.altsep in piece) - or piece == os.path.pardir - ): - raise TemplateNotFound(template) - elif piece and piece != ".": - pieces.append(piece) - return pieces - - -class BaseLoader: - """Baseclass for all loaders. 
Subclass this and override `get_source` to - implement a custom loading mechanism. The environment provides a - `get_template` method that calls the loader's `load` method to get the - :class:`Template` object. - - A very basic example for a loader that looks up templates on the file - system could look like this:: - - from jinja2 import BaseLoader, TemplateNotFound - from os.path import join, exists, getmtime - - class MyLoader(BaseLoader): - - def __init__(self, path): - self.path = path - - def get_source(self, environment, template): - path = join(self.path, template) - if not exists(path): - raise TemplateNotFound(template) - mtime = getmtime(path) - with open(path) as f: - source = f.read() - return source, path, lambda: mtime == getmtime(path) - """ - - #: if set to `False` it indicates that the loader cannot provide access - #: to the source of templates. - #: - #: .. versionadded:: 2.4 - has_source_access = True - - def get_source( - self, environment: "Environment", template: str - ) -> t.Tuple[str, t.Optional[str], t.Optional[t.Callable[[], bool]]]: - """Get the template source, filename and reload helper for a template. - It's passed the environment and template name and has to return a - tuple in the form ``(source, filename, uptodate)`` or raise a - `TemplateNotFound` error if it can't locate the template. - - The source part of the returned tuple must be the source of the - template as a string. The filename should be the name of the - file on the filesystem if it was loaded from there, otherwise - ``None``. The filename is used by Python for the tracebacks - if no loader extension is used. - - The last item in the tuple is the `uptodate` function. If auto - reloading is enabled it's always called to check if the template - changed. No arguments are passed so the function must store the - old state somewhere (for example in a closure). If it returns `False` - the template will be reloaded. - """ - if not self.has_source_access: - raise RuntimeError( - f"{type(self).__name__} cannot provide access to the source" - ) - raise TemplateNotFound(template) - - def list_templates(self) -> t.List[str]: - """Iterates over all templates. If the loader does not support that - it should raise a :exc:`TypeError` which is the default behavior. - """ - raise TypeError("this loader cannot iterate over all templates") - - @internalcode - def load( - self, - environment: "Environment", - name: str, - globals: t.Optional[t.MutableMapping[str, t.Any]] = None, - ) -> "Template": - """Loads a template. This method looks up the template in the cache - or loads one by calling :meth:`get_source`. Subclasses should not - override this method as loaders working on collections of other - loaders (such as :class:`PrefixLoader` or :class:`ChoiceLoader`) - will not call this method but `get_source` directly. - """ - code = None - if globals is None: - globals = {} - - # first we try to get the source for this template together - # with the filename and the uptodate function. - source, filename, uptodate = self.get_source(environment, name) - - # try to load the code from the bytecode cache if there is a - # bytecode cache configured. - bcc = environment.bytecode_cache - if bcc is not None: - bucket = bcc.get_bucket(environment, name, filename, source) - code = bucket.code - - # if we don't have code so far (not cached, no longer up to - # date) etc. 
we compile the template - if code is None: - code = environment.compile(source, name, filename) - - # if the bytecode cache is available and the bucket doesn't - # have a code so far, we give the bucket the new code and put - # it back to the bytecode cache. - if bcc is not None and bucket.code is None: - bucket.code = code - bcc.set_bucket(bucket) - - return environment.template_class.from_code( - environment, code, globals, uptodate - ) - - -class FileSystemLoader(BaseLoader): - """Load templates from a directory in the file system. - - The path can be relative or absolute. Relative paths are relative to - the current working directory. - - .. code-block:: python - - loader = FileSystemLoader("templates") - - A list of paths can be given. The directories will be searched in - order, stopping at the first matching template. - - .. code-block:: python - - loader = FileSystemLoader(["/override/templates", "/default/templates"]) - - :param searchpath: A path, or list of paths, to the directory that - contains the templates. - :param encoding: Use this encoding to read the text from template - files. - :param followlinks: Follow symbolic links in the path. - - .. versionchanged:: 2.8 - Added the ``followlinks`` parameter. - """ - - def __init__( - self, - searchpath: t.Union[str, os.PathLike, t.Sequence[t.Union[str, os.PathLike]]], - encoding: str = "utf-8", - followlinks: bool = False, - ) -> None: - if not isinstance(searchpath, abc.Iterable) or isinstance(searchpath, str): - searchpath = [searchpath] - - self.searchpath = [os.fspath(p) for p in searchpath] - self.encoding = encoding - self.followlinks = followlinks - - def get_source( - self, environment: "Environment", template: str - ) -> t.Tuple[str, str, t.Callable[[], bool]]: - pieces = split_template_path(template) - for searchpath in self.searchpath: - # Use posixpath even on Windows to avoid "drive:" or UNC - # segments breaking out of the search directory. - filename = posixpath.join(searchpath, *pieces) - f = open_if_exists(filename) - if f is None: - continue - try: - contents = f.read().decode(self.encoding) - finally: - f.close() - - mtime = os.path.getmtime(filename) - - def uptodate() -> bool: - try: - return os.path.getmtime(filename) == mtime - except OSError: - return False - - # Use normpath to convert Windows altsep to sep. - return contents, os.path.normpath(filename), uptodate - raise TemplateNotFound(template) - - def list_templates(self) -> t.List[str]: - found = set() - for searchpath in self.searchpath: - walk_dir = os.walk(searchpath, followlinks=self.followlinks) - for dirpath, _, filenames in walk_dir: - for filename in filenames: - template = ( - os.path.join(dirpath, filename)[len(searchpath) :] - .strip(os.path.sep) - .replace(os.path.sep, "/") - ) - if template[:2] == "./": - template = template[2:] - if template not in found: - found.add(template) - return sorted(found) - - -class PackageLoader(BaseLoader): - """Load templates from a directory in a Python package. - - :param package_name: Import name of the package that contains the - template directory. - :param package_path: Directory within the imported package that - contains the templates. - :param encoding: Encoding of template files. - - The following example looks up templates in the ``pages`` directory - within the ``project.ui`` package. - - .. code-block:: python - - loader = PackageLoader("project.ui", "pages") - - Only packages installed as directories (standard pip behavior) or - zip/egg files (less common) are supported. 
The Python API for - introspecting data in packages is too limited to support other - installation methods the way this loader requires. - - There is limited support for :pep:`420` namespace packages. The - template directory is assumed to only be in one namespace - contributor. Zip files contributing to a namespace are not - supported. - - .. versionchanged:: 3.0 - No longer uses ``setuptools`` as a dependency. - - .. versionchanged:: 3.0 - Limited PEP 420 namespace package support. - """ - - def __init__( - self, - package_name: str, - package_path: "str" = "templates", - encoding: str = "utf-8", - ) -> None: - package_path = os.path.normpath(package_path).rstrip(os.path.sep) - - # normpath preserves ".", which isn't valid in zip paths. - if package_path == os.path.curdir: - package_path = "" - elif package_path[:2] == os.path.curdir + os.path.sep: - package_path = package_path[2:] - - self.package_path = package_path - self.package_name = package_name - self.encoding = encoding - - # Make sure the package exists. This also makes namespace - # packages work, otherwise get_loader returns None. - import_module(package_name) - spec = importlib.util.find_spec(package_name) - assert spec is not None, "An import spec was not found for the package." - loader = spec.loader - assert loader is not None, "A loader was not found for the package." - self._loader = loader - self._archive = None - template_root = None - - if isinstance(loader, zipimport.zipimporter): - self._archive = loader.archive - pkgdir = next(iter(spec.submodule_search_locations)) # type: ignore - template_root = os.path.join(pkgdir, package_path).rstrip(os.path.sep) - else: - roots: t.List[str] = [] - - # One element for regular packages, multiple for namespace - # packages, or None for single module file. - if spec.submodule_search_locations: - roots.extend(spec.submodule_search_locations) - # A single module file, use the parent directory instead. - elif spec.origin is not None: - roots.append(os.path.dirname(spec.origin)) - - for root in roots: - root = os.path.join(root, package_path) - - if os.path.isdir(root): - template_root = root - break - - if template_root is None: - raise ValueError( - f"The {package_name!r} package was not installed in a" - " way that PackageLoader understands." - ) - - self._template_root = template_root - - def get_source( - self, environment: "Environment", template: str - ) -> t.Tuple[str, str, t.Optional[t.Callable[[], bool]]]: - # Use posixpath even on Windows to avoid "drive:" or UNC - # segments breaking out of the search directory. Use normpath to - # convert Windows altsep to sep. - p = os.path.normpath( - posixpath.join(self._template_root, *split_template_path(template)) - ) - up_to_date: t.Optional[t.Callable[[], bool]] - - if self._archive is None: - # Package is a directory. - if not os.path.isfile(p): - raise TemplateNotFound(template) - - with open(p, "rb") as f: - source = f.read() - - mtime = os.path.getmtime(p) - - def up_to_date() -> bool: - return os.path.isfile(p) and os.path.getmtime(p) == mtime - - else: - # Package is a zip file. - try: - source = self._loader.get_data(p) # type: ignore - except OSError as e: - raise TemplateNotFound(template) from e - - # Could use the zip's mtime for all template mtimes, but - # would need to safely reload the module if it's out of - # date, so just report it as always current. 
- up_to_date = None - - return source.decode(self.encoding), p, up_to_date - - def list_templates(self) -> t.List[str]: - results: t.List[str] = [] - - if self._archive is None: - # Package is a directory. - offset = len(self._template_root) - - for dirpath, _, filenames in os.walk(self._template_root): - dirpath = dirpath[offset:].lstrip(os.path.sep) - results.extend( - os.path.join(dirpath, name).replace(os.path.sep, "/") - for name in filenames - ) - else: - if not hasattr(self._loader, "_files"): - raise TypeError( - "This zip import does not have the required" - " metadata to list templates." - ) - - # Package is a zip file. - prefix = ( - self._template_root[len(self._archive) :].lstrip(os.path.sep) - + os.path.sep - ) - offset = len(prefix) - - for name in self._loader._files.keys(): # type: ignore - # Find names under the templates directory that aren't directories. - if name.startswith(prefix) and name[-1] != os.path.sep: - results.append(name[offset:].replace(os.path.sep, "/")) - - results.sort() - return results - - -class DictLoader(BaseLoader): - """Loads a template from a Python dict mapping template names to - template source. This loader is useful for unittesting: - - >>> loader = DictLoader({'index.html': 'source here'}) - - Because auto reloading is rarely useful this is disabled per default. - """ - - def __init__(self, mapping: t.Mapping[str, str]) -> None: - self.mapping = mapping - - def get_source( - self, environment: "Environment", template: str - ) -> t.Tuple[str, None, t.Callable[[], bool]]: - if template in self.mapping: - source = self.mapping[template] - return source, None, lambda: source == self.mapping.get(template) - raise TemplateNotFound(template) - - def list_templates(self) -> t.List[str]: - return sorted(self.mapping) - - -class FunctionLoader(BaseLoader): - """A loader that is passed a function which does the loading. The - function receives the name of the template and has to return either - a string with the template source, a tuple in the form ``(source, - filename, uptodatefunc)`` or `None` if the template does not exist. - - >>> def load_template(name): - ... if name == 'index.html': - ... return '...' - ... - >>> loader = FunctionLoader(load_template) - - The `uptodatefunc` is a function that is called if autoreload is enabled - and has to return `True` if the template is still up to date. For more - details have a look at :meth:`BaseLoader.get_source` which has the same - return value. - """ - - def __init__( - self, - load_func: t.Callable[ - [str], - t.Optional[ - t.Union[ - str, t.Tuple[str, t.Optional[str], t.Optional[t.Callable[[], bool]]] - ] - ], - ], - ) -> None: - self.load_func = load_func - - def get_source( - self, environment: "Environment", template: str - ) -> t.Tuple[str, t.Optional[str], t.Optional[t.Callable[[], bool]]]: - rv = self.load_func(template) - - if rv is None: - raise TemplateNotFound(template) - - if isinstance(rv, str): - return rv, None, None - - return rv - - -class PrefixLoader(BaseLoader): - """A loader that is passed a dict of loaders where each loader is bound - to a prefix. The prefix is delimited from the template by a slash per - default, which can be changed by setting the `delimiter` argument to - something else:: - - loader = PrefixLoader({ - 'app1': PackageLoader('mypackage.app1'), - 'app2': PackageLoader('mypackage.app2') - }) - - By loading ``'app1/index.html'`` the file from the app1 package is loaded, - by loading ``'app2/index.html'`` the file from the second. 
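# Usage sketch: the DictLoader defined above is convenient for unit tests
# because it needs no filesystem. The template mapping is a made-up example.
from jinja2 import Environment, DictLoader

env = Environment(loader=DictLoader({"index.html": "Hello {{ name }}!"}))
print(env.get_template("index.html").render(name="world"))  # -> Hello world!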
- """ - - def __init__( - self, mapping: t.Mapping[str, BaseLoader], delimiter: str = "/" - ) -> None: - self.mapping = mapping - self.delimiter = delimiter - - def get_loader(self, template: str) -> t.Tuple[BaseLoader, str]: - try: - prefix, name = template.split(self.delimiter, 1) - loader = self.mapping[prefix] - except (ValueError, KeyError) as e: - raise TemplateNotFound(template) from e - return loader, name - - def get_source( - self, environment: "Environment", template: str - ) -> t.Tuple[str, t.Optional[str], t.Optional[t.Callable[[], bool]]]: - loader, name = self.get_loader(template) - try: - return loader.get_source(environment, name) - except TemplateNotFound as e: - # re-raise the exception with the correct filename here. - # (the one that includes the prefix) - raise TemplateNotFound(template) from e - - @internalcode - def load( - self, - environment: "Environment", - name: str, - globals: t.Optional[t.MutableMapping[str, t.Any]] = None, - ) -> "Template": - loader, local_name = self.get_loader(name) - try: - return loader.load(environment, local_name, globals) - except TemplateNotFound as e: - # re-raise the exception with the correct filename here. - # (the one that includes the prefix) - raise TemplateNotFound(name) from e - - def list_templates(self) -> t.List[str]: - result = [] - for prefix, loader in self.mapping.items(): - for template in loader.list_templates(): - result.append(prefix + self.delimiter + template) - return result - - -class ChoiceLoader(BaseLoader): - """This loader works like the `PrefixLoader` just that no prefix is - specified. If a template could not be found by one loader the next one - is tried. - - >>> loader = ChoiceLoader([ - ... FileSystemLoader('/path/to/user/templates'), - ... FileSystemLoader('/path/to/system/templates') - ... ]) - - This is useful if you want to allow users to override builtin templates - from a different location. - """ - - def __init__(self, loaders: t.Sequence[BaseLoader]) -> None: - self.loaders = loaders - - def get_source( - self, environment: "Environment", template: str - ) -> t.Tuple[str, t.Optional[str], t.Optional[t.Callable[[], bool]]]: - for loader in self.loaders: - try: - return loader.get_source(environment, template) - except TemplateNotFound: - pass - raise TemplateNotFound(template) - - @internalcode - def load( - self, - environment: "Environment", - name: str, - globals: t.Optional[t.MutableMapping[str, t.Any]] = None, - ) -> "Template": - for loader in self.loaders: - try: - return loader.load(environment, name, globals) - except TemplateNotFound: - pass - raise TemplateNotFound(name) - - def list_templates(self) -> t.List[str]: - found = set() - for loader in self.loaders: - found.update(loader.list_templates()) - return sorted(found) - - -class _TemplateModule(ModuleType): - """Like a normal module but with support for weak references""" - - -class ModuleLoader(BaseLoader): - """This loader loads templates from precompiled templates. - - Example usage: - - >>> loader = ChoiceLoader([ - ... ModuleLoader('/path/to/compiled/templates'), - ... FileSystemLoader('/path/to/templates') - ... ]) - - Templates can be precompiled with :meth:`Environment.compile_templates`. - """ - - has_source_access = False - - def __init__( - self, path: t.Union[str, os.PathLike, t.Sequence[t.Union[str, os.PathLike]]] - ) -> None: - package_name = f"_jinja2_module_templates_{id(self):x}" - - # create a fake module that looks for the templates in the - # path given. 
- mod = _TemplateModule(package_name) - - if not isinstance(path, abc.Iterable) or isinstance(path, str): - path = [path] - - mod.__path__ = [os.fspath(p) for p in path] - - sys.modules[package_name] = weakref.proxy( - mod, lambda x: sys.modules.pop(package_name, None) - ) - - # the only strong reference, the sys.modules entry is weak - # so that the garbage collector can remove it once the - # loader that created it goes out of business. - self.module = mod - self.package_name = package_name - - @staticmethod - def get_template_key(name: str) -> str: - return "tmpl_" + sha1(name.encode("utf-8")).hexdigest() - - @staticmethod - def get_module_filename(name: str) -> str: - return ModuleLoader.get_template_key(name) + ".py" - - @internalcode - def load( - self, - environment: "Environment", - name: str, - globals: t.Optional[t.MutableMapping[str, t.Any]] = None, - ) -> "Template": - key = self.get_template_key(name) - module = f"{self.package_name}.{key}" - mod = getattr(self.module, module, None) - - if mod is None: - try: - mod = __import__(module, None, None, ["root"]) - except ImportError as e: - raise TemplateNotFound(name) from e - - # remove the entry from sys.modules, we only want the attribute - # on the module object we have stored on the loader. - sys.modules.pop(module, None) - - if globals is None: - globals = {} - - return environment.template_class.from_module_dict( - environment, mod.__dict__, globals - ) diff --git a/spaces/declare-lab/tango/diffusers/src/diffusers/commands/__init__.py b/spaces/declare-lab/tango/diffusers/src/diffusers/commands/__init__.py deleted file mode 100644 index 4ad4af9199bbe297dbc6679fd9ecb46baa976053..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/src/diffusers/commands/__init__.py +++ /dev/null @@ -1,27 +0,0 @@ -# Copyright 2023 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
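# Usage sketch for the Jinja2 ModuleLoader defined earlier in this diff:
# precompile templates once, then serve the compiled modules with a plain
# FileSystemLoader as fallback. The paths here are illustrative assumptions.
from jinja2 import ChoiceLoader, Environment, FileSystemLoader, ModuleLoader

src_env = Environment(loader=FileSystemLoader("/path/to/templates"))
src_env.compile_templates("/path/to/compiled/templates", zip=None)  # writes .py modules

env = Environment(loader=ChoiceLoader([
    ModuleLoader("/path/to/compiled/templates"),
    FileSystemLoader("/path/to/templates"),
]))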
- -from abc import ABC, abstractmethod -from argparse import ArgumentParser - - -class BaseDiffusersCLICommand(ABC): - @staticmethod - @abstractmethod - def register_subcommand(parser: ArgumentParser): - raise NotImplementedError() - - @abstractmethod - def run(self): - raise NotImplementedError() diff --git a/spaces/deelerb/3dselfie/PIFu/lib/model/ResBlkPIFuNet.py b/spaces/deelerb/3dselfie/PIFu/lib/model/ResBlkPIFuNet.py deleted file mode 100644 index 26848408569fd3903a338e023aefb832f942f0e3..0000000000000000000000000000000000000000 --- a/spaces/deelerb/3dselfie/PIFu/lib/model/ResBlkPIFuNet.py +++ /dev/null @@ -1,201 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -from .BasePIFuNet import BasePIFuNet -import functools -from .SurfaceClassifier import SurfaceClassifier -from .DepthNormalizer import DepthNormalizer -from ..net_util import * - - -class ResBlkPIFuNet(BasePIFuNet): - def __init__(self, opt, - projection_mode='orthogonal'): - if opt.color_loss_type == 'l1': - error_term = nn.L1Loss() - elif opt.color_loss_type == 'mse': - error_term = nn.MSELoss() - - super(ResBlkPIFuNet, self).__init__( - projection_mode=projection_mode, - error_term=error_term) - - self.name = 'respifu' - self.opt = opt - - norm_type = get_norm_layer(norm_type=opt.norm_color) - self.image_filter = ResnetFilter(opt, norm_layer=norm_type) - - self.surface_classifier = SurfaceClassifier( - filter_channels=self.opt.mlp_dim_color, - num_views=self.opt.num_views, - no_residual=self.opt.no_residual, - last_op=nn.Tanh()) - - self.normalizer = DepthNormalizer(opt) - - init_net(self) - - def filter(self, images): - ''' - Filter the input images - store all intermediate features. - :param images: [B, C, H, W] input images - ''' - self.im_feat = self.image_filter(images) - - def attach(self, im_feat): - self.im_feat = torch.cat([im_feat, self.im_feat], 1) - - def query(self, points, calibs, transforms=None, labels=None): - ''' - Given 3D points, query the network predictions for each point. - Image features should be pre-computed before this call. - store all intermediate features. - query() function may behave differently during training/testing. - :param points: [B, 3, N] world space coordinates of points - :param calibs: [B, 3, 4] calibration matrices for each image - :param transforms: Optional [B, 2, 3] image space coordinate transforms - :param labels: Optional [B, Res, N] gt labeling - :return: [B, Res, N] predictions for each point - ''' - if labels is not None: - self.labels = labels - - xyz = self.projection(points, calibs, transforms) - xy = xyz[:, :2, :] - z = xyz[:, 2:3, :] - - z_feat = self.normalizer(z) - - # This is a list of [B, Feat_i, N] features - point_local_feat_list = [self.index(self.im_feat, xy), z_feat] - # [B, Feat_all, N] - point_local_feat = torch.cat(point_local_feat_list, 1) - - self.preds = self.surface_classifier(point_local_feat) - - def forward(self, images, im_feat, points, calibs, transforms=None, labels=None): - self.filter(images) - - self.attach(im_feat) - - self.query(points, calibs, transforms, labels) - - res = self.get_preds() - error = self.get_error() - - return res, error - -class ResnetBlock(nn.Module): - """Define a Resnet block""" - - def __init__(self, dim, padding_type, norm_layer, use_dropout, use_bias, last=False): - """Initialize the Resnet block - A resnet block is a conv block with skip connections - We construct a conv block with build_conv_block function, - and implement skip connections in function. 
- Original Resnet paper: https://arxiv.org/pdf/1512.03385.pdf - """ - super(ResnetBlock, self).__init__() - self.conv_block = self.build_conv_block(dim, padding_type, norm_layer, use_dropout, use_bias, last) - - def build_conv_block(self, dim, padding_type, norm_layer, use_dropout, use_bias, last=False): - """Construct a convolutional block. - Parameters: - dim (int) -- the number of channels in the conv layer. - padding_type (str) -- the name of padding layer: reflect | replicate | zero - norm_layer -- normalization layer - use_dropout (bool) -- if use dropout layers. - use_bias (bool) -- if the conv layer uses bias or not - Returns a conv block (with a conv layer, a normalization layer, and a non-linearity layer (ReLU)) - """ - conv_block = [] - p = 0 - if padding_type == 'reflect': - conv_block += [nn.ReflectionPad2d(1)] - elif padding_type == 'replicate': - conv_block += [nn.ReplicationPad2d(1)] - elif padding_type == 'zero': - p = 1 - else: - raise NotImplementedError('padding [%s] is not implemented' % padding_type) - - conv_block += [nn.Conv2d(dim, dim, kernel_size=3, padding=p, bias=use_bias), norm_layer(dim), nn.ReLU(True)] - if use_dropout: - conv_block += [nn.Dropout(0.5)] - - p = 0 - if padding_type == 'reflect': - conv_block += [nn.ReflectionPad2d(1)] - elif padding_type == 'replicate': - conv_block += [nn.ReplicationPad2d(1)] - elif padding_type == 'zero': - p = 1 - else: - raise NotImplementedError('padding [%s] is not implemented' % padding_type) - if last: - conv_block += [nn.Conv2d(dim, dim, kernel_size=3, padding=p, bias=use_bias)] - else: - conv_block += [nn.Conv2d(dim, dim, kernel_size=3, padding=p, bias=use_bias), norm_layer(dim)] - - return nn.Sequential(*conv_block) - - def forward(self, x): - """Forward function (with skip connections)""" - out = x + self.conv_block(x) # add skip connections - return out - - -class ResnetFilter(nn.Module): - """Resnet-based generator that consists of Resnet blocks between a few downsampling/upsampling operations. 
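# Usage sketch: applying the ResnetBlock above to a dummy feature map; the
# channel count, normalization layer and input size are illustrative.
import torch
import torch.nn as nn

block = ResnetBlock(64, padding_type='reflect', norm_layer=nn.BatchNorm2d,
                    use_dropout=False, use_bias=False)
x = torch.randn(1, 64, 32, 32)
print(block(x).shape)  # torch.Size([1, 64, 32, 32]) -- spatial size is preserved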
- We adapt Torch code and idea from Justin Johnson's neural style transfer project(https://github.com/jcjohnson/fast-neural-style) - """ - - def __init__(self, opt, input_nc=3, output_nc=256, ngf=64, norm_layer=nn.BatchNorm2d, use_dropout=False, - n_blocks=6, padding_type='reflect'): - """Construct a Resnet-based generator - Parameters: - input_nc (int) -- the number of channels in input images - output_nc (int) -- the number of channels in output images - ngf (int) -- the number of filters in the last conv layer - norm_layer -- normalization layer - use_dropout (bool) -- if use dropout layers - n_blocks (int) -- the number of ResNet blocks - padding_type (str) -- the name of padding layer in conv layers: reflect | replicate | zero - """ - assert (n_blocks >= 0) - super(ResnetFilter, self).__init__() - if type(norm_layer) == functools.partial: - use_bias = norm_layer.func == nn.InstanceNorm2d - else: - use_bias = norm_layer == nn.InstanceNorm2d - - model = [nn.ReflectionPad2d(3), - nn.Conv2d(input_nc, ngf, kernel_size=7, padding=0, bias=use_bias), - norm_layer(ngf), - nn.ReLU(True)] - - n_downsampling = 2 - for i in range(n_downsampling): # add downsampling layers - mult = 2 ** i - model += [nn.Conv2d(ngf * mult, ngf * mult * 2, kernel_size=3, stride=2, padding=1, bias=use_bias), - norm_layer(ngf * mult * 2), - nn.ReLU(True)] - - mult = 2 ** n_downsampling - for i in range(n_blocks): # add ResNet blocks - if i == n_blocks - 1: - model += [ResnetBlock(ngf * mult, padding_type=padding_type, norm_layer=norm_layer, - use_dropout=use_dropout, use_bias=use_bias, last=True)] - else: - model += [ResnetBlock(ngf * mult, padding_type=padding_type, norm_layer=norm_layer, - use_dropout=use_dropout, use_bias=use_bias)] - - if opt.use_tanh: - model += [nn.Tanh()] - self.model = nn.Sequential(*model) - - def forward(self, input): - """Standard forward""" - return self.model(input) diff --git a/spaces/deepskyreal/ai-mixer-hotchpotch/sad_talker/src/generate_facerender_batch.py b/spaces/deepskyreal/ai-mixer-hotchpotch/sad_talker/src/generate_facerender_batch.py deleted file mode 100644 index dca8e2ced1847d9c4c1092a0e4af8e76a040605e..0000000000000000000000000000000000000000 --- a/spaces/deepskyreal/ai-mixer-hotchpotch/sad_talker/src/generate_facerender_batch.py +++ /dev/null @@ -1,140 +0,0 @@ -import os -import numpy as np -from PIL import Image -from skimage import io, transform -from skimage.util import img_as_float32 -import torch -import scipy.io as scio - - -def get_facerender_data(coeff_path, pic_path, first_coeff_path, audio_path, - batch_size, input_yaw_list=None, input_pitch_list=None, input_roll_list=None, - expression_scale=1.0, still_mode=False, preprocess='crop', size=256): - semantic_radius = 13 - video_name = os.path.splitext(os.path.split(coeff_path)[-1])[0] - txt_path = os.path.splitext(coeff_path)[0] - - data = {} - - img1 = Image.open(pic_path) - source_image = np.array(img1) - source_image = img_as_float32(source_image) - source_image = transform.resize(source_image, (size, size, 3)) - source_image = source_image.transpose((2, 0, 1)) - source_image_ts = torch.FloatTensor(source_image).unsqueeze(0) - source_image_ts = source_image_ts.repeat(batch_size, 1, 1, 1) - data['source_image'] = source_image_ts - - source_semantics_dict = scio.loadmat(first_coeff_path) - generated_dict = scio.loadmat(coeff_path) - - if 'full' not in preprocess.lower(): - source_semantics = source_semantics_dict['coeff_3dmm'][:1, :70] # 1 70 - generated_3dmm = generated_dict['coeff_3dmm'][:, :70] - - else: - 
source_semantics = source_semantics_dict['coeff_3dmm'][:1, :73] # 1 70 - generated_3dmm = generated_dict['coeff_3dmm'][:, :70] - - source_semantics_new = transform_semantic_1(source_semantics, semantic_radius) - source_semantics_ts = torch.FloatTensor(source_semantics_new).unsqueeze(0) - source_semantics_ts = source_semantics_ts.repeat(batch_size, 1, 1) - data['source_semantics'] = source_semantics_ts - - # target - generated_3dmm[:, :64] = generated_3dmm[:, :64] * expression_scale - - if 'full' in preprocess.lower(): - generated_3dmm = np.concatenate([generated_3dmm, np.repeat(source_semantics[:, 70:], generated_3dmm.shape[0], axis=0)], - axis=1) - - if still_mode: - generated_3dmm[:, 64:] = np.repeat(source_semantics[:, 64:], generated_3dmm.shape[0], axis=0) - - with open(txt_path + '.txt', 'w') as f: - for coeff in generated_3dmm: - for i in coeff: - f.write(str(i)[:7] + ' ' + '\t') - f.write('\n') - - target_semantics_list = [] - frame_num = generated_3dmm.shape[0] - data['frame_num'] = frame_num - for frame_idx in range(frame_num): - target_semantics = transform_semantic_target(generated_3dmm, frame_idx, semantic_radius) - target_semantics_list.append(target_semantics) - - remainder = frame_num % batch_size - if remainder != 0: - for _ in range(batch_size - remainder): - target_semantics_list.append(target_semantics) - - target_semantics_np = np.array(target_semantics_list) # frame_num 70 semantic_radius*2+1 - target_semantics_np = target_semantics_np.reshape(batch_size, -1, target_semantics_np.shape[-2], - target_semantics_np.shape[-1]) - data['target_semantics_list'] = torch.FloatTensor(target_semantics_np) - data['video_name'] = video_name - data['audio_path'] = audio_path - - if input_yaw_list is not None: - yaw_c_seq = gen_camera_pose(input_yaw_list, frame_num, batch_size) - data['yaw_c_seq'] = torch.FloatTensor(yaw_c_seq) - if input_pitch_list is not None: - pitch_c_seq = gen_camera_pose(input_pitch_list, frame_num, batch_size) - data['pitch_c_seq'] = torch.FloatTensor(pitch_c_seq) - if input_roll_list is not None: - roll_c_seq = gen_camera_pose(input_roll_list, frame_num, batch_size) - data['roll_c_seq'] = torch.FloatTensor(roll_c_seq) - - return data - - -def transform_semantic_1(semantic, semantic_radius): - semantic_list = [semantic for i in range(0, semantic_radius * 2 + 1)] - coeff_3dmm = np.concatenate(semantic_list, 0) - return coeff_3dmm.transpose(1, 0) - - -def transform_semantic_target(coeff_3dmm, frame_index, semantic_radius): - num_frames = coeff_3dmm.shape[0] - seq = list(range(frame_index - semantic_radius, frame_index + semantic_radius + 1)) - index = [min(max(item, 0), num_frames - 1) for item in seq] - coeff_3dmm_g = coeff_3dmm[index, :] - return coeff_3dmm_g.transpose(1, 0) - - -def gen_camera_pose(camera_degree_list, frame_num, batch_size): - new_degree_list = [] - if len(camera_degree_list) == 1: - for _ in range(frame_num): - new_degree_list.append(camera_degree_list[0]) - remainder = frame_num % batch_size - if remainder != 0: - for _ in range(batch_size - remainder): - new_degree_list.append(new_degree_list[-1]) - new_degree_np = np.array(new_degree_list).reshape(batch_size, -1) - return new_degree_np - - degree_sum = 0. 
- for i, degree in enumerate(camera_degree_list[1:]): - degree_sum += abs(degree - camera_degree_list[i]) - - degree_per_frame = degree_sum / (frame_num - 1) - for i, degree in enumerate(camera_degree_list[1:]): - degree_last = camera_degree_list[i] - degree_step = degree_per_frame * abs(degree - degree_last) / (degree - degree_last) - new_degree_list = new_degree_list + list(np.arange(degree_last, degree, degree_step)) - if len(new_degree_list) > frame_num: - new_degree_list = new_degree_list[:frame_num] - elif len(new_degree_list) < frame_num: - for _ in range(frame_num - len(new_degree_list)): - new_degree_list.append(new_degree_list[-1]) - print(len(new_degree_list)) - print(frame_num) - - remainder = frame_num % batch_size - if remainder != 0: - for _ in range(batch_size - remainder): - new_degree_list.append(new_degree_list[-1]) - new_degree_np = np.array(new_degree_list).reshape(batch_size, -1) - return new_degree_np diff --git a/spaces/deepskyreal/ai-mixer-hotchpotch/sad_talker/src/utils/videoio.py b/spaces/deepskyreal/ai-mixer-hotchpotch/sad_talker/src/utils/videoio.py deleted file mode 100644 index 08bfbdd7d4be97dc17fea4ad7b2733e9eb0ef975..0000000000000000000000000000000000000000 --- a/spaces/deepskyreal/ai-mixer-hotchpotch/sad_talker/src/utils/videoio.py +++ /dev/null @@ -1,41 +0,0 @@ -import shutil -import uuid - -import os - -import cv2 - -def load_video_to_cv2(input_path): - video_stream = cv2.VideoCapture(input_path) - fps = video_stream.get(cv2.CAP_PROP_FPS) - full_frames = [] - while 1: - still_reading, frame = video_stream.read() - if not still_reading: - video_stream.release() - break - full_frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)) - return full_frames - -def save_video_with_watermark(video, audio, save_path, watermark=False): - temp_file = str(uuid.uuid4())+'.mp4' - cmd = r'ffmpeg -y -hide_banner -loglevel error -i "%s" -i "%s" -vcodec copy "%s"' % (video, audio, temp_file) - os.system(cmd) - - if watermark is False: - shutil.move(temp_file, save_path) - else: - # watermark - try: - ##### check if stable-diffusion-webui - import webui - from modules import paths - watarmark_path = paths.script_path+"/extensions/SadTalker/docs/sadtalker_logo.png" - except: - # get the root path of sadtalker. - dir_path = os.path.dirname(os.path.realpath(__file__)) - watarmark_path = dir_path+"/../../docs/sadtalker_logo.png" - - cmd = r'ffmpeg -y -hide_banner -loglevel error -i "%s" -i "%s" -filter_complex "[1]scale=100:-1[wm];[0][wm]overlay=(main_w-overlay_w)-10:10" "%s"' % (temp_file, watarmark_path, save_path) - os.system(cmd) - os.remove(temp_file) \ No newline at end of file diff --git a/spaces/derful/Chatgpt-academic/crazy_functions/test_project/python/dqn/__init__.py b/spaces/derful/Chatgpt-academic/crazy_functions/test_project/python/dqn/__init__.py deleted file mode 100644 index 4ae42872c812a7c8a18dff002086c7e6e935f580..0000000000000000000000000000000000000000 --- a/spaces/derful/Chatgpt-academic/crazy_functions/test_project/python/dqn/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -from stable_baselines3.dqn.dqn import DQN -from stable_baselines3.dqn.policies import CnnPolicy, MlpPolicy diff --git a/spaces/diacanFperku/AutoGPT/Nokia Bootmgr Driver Windows 7.md b/spaces/diacanFperku/AutoGPT/Nokia Bootmgr Driver Windows 7.md deleted file mode 100644 index 363a1a6dea97ff23d521d485362b00f73a39b00d..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Nokia Bootmgr Driver Windows 7.md +++ /dev/null @@ -1,6 +0,0 @@ -

-Nokia Bootmgr Driver Windows 7
-
-DOWNLOAD: https://gohhs.com/2uFTyv
-
-Logiciel Windows 4. wlan wifi applcation pour nokia 6220 c; pilote RM 914 nokia lumia ... Rm 914 Pour Lumia 520, Nokia Lumia 920 Nokia Bootmgr Driver Sorunu çözümü Driver ... Free Lexmark Z645 drivers for Windows 7. 4d29de3e1b
-

        diff --git a/spaces/diacanFperku/AutoGPT/Pcpdfwin Jcpds Software 13.md b/spaces/diacanFperku/AutoGPT/Pcpdfwin Jcpds Software 13.md deleted file mode 100644 index 56c3424d2746ebef13df3b5b279a04730e1449f5..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Pcpdfwin Jcpds Software 13.md +++ /dev/null @@ -1,6 +0,0 @@ -

-Pcpdfwin Jcpds Software 13
-
-Download: https://gohhs.com/2uFVG1
-
-by Y Sundarayya · 2012 · Cited by 1 — which they only differ in the orientation of the staggered spins [3 – 13]. On the other ... carried out from the X-ray data using a software package Fullprof [25]. 1fdad05405
-

        diff --git a/spaces/digitalxingtong/Azusa-Bert-VITS2/modules.py b/spaces/digitalxingtong/Azusa-Bert-VITS2/modules.py deleted file mode 100644 index 92e0f32a51c472bfd1659a50a95a95d195281d2b..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Azusa-Bert-VITS2/modules.py +++ /dev/null @@ -1,452 +0,0 @@ -import copy -import math -import numpy as np -import scipy -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm - -import commons -from commons import init_weights, get_padding -from transforms import piecewise_rational_quadratic_transform -from attentions import Encoder - -LRELU_SLOPE = 0.1 - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - -class ConvReluNorm(nn.Module): - def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." - - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential( - nn.ReLU(), - nn.Dropout(p_dropout)) - for _ in range(n_layers-1): - self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dialted and Depth-Separable Convolution - """ - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size ** i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size, - groups=channels, dilation=dilation, padding=padding - )) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return 
x * x_mask - - -class WN(torch.nn.Module): - def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0): - super(WN, self).__init__() - assert(kernel_size % 2 == 1) - self.hidden_channels =hidden_channels - self.kernel_size = kernel_size, - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight') - - for i in range(n_layers): - dilation = dilation_rate ** i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size, - dilation=dilation, padding=padding) - in_layer = torch.nn.utils.weight_norm(in_layer, name='weight') - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight') - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) - - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply( - x_in, - g_l, - n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:,:self.hidden_channels,:] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:,self.hidden_channels:,:] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]))) - ]) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))) - ]) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, 
self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))) - ]) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels,1)) - self.logs = nn.Parameter(torch.zeros(channels,1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1,2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels]*2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1,2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x 
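# Usage sketch: the ResidualCouplingLayer above is an invertible flow, so a
# forward pass followed by a reverse pass reconstructs the input (up to float
# error). The tensor shapes are illustrative.
import torch

layer = ResidualCouplingLayer(channels=4, hidden_channels=8, kernel_size=5,
                              dilation_rate=1, n_layers=2, mean_only=True)
x = torch.randn(2, 4, 10)          # [batch, channels, time]
x_mask = torch.ones(2, 1, 10)
y, _logdet = layer(x, x_mask)
x_rec = layer(y, x_mask, reverse=True)
print(torch.allclose(x, x_rec, atol=1e-5))  # True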
- - -class ConvFlow(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.) - self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] - - unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_derivatives = h[..., 2 * self.num_bins:] - - x1, logabsdet = piecewise_rational_quadratic_transform(x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails='linear', - tail_bound=self.tail_bound - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1,2]) - if not reverse: - return x, logdet - else: - return x -class TransformerCouplingLayer(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - n_layers, - n_heads, - p_dropout=0, - filter_channels=0, - mean_only=False, - wn_sharing_parameter=None, - gin_channels = 0 - ): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = Encoder(hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout, isflow = True, gin_channels = gin_channels) if wn_sharing_parameter is None else wn_sharing_parameter - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels]*2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1,2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - x1, logabsdet = piecewise_rational_quadratic_transform(x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails='linear', - tail_bound=self.tail_bound - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1,2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/digitalxingtong/Bufeiyan-b-Bert-VITS2/text/symbols.py b/spaces/digitalxingtong/Bufeiyan-b-Bert-VITS2/text/symbols.py deleted 
file mode 100644 index 9dfae4e633829f20c4fd767b1c7a9198911ed801..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Bufeiyan-b-Bert-VITS2/text/symbols.py +++ /dev/null @@ -1,51 +0,0 @@ -punctuation = ['!', '?', '…', ",", ".", "'", '-'] -pu_symbols = punctuation + ["SP", "UNK"] -pad = '_' - -# chinese -zh_symbols = ['E', 'En', 'a', 'ai', 'an', 'ang', 'ao', 'b', 'c', 'ch', 'd', 'e', 'ei', 'en', 'eng', 'er', 'f', 'g', 'h', - 'i', 'i0', 'ia', 'ian', 'iang', 'iao', 'ie', 'in', 'ing', 'iong', 'ir', 'iu', 'j', 'k', 'l', 'm', 'n', 'o', - 'ong', - 'ou', 'p', 'q', 'r', 's', 'sh', 't', 'u', 'ua', 'uai', 'uan', 'uang', 'ui', 'un', 'uo', 'v', 'van', 've', 'vn', - 'w', 'x', 'y', 'z', 'zh', - "AA", "EE", "OO"] -num_zh_tones = 6 - -# japanese -ja_symbols = ['I', 'N', 'U', 'a', 'b', 'by', 'ch', 'cl', 'd', 'dy', 'e', 'f', 'g', 'gy', 'h', 'hy', 'i', 'j', 'k', 'ky', - 'm', 'my', 'n', 'ny', 'o', 'p', 'py', 'r', 'ry', 's', 'sh', 't', 'ts', 'u', 'V', 'w', 'y', 'z'] -num_ja_tones = 1 - -# English -en_symbols = ['aa', 'ae', 'ah', 'ao', 'aw', 'ay', 'b', 'ch', 'd', 'dh', 'eh', 'er', 'ey', 'f', 'g', 'hh', 'ih', 'iy', - 'jh', 'k', 'l', 'm', 'n', 'ng', 'ow', 'oy', 'p', 'r', 's', - 'sh', 't', 'th', 'uh', 'uw', 'V', 'w', 'y', 'z', 'zh'] -num_en_tones = 4 - -# combine all symbols -normal_symbols = sorted(set(zh_symbols + ja_symbols + en_symbols)) -symbols = [pad] + normal_symbols + pu_symbols -sil_phonemes_ids = [symbols.index(i) for i in pu_symbols] - -# combine all tones -num_tones = num_zh_tones + num_ja_tones + num_en_tones - -# language maps -language_id_map = { - 'ZH': 0, - "JA": 1, - "EN": 2 -} -num_languages = len(language_id_map.keys()) - -language_tone_start_map = { - 'ZH': 0, - "JA": num_zh_tones, - "EN": num_zh_tones + num_ja_tones -} - -if __name__ == '__main__': - a = set(zh_symbols) - b = set(en_symbols) - print(sorted(a&b)) - diff --git a/spaces/digitalxingtong/Jiaran-Bert-VITS2/attentions.py b/spaces/digitalxingtong/Jiaran-Bert-VITS2/attentions.py deleted file mode 100644 index ecbdbc8be941a962046fc11fd6739b093112123e..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Jiaran-Bert-VITS2/attentions.py +++ /dev/null @@ -1,343 +0,0 @@ -import copy -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - -import commons -import modules -from torch.nn.utils import weight_norm, remove_weight_norm -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - -class Encoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, isflow = True, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - if isflow: - cond_layer = 
torch.nn.Conv1d(256, 2*hidden_channels*n_layers, 1) - self.cond_pre = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, 1) - self.cond_layer = weight_norm(cond_layer, name='weight') - self.gin_channels = 256 - self.cond_layer_idx = self.n_layers - if 'gin_channels' in kwargs: - self.gin_channels = kwargs['gin_channels'] - if self.gin_channels != 0: - self.spk_emb_linear = nn.Linear(self.gin_channels, self.hidden_channels) - # vits2 says 3rd block, so idx is 2 by default - self.cond_layer_idx = kwargs['cond_layer_idx'] if 'cond_layer_idx' in kwargs else 2 - print(self.gin_channels, self.cond_layer_idx) - assert self.cond_layer_idx < self.n_layers, 'cond_layer_idx should be less than n_layers' - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - def forward(self, x, x_mask, g=None): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - if i == self.cond_layer_idx and g is not None: - g = self.spk_emb_linear(g.transpose(1, 2)) - g = g.transpose(1, 2) - x = x + g - x = x * x_mask - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init)) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = 
self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert t_s == t_t, "Relative attention is only available for self-attention." - key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." - scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert t_s == t_t, "Local attention is only available for self-attention." 
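# Usage sketch: running the MultiHeadAttention module above as self-attention;
# the channel count, head count and sequence length are illustrative.
import torch

attn = MultiHeadAttention(channels=192, out_channels=192, n_heads=2, window_size=4)
x = torch.randn(1, 192, 50)            # [batch, channels, time]
attn_mask = torch.ones(1, 1, 50, 50)   # 1 = attend, 0 = masked out
y = attn(x, x, attn_mask)
print(y.shape)                         # torch.Size([1, 192, 50])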
- block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s) - output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings) - output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. - pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]])) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]])) - - # Reshape and slice out the padded elements. - x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]])) - x_flat = x.view([batch, heads, length**2 + length*(length -1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. 
- Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/digitalxingtong/Jiaran-Bert-VITS2/text/english.py b/spaces/digitalxingtong/Jiaran-Bert-VITS2/text/english.py deleted file mode 100644 index 781d0a56cef71f66fc67db51d76538be90d3ddd2..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Jiaran-Bert-VITS2/text/english.py +++ /dev/null @@ -1,138 +0,0 @@ -import pickle -import os -import re -from g2p_en import G2p -from string import punctuation - -from text import symbols - -current_file_path = os.path.dirname(__file__) -CMU_DICT_PATH = os.path.join(current_file_path, 'cmudict.rep') -CACHE_PATH = os.path.join(current_file_path, 'cmudict_cache.pickle') -_g2p = G2p() - -arpa = {'AH0', 'S', 'AH1', 'EY2', 'AE2', 'EH0', 'OW2', 'UH0', 'NG', 'B', 'G', 'AY0', 'M', 'AA0', 'F', 'AO0', 'ER2', 'UH1', 'IY1', 'AH2', 'DH', 'IY0', 'EY1', 'IH0', 'K', 'N', 'W', 'IY2', 'T', 'AA1', 'ER1', 'EH2', 'OY0', 'UH2', 'UW1', 'Z', 'AW2', 'AW1', 'V', 'UW2', 'AA2', 'ER', 'AW0', 'UW0', 'R', 'OW1', 'EH1', 'ZH', 'AE0', 'IH2', 'IH', 'Y', 'JH', 'P', 'AY1', 'EY0', 'OY2', 'TH', 'HH', 'D', 'ER0', 'CH', 'AO1', 'AE1', 'AO2', 'OY1', 'AY2', 'IH1', 'OW0', 'L', 'SH'} - - -def post_replace_ph(ph): - rep_map = { - ':': ',', - ';': ',', - ',': ',', - '。': '.', - '!': '!', - '?': '?', - '\n': '.', - "·": ",", - '、': ",", - '...': '…', - 'v': "V" - } - if ph in rep_map.keys(): - ph = rep_map[ph] - if ph in symbols: - return ph - if ph not in symbols: - ph = 'UNK' - return ph - -def read_dict(): - g2p_dict = {} - start_line = 49 - with open(CMU_DICT_PATH) as f: - line = f.readline() - line_index = 1 - while line: - if line_index >= start_line: - line = line.strip() - word_split = line.split(' ') - word = word_split[0] - - syllable_split = word_split[1].split(' - ') - g2p_dict[word] = [] - for syllable in syllable_split: - phone_split = syllable.split(' ') - g2p_dict[word].append(phone_split) - - line_index = line_index + 1 - line = f.readline() - - 
return g2p_dict - - -def cache_dict(g2p_dict, file_path): - with open(file_path, 'wb') as pickle_file: - pickle.dump(g2p_dict, pickle_file) - - -def get_dict(): - if os.path.exists(CACHE_PATH): - with open(CACHE_PATH, 'rb') as pickle_file: - g2p_dict = pickle.load(pickle_file) - else: - g2p_dict = read_dict() - cache_dict(g2p_dict, CACHE_PATH) - - return g2p_dict - -eng_dict = get_dict() - -def refine_ph(phn): - tone = 0 - if re.search(r'\d$', phn): - tone = int(phn[-1]) + 1 - phn = phn[:-1] - return phn.lower(), tone - -def refine_syllables(syllables): - tones = [] - phonemes = [] - for phn_list in syllables: - for i in range(len(phn_list)): - phn = phn_list[i] - phn, tone = refine_ph(phn) - phonemes.append(phn) - tones.append(tone) - return phonemes, tones - - -def text_normalize(text): - # todo: eng text normalize - return text - -def g2p(text): - - phones = [] - tones = [] - words = re.split(r"([,;.\-\?\!\s+])", text) - for w in words: - if w.upper() in eng_dict: - phns, tns = refine_syllables(eng_dict[w.upper()]) - phones += phns - tones += tns - else: - phone_list = list(filter(lambda p: p != " ", _g2p(w))) - for ph in phone_list: - if ph in arpa: - ph, tn = refine_ph(ph) - phones.append(ph) - tones.append(tn) - else: - phones.append(ph) - tones.append(0) - # todo: implement word2ph - word2ph = [1 for i in phones] - - phones = [post_replace_ph(i) for i in phones] - return phones, tones, word2ph - -if __name__ == "__main__": - # print(get_dict()) - # print(eng_word_to_phoneme("hello")) - print(g2p("In this paper, we propose 1 DSPGAN, a GAN-based universal vocoder.")) - # all_phones = set() - # for k, syllables in eng_dict.items(): - # for group in syllables: - # for ph in group: - # all_phones.add(ph) - # print(all_phones) \ No newline at end of file diff --git a/spaces/digitalxingtong/Un-Bert-Vits2/text/english_bert_mock.py b/spaces/digitalxingtong/Un-Bert-Vits2/text/english_bert_mock.py deleted file mode 100644 index 3b894ced5b6d619a18d6bdd7d7606ba9e6532050..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Un-Bert-Vits2/text/english_bert_mock.py +++ /dev/null @@ -1,5 +0,0 @@ -import torch - - -def get_bert_feature(norm_text, word2ph): - return torch.zeros(1024, sum(word2ph)) diff --git a/spaces/digitalxingtong/Xingtong-2dall-Bert-VITS2/monotonic_align/core.py b/spaces/digitalxingtong/Xingtong-2dall-Bert-VITS2/monotonic_align/core.py deleted file mode 100644 index 5ff728cd74c9228346a82ec64a9829cb98ad315e..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Xingtong-2dall-Bert-VITS2/monotonic_align/core.py +++ /dev/null @@ -1,36 +0,0 @@ -import numba - - -@numba.jit(numba.void(numba.int32[:, :, ::1], numba.float32[:, :, ::1], numba.int32[::1], numba.int32[::1]), - nopython=True, nogil=True) -def maximum_path_jit(paths, values, t_ys, t_xs): - b = paths.shape[0] - max_neg_val = -1e9 - for i in range(int(b)): - path = paths[i] - value = values[i] - t_y = t_ys[i] - t_x = t_xs[i] - - v_prev = v_cur = 0.0 - index = t_x - 1 - - for y in range(t_y): - for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)): - if x == y: - v_cur = max_neg_val - else: - v_cur = value[y - 1, x] - if x == 0: - if y == 0: - v_prev = 0. 
- else: - v_prev = max_neg_val - else: - v_prev = value[y - 1, x - 1] - value[y, x] += max(v_prev, v_cur) - - for y in range(t_y - 1, -1, -1): - path[y, index] = 1 - if index != 0 and (index == y or value[y - 1, index] < value[y - 1, index - 1]): - index = index - 1 \ No newline at end of file diff --git a/spaces/dmeck/RVC-Speakers/start.py b/spaces/dmeck/RVC-Speakers/start.py deleted file mode 100644 index 4020aedd9b12896eeb27730921f6259c57230d71..0000000000000000000000000000000000000000 --- a/spaces/dmeck/RVC-Speakers/start.py +++ /dev/null @@ -1,4 +0,0 @@ -from speakers.__main__ import main - -if __name__ == '__main__': - main() diff --git a/spaces/dotmet/chatgpt_webui/README.md b/spaces/dotmet/chatgpt_webui/README.md deleted file mode 100644 index 3aafed6ada8feae6a0790a750793ce83fa5fd04f..0000000000000000000000000000000000000000 --- a/spaces/dotmet/chatgpt_webui/README.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -license: bsd-2-clause -title: ChatGPT WebUI -sdk: gradio -emoji: 👀 -colorFrom: yellow -colorTo: red -app_file: app.py ---- -# chatgpt_webui -Build an WebUI of ChatGPT with multiple authentication method using Gradio and revChatGPT - -clone this space to run for your own account - -### This project will not SAVE/DISPLAY/SHARE the ACCOUNT INFO of any user!! \ No newline at end of file diff --git a/spaces/dylanebert/igf/viewer/src/lib/index.ts b/spaces/dylanebert/igf/viewer/src/lib/index.ts deleted file mode 100644 index 856f2b6c38aec1085db88189bcf492dbb49a1c45..0000000000000000000000000000000000000000 --- a/spaces/dylanebert/igf/viewer/src/lib/index.ts +++ /dev/null @@ -1 +0,0 @@ -// place files you want to import through the `$lib` alias in this folder. diff --git a/spaces/ejbejaranos/somos-alpaca-es/load_data.py b/spaces/ejbejaranos/somos-alpaca-es/load_data.py deleted file mode 100644 index 5b98290f9d972a5301d0df81db0872aff92479dc..0000000000000000000000000000000000000000 --- a/spaces/ejbejaranos/somos-alpaca-es/load_data.py +++ /dev/null @@ -1,73 +0,0 @@ -# Copyright 2021-present, the Recognai S.L. team. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -import sys -import time - -import argilla as rg -import pandas as pd -import requests -from argilla.labeling.text_classification import Rule, add_rules -from datasets import load_dataset - - -class LoadDatasets: - def __init__(self, api_key, workspace="team"): - rg.init(api_key=api_key, workspace=workspace) - - - @staticmethod - def load_somos(): - print("Loading somos dataset") - # Leer el dataset del Hub - dataset = load_dataset("somosnlp/somos-alpaca-es", split="train") - dataset = dataset.remove_columns("metrics") # si falla se puede comentar esta linea - records = rg.DatasetForTextClassification.from_datasets(dataset) - - # Log the dataset - rg.log( - records, - name="somos-alpaca-es", - tags={"description": "SomosNLP Hackathon dataset"}, - ) - settings = rg.TextClassificationSettings( - label_schema=["BAD INSTRUCTION", "BAD INPUT", "BAD OUTPUT", "INAPPROPRIATE", "BIASED", "ALL GOOD"] - ) - rg.configure_dataset(name="somos-alpaca-es", settings=settings, workspace="team") - - -if __name__ == "__main__": - API_KEY = sys.argv[1] - LOAD_DATASETS = sys.argv[2] - - if LOAD_DATASETS.lower() == "none": - print("No datasets being loaded") - else: - while True: - try: - response = requests.get("http://0.0.0.0:6900/") - if response.status_code == 200: - ld = LoadDatasets(API_KEY) - - ld.load_somos() - break - - except requests.exceptions.ConnectionError: - pass - except Exception as e: - print(e) - time.sleep(10) - pass - - time.sleep(5) diff --git a/spaces/elkraken/Video-Object-Detection/detect_or_track.py b/spaces/elkraken/Video-Object-Detection/detect_or_track.py deleted file mode 100644 index d16bf2b6a8d946324458adc8b95093c4d9d7bc21..0000000000000000000000000000000000000000 --- a/spaces/elkraken/Video-Object-Detection/detect_or_track.py +++ /dev/null @@ -1,285 +0,0 @@ -import argparse -import time -from pathlib import Path -import cv2 -import torch -import torch.backends.cudnn as cudnn -from numpy import random - -from models.experimental import attempt_load -from utils.datasets import LoadStreams, LoadImages -from utils.general import check_img_size, check_requirements, \ - check_imshow, non_max_suppression, apply_classifier, \ - scale_coords, xyxy2xywh, strip_optimizer, set_logging, \ - increment_path -from utils.plots import plot_one_box -from utils.torch_utils import select_device, load_classifier, time_synchronized, TracedModel - -from sort import * - - -"""Function to Draw Bounding boxes""" -def draw_boxes(img, bbox, identities=None, categories=None, confidences = None, names=None, colors = None): - for i, box in enumerate(bbox): - x1, y1, x2, y2 = [int(i) for i in box] - tl = opt.thickness or round(0.002 * (img.shape[0] + img.shape[1]) / 2) + 1 # line/font thickness - - cat = int(categories[i]) if categories is not None else 0 - id = int(identities[i]) if identities is not None else 0 - # conf = confidences[i] if confidences is not None else 0 - - color = colors[cat] - - if not opt.nobbox: - cv2.rectangle(img, (x1, y1), (x2, y2), color, tl) - - if not opt.nolabel: - label = str(id) + ":"+ names[cat] if identities is not None else f'{names[cat]} {confidences[i]:.2f}' - tf = max(tl - 1, 1) # font thickness - t_size = cv2.getTextSize(label, 0, fontScale=tl / 3, thickness=tf)[0] - c2 = x1 + t_size[0], y1 - t_size[1] - 3 - cv2.rectangle(img, (x1, y1), c2, color, -1, cv2.LINE_AA) # filled - cv2.putText(img, label, (x1, y1 - 2), 0, tl / 3, [225, 255, 255], thickness=tf, lineType=cv2.LINE_AA) - - - return img - - -def detect(save_img=False): - source, weights, view_img, save_txt, imgsz, 
trace = opt.source, opt.weights, opt.view_img, opt.save_txt, opt.img_size, not opt.no_trace - save_img = not opt.nosave and not source.endswith('.txt') # save inference images - webcam = source.isnumeric() or source.endswith('.txt') or source.lower().startswith( - ('rtsp://', 'rtmp://', 'http://', 'https://')) - save_dir = Path(increment_path(Path(opt.project) / opt.name, exist_ok=opt.exist_ok)) # increment run - if not opt.nosave: - (save_dir / 'labels' if save_txt else save_dir).mkdir(parents=True, exist_ok=True) # make dir - - # Initialize - set_logging() - device = select_device(opt.device) - half = device.type != 'cpu' # half precision only supported on CUDA - - # Load model - model = attempt_load(weights, map_location=device) # load FP32 model - stride = int(model.stride.max()) # model stride - imgsz = check_img_size(imgsz, s=stride) # check img_size - - if trace: - model = TracedModel(model, device, opt.img_size) - - if half: - model.half() # to FP16 - - # Second-stage classifier - classify = False - if classify: - modelc = load_classifier(name='resnet101', n=2) # initialize - modelc.load_state_dict(torch.load('weights/resnet101.pt', map_location=device)['model']).to(device).eval() - - # Set Dataloader - vid_path, vid_writer = None, None - if webcam: - view_img = check_imshow() - cudnn.benchmark = True # set True to speed up constant image size inference - dataset = LoadStreams(source, img_size=imgsz, stride=stride) - else: - dataset = LoadImages(source, img_size=imgsz, stride=stride) - - # Get names and colors - names = model.module.names if hasattr(model, 'module') else model.names - colors = [[random.randint(0, 255) for _ in range(3)] for _ in names] - - # Run inference - if device.type != 'cpu': - model(torch.zeros(1, 3, imgsz, imgsz).to(device).type_as(next(model.parameters()))) # run once - old_img_w = old_img_h = imgsz - old_img_b = 1 - - t0 = time.time() - ################################### - startTime = 0 - ################################### - for path, img, im0s, vid_cap in dataset: - img = torch.from_numpy(img).to(device) - img = img.half() if half else img.float() # uint8 to fp16/32 - img /= 255.0 # 0 - 255 to 0.0 - 1.0 - if img.ndimension() == 3: - img = img.unsqueeze(0) - - # Warmup - if device.type != 'cpu' and (old_img_b != img.shape[0] or old_img_h != img.shape[2] or old_img_w != img.shape[3]): - old_img_b = img.shape[0] - old_img_h = img.shape[2] - old_img_w = img.shape[3] - for i in range(3): - model(img, augment=opt.augment)[0] - - # Inference - t1 = time_synchronized() - pred = model(img, augment=opt.augment)[0] - t2 = time_synchronized() - - # Apply NMS - pred = non_max_suppression(pred, opt.conf_thres, opt.iou_thres, classes=opt.classes, agnostic=opt.agnostic_nms) - t3 = time_synchronized() - - # Apply Classifier - if classify: - pred = apply_classifier(pred, modelc, img, im0s) - - # Process detections - for i, det in enumerate(pred): # detections per image - if webcam: # batch_size >= 1 - p, s, im0, frame = path[i], '%g: ' % i, im0s[i].copy(), dataset.count - else: - p, s, im0, frame = path, '', im0s, getattr(dataset, 'frame', 0) - - p = Path(p) # to Path - save_path = str(save_dir / p.name) # img.jpg - txt_path = str(save_dir / 'labels' / p.stem) + ('' if dataset.mode == 'image' else f'_{frame}') # img.txt - gn = torch.tensor(im0.shape)[[1, 0, 1, 0]] # normalization gain whwh - if len(det): - # Rescale boxes from img_size to im0 size - det[:, :4] = scale_coords(img.shape[2:], det[:, :4], im0.shape).round() - - # Print results - for c in det[:, 
-1].unique(): - n = (det[:, -1] == c).sum() # detections per class - s += f"{n} {names[int(c)]}{'s' * (n > 1)}, " # add to string - - dets_to_sort = np.empty((0,6)) - # NOTE: We send in detected object class too - for x1,y1,x2,y2,conf,detclass in det.cpu().detach().numpy(): - dets_to_sort = np.vstack((dets_to_sort, - np.array([x1, y1, x2, y2, conf, detclass]))) - - - if opt.track: - - tracked_dets = sort_tracker.update(dets_to_sort, opt.unique_track_color) - tracks =sort_tracker.getTrackers() - - # draw boxes for visualization - if len(tracked_dets)>0: - bbox_xyxy = tracked_dets[:,:4] - identities = tracked_dets[:, 8] - categories = tracked_dets[:, 4] - confidences = None - - if opt.show_track: - #loop over tracks - for t, track in enumerate(tracks): - - track_color = colors[int(track.detclass)] if not opt.unique_track_color else sort_tracker.color_list[t] - - [cv2.line(im0, (int(track.centroidarr[i][0]), - int(track.centroidarr[i][1])), - (int(track.centroidarr[i+1][0]), - int(track.centroidarr[i+1][1])), - track_color, thickness=opt.thickness) - for i,_ in enumerate(track.centroidarr) - if i < len(track.centroidarr)-1 ] - else: - bbox_xyxy = dets_to_sort[:,:4] - identities = None - categories = dets_to_sort[:, 5] - confidences = dets_to_sort[:, 4] - - im0 = draw_boxes(im0, bbox_xyxy, identities, categories, confidences, names, colors) - - - - - - # Print time (inference + NMS) - print(f'{s}Done. ({(1E3 * (t2 - t1)):.1f}ms) Inference, ({(1E3 * (t3 - t2)):.1f}ms) NMS') - - # Stream results - ###################################################### - if dataset.mode != 'image' and opt.show_fps: - currentTime = time.time() - - fps = 1/(currentTime - startTime) - startTime = currentTime - cv2.putText(im0, "FPS: " + str(int(fps)), (20, 70), cv2.FONT_HERSHEY_PLAIN, 2, (0,255,0),2) - - ####################################################### - if view_img: - cv2.imshow(str(p), im0) - cv2.waitKey(1) # 1 millisecond - - # Save results (image with detections) - if save_img: - if dataset.mode == 'image': - cv2.imwrite(save_path, im0) - print(f" The image with the result is saved in: {save_path}") - else: # 'video' or 'stream' - if vid_path != save_path: # new video - vid_path = save_path - if isinstance(vid_writer, cv2.VideoWriter): - vid_writer.release() # release previous video writer - if vid_cap: # video - fps = vid_cap.get(cv2.CAP_PROP_FPS) - w = int(vid_cap.get(cv2.CAP_PROP_FRAME_WIDTH)) - h = int(vid_cap.get(cv2.CAP_PROP_FRAME_HEIGHT)) - else: # stream - fps, w, h = 30, im0.shape[1], im0.shape[0] - save_path += '.mp4' - vid_writer = cv2.VideoWriter(save_path, cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h)) - vid_writer.write(im0) - - if save_txt or save_img: - s = f"\n{len(list(save_dir.glob('labels/*.txt')))} labels saved to {save_dir / 'labels'}" if save_txt else '' - #print(f"Results saved to {save_dir}{s}") - - print(f'Done. ({time.time() - t0:.3f}s)') - - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--weights', nargs='+', type=str, default='yolov7.pt', help='model.pt path(s)') - parser.add_argument('--source', type=str, default='inference/images', help='source') # file/folder, 0 for webcam - parser.add_argument('--img-size', type=int, default=640, help='inference size (pixels)') - parser.add_argument('--conf-thres', type=float, default=0.25, help='object confidence threshold') - parser.add_argument('--iou-thres', type=float, default=0.45, help='IOU threshold for NMS') - parser.add_argument('--device', default='', help='cuda device, i.e. 
0 or 0,1,2,3 or cpu') - parser.add_argument('--view-img', action='store_true', help='display results') - parser.add_argument('--save-txt', action='store_true', help='save results to *.txt') - parser.add_argument('--save-conf', action='store_true', help='save confidences in --save-txt labels') - parser.add_argument('--nosave', action='store_true', help='do not save images/videos') - parser.add_argument('--classes', nargs='+', type=int, help='filter by class: --class 0, or --class 0 2 3') - parser.add_argument('--agnostic-nms', action='store_true', help='class-agnostic NMS') - parser.add_argument('--augment', action='store_true', help='augmented inference') - parser.add_argument('--update', action='store_true', help='update all models') - parser.add_argument('--project', default='runs/detect', help='save results to project/name') - parser.add_argument('--name', default='exp', help='save results to project/name') - parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment') - parser.add_argument('--no-trace', action='store_true', help='don`t trace model') - - parser.add_argument('--track', action='store_true', help='run tracking') - parser.add_argument('--show-track', action='store_true', help='show tracked path') - parser.add_argument('--show-fps', action='store_true', help='show fps') - parser.add_argument('--thickness', type=int, default=2, help='bounding box and font size thickness') - parser.add_argument('--seed', type=int, default=1, help='random seed to control bbox colors') - parser.add_argument('--nobbox', action='store_true', help='don`t show bounding box') - parser.add_argument('--nolabel', action='store_true', help='don`t show label') - parser.add_argument('--unique-track-color', action='store_true', help='show each track in unique color') - - - opt = parser.parse_args() - print(opt) - np.random.seed(opt.seed) - - sort_tracker = Sort(max_age=5, - min_hits=2, - iou_threshold=0.2) - - #check_requirements(exclude=('pycocotools', 'thop')) - - with torch.no_grad(): - if opt.update: # update all models (to fix SourceChangeWarning) - for opt.weights in ['yolov7.pt']: - detect() - strip_optimizer(opt.weights) - else: - detect() diff --git a/spaces/elplaguister/Yuuka_TTS/src/commons.py b/spaces/elplaguister/Yuuka_TTS/src/commons.py deleted file mode 100644 index 5e6fa0e298bc63a0494041672eb8a889644b3280..0000000000000000000000000000000000000000 --- a/spaces/elplaguister/Yuuka_TTS/src/commons.py +++ /dev/null @@ -1,173 +0,0 @@ -import math -import torch -from torch.nn import functional as F -import torch.jit - - -def script_method(fn, _rcb=None): - return fn - - -def script(obj, optimize=True, _frames_up=0, _rcb=None): - return obj - - -torch.jit.script_method = script_method -torch.jit.script = script - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - dilation)/2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def intersperse(lst, item): - result = [item] * (len(lst) * 2 + 1) - result[1::2] = lst - return result - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += 0.5 * (torch.exp(2. * logs_p) + ((m_p - m_q)**2)) * torch.exp(-2. 
* logs_q) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d( - length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = ( - math.log(float(max_timescale) / float(min_timescale)) / - (num_timescales - 1)) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2,3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): 
- parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1. / norm_type) - return total_norm - diff --git a/spaces/emc348/faces-through-time/criteria/id_loss.py b/spaces/emc348/faces-through-time/criteria/id_loss.py deleted file mode 100644 index 3504bdffc00082bddf6e758a9ae69b0ab7384466..0000000000000000000000000000000000000000 --- a/spaces/emc348/faces-through-time/criteria/id_loss.py +++ /dev/null @@ -1,64 +0,0 @@ -import torch -from torch import nn -import torch.nn.functional as F -from criteria.model_irse import Backbone -from criteria.backbones import get_model - - -class IDLoss(nn.Module): - """ - Computes a cosine similarity between people in two images. - Taken from TreB1eN's [1] implementation of InsightFace [2, 3], as used in pixel2style2pixel [4]. - - [1] https://github.com/TreB1eN/InsightFace_Pytorch - [2] https://github.com/deepinsight/insightface - [3] Deng, Jiankang and Guo, Jia and Niannan, Xue and Zafeiriou, Stefanos. - ArcFace: Additive Angular Margin Loss for Deep Face Recognition. In CVPR, 2019 - [4] https://github.com/eladrich/pixel2style2pixel - """ - - def __init__(self, model_path, official=False, device="cpu"): - """ - Arguments: - model_path (str): Path to IR-SE50 model. - """ - super(IDLoss, self).__init__() - print("Loading ResNet ArcFace") - self.official = official - if official: - self.facenet = get_model("r100", fp16=False) - else: - self.facenet = Backbone( - input_size=112, num_layers=50, drop_ratio=0.6, mode="ir_se" - ) - - self.facenet.load_state_dict(torch.load(model_path, map_location=device)) - self.face_pool = torch.nn.AdaptiveAvgPool2d((112, 112)) - self.facenet.eval() - - def extract_feats(self, x): - x = x[:, :, 35:223, 32:220] # Crop interesting region - x = self.face_pool(x) - x_feats = self.facenet(x) - return x_feats - - def forward(self, x, y): - """ - Arguments: - x (Tensor): The batch of original images - y (Tensor): The batch of generated images - - Returns: - loss (Tensor): Cosine similarity between the - features of the original and generated images. 
- - """ - - x_feats = self.extract_feats(x) - y_feats = self.extract_feats(y) - if self.official: - x_feats = F.normalize(x_feats) - y_feats = F.normalize(y_feats) - - loss = (1 - (x_feats * y_feats).sum(dim=1)).mean() - return loss diff --git a/spaces/emc348/faces-through-time/utils/log_utils.py b/spaces/emc348/faces-through-time/utils/log_utils.py deleted file mode 100644 index 7149cf8877be2759ed885901946db683d1295768..0000000000000000000000000000000000000000 --- a/spaces/emc348/faces-through-time/utils/log_utils.py +++ /dev/null @@ -1,79 +0,0 @@ -import numpy as np -from PIL import Image -import wandb -from configs import global_config -import torch -import matplotlib.pyplot as plt - - -def log_image_from_w(w, G, name): - img = get_image_from_w(w, G) - pillow_image = Image.fromarray(img) - wandb.log( - {f"{name}": [ - wandb.Image(pillow_image, caption=f"current inversion {name}")]}, - step=global_config.training_step) - - -def log_images_from_w(ws, G, names): - for name, w in zip(names, ws): - w = w.to(global_config.device) - log_image_from_w(w, G, name) - - -def plot_image_from_w(w, G): - img = get_image_from_w(w, G) - pillow_image = Image.fromarray(img) - plt.imshow(pillow_image) - plt.show() - - -def plot_image(img): - img = (img.permute(0, 2, 3, 1) * 127.5 + 128).clamp(0, 255).to(torch.uint8).detach().cpu().numpy() - pillow_image = Image.fromarray(img[0]) - plt.imshow(pillow_image) - plt.show() - - -def save_image(name, method_type, results_dir, image, run_id): - image.save(f'{results_dir}/{method_type}_{name}_{run_id}.jpg') - - -def save_w(w, G, name, method_type, results_dir): - im = get_image_from_w(w, G) - im = Image.fromarray(im, mode='RGB') - save_image(name, method_type, results_dir, im) - - -def save_concat_image(base_dir, image_latents, new_inv_image_latent, new_G, - old_G, - file_name, - extra_image=None): - images_to_save = [] - if extra_image is not None: - images_to_save.append(extra_image) - for latent in image_latents: - images_to_save.append(get_image_from_w(latent, old_G)) - images_to_save.append(get_image_from_w(new_inv_image_latent, new_G)) - result_image = create_alongside_images(images_to_save) - result_image.save(f'{base_dir}/{file_name}.jpg') - - -def save_single_image(base_dir, image_latent, G, file_name): - image_to_save = get_image_from_w(image_latent, G) - image_to_save = Image.fromarray(image_to_save, mode='RGB') - image_to_save.save(f'{base_dir}/{file_name}.jpg') - - -def create_alongside_images(images): - res = np.concatenate([np.array(image) for image in images], axis=1) - return Image.fromarray(res, mode='RGB') - - -def get_image_from_w(w, G): - if len(w.size()) <= 2: - w = w.unsqueeze(0) - with torch.no_grad(): - img = G.synthesis(w, noise_mode='const') - img = (img.permute(0, 2, 3, 1) * 127.5 + 128).clamp(0, 255).to(torch.uint8).detach().cpu().numpy() - return img[0] diff --git a/spaces/exbert-project/exbert/README.md b/spaces/exbert-project/exbert/README.md deleted file mode 100644 index e0aa5b529d1d534375c665a9fae555c34526f2cd..0000000000000000000000000000000000000000 --- a/spaces/exbert-project/exbert/README.md +++ /dev/null @@ -1,40 +0,0 @@ ---- -title: Exbert -emoji: 🌍 -colorFrom: green -colorTo: green -sdk: docker -pinned: false -license: apache-2.0 -base_path: /client/exBERT.html ---- - -# exFormer - -[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) - -## Description -This repository contains the attention visualization component from exBERT and a minimalized server that does 
not support corpus indexing or search by embedding. - -The performance of this app will exceed that of exBERT on a slower internet connection as signifcantly less information (like that of the embeddings and results from FAISS searches) is needed to be sent over the REST API. - - -## Getting Started -### Install the environment -You can install the environment needed to run the server with conda: - -`conda env create -f environment.yml` - -This will create an environment named `exformer`. - -### Backend -You can start the server by `conda activate exformer` followed by `python server/main.py`. - -### Frontend -The compiled versions of the frontend are already included in the `client/dist` folder. You can get setup to develop on the frontend by the following: - -1. `cd client/src` -2. `npm install` -3. `npm run ww` - -This will allow you to change the typescript files and see the changes in your browser on refresh. diff --git a/spaces/farkmu45/instagram-clothes-psychology-streamlit/app.py b/spaces/farkmu45/instagram-clothes-psychology-streamlit/app.py deleted file mode 100644 index 11beb49c02528c847f04efe72eb77a95200c0f9a..0000000000000000000000000000000000000000 --- a/spaces/farkmu45/instagram-clothes-psychology-streamlit/app.py +++ /dev/null @@ -1,69 +0,0 @@ -from statistics import mode - -import streamlit as st -from fastai.vision.all import * -from PIL import Image - -from Processor import Processor - - -@st.experimental_singleton -def initialize_app(): - return Processor(load_learner('model.pkl')) - - -def process_images(images, processor: Processor): - filtered_images = [] - result = [] - class_names = list( - map(lambda name: {name: 0}, processor.inference.dls.vocab)) - - for image in images: - image = Image.open(image) - if processor.filter_image(image): - filtered_images.append(np.asarray(image)) - - for img in filtered_images: - result.append(processor.classify_image(img)[0]) - - if len(result) == 0: - return None - - for res_name in result: - for idx, class_name in enumerate(class_names): - for key, value in class_name.items(): - if res_name == key: - class_names[idx][key] = value + 1 - - outfit = mode(result) - - with open(f'./texts/{outfit}.txt') as text: - personality = text.read() - - return {'outfit': outfit.title(), 'personality': personality, - 'chart': class_names} - - -# Streamlit UI - -processor = initialize_app() - -st.title('Instagram Clothes Psychology (Photos)') -uploaded_photos = st.file_uploader(label="Upload photos", type=[ - 'jpg', 'jpeg'], accept_multiple_files=True) - -photos_empty = True if len(uploaded_photos) == 0 else False - -is_clicked = st.button(label='Predict Personality', - disabled=photos_empty) - -if is_clicked: - with st.spinner('Please wait...'): - result = process_images(uploaded_photos, processor) - if result is None: - st.write('Tidak ditemukan gambar yang valid') - else: - st.header('Your personality is..') - st.subheader(result['outfit']) - st.markdown(result['personality']) - st.bar_chart(result['chart']) diff --git a/spaces/fatiXbelha/sd/APK One Shot One Kill How to Survive and Destroy Your Enemies.md b/spaces/fatiXbelha/sd/APK One Shot One Kill How to Survive and Destroy Your Enemies.md deleted file mode 100644 index de5e2db3d90113f7ce0b579b51777a5133300f1a..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/APK One Shot One Kill How to Survive and Destroy Your Enemies.md +++ /dev/null @@ -1,137 +0,0 @@ - -

        APK One Shot One Kill: A Guide for Beginners

        -

        If you are looking for a fast-paced, action-packed, and thrilling mobile shooter game, you might want to check out APK One Shot One Kill. This game is one of the most popular online multiplayer games on Android devices, with millions of players around the world. In this article, we will give you a comprehensive guide on how to download, install, play, win, and enjoy APK One Shot One Kill.

        -

        apk one shot one kill


        DOWNLOAD →→→ https://urllie.com/2uNwJp



        -

        What is APK One Shot One Kill?

        -

        APK One Shot One Kill is a first-person shooter game that lets you compete with other players in various modes and maps. You can choose from a wide range of weapons, such as assault rifles, sniper rifles, shotguns, pistols, grenades, and more. You can also customize your character and weapon with different skins, stickers, and accessories.

        -

        The game has four main modes: Deathmatch, Team Deathmatch, Capture The Flag, and Zombie Mode. In Deathmatch mode, you have to kill as many enemies as possible within a time limit. In Team Deathmatch mode, you have to work with your team members to kill more enemies than the opposing team. In Capture The Flag mode, you have to capture the enemy's flag and bring it back to your base while defending your own flag. In Zombie Mode, you have to survive against waves of zombies that become stronger as time goes by.

        -

        The game also has various maps that offer different challenges and environments. You can play in urban settings, desert landscapes, snowy mountains, tropical islands, and more. Each map has its own features and obstacles that you have to adapt to.

        -

        How to Download and Install APK One Shot One Kill?

        -

        Downloading the APK file

        -

To download APK One Shot One Kill, you need to find a reliable source that offers the latest version of the game. You can use a search engine like Bing to find such sources, or you can use the link below to download the APK file from a trusted website. The APK file is about 100 MB in size, so make sure you have enough storage space and a stable internet connection before downloading it.

        -

        [Download APK One Shot One Kill here]

        -


        -

        Installing the APK file

        -

        Once you have downloaded the APK file, you need to install it on your Android device. To do this, you need to enable the installation of apps from unknown sources on your device. This is a security feature that prevents malicious apps from harming your device. To enable this feature, follow these steps:

        -
          -
1. Go to your device's Settings and tap on Security or Privacy.
2. Find the option that says Unknown Sources or Install Unknown Apps and toggle it on.
3. You may see a warning message that says installing apps from unknown sources may harm your device. Tap on OK or Allow to proceed.
        -

        Now, you can install the APK file by following these steps:

        -
          -
1. Locate the APK file on your device's file manager or download folder and tap on it.
2. You may see a pop-up window that asks you to confirm the installation. Tap on Install or Next to continue.
3. Wait for the installation process to finish. It may take a few minutes depending on your device's performance.
4. Once the installation is done, you may see a message that says App Installed or Done. Tap on Open or Launch to start playing the game.
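For readers comfortable with a command line, the same sideload can be scripted instead of tapped through. The snippet below is an optional, illustrative sketch, not part of the game's own tooling: it assumes the Android platform tools (adb) are installed and on your PATH, that USB debugging is enabled on the device, and that the downloaded file is named one_shot_one_kill.apk, which is a placeholder name.

```python
import subprocess

def sideload_apk(apk_path: str) -> None:
    """Install an APK over USB with adb (assumes adb is on PATH and USB debugging is enabled)."""
    # The -r flag reinstalls the app if an older build is already on the device.
    subprocess.run(["adb", "install", "-r", apk_path], check=True)

if __name__ == "__main__":
    sideload_apk("one_shot_one_kill.apk")  # placeholder file name
```

If the command reports success, the game shows up in the app drawer exactly as it does after the manual installation above.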
        -

        How to Play APK One Shot One Kill?

        -

        Choosing a weapon and a mode

        -

        When you launch the game, you will see the main menu where you can choose your weapon and mode. You can swipe left or right to browse through the different weapons available in the game. You can also tap on the weapon icon to see its stats, such as damage, accuracy, fire rate, and magazine size. You can also tap on the skin icon to change the appearance of your weapon with different colors and patterns.

        -

        To choose a mode, you can tap on the mode icon at the bottom of the screen. You will see four options: Deathmatch, Team Deathmatch, Capture The Flag, and Zombie Mode. You can tap on each option to see its description and rules. You can also tap on the map icon to see the available maps for each mode. You can swipe left or right to browse through the maps, and tap on one to select it.

        -

        Once you have chosen your weapon and mode, you can tap on the play icon at the top right corner of the screen. You will be matched with other players who have chosen the same mode and map as you. The game will start after a few seconds of loading.

        -

        Moving and shooting on the battlefield

        -

        To move on the battlefield, you can use the virtual joystick on the left side of the screen. You can drag it in any direction to move your character accordingly. To look around, you can swipe on the right side of the screen. You can also double-tap on the right side of the screen to switch between first-person and third-person views.

        -

To shoot, you can tap on the fire button on the right side of the screen.

-

To buy and upgrade weapons, equipment, and other items, you need to earn coins and diamonds, which are the in-game currencies. You can earn coins by playing and winning matches, completing events and challenges, and watching ads. You can earn diamonds by buying them with real money, or by getting them as rewards from events and challenges.

        -

        To access the shop, you can tap on the shop icon on the main menu. You will see four tabs: Weapons, Equipment, Skins, and Stickers. You can tap on each tab to see the items available for purchase or upgrade.

        -

        In the Weapons tab, you can buy new weapons or upgrade your existing ones. Each weapon has four attributes: Damage, Accuracy, Fire Rate, and Magazine Size. You can upgrade each attribute by spending coins or diamonds. Upgrading your weapons will make them more powerful and effective in combat.

        -

        In the Equipment tab, you can buy new equipment or upgrade your existing ones. Each equipment has a specific function and benefit. For example, you can buy a helmet that reduces headshot damage, a vest that increases your health, a backpack that increases your ammo capacity, and so on. You can upgrade your equipment by spending coins or diamonds. Upgrading your equipment will make you more durable and versatile in combat.

        -

        In the Skins tab, you can buy new skins for your character and weapon. Skins are cosmetic items that change the appearance of your character and weapon. They do not affect your performance or stats, but they can make you look more cool and unique. You can buy skins by spending coins or diamonds.

        -

        In the Stickers tab, you can buy new stickers for your weapon. Stickers are cosmetic items that add some flair and personality to your weapon. They do not affect your performance or stats, but they can make your weapon more fun and expressive. You can buy stickers by spending coins or diamonds.

        -

        How to Win in APK One Shot One Kill?

        -

        Mastering the headshot

        -

        One of the most important skills to master in APK One Shot One Kill is the headshot. A headshot is when you hit an enemy's head with your bullet, which deals more damage than hitting any other part of their body. A headshot can kill an enemy instantly, or at least severely injure them.

        -

        To master the headshot, you need to practice your aim and timing. You need to aim for the head of your enemy, which is usually the highest point of their body. You also need to time your shot when they are exposed and not moving too fast. You can use the aim button to zoom in and adjust your aim more precisely.

        -

        You also need to consider the distance and the bullet drop of your weapon. The farther away your enemy is, the more you have to adjust your aim upwards to compensate for the gravity that pulls your bullet down. The bullet drop varies depending on the type of weapon you are using. For example, sniper rifles have less bullet drop than assault rifles.

        -

        You can practice your headshot skills in the training mode, where you can shoot at dummy targets that have different distances and movements. You can also practice in the real matches, where you can challenge yourself against real players who have different skills and strategies.

        -

        Using grenades and other items

        -

        Another skill to master in APK One Shot One Kill is using grenades and other items effectively. Grenades and other items are consumable items that you can use in combat to gain an advantage over your enemies. You can find grenades and other items scattered around the map, or you can buy them from the shop.

        -

        There are four types of grenades in the game: Frag Grenade, Smoke Grenade, Flash Grenade, and Molotov Cocktail. Each grenade has a different effect and purpose.

        -
          -
• Frag Grenade: This grenade explodes after a few seconds of being thrown, dealing damage to anyone nearby. You can use this grenade to kill or injure enemies who are hiding behind cover or clustered together.
• Smoke Grenade: This grenade releases a cloud of smoke after being thrown, obscuring the vision of anyone inside or outside it. You can use this grenade to create a diversion, escape from a dangerous situation, or cover your movement.
• Flash Grenade: This grenade emits a bright flash of light after being thrown, blinding anyone who looks at it for a few seconds. You can use this grenade to stun enemies who are facing you, giving you a chance to shoot them while they are vulnerable.
• Molotov Cocktail: This grenade creates a fire after being thrown, burning anyone who steps on it for a few seconds. You can use this grenade to block an enemy's path, force them out of cover, or damage them over time.
        -

        To use a grenade or another item, you need to tap on the item icon on the bottom right corner of the screen. You will see a list of items that you have in your inventory. You can swipe left or right to select the item you want to use. Then, you can tap on the item icon again to throw it. You can also drag the item icon to aim and adjust the trajectory of your throw.

        -

        You need to use grenades and other items strategically, depending on the situation and the mode. You need to consider the timing, the distance, the angle, and the effect of your throw. You also need to be careful not to harm yourself or your teammates with your own grenades or items.

        -

        Working with your team

        -

        The final skill to master in APK One Shot One Kill is working with your team. Working with your team is essential for winning in Team Deathmatch and Capture The Flag modes, where you have to cooperate and coordinate with your teammates to defeat the enemy team.

        -

        To work with your team, you need to communicate, support, and follow your teammates. You can communicate with your teammates by using the chat feature on the top right corner of the screen. You can type or use voice messages to talk to your teammates. You can also use the quick chat feature on the bottom left corner of the screen, where you can tap on preset messages such as "Follow me", "Cover me", "Need backup", and so on.

        -

        You can support your teammates by providing them with fire cover, health packs, ammo boxes, or other items. You can also revive them if they are downed by tapping on their icon on the screen. You can follow your teammates by sticking close to them, moving as a group, and following their lead.

        -

        Working with your team will make you more effective and efficient in combat, as you can share information, resources, and tactics. You can also create synergy and teamwork, which will give you an edge over your enemies.

        -

        How to Enjoy APK One Shot One Kill?

        -

        Customizing your character and weapon

        -

        One way to enjoy APK One Shot One Kill is by customizing your character and weapon with different skins, stickers, and accessories. Customizing your character and weapon will make you stand out from the crowd, express your personality, and have fun.

        -

        To customize your character and weapon, you can tap on the customize icon on the main menu. You will see two tabs: Character and Weapon. You can tap on each tab to see the options available for customization.

        -

        In the Character tab, you can change the appearance of your character's face, hair, eyes, mouth, nose, and ears. You can also change the color of your character's skin, hair, eyes, and clothes. You can also add accessories such as hats, glasses, masks, earrings, necklaces, and more.

        -

        In the Weapon tab, you can change the appearance of your weapon's body, barrel, scope, magazine, stock, and grip. You can also change the color of your weapon's parts and add stickers that show different images or texts.

        -

        You can buy new skins, stickers, and accessories by spending coins or diamonds in the shop. You can also get them as rewards from events and challenges. You can mix and match different skins, stickers, and accessories to create your own unique look.

        -

        Joining a clan or creating your own

        -

        Another way to enjoy APK One Shot One Kill is by joining a clan or creating your own. A clan is a group of players who share a common name, tag, logo, and chat room. Joining or creating a clan will allow you to make friends, chat with other players, and participate in clan wars.

        -

        To join or create a clan, you can tap on the clan icon on the main menu. You will see two tabs: Join and Create. You can tap on each tab to see the options available for joining or creating a clan.

        -

        In the Join tab, you can see a list of clans that are open for new members. You can browse through the clans by swiping left or right, and tap on one to see its details, such as name, tag, logo, description, members, and rank. You can also use the search feature to find a specific clan by its name or tag. To join a clan, you need to tap on the join button and wait for the clan leader to accept your request.

        -

        In the Create tab, you can create your own clan by filling in the required information, such as name, tag, logo, description, and language. You also need to set the clan type, which can be open, closed, or invite only. Open clans are open for anyone to join without approval. Closed clans are closed for anyone to join unless they are invited by the clan leader. Invite only clans are only open for players who are invited by the clan leader. To create a clan, you need to spend 1000 coins or 100 diamonds.

        -

        Once you have joined or created a clan, you can access the clan chat room by tapping on the chat icon on the main menu. You can chat with your clan members, send them gifts, invite them to matches, and challenge them to duels. You can also participate in clan wars, which are competitions between clans that happen every week. Clan wars will reward you with coins, diamonds, and other items based on your clan's performance.

        -

        Participating in events and challenges

        -

        The final way to enjoy APK One Shot One Kill is by participating in events and challenges. Events and challenges are special missions that give you extra rewards for completing certain tasks or objectives. You can access the events and challenges by tapping on the event icon on the main menu.

        -

        There are two types of events and challenges: daily and weekly. Daily events and challenges reset every day, while weekly events and challenges reset every week. You can see a list of events and challenges that are available for you to complete by swiping left or right. You can also see the rewards that you will get for completing them by tapping on them.

        -

        Some examples of events and challenges are:

        -
          -
• Kill 10 enemies with a headshot in Deathmatch mode.
• Win 5 matches in Team Deathmatch mode.
• Capture 3 flags in Capture The Flag mode.
• Survive 10 waves in Zombie Mode.
• Spend 5000 coins in the shop.
• Join or create a clan.
        -

        To participate in an event or challenge, you need to tap on the start button and follow the instructions. You will see your progress on the top of the screen as you play. Once you have completed an event or challenge, you will see a message that says Event Completed or Challenge Completed. You can then tap on the claim button to claim your reward.

        -

        Conclusion

        -

        APK One Shot One Kill is a fun and exciting mobile shooter game that offers you a variety of weapons, modes, maps, and features. You can download and install it easily on your Android device by following our guide. You can also play it skillfully and win it easily by following our tips and tricks. You can also enjoy it fully by customizing your character and weapon, joining or creating a clan, and participating in events and challenges.

        -

        If you are looking for a game that will challenge your reflexes, strategy, and teamwork, APK One Shot One Kill is the game for you. Download it now and join the millions of players who are having a blast with this game. You will not regret it!

        -

        FAQs

        -

        Here are some frequently asked questions about APK One Shot One Kill, along with their answers.

        -
          -
1. Q: Is APK One Shot One Kill free to play?
   A: Yes, APK One Shot One Kill is free to play. You can download and install it without paying anything. However, the game does offer in-app purchases that allow you to buy diamonds, which are used to buy or upgrade weapons, equipment, skins, stickers, and other items.
2. Q: Is APK One Shot One Kill safe to download and install?
   A: Yes, APK One Shot One Kill is safe to download and install, as long as you download it from a trusted source. We recommend downloading it from the link we provided in this article, which is a verified and secure website. You should also enable the installation of apps from unknown sources on your device, as explained in our guide.
3. Q: Is APK One Shot One Kill compatible with my device?
   A: APK One Shot One Kill is compatible with most Android devices that have Android 4.4 or higher. However, some devices may have performance issues or bugs due to different specifications or settings. If you encounter any problems while playing the game, you can contact the developer through their email or social media accounts.
4. Q: How can I contact the developer of APK One Shot One Kill?
   A: You can contact the developer of APK One Shot One Kill through their email address or their social media accounts. Their email address is [email protected], and their social media accounts are Facebook, Twitter, Instagram, and YouTube. You can also visit their website for more information about the game.
5. Q: How can I get more coins and diamonds in APK One Shot One Kill?
   A: You can get more coins and diamonds in APK One Shot One Kill by playing and winning matches, completing events and challenges, watching ads, or buying them with real money. You can also get coins and diamonds as rewards from clan wars or other special events.

        -
        -
        \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Build Your Dream City with Mod SimCity BuildIt APK Terbaru (Free Money and Golden Keys).md b/spaces/fatiXbelha/sd/Build Your Dream City with Mod SimCity BuildIt APK Terbaru (Free Money and Golden Keys).md deleted file mode 100644 index 11240a594888f6e52f5c9331773f7859825eb93e..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Build Your Dream City with Mod SimCity BuildIt APK Terbaru (Free Money and Golden Keys).md +++ /dev/null @@ -1,100 +0,0 @@ - -

        Mod SimCity BuildIt APK Terbaru: A New Way to Enjoy the City Building Game

        -

        If you are a fan of city building games, you might have heard of SimCity BuildIt, a popular mobile game by Electronic Arts (EA). In this game, you can create and manage your own virtual city, with various buildings, services, and specializations. You can also trade with other players, join clubs, and participate in contests and wars. However, if you want to experience the game in a different way, you might want to try Mod SimCity BuildIt APK Terbaru, a modified version of the game that gives you unlimited money and golden keys. In this article, we will explain what SimCity BuildIt is, what Mod SimCity BuildIt APK Terbaru is, and what are the pros and cons of using it.

        -

        mod simcity buildit apk terbaru


        Download --->>> https://urllie.com/2uND4p



        -

        What is SimCity BuildIt?

        -

        A mobile city building game by EA

        -

        SimCity BuildIt is a free-to-play mobile game that was released in 2014 by EA. It is part of the SimCity series, which has been around since 1989. The game allows you to build your own city from scratch, starting with basic roads and residential zones. As your city grows, you need to provide services such as power, water, sewage, waste management, fire, police, health, and education. You also need to balance your budget, population, happiness, and environment. You can customize your city with various specializations, such as parks, landmarks, entertainment, gambling, education, transportation, beach, mountain, and more. You can also unlock different regions, such as Green Valley, Cactus Canyon, Sunny Isles, Frosty Fjords, and Limestone Cliffs.

        -

        Features of the game

        -

        SimCity BuildIt has many features that make it an engaging and fun game. Some of these features are:

        -
          -
• You can design your city in any way you want, with flexible road placement and building rotation.
• You can interact with your citizens and see their opinions and needs.
• You can trade with other players in the Global Trade HQ or join clubs to chat and cooperate.
• You can compete with other players in the Contest of Mayors or join forces in Club Wars.
• You can create futuristic cities with OMEGA buildings and drones.
• You can unleash disasters on your city or other players' cities with Vu Tower.
• You can complete daily challenges and quests to earn rewards.
        -

        What is Mod SimCity BuildIt APK Terbaru?

        -

        A modified version of the game with unlimited money and golden keys

        -

        Mod SimCity BuildIt APK Terbaru is a modified version of the original game that gives you unlimited money (simoleons) and golden keys. These are two important resources in the game that allow you to buy buildings, services, specializations, expansions, upgrades, and more. Normally, you would have to earn these resources by playing the game or spending real money. However, with Mod SimCity BuildIt APK Terbaru, you can get them for free without any effort.

        -

        How to download and install it

        -

To download and install Mod SimCity BuildIt APK Terbaru, follow these steps (a scripted alternative using adb is sketched after the list):

        -
          -
        1. Go to a website that provides the modded APK file. For example, you can go to this link to download the latest version of the mod.
        2. -
        3. Before installing the APK file, you need to enable the installation of apps from unknown sources on your device. To do this, go to Settings > Security > Unknown Sources and toggle it on.
        4. -
        5. Locate the downloaded APK file on your device and tap on it to start the installation process. Follow the instructions on the screen to complete the installation.
        6. -
        7. Launch the game and enjoy the unlimited money and golden keys.
        8. -
        -
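For readers who prefer a command line, the same sideload can be scripted. The snippet below is only an illustrative sketch in Python: it assumes adb is installed and on your PATH, USB debugging is enabled on the phone, and the downloaded file is named mod-simcity-buildit.apk (a placeholder name, not an official file).

```python
import subprocess
from pathlib import Path

# Placeholder file name; replace it with the APK you actually downloaded.
APK_PATH = Path("mod-simcity-buildit.apk")

def sideload(apk: Path) -> None:
    """Install (or reinstall) an APK on a USB-connected Android device via adb."""
    if not apk.exists():
        raise FileNotFoundError(f"APK not found: {apk}")
    # "-r" reinstalls the app while keeping its existing data.
    subprocess.run(["adb", "install", "-r", str(apk)], check=True)

if __name__ == "__main__":
    sideload(APK_PATH)
```

Either route ends the same way: once the APK is installed, launch the game from the app drawer.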

        What are the benefits of using Mod SimCity BuildIt APK Terbaru?

        -

        Build your dream city without any limitations

        -

        One of the main benefits of using Mod SimCity BuildIt APK Terbaru is that you can build your dream city without any limitations. You don't have to worry about running out of money or golden keys, which means you can buy and place any buildings, services, specializations, expansions, upgrades, and more that you want. You can also speed up the production and construction time with unlimited simcash. You can create a city that suits your style and preferences, whether it's a modern metropolis, a green paradise, a coastal resort, or a mountain retreat.

        -

        Unlock special buildings and services

        -

        Another benefit of using Mod SimCity BuildIt APK Terbaru is that you can unlock special buildings and services that are normally hard to get or require real money. For example, you can unlock the Maxis Manor, which provides fire, police, and health coverage to a large area. You can also unlock the OMEGA Research Center, which allows you to produce OMEGA items and drones. You can also unlock premium buildings, such as landmarks, stadiums, casinos, amusement parks, and more. These buildings and services can enhance the look and functionality of your city.

        -

        Compete with other players online

        -

        A third benefit of using Mod SimCity BuildIt APK Terbaru is that you can compete with other players online in various modes. You can join the Contest of Mayors and rank up in the leaderboards by completing tasks and earning points. You can also join Club Wars and team up with other players to attack and defend cities. You can also trade with other players in the Global Trade HQ or join clubs to chat and cooperate. These modes can make the game more fun and social.

        -

        mod simcity buildit apk terbaru unlimited money
        -download mod simcity buildit apk terbaru offline
        -cara instal mod simcity buildit apk terbaru 2023
        -mod simcity buildit apk terbaru tanpa root
        -review mod simcity buildit apk terbaru android
        -mod simcity buildit apk terbaru mega
        -link mod simcity buildit apk terbaru gratis
        -mod simcity buildit apk terbaru no ads
        -tutorial mod simcity buildit apk terbaru lengkap
        -mod simcity buildit apk terbaru update
        -mod simcity buildit apk terbaru full version
        -cheat mod simcity buildit apk terbaru 2022
        -mod simcity buildit apk terbaru anti banned
        -fitur mod simcity buildit apk terbaru premium
        -mod simcity buildit apk terbaru hack
        -gameplay mod simcity buildit apk terbaru indonesia
        -mod simcity buildit apk terbaru latest
        -tips mod simcity buildit apk terbaru pro
        -mod simcity buildit apk terbaru free download
        -mod simcity buildit apk terbaru cracked
        -mod simcity buildit apk terbaru unlimited coins and keys
        -kelebihan dan kekurangan mod simcity buildit apk terbaru 2021
        -mod simcity buildit apk terbaru for pc
        -perbedaan mod simcity buildit apk terbaru dengan versi original
        -mod simcity buildit apk terbaru new version
        -rating mod simcity buildit apk terbaru 2020
        -mod simcity buildit apk terbaru online
        -solusi masalah mod simcity buildit apk terbaru error
        -mod simcity buildit apk terbaru best
        -panduan mod simcity buildit apk terbaru bahasa indonesia
        -mod simcity buildit apk terbaru unlimited everything
        -video mod simcity buildit apk terbaru youtube
        -cara mendapatkan mod simcity buildit apk terbaru vip
        -mod simcity buildit apk terbaru no verification
        -rekomendasi mod simcity buildit apk terbaru 2019
        -mod simcity buildit apk terbaru for ios
        -spesifikasi minimal untuk menjalankan mod simcity buildit apk terbaru 2018
        -testimoni pengguna mod simcity buildit apk terbaru 2017
        -screenshot mod simcity buildit apk terbaru 2016
        -keuntungan menggunakan mod simcity buildit apk terbaru 2015

        -

        What are the drawbacks of using Mod SimCity BuildIt APK Terbaru?

        -

        Possible security risks and compatibility issues

        -

        One of the drawbacks of using Mod SimCity BuildIt APK Terbaru is that it may pose some security risks and compatibility issues for your device. Since the modded APK file is not from an official source, it may contain viruses, malware, or spyware that can harm your device or steal your personal information. It may also not be compatible with your device's operating system or hardware specifications, which may cause crashes, glitches, or errors. It may also not work with the latest updates or patches of the original game.

        -

        Loss of challenge and satisfaction

        -

        Another drawback of using Mod SimCity BuildIt APK Terbaru is that it may reduce the challenge and satisfaction of playing the game. Since you have unlimited money and golden keys, you don't have to work hard or plan carefully to build your city. You don't have to face any difficulties or obstacles that make the game interesting and rewarding. You don't have to earn your achievements or rewards by playing fair and square. You may lose the sense of accomplishment and enjoyment that comes from playing the game normally.

        -

        Violation of the game's terms of service

        -

        A third drawback of using Mod SimCity BuildIt APK Terbaru is that it may violate the game's terms of service. By using a modified version of the game, you are breaking the rules and agreements that you accepted when you downloaded the original game. This may result in penalties or consequences from EA, such as banning your account, deleting your progress, or suspending your access to online features. You may also face legal actions from EA for infringing their intellectual property rights.

        -

        Conclusion

        -

        Mod SimCity BuildIt APK Terbaru is a modified version of SimCity BuildIt that gives you unlimited money and golden keys. It has some benefits, such as building your dream city without any limitations, unlocking special buildings and services, and competing with other players online. However, it also has some drawbacks, such as possible security risks and compatibility issues, loss of challenge and satisfaction, and violation of the game's terms of service. Therefore, you should weigh the pros and cons carefully before deciding whether to use it or not.

        -

        FAQs

        -
          -
        • Q: Is Mod SimCity BuildIt APK Terbaru safe to use? -A: Mod SimCity BuildIt APK Terbaru is not an official version of the game, and it may contain viruses, malware, or spyware that can harm your device or steal your personal information. It may also not be compatible with your device's operating system or hardware specifications, which may cause crashes, glitches, or errors. Therefore, it is not safe to use, and you should download it at your own risk.
        • -
        • Q: How can I get money and golden keys in SimCity BuildIt without using Mod SimCity BuildIt APK Terbaru? -A: There are several ways to get money and golden keys in SimCity BuildIt without using Mod SimCity BuildIt APK Terbaru. Some of these ways are: - Completing tasks and quests in the game. - Selling items in the Global Trade HQ or trading with other players. - Collecting taxes from your citizens and revenue from your buildings. - Completing achievements and milestones in the game. - Watching ads or completing offers in the game. - Buying them with real money in the game store.
        • -
        • Q: What are some alternatives to Mod SimCity BuildIt APK Terbaru? -A: If you are looking for some alternatives to Mod SimCity BuildIt APK Terbaru, you can try some other city building games that are similar to SimCity BuildIt. Some of these games are: - Megapolis: A city building game that lets you create a megacity with various buildings, infrastructure, and technologies. - City Island 5: A city building game that lets you explore and develop different islands with various themes and features. - Township: A city building game that lets you combine farming and town management with various crops, animals, and facilities. - Pocket City: A city building game that lets you create a city with simple and intuitive controls and graphics.
        • -
        • Q: How can I contact EA if I have any questions or issues with SimCity BuildIt? -A: If you have any questions or issues with SimCity BuildIt, you can contact EA through the following ways: - Visiting their official website at https://www.ea.com/games/simcity/simcity-buildit - Visiting their help center at https://help.ea.com/en/simcity/simcity-buildit/ - Visiting their community forums at https://answers.ea.com/t5/SimCity-BuildIt/ct-p/SimCity_BuildIt - Visiting their social media pages on Facebook, Twitter, Instagram, or YouTube.
        • -
        • Q: How can I give feedback or suggestions for SimCity BuildIt? -A: If you want to give feedback or suggestions for SimCity BuildIt, you can do so by: - Rating and reviewing the game on the app store or Google Play store. - Posting your feedback or suggestions on the community forums at https://answers.ea.com/t5/SimCity-BuildIt/ct-p/SimCity_BuildIt - Sending an email to simcity-buildit-support@ea.com
        • -

        401be4b1e0
        -
        -
        \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Download Policegiri Dvdrip Download.md b/spaces/fatiXbelha/sd/Download Policegiri Dvdrip Download.md deleted file mode 100644 index 4d8147dc27e2e2b9c7cea5c55c232d3d51ad1307..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Download Policegiri Dvdrip Download.md +++ /dev/null @@ -1,80 +0,0 @@ -## download Policegiri dvdrip download - - - - - - - - - -**LINK ---> [https://tweeat.com/2txiyM](https://tweeat.com/2txiyM)** - - - - - - - - - - - - I'll try to create that. - -# How to Download Policegiri DVDRip Online - - - -Policegiri is a 2013 Bollywood action comedy film starring Sanjay Dutt, Prachi Desai and Prakash Raj. The film is a remake of the 2003 Tamil film Saamy, directed by Hari. Policegiri follows the story of DCP Rudra Aditya Devraj, a corrupt but fearless cop who takes on the local mafia kingpin Nagori Subramaniam. - - - -If you want to watch Policegiri online, you can download the DVDRip version from various websites. A DVDRip is a copy of the original DVD that has been compressed to fit on a single CD or DVD. The quality of a DVDRip is usually good, but not as good as the original DVD. - - - -Here are some steps to download Policegiri DVDRip online: - - - -1. Find a reliable website that offers Policegiri DVDRip download. You can use a search engine like Google or Bing to find such websites. Some examples are moviescounter.com, filmywap.com and worldfree4u.lol. - -2. Choose the download link that suits your preference. Some websites may offer different formats, sizes and languages for the DVDRip. You may also need to register or create an account on some websites before downloading. - -3. Click on the download link and wait for the file to be downloaded. Depending on your internet speed and the size of the file, this may take some time. You may also need to enter a captcha code or complete a survey to verify that you are not a robot. - -4. Once the file is downloaded, you can open it with a media player that supports the format. You may also need to extract the file if it is in a compressed format like ZIP or RAR. Enjoy watching Policegiri online! - - - -I'll try to create that. - -Policegiri is a film that combines action, comedy and romance. The film showcases Sanjay Dutt's charisma and versatility as an actor. He plays the role of a cop who bends the rules to bring justice to the common people. He also romances Prachi Desai, who plays a software engineer and the daughter of a politician. Prakash Raj plays the role of the villain, who is a ruthless and powerful gangster. - - - -The film has some memorable scenes and dialogues that will entertain the audience. Some of the highlights are the chase sequences, the fight scenes and the songs. The film also has a message of honesty and courage. The film is directed by K.S. Ravikumar, who is known for his blockbuster films in Tamil and Telugu cinema. - - - -Policegiri is a film that you can watch with your family and friends. It is a fun-filled and action-packed entertainer that will keep you hooked till the end. If you are a fan of Sanjay Dutt or Bollywood masala movies, you should not miss Policegiri. - -I'll try to create that. - -Policegiri is a film that has received mixed reviews from critics and audiences. Some have praised the film for its entertainment value and Sanjay Dutt's performance, while others have criticized the film for its lack of originality and logic. 
The film has also faced some controversy due to Sanjay Dutt's conviction in the 1993 Mumbai blasts case. The film was released on July 5, 2013, just a few days before Sanjay Dutt surrendered to serve his sentence. - - - -Policegiri is a film that has a loyal fan base among Sanjay Dutt's admirers. The film has also gained popularity among the online viewers who want to watch it for free. The film is available on various websites that offer Policegiri DVDRip download. However, downloading the film from these websites may be illegal and unsafe. It may also harm the film industry and the artists who work hard to make the films. - - - -Policegiri is a film that deserves a fair chance to be watched legally and ethically. The film is a tribute to Sanjay Dutt's career and legacy as an actor. The film is also a source of entertainment and inspiration for the viewers who love action and comedy. If you want to watch Policegiri online, you should download the DVDRip from a trusted and authorized website that respects the rights of the filmmakers and the viewers. - - dfd1c89656 - - - - - diff --git a/spaces/fatiXbelha/sd/Enjoy the Best Stickman Superhero Experience with MOD APK Latest Version.md b/spaces/fatiXbelha/sd/Enjoy the Best Stickman Superhero Experience with MOD APK Latest Version.md deleted file mode 100644 index 666797fd9e9857cf4192eb834d46228f8e8b9dcf..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Enjoy the Best Stickman Superhero Experience with MOD APK Latest Version.md +++ /dev/null @@ -1,73 +0,0 @@ - -

        Stickman Superhero Mod APK Latest Version: A Fun and Action-Packed Game for Android Users

        -

        If you are a fan of stickman games and superhero movies, then you will love Stickman Superhero Mod APK. This is a game that lets you play as various stickman superheroes with different costumes and abilities. You can fight against evil forces, save the world, and have fun at the same time.

        -

        What is Stickman Superhero Mod APK?

        -

        Stickman Superhero Mod APK is a modified version of the original Stickman Superhero game developed by Naxeex LLC. The original game is a free-to-play action-adventure game that features stickman characters inspired by popular superheroes like Spider-Man, Iron Man, Hulk, Thor, Captain America, Batman, Superman, and more. You can choose your favorite superhero and costume, complete missions and challenges, use your superpowers and skills, and enjoy the thrilling gameplay.

        -

        stickman superhero mod apk latest version


        Downloadhttps://urllie.com/2uNIjH



        -

The mod apk version of the game offers some extra features that make the game more enjoyable and easier to play. These features include all superheroes and costumes unlocked, unlimited coins and gems, no ads, and no root requirement. With these features, you can access all the content of the game without spending any money or watching any ads. You can also play the game on any Android device without rooting it.

        -

        Features of Stickman Superhero Mod APK

        -

        - Unlocked all superheroes and costumes

        -

        One of the best features of Stickman Superhero Mod APK is that it unlocks all the superheroes and costumes in the game. You can choose from over 50 stickman superheroes with different costumes and abilities. You can play as Spider-Stickman, Iron-Stickman, Hulk-Stickman, Thor-Stickman, Captain-Stickman, Batman-Stickman, Superman-Stickman, and many more. You can also customize your superhero with different colors, masks, capes, weapons, and accessories.

        -

        - Unlimited coins and gems

        -

        Another great feature of Stickman Superhero Mod APK is that it gives you unlimited coins and gems in the game. Coins and gems are the main currencies in the game that you can use to buy new superheroes, costumes, weapons, upgrades, skills, and more. With unlimited coins and gems, you can buy anything you want without worrying about running out of money. You can also upgrade your superhero to make them stronger, faster, and more powerful.

        -

        - No ads and no root required

        -

Last but not least, Stickman Superhero Mod APK removes all in-game ads and does not require root access to install. Ads can be annoying and distracting while you play, and they can also slow down your device and consume your data. With Stickman Superhero Mod APK, you can enjoy the game without any ads or interruptions, and you can install it on any Android device without rooting it, which makes the game more widely compatible and safer to use.

        -Pros and cons of Stickman Superhero Mod APK -

        Like any other game, Stickman Superhero Mod APK has its pros and cons. Here are some of them:

        -

        - Pros: Fun, addictive, and challenging gameplay; Variety of superheroes and costumes; High-quality graphics and sound effects; Free to play and modded features

        -

One of the main advantages of Stickman Superhero Mod APK is that it offers fun, addictive, and challenging gameplay. You can enjoy playing as different stickman superheroes with different costumes and abilities. You can also complete various missions and challenges, and use your superpowers and skills to defeat enemies and bosses. The game also has high-quality graphics and sound effects that make it more realistic and immersive. Moreover, the game is free to play, and its modded features make it more enjoyable and easier to play.

        -

        stickman superhero unlimited money mod apk
        -stickman superhero hack apk download
        -stickman superhero mod apk latest update
        -stickman superhero mod apk android 1
        -stickman superhero mod apk revdl
        -stickman superhero mod apk free shopping
        -stickman superhero mod apk offline
        -stickman superhero mod apk no ads
        -stickman superhero mod apk all characters unlocked
        -stickman superhero mod apk unlimited gems
        -stickman superhero mod apk rexdl
        -stickman superhero mod apk 2023
        -stickman superhero mod apk happymod
        -stickman superhero mod apk unlimited everything
        -stickman superhero mod apk premium
        -stickman superhero mod apk online
        -stickman superhero mod apk unlimited coins
        -stickman superhero mod apk new version
        -stickman superhero mod apk obb
        -stickman superhero mod apk full version
        -stickman superhero cheat apk download
        -stickman superhero cracked apk download
        -stickman superhero pro apk download
        -stickman superhero mega mod apk download
        -stickman superhero vip mod apk download
        -stickman superhero latest version hack apk
        -stickman superhero latest version mod menu apk
        -stickman superhero latest version unlocked apk
        -stickman superhero latest version premium apk
        -stickman superhero latest version cheat apk
        -download game stickman superhero mod apk terbaru
        -download game stickman superhero mod apk versi terbaru
        -download game stickman superhero mod apk unlimited money and gems
        -download game stickman superhero mod apk free shopping and no ads
        -download game stickman superhero mod apk offline and online mode
        -cara download game stickman superhero mod apk gratis
        -cara download game stickman superhero mod apk tanpa root
        -cara download game stickman superhero mod apk dengan mudah dan cepat
        -cara download game stickman superhero mod apk di android dan ios
        -cara download game stickman superhero mod apk versi terbaru 2023

        -

        - Cons: May not work on some devices; May cause some bugs and glitches; May not be compatible with the original version of the game

        -

        One of the main disadvantages of Stickman Superhero Mod APK is that it may not work on some devices. The game requires Android 4.4 or higher to run, and some devices may not support the game or the mod apk file. The game may also cause some bugs and glitches, such as crashing, freezing, lagging, or errors. These may affect your gaming experience and progress. Furthermore, the game may not be compatible with the original version of the game. This means that you may not be able to play online with other players or update the game to the latest version.

        -

        Conclusion

        -

        Stickman Superhero Mod APK is a fun and action-packed game for Android users who love stickman games and superhero movies. The game lets you play as various stickman superheroes with different costumes and abilities. You can fight against evil forces, save the world, and have fun at the same time. The game also has modded features that make the game more enjoyable and easier to play. However, the game may not work on some devices, may cause some bugs and glitches, and may not be compatible with the original version of the game. Therefore, you should download and install the game at your own risk.

        -

        FAQs

        -

        Here are some frequently asked questions about Stickman Superhero Mod APK:

        - - - - - - -
Q: Is Stickman Superhero Mod APK safe to use? A: Stickman Superhero Mod APK is safe to use as long as you download it from a trusted source. However, you should always scan the file for viruses before installing it on your device.
Q: Can I play Stickman Superhero Mod APK offline? A: Yes, you can play Stickman Superhero Mod APK offline without any internet connection. However, you may not be able to access some features or content that require an online connection.
Q: Can I play Stickman Superhero Mod APK online with other players? A: No, you cannot play Stickman Superhero Mod APK online with other players. The mod apk version of the game is not compatible with the original version of the game. Therefore, you can only play the game solo or with bots.
Q: How can I update Stickman Superhero Mod APK to the latest version? A: You cannot update Stickman Superhero Mod APK to the latest version through the Google Play Store or the app itself. You need to download and install the latest mod apk file from a trusted source every time there is a new update.
Q: How can I contact the developer of Stickman Superhero Mod APK? A: You can contact the developer of Stickman Superhero Mod APK by visiting their official website or their social media pages. You can also leave a comment or a review on their app page on the Google Play Store.

        197e85843d
        -
        -
        \ No newline at end of file diff --git a/spaces/felix-weiland/appstore-search/app.py b/spaces/felix-weiland/appstore-search/app.py deleted file mode 100644 index dc87de391eafc34e3e35e52a51b68d5b2a757f51..0000000000000000000000000000000000000000 --- a/spaces/felix-weiland/appstore-search/app.py +++ /dev/null @@ -1,40 +0,0 @@ -import requests -import pandas as pd -import streamlit as st -import base64 -import functions as f - -st.title("App Store Search") - -# User input -search_terms = st.text_input("Enter keyword(s) or phrase(s) to search for apps (comma-separated):") - -cc, sl = st.columns((5,2)) - -with cc: - country_codes = st.text_input("Enter one or more two-letter country code (e.g., 'GB' for the UK):", "GB") -with sl: - search_limit = st.number_input("Number of results per keyword:\n\n", min_value=1, max_value=1000, value=100, step=50) - -# Add rating filter slider -rating_filter = st.slider("Show apps with rating under:", min_value=0.0, max_value=5.0, value=5.0, step=0.1) - -if st.button("Search"): - if search_terms and country_codes: - app_data = f.init_search(search_terms, country_codes, limit=search_limit) - - # Filter rows based on rating - app_data = app_data[app_data['average_rating'] <= rating_filter] - - st.write(app_data) - - # Add download button - st.download_button( - label="Download CSV File", - data=f.to_csv(app_data), - file_name="app_data.csv", - mime="text/csv", - ) - - else: - st.warning("Please enter both search term(s) and country code.") diff --git a/spaces/feng2022/Time-TravelRephotography/Time_TravelRephotography/tools/__init__.py b/spaces/feng2022/Time-TravelRephotography/Time_TravelRephotography/tools/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/feng2022/styleganhuman_copy/torch_utils/ops/grid_sample_gradfix.py b/spaces/feng2022/styleganhuman_copy/torch_utils/ops/grid_sample_gradfix.py deleted file mode 100644 index 4f69aad7510d49d55cd865b5e2554703f979b185..0000000000000000000000000000000000000000 --- a/spaces/feng2022/styleganhuman_copy/torch_utils/ops/grid_sample_gradfix.py +++ /dev/null @@ -1,85 +0,0 @@ -# Copyright (c) SenseTime Research. All rights reserved. - -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""Custom replacement for `torch.nn.functional.grid_sample` that -supports arbitrarily high order gradients between the input and output. -Only works on 2D images and assumes -`mode='bilinear'`, `padding_mode='zeros'`, `align_corners=False`.""" - -import warnings -import torch - -# pylint: disable=redefined-builtin -# pylint: disable=arguments-differ -# pylint: disable=protected-access - -#---------------------------------------------------------------------------- - -enabled = False # Enable the custom op by setting this to true. 
- -#---------------------------------------------------------------------------- - -def grid_sample(input, grid): - if _should_use_custom_op(): - return _GridSample2dForward.apply(input, grid) - return torch.nn.functional.grid_sample(input=input, grid=grid, mode='bilinear', padding_mode='zeros', align_corners=False) - -#---------------------------------------------------------------------------- - -def _should_use_custom_op(): - if not enabled: - return False - if any(torch.__version__.startswith(x) for x in ['1.7.', '1.8.', '1.9']): - return True - warnings.warn(f'grid_sample_gradfix not supported on PyTorch {torch.__version__}. Falling back to torch.nn.functional.grid_sample().') - return False - -#---------------------------------------------------------------------------- - -class _GridSample2dForward(torch.autograd.Function): - @staticmethod - def forward(ctx, input, grid): - assert input.ndim == 4 - assert grid.ndim == 4 - output = torch.nn.functional.grid_sample(input=input, grid=grid, mode='bilinear', padding_mode='zeros', align_corners=False) - ctx.save_for_backward(input, grid) - return output - - @staticmethod - def backward(ctx, grad_output): - input, grid = ctx.saved_tensors - grad_input, grad_grid = _GridSample2dBackward.apply(grad_output, input, grid) - return grad_input, grad_grid - -#---------------------------------------------------------------------------- - -class _GridSample2dBackward(torch.autograd.Function): - @staticmethod - def forward(ctx, grad_output, input, grid): - op = torch._C._jit_get_operation('aten::grid_sampler_2d_backward') - grad_input, grad_grid = op(grad_output, input, grid, 0, 0, False) - ctx.save_for_backward(grid) - return grad_input, grad_grid - - @staticmethod - def backward(ctx, grad2_grad_input, grad2_grad_grid): - _ = grad2_grad_grid # unused - grid, = ctx.saved_tensors - grad2_grad_output = None - grad2_input = None - grad2_grid = None - - if ctx.needs_input_grad[0]: - grad2_grad_output = _GridSample2dForward.apply(grad2_grad_input, grid) - - assert not ctx.needs_input_grad[2] - return grad2_grad_output, grad2_input, grad2_grid - -#---------------------------------------------------------------------------- diff --git a/spaces/fengmuxi/ChatGpt-Web/docs/faq-en.md b/spaces/fengmuxi/ChatGpt-Web/docs/faq-en.md deleted file mode 100644 index 319fc7dea861e0451b3d17c8391dfce82daf2c26..0000000000000000000000000000000000000000 --- a/spaces/fengmuxi/ChatGpt-Web/docs/faq-en.md +++ /dev/null @@ -1,136 +0,0 @@ -# Frequently Asked Questions - -## How to get help quickly? -1. Ask ChatGPT / Bing / Baidu / Google, etc. -2. Ask online friends. Please provide background information and a detailed description of the problem. High-quality questions are more likely to get useful answers. - -# Deployment Related Questions - -## Why does the Docker deployment version always prompt for updates -The Docker version is equivalent to the stable version, and the latest Docker is always consistent with the latest release version. Currently, our release frequency is once every one to two days, so the Docker version will always be one to two days behind the latest commit, which is expected. - -## How to deploy on Vercel -1. Register a Github account and fork this project. -2. Register Vercel (mobile phone verification required, Chinese number can be used), and connect your Github account. -3. Create a new project on Vercel, select the project you forked on Github, fill in the required environment variables, and start deploying. 
After deployment, you can access your project through the domain provided by Vercel. (Requires proxy in mainland China) -* If you need to access it directly in China: At your DNS provider, add a CNAME record for the domain name, pointing to cname.vercel-dns.com. Then set up your domain access on Vercel. - -## How to modify Vercel environment variables -- Enter the Vercel console page; -- Select your chatgpt-next-web project; -- Click on the Settings option at the top of the page; -- Find the Environment Variables option in the sidebar; -- Modify the corresponding values as needed. - -## What is the environment variable CODE? Is it necessary to set it? -This is your custom access password, you can choose: -1. Do not set it, delete the environment variable. Be cautious: anyone can access your project at this time. -2. When deploying the project, set the environment variable CODE (supports multiple passwords, separated by commas). After setting the access password, users need to enter the access password in the settings page to use it. See [related instructions](https://github.com/Yidadaa/ChatGPT-Next-Web#access-password) - -## Why doesn't the version I deployed have streaming response -> Related discussion: [#386](https://github.com/Yidadaa/ChatGPT-Next-Web/issues/386) - -If you use nginx reverse proxy, you need to add the following code to the configuration file: -``` -# No caching, support streaming output -proxy_cache off; # Turn off caching -proxy_buffering off; # Turn off proxy buffering -chunked_transfer_encoding on; # Turn on chunked transfer encoding -tcp_nopush on; # Turn on TCP NOPUSH option, disable Nagle algorithm -tcp_nodelay on; # Turn on TCP NODELAY option, disable delay ACK algorithm -keepalive_timeout 300; # Set keep-alive timeout to 65 seconds -``` - -If you are deploying on netlify, this issue is still waiting to be resolved, please be patient. - -## I've deployed, but it's not accessible -Please check and troubleshoot the following issues: -- Is the service started? -- Is the port correctly mapped? -- Is the firewall port open? -- Is the route to the server okay? -- Is the domain name resolved correctly? - -# Usage Related Questions - -## Why does it always prompt "An error occurred, please try again later" -There could be many reasons, please check the following in order: -- First, check if your code version is the latest version, update to the latest version and try again; -- Check if the api key is set correctly, the environment variable name must be uppercase with underscores; -- Check if the api key is available; -- If you still cannot determine the problem after going through the above steps, please submit a new issue in the issue area and attach the runtime log of vercel or the log of docker runtime. - -## Why does ChatGPT's reply get garbled -In the settings page - model settings, there is an item called `temperature`. If this value is greater than 1, it may cause garbled replies. Adjust it back to within 1. - -## It prompts "Now it's unauthorized, please enter the access password on the settings page" when using? -The project has set an access password through the environment variable CODE. When using it for the first time, you need to go to settings and enter the access code to use. - -## It prompts "You exceeded your current quota, ..." when using? -The API KEY is problematic. Insufficient balance. - -## What is a proxy and how to use it? 
-Due to IP restrictions of OpenAI, China and some other countries/regions cannot directly connect to OpenAI API and need to go through a proxy. You can use a proxy server (forward proxy) or a pre-configured OpenAI API reverse proxy. -- Forward proxy example: VPN ladder. In the case of docker deployment, set the environment variable HTTP_PROXY to your proxy address (http://address:port). -- Reverse proxy example: You can use someone else's proxy address or set it up for free through Cloudflare. Set the project environment variable BASE_URL to your proxy address. - -## Can I deploy it on a server in China? -It is possible but there are issues to be addressed: -- Proxy is required to connect to websites such as Github and OpenAI; -- Domain name resolution requires filing for servers in China; -- Chinese policy restricts proxy access to foreign websites/ChatGPT-related applications, which may be blocked. - -# Network Service Related Questions -## What is Cloudflare? -Cloudflare (CF) is a network service provider offering CDN, domain management, static page hosting, edge computing function deployment, and more. Common use cases: purchase and/or host your domain (resolution, dynamic domain, etc.), apply CDN to your server (can hide IP to avoid being blocked), deploy websites (CF Pages). CF offers most services for free. - -## What is Vercel? -Vercel is a global cloud platform designed to help developers build and deploy modern web applications more quickly. This project and many web applications can be deployed on Vercel with a single click for free. No need to understand code, Linux, have a server, pay, or set up an OpenAI API proxy. The downside is that you need to bind a domain name to access it without restrictions in China. - -## How to obtain a domain name? -1. Register with a domain provider, such as Namesilo (supports Alipay) or Cloudflare for international providers, and Wanwang for domestic providers in China. -2. Free domain name providers: eu.org (second-level domain), etc. -3. Ask friends for a free second-level domain. - -## How to obtain a server -- Examples of international server providers: Amazon Web Services, Google Cloud, Vultr, Bandwagon, Hostdare, etc. - International server considerations: Server lines affect access speed in China; CN2 GIA and CN2 lines are recommended. If the server has difficulty accessing in China (serious packet loss, etc.), you can try using a CDN (from providers like Cloudflare). -- Domestic server providers: Alibaba Cloud, Tencent, etc. - Domestic server considerations: Domain name resolution requires filing; domestic server bandwidth is relatively expensive; accessing foreign websites (Github, OpenAI, etc.) requires a proxy. - -# OpenAI-related Questions -## How to register an OpenAI account? -Go to chat.openai.com to register. You will need: -- A good VPN (OpenAI only allows native IP addresses of supported regions) -- A supported email (e.g., Gmail or a company/school email, not Outlook or QQ email) -- A way to receive SMS verification (e.g., SMS-activate website) - -## How to activate OpenAI API? How to check API balance? -Official website (requires VPN): https://platform.openai.com/account/usage -Some users have set up a proxy to check the balance without a VPN; ask online friends for access. Please verify the source is reliable to avoid API Key leakage. - -## Why doesn't my new OpenAI account have an API balance? -(Updated April 6th) Newly registered accounts usually display API balance within 24 hours. 
New accounts are currently given a $5 balance. - -## How to recharge OpenAI API? -OpenAI only accepts credit cards from designated regions (Chinese credit cards cannot be used). If the credit cards from your region is not supported, some options include: -1. Depay virtual credit card -2. Apply for a foreign credit card -3. Find someone online to top up - -## How to access the GPT-4 API? -(Updated April 6th) Access to the GPT-4 API requires a separate application. Go to the following address and enter your information to join the waitlist (prepare your OpenAI organization ID): https://openai.com/waitlist/gpt-4-api -Wait for email updates afterwards. - -## How to use the Azure OpenAI interface -Please refer to: [#371](https://github.com/Yidadaa/ChatGPT-Next-Web/issues/371) - -## Why is my Token consumed so fast? -> Related discussion: [#518](https://github.com/Yidadaa/ChatGPT-Next-Web/issues/518) -- If you have GPT-4 access and use GPT-4 API regularly, your bill will increase rapidly since GPT-4 pricing is about 15 times higher than GPT-3.5; -- If you are using GPT-3.5 and not using it frequently, but still find your bill increasing fast, please troubleshoot immediately using these steps: - - Check your API key consumption record on the OpenAI website; if your token is consumed every hour and each time consumes tens of thousands of tokens, your key must have been leaked. Please delete it and regenerate it immediately. **Do not check your balance on random websites.** - - If your password is short, such as 5 characters or fewer, the cost of brute-forcing is very low. It is recommended to search docker logs to confirm whether someone has tried a large number of password combinations. Keyword: got access code -- By following these two methods, you can locate the reason for your token's rapid consumption: - - If the OpenAI consumption record is abnormal but the Docker log has no issues, it means your API key has been leaked; - - If the Docker log shows a large number of got access code brute-force attempts, your password has been cracked. diff --git a/spaces/fffiloni/Image-to-MusicGen/setup.py b/spaces/fffiloni/Image-to-MusicGen/setup.py deleted file mode 100644 index 78a172b7c90003b689bde40b49cc8fe1fb8107d4..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/Image-to-MusicGen/setup.py +++ /dev/null @@ -1,65 +0,0 @@ -""" - Copyright (c) Meta Platforms, Inc. and affiliates. - All rights reserved. - - This source code is licensed under the license found in the - LICENSE file in the root directory of this source tree. 
- -""" - -from pathlib import Path - -from setuptools import setup, find_packages - - -NAME = 'audiocraft' -DESCRIPTION = 'Audio research library for PyTorch' - -URL = 'https://github.com/fairinternal/audiocraft' -AUTHOR = 'FAIR Speech & Audio' -EMAIL = 'defossez@meta.com' -REQUIRES_PYTHON = '>=3.8.0' - -for line in open('audiocraft/__init__.py'): - line = line.strip() - if '__version__' in line: - context = {} - exec(line, context) - VERSION = context['__version__'] - -HERE = Path(__file__).parent - -try: - with open(HERE / "README.md", encoding='utf-8') as f: - long_description = '\n' + f.read() -except FileNotFoundError: - long_description = DESCRIPTION - -REQUIRED = [i.strip() for i in open(HERE / 'requirements.txt') if not i.startswith('#')] - -setup( - name=NAME, - version=VERSION, - description=DESCRIPTION, - author_email=EMAIL, - long_description=long_description, - long_description_content_type='text/markdown', - author=AUTHOR, - url=URL, - python_requires=REQUIRES_PYTHON, - install_requires=REQUIRED, - extras_require={ - 'dev': ['coverage', 'flake8', 'mypy', 'pdoc3', 'pytest'], - }, - packages=find_packages(), - package_data={'audiocraft': ['py.typed']}, - include_package_data=True, - license='MIT License', - classifiers=[ - # Trove classifiers - # Full list: https://pypi.python.org/pypi?%3Aaction=list_classifiers - 'License :: OSI Approved :: MIT License', - 'Topic :: Multimedia :: Sound/Audio', - 'Topic :: Scientific/Engineering :: Artificial Intelligence', - ], -) diff --git a/spaces/fffiloni/Music_Source_Separation/bytesep/plot_results/__init__.py b/spaces/fffiloni/Music_Source_Separation/bytesep/plot_results/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/engine.io/build/transport.js b/spaces/fffiloni/controlnet-animation-doodle/node_modules/engine.io/build/transport.js deleted file mode 100644 index 78b15fe78dc9818da5039813c6ac9e54c190a6e5..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/engine.io/build/transport.js +++ /dev/null @@ -1,113 +0,0 @@ -"use strict"; -Object.defineProperty(exports, "__esModule", { value: true }); -exports.Transport = void 0; -const events_1 = require("events"); -const parser_v4 = require("engine.io-parser"); -const parser_v3 = require("./parser-v3/index"); -const debug_1 = require("debug"); -const debug = (0, debug_1.default)("engine:transport"); -/** - * Noop function. - * - * @api private - */ -function noop() { } -class Transport extends events_1.EventEmitter { - /** - * Transport constructor. - * - * @param {http.IncomingMessage} request - * @api public - */ - constructor(req) { - super(); - this.readyState = "open"; - this.discarded = false; - this.protocol = req._query.EIO === "4" ? 4 : 3; // 3rd revision by default - this.parser = this.protocol === 4 ? parser_v4 : parser_v3; - } - get readyState() { - return this._readyState; - } - set readyState(state) { - debug("readyState updated from %s to %s (%s)", this._readyState, state, this.name); - this._readyState = state; - } - /** - * Flags the transport as discarded. - * - * @api private - */ - discard() { - this.discarded = true; - } - /** - * Called with an incoming HTTP request. - * - * @param {http.IncomingMessage} request - * @api protected - */ - onRequest(req) { - debug("setting request"); - this.req = req; - } - /** - * Closes the transport. 
- * - * @api private - */ - close(fn) { - if ("closed" === this.readyState || "closing" === this.readyState) - return; - this.readyState = "closing"; - this.doClose(fn || noop); - } - /** - * Called with a transport error. - * - * @param {String} message error - * @param {Object} error description - * @api protected - */ - onError(msg, desc) { - if (this.listeners("error").length) { - const err = new Error(msg); - // @ts-ignore - err.type = "TransportError"; - // @ts-ignore - err.description = desc; - this.emit("error", err); - } - else { - debug("ignored transport error %s (%s)", msg, desc); - } - } - /** - * Called with parsed out a packets from the data stream. - * - * @param {Object} packet - * @api protected - */ - onPacket(packet) { - this.emit("packet", packet); - } - /** - * Called with the encoded packet data. - * - * @param {String} data - * @api protected - */ - onData(data) { - this.onPacket(this.parser.decodePacket(data)); - } - /** - * Called upon transport close. - * - * @api protected - */ - onClose() { - this.readyState = "closed"; - this.emit("close"); - } -} -exports.Transport = Transport; diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/vary/HISTORY.md b/spaces/fffiloni/controlnet-animation-doodle/node_modules/vary/HISTORY.md deleted file mode 100644 index f6cbcf7f9be9d45391c5e4e14d02541f59087351..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/vary/HISTORY.md +++ /dev/null @@ -1,39 +0,0 @@ -1.1.2 / 2017-09-23 -================== - - * perf: improve header token parsing speed - -1.1.1 / 2017-03-20 -================== - - * perf: hoist regular expression - -1.1.0 / 2015-09-29 -================== - - * Only accept valid field names in the `field` argument - - Ensures the resulting string is a valid HTTP header value - -1.0.1 / 2015-07-08 -================== - - * Fix setting empty header from empty `field` - * perf: enable strict mode - * perf: remove argument reassignments - -1.0.0 / 2014-08-10 -================== - - * Accept valid `Vary` header string as `field` - * Add `vary.append` for low-level string manipulation - * Move to `jshttp` orgainzation - -0.1.0 / 2014-06-05 -================== - - * Support array of fields to set - -0.0.0 / 2014-06-04 -================== - - * Initial release diff --git a/spaces/fffiloni/video_frame_interpolation/examples/readme.md b/spaces/fffiloni/video_frame_interpolation/examples/readme.md deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/finlaymacklon/boxy_violet/app.py b/spaces/finlaymacklon/boxy_violet/app.py deleted file mode 100644 index b3738d580c730eb30a0628f1c2693c0ec2df6bb8..0000000000000000000000000000000000000000 --- a/spaces/finlaymacklon/boxy_violet/app.py +++ /dev/null @@ -1,147 +0,0 @@ -import time - -from theme_dropdown import create_theme_dropdown # noqa: F401 - -import gradio as gr - -dropdown, js = create_theme_dropdown() - -with gr.Blocks(theme='finlaymacklon/boxy_violet') as demo: - with gr.Row().style(equal_height=True): - with gr.Column(scale=10): - gr.Markdown( - """ - # Theme preview: `boxy_violet` - To use this theme, set `theme='finlaymacklon/boxy_violet'` in `gr.Blocks()` or `gr.Interface()`. - You can append an `@` and a semantic version expression, e.g. @>=1.0.0,<2.0.0 to pin to a given version - of this theme. 
- """ - ) - with gr.Column(scale=3): - with gr.Box(): - dropdown.render() - toggle_dark = gr.Button(value="Toggle Dark").style(full_width=True) - - dropdown.change(None, dropdown, None, _js=js) - toggle_dark.click( - None, - _js=""" - () => { - document.body.classList.toggle('dark'); - document.querySelector('gradio-app').style.backgroundColor = 'var(--color-background-primary)' - } - """, - ) - - name = gr.Textbox( - label="Name", - info="Full name, including middle name. No special characters.", - placeholder="John Doe", - value="John Doe", - interactive=True, - ) - - with gr.Row(): - slider1 = gr.Slider(label="Slider 1") - slider2 = gr.Slider(label="Slider 2") - gr.CheckboxGroup(["A", "B", "C"], label="Checkbox Group") - - with gr.Row(): - with gr.Column(variant="panel", scale=1): - gr.Markdown("## Panel 1") - radio = gr.Radio( - ["A", "B", "C"], - label="Radio", - info="Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.", - ) - drop = gr.Dropdown(["Option 1", "Option 2", "Option 3"], show_label=False) - drop_2 = gr.Dropdown( - ["Option A", "Option B", "Option C"], - multiselect=True, - value=["Option A"], - label="Dropdown", - interactive=True, - ) - check = gr.Checkbox(label="Go") - with gr.Column(variant="panel", scale=2): - img = gr.Image( - "https://gradio-static-files.s3.us-west-2.amazonaws.com/header-image.jpg", label="Image" - ).style(height=320) - with gr.Row(): - go_btn = gr.Button("Go", label="Primary Button", variant="primary") - clear_btn = gr.Button( - "Clear", label="Secondary Button", variant="secondary" - ) - - def go(*args): - time.sleep(3) - return "https://gradio-static-files.s3.us-west-2.amazonaws.com/header-image.jpg" - - go_btn.click(go, [radio, drop, drop_2, check, name], img, api_name="go") - - def clear(): - time.sleep(0.2) - return None - - clear_btn.click(clear, None, img) - - with gr.Row(): - btn1 = gr.Button("Button 1").style(size="sm") - btn2 = gr.UploadButton().style(size="sm") - stop_btn = gr.Button("Stop", label="Stop Button", variant="stop").style( - size="sm" - ) - - with gr.Row(): - gr.Dataframe(value=[[1, 2, 3], [4, 5, 6], [7, 8, 9]], label="Dataframe") - gr.JSON( - value={"a": 1, "b": 2, "c": {"test": "a", "test2": [1, 2, 3]}}, label="JSON" - ) - gr.Label(value={"cat": 0.7, "dog": 0.2, "fish": 0.1}) - gr.File() - with gr.Row(): - gr.ColorPicker() - gr.Video("https://gradio-static-files.s3.us-west-2.amazonaws.com/world.mp4") - gr.Gallery( - [ - ( - "https://gradio-static-files.s3.us-west-2.amazonaws.com/lion.jpg", - "lion", - ), - ( - "https://gradio-static-files.s3.us-west-2.amazonaws.com/logo.png", - "logo", - ), - ( - "https://gradio-static-files.s3.us-west-2.amazonaws.com/tower.jpg", - "tower", - ), - ] - ).style(height="200px", grid=2) - - with gr.Row(): - with gr.Column(scale=2): - chatbot = gr.Chatbot([("Hello", "Hi")], label="Chatbot") - chat_btn = gr.Button("Add messages") - - def chat(history): - time.sleep(2) - yield [["How are you?", "I am good."]] - - chat_btn.click( - lambda history: history - + [["How are you?", "I am good."]] - + (time.sleep(2) or []), - chatbot, - chatbot, - ) - with gr.Column(scale=1): - with gr.Accordion("Advanced Settings"): - gr.Markdown("Hello") - gr.Number(label="Chatbot control 1") - gr.Number(label="Chatbot control 2") - gr.Number(label="Chatbot control 3") - - -if __name__ == "__main__": - demo.queue().launch() diff --git 
a/spaces/firefighter/PdfSumGPT/utils/read_pdf.py b/spaces/firefighter/PdfSumGPT/utils/read_pdf.py deleted file mode 100644 index 5cc3f16bf8253b66f2852732ef70e9bbd6ee2ded..0000000000000000000000000000000000000000 --- a/spaces/firefighter/PdfSumGPT/utils/read_pdf.py +++ /dev/null @@ -1,17 +0,0 @@ -from typing import List - -import pypdf - - -def read_pdf(filepath: str) -> List[str]: - outputs = [] - with open(filepath, 'rb') as f: - pdf_reader = pypdf.PdfReader(f) - for page in pdf_reader.pages: - outputs.append(page.extract_text()) - return outputs - - -if __name__ == '__main__': - r = read_pdf('data/109-411-2-PB.pdf') - print(r) diff --git a/spaces/flax-community/multilingual-image-captioning/sections/pretraining.md b/spaces/flax-community/multilingual-image-captioning/sections/pretraining.md deleted file mode 100644 index 7a612ca35f1e299b3ff66508dc4c3628ef9a0318..0000000000000000000000000000000000000000 --- a/spaces/flax-community/multilingual-image-captioning/sections/pretraining.md +++ /dev/null @@ -1,18 +0,0 @@ -We follow an encoder-decoder approach for image captioning, where the image encoder is the CLIP Vision model (a ViT transformer). The pre-training task is image-to-text generation. We take the input tokens and shift them using an `` token towards right in order to create the inputs for our model, while the original input tokens become labels. The model is trained on the dataset. in an end-to-end fashion. - -**Dataset** - -The dataset we use for pre-training is a cleaned version of Conceptual 12M. The dataset is downloaded and then broken images are removed which gives us about 10M images. To save time, we use 2.5M of these image-text pairs. Then we use the MarianMT `Helsinki-NLP/opus-mt-{src}-{tgt}` checkpoint to translate the dataset into four different languages - English, French, German, and Spanish, keeping approximately 2.5M examples of each language. - -**Model** - -The model is shown in the image above. We create a custom model in Flax which integerates the CLIP Vision model as an encoder inside mBART model. We also use custom configs and modules in order to accomodate for these changes, and allow loading from mBART and CLIP Vision checkpoints. The image is fed to the CLIP Vision encoder and the shifted token ids are fed to the mBART decoder. We use the `facebook/mbart-large-50` and `openai/clip-vit-base-patch32` checkpoints for mBART and CLIP Vision models, respectively. All our code is available on [GitHub](https://github.com/gchhablani/multilingual-image-captioning). - -Our model reached **eval loss of ~2.6** around ~70K steps. 
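Before the scores, here is a minimal sketch of the shifted-token input preparation described in the pre-training setup above. It is a generic illustration (the decoder start token id, pad id, and example ids are assumptions), not the project's actual training code:

```python
import numpy as np

def shift_tokens_right(labels: np.ndarray, pad_token_id: int, decoder_start_token_id: int) -> np.ndarray:
    """Shift label token ids one position to the right to build decoder inputs."""
    shifted = np.zeros_like(labels)
    shifted[:, 1:] = labels[:, :-1]
    shifted[:, 0] = decoder_start_token_id
    # Any masked label positions (-100) must be fed to the decoder as padding.
    return np.where(shifted == -100, pad_token_id, shifted)

# Toy example: the original ids stay as labels, the shifted ids feed the decoder.
labels = np.array([[250004, 47, 1176, 2]])  # assumed ids, for illustration only
decoder_input_ids = shift_tokens_right(labels, pad_token_id=1, decoder_start_token_id=2)
```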
Here are the BLEU scores (out of 1) for different languages: - -|Language |BLEU-1|BLEU-2|BLEU-3|BLEU-4| -|--------------------------|------|------|------|------| -|English | 0.13083| 0.08887| 0.06681 | 0.04899| -|Spanish | 0.15981| 0.09858| 0.06918| 0.04776| -|German | 0.14234| 0.09817| 0.07405| 0.0515| -|French | 0.13021| 0.08862| 0.06598| 0.04647| \ No newline at end of file diff --git a/spaces/flowers-team/SocialAISchool/gym-minigrid/gym_minigrid/social_ai_envs/__init__.py b/spaces/flowers-team/SocialAISchool/gym-minigrid/gym_minigrid/social_ai_envs/__init__.py deleted file mode 100644 index d257e316295bcf6550d6b89d9e997f744731ea31..0000000000000000000000000000000000000000 --- a/spaces/flowers-team/SocialAISchool/gym-minigrid/gym_minigrid/social_ai_envs/__init__.py +++ /dev/null @@ -1,31 +0,0 @@ -from gym_minigrid.social_ai_envs.informationseekingenv import * - -from gym_minigrid.social_ai_envs.leverdoorenv import * -from gym_minigrid.social_ai_envs.marblepassenv import * -from gym_minigrid.social_ai_envs.marblepushenv import * -from gym_minigrid.social_ai_envs.objectscollaborationenv import * - -from gym_minigrid.social_ai_envs.applestealingenv import * - -# from gym_minigrid.social_ai_envs.othersperceptioninferenceparamenv import * -# from gym_minigrid.social_ai_envs.informationseekingparamenv import * -# from gym_minigrid.social_ai_envs.collaborationparamenv import * - -from gym_minigrid.social_ai_envs.socialaiparamenv import * - -# from gym_minigrid.social_ai_envs.testsocialaienvs import * - -from gym_minigrid.social_ai_envs.case_studies_envs.casestudiesenvs import * - -# from gym_minigrid.social_ai_envs.case_studies_envs.pointingcasestudyenvs import * -# from gym_minigrid.social_ai_envs.case_studies_envs.langcolorcasestudyenvs import * -# from gym_minigrid.social_ai_envs.case_studies_envs.langfeedbackcasestudyenvs import * -from gym_minigrid.social_ai_envs.case_studies_envs.informationseekingcasestudyenvs import * - -from gym_minigrid.social_ai_envs.case_studies_envs.imitationcasestudyenvs import * - -from gym_minigrid.social_ai_envs.case_studies_envs.formatscasestudyenvs import * - -from gym_minigrid.social_ai_envs.case_studies_envs.applestealingcasestudiesenvs import * - -from gym_minigrid.social_ai_envs.case_studies_envs.LLMcasestudyenvs import * diff --git a/spaces/fuckyoudeki/AutoGPT/tests/local_cache_test.py b/spaces/fuckyoudeki/AutoGPT/tests/local_cache_test.py deleted file mode 100644 index bb10862656bb500f319ac231ff5bd5438d6fe7e2..0000000000000000000000000000000000000000 --- a/spaces/fuckyoudeki/AutoGPT/tests/local_cache_test.py +++ /dev/null @@ -1,67 +0,0 @@ -# sourcery skip: snake-case-functions -"""Tests for LocalCache class""" -import os -import sys -import unittest - -import pytest - -from autogpt.memory.local import LocalCache - - -def mock_config() -> dict: - """Mock the Config class""" - return type( - "MockConfig", - (object,), - { - "debug_mode": False, - "continuous_mode": False, - "speak_mode": False, - "memory_index": "auto-gpt", - }, - ) - - -@pytest.mark.integration_test -class TestLocalCache(unittest.TestCase): - """Tests for LocalCache class""" - - def setUp(self) -> None: - """Set up the test environment""" - self.cfg = mock_config() - self.cache = LocalCache(self.cfg) - - def test_add(self) -> None: - """Test adding a text to the cache""" - text = "Sample text" - self.cache.add(text) - self.assertIn(text, self.cache.data.texts) - - def test_clear(self) -> None: - """Test clearing the cache""" - self.cache.clear() - self.assertEqual(self.cache.data.texts, 
[]) - - def test_get(self) -> None: - """Test getting a text from the cache""" - text = "Sample text" - self.cache.add(text) - result = self.cache.get(text) - self.assertEqual(result, [text]) - - def test_get_relevant(self) -> None: - """Test getting relevant texts from the cache""" - text1 = "Sample text 1" - text2 = "Sample text 2" - self.cache.add(text1) - self.cache.add(text2) - result = self.cache.get_relevant(text1, 1) - self.assertEqual(result, [text1]) - - def test_get_stats(self) -> None: - """Test getting the cache stats""" - text = "Sample text" - self.cache.add(text) - stats = self.cache.get_stats() - self.assertEqual(stats, (4, self.cache.data.embeddings.shape)) diff --git a/spaces/gradio/dpt-depth-estimation-3d-obj/app.py b/spaces/gradio/dpt-depth-estimation-3d-obj/app.py deleted file mode 100644 index e03e734dc952b388f89c99dda1b7106a4f886079..0000000000000000000000000000000000000000 --- a/spaces/gradio/dpt-depth-estimation-3d-obj/app.py +++ /dev/null @@ -1,119 +0,0 @@ -import gradio as gr -from transformers import DPTFeatureExtractor, DPTForDepthEstimation -import torch -import numpy as np -from PIL import Image -import open3d as o3d -from pathlib import Path -import os - -feature_extractor = DPTFeatureExtractor.from_pretrained("Intel/dpt-large") -model = DPTForDepthEstimation.from_pretrained("Intel/dpt-large") - - -def process_image(image_path): - image_path = Path(image_path) - image_raw = Image.open(image_path) - image = image_raw.resize( - (800, int(800 * image_raw.size[1] / image_raw.size[0])), - Image.Resampling.LANCZOS) - - # prepare image for the model - encoding = feature_extractor(image, return_tensors="pt") - - # forward pass - with torch.no_grad(): - outputs = model(**encoding) - predicted_depth = outputs.predicted_depth - - # interpolate to original size - prediction = torch.nn.functional.interpolate( - predicted_depth.unsqueeze(1), - size=image.size[::-1], - mode="bicubic", - align_corners=False, - ).squeeze() - output = prediction.cpu().numpy() - depth_image = (output * 255 / np.max(output)).astype('uint8') - try: - gltf_path = create_3d_obj(np.array(image), depth_image, image_path) - img = Image.fromarray(depth_image) - return [img, gltf_path, gltf_path] - except Exception as e: - gltf_path = create_3d_obj( - np.array(image), depth_image, image_path, depth=8) - img = Image.fromarray(depth_image) - return [img, gltf_path, gltf_path] - except: - print("Error reconstructing 3D model") - raise Exception("Error reconstructing 3D model") - - -def create_3d_obj(rgb_image, depth_image, image_path, depth=10): - depth_o3d = o3d.geometry.Image(depth_image) - image_o3d = o3d.geometry.Image(rgb_image) - rgbd_image = o3d.geometry.RGBDImage.create_from_color_and_depth( - image_o3d, depth_o3d, convert_rgb_to_intensity=False) - w = int(depth_image.shape[1]) - h = int(depth_image.shape[0]) - - camera_intrinsic = o3d.camera.PinholeCameraIntrinsic() - camera_intrinsic.set_intrinsics(w, h, 500, 500, w/2, h/2) - - pcd = o3d.geometry.PointCloud.create_from_rgbd_image( - rgbd_image, camera_intrinsic) - - print('normals') - pcd.normals = o3d.utility.Vector3dVector( - np.zeros((1, 3))) # invalidate existing normals - pcd.estimate_normals( - search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.01, max_nn=30)) - pcd.orient_normals_towards_camera_location( - camera_location=np.array([0., 0., 1000.])) - pcd.transform([[1, 0, 0, 0], - [0, -1, 0, 0], - [0, 0, -1, 0], - [0, 0, 0, 1]]) - pcd.transform([[-1, 0, 0, 0], - [0, 1, 0, 0], - [0, 0, 1, 0], - [0, 0, 0, 1]]) - - print('run 
Poisson surface reconstruction') - with o3d.utility.VerbosityContextManager(o3d.utility.VerbosityLevel.Debug) as cm: - mesh_raw, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson( - pcd, depth=depth, width=0, scale=1.1, linear_fit=True) - - voxel_size = max(mesh_raw.get_max_bound() - mesh_raw.get_min_bound()) / 256 - print(f'voxel_size = {voxel_size:e}') - mesh = mesh_raw.simplify_vertex_clustering( - voxel_size=voxel_size, - contraction=o3d.geometry.SimplificationContraction.Average) - - # vertices_to_remove = densities < np.quantile(densities, 0.001) - # mesh.remove_vertices_by_mask(vertices_to_remove) - bbox = pcd.get_axis_aligned_bounding_box() - mesh_crop = mesh.crop(bbox) - gltf_path = f'./{image_path.stem}.gltf' - o3d.io.write_triangle_mesh( - gltf_path, mesh_crop, write_triangle_uvs=True) - return gltf_path - - -title = "Demo: zero-shot depth estimation with DPT + 3D Point Cloud" -description = "This demo is a variation from the original DPT Demo. It uses the DPT model to predict the depth of an image and then uses 3D Point Cloud to create a 3D object." -examples = [["examples/" + img] for img in os.listdir("examples/")] - -iface = gr.Interface(fn=process_image, - inputs=[gr.Image( - type="filepath", label="Input Image")], - outputs=[gr.Image(label="predicted depth", type="pil"), - gr.Model3D(label="3d mesh reconstruction", clear_color=[ - 1.0, 1.0, 1.0, 1.0]), - gr.File(label="3d gLTF")], - title=title, - description=description, - examples=examples, - allow_flagging="never", - cache_examples=False) -iface.launch(debug=True, enable_queue=False) diff --git a/spaces/gsaivinay/open_llm_leaderboard/Makefile b/spaces/gsaivinay/open_llm_leaderboard/Makefile deleted file mode 100644 index b5685772804c8af4235a8504dc6752bfc9ae5d1d..0000000000000000000000000000000000000000 --- a/spaces/gsaivinay/open_llm_leaderboard/Makefile +++ /dev/null @@ -1,13 +0,0 @@ -.PHONY: style format - - -style: - python -m black --line-length 119 . - python -m isort . - ruff check --fix . - - -quality: - python -m black --check --line-length 119 . - python -m isort --check-only . - ruff check . diff --git a/spaces/gwang-kim/DATID-3D/pose_estimation/nvdiffrast/build/lib/nvdiffrast/common/glutil.cpp b/spaces/gwang-kim/DATID-3D/pose_estimation/nvdiffrast/build/lib/nvdiffrast/common/glutil.cpp deleted file mode 100644 index 2af3e931b6808e2575d8a209d5485746499b3374..0000000000000000000000000000000000000000 --- a/spaces/gwang-kim/DATID-3D/pose_estimation/nvdiffrast/build/lib/nvdiffrast/common/glutil.cpp +++ /dev/null @@ -1,403 +0,0 @@ -// Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved. -// -// NVIDIA CORPORATION and its licensors retain all intellectual property -// and proprietary rights in and to this software, related documentation -// and any modifications thereto. Any use, reproduction, disclosure or -// distribution of this software and related documentation without an express -// license agreement from NVIDIA CORPORATION is strictly prohibited. - -//------------------------------------------------------------------------ -// Common. -//------------------------------------------------------------------------ - -#include "framework.h" -#include "glutil.h" -#include -#include - -// Create the function pointers. -#define GLUTIL_EXT(return_type, name, ...) return_type (GLAPIENTRY* name)(__VA_ARGS__) = 0; -#include "glutil_extlist.h" -#undef GLUTIL_EXT - -// Track initialization status. -static volatile bool s_glExtInitialized = false; - -// Error strings. 
-const char* getGLErrorString(GLenum err) -{ - switch(err) - { - case GL_NO_ERROR: return "GL_NO_ERROR"; - case GL_INVALID_ENUM: return "GL_INVALID_ENUM"; - case GL_INVALID_VALUE: return "GL_INVALID_VALUE"; - case GL_INVALID_OPERATION: return "GL_INVALID_OPERATION"; - case GL_STACK_OVERFLOW: return "GL_STACK_OVERFLOW"; - case GL_STACK_UNDERFLOW: return "GL_STACK_UNDERFLOW"; - case GL_OUT_OF_MEMORY: return "GL_OUT_OF_MEMORY"; - case GL_INVALID_FRAMEBUFFER_OPERATION: return "GL_INVALID_FRAMEBUFFER_OPERATION"; - case GL_TABLE_TOO_LARGE: return "GL_TABLE_TOO_LARGE"; - case GL_CONTEXT_LOST: return "GL_CONTEXT_LOST"; - } - return "Unknown error"; -} - -//------------------------------------------------------------------------ -// Windows. -//------------------------------------------------------------------------ - -#ifdef _WIN32 - -static CRITICAL_SECTION getInitializedCriticalSection(void) -{ - CRITICAL_SECTION cs; - InitializeCriticalSection(&cs); - return cs; -} - -static CRITICAL_SECTION s_getProcAddressMutex = getInitializedCriticalSection(); - -static void safeGetProcAddress(const char* name, PROC* pfn) -{ - PROC result = wglGetProcAddress(name); - if (!result) - { - LeaveCriticalSection(&s_getProcAddressMutex); // Prepare for thread exit. - LOG(FATAL) << "wglGetProcAddress() failed for '" << name << "'"; - exit(1); // Should never get here but make sure we exit. - } - *pfn = result; -} - -static void initializeGLExtensions(void) -{ - // Use critical section for thread safety. - EnterCriticalSection(&s_getProcAddressMutex); - - // Only dig function pointers if not done already. - if (!s_glExtInitialized) - { - // Generate code to populate the function pointers. -#define GLUTIL_EXT(return_type, name, ...) safeGetProcAddress(#name, (PROC*)&name); -#include "glutil_extlist.h" -#undef GLUTIL_EXT - - // Mark as initialized. - s_glExtInitialized = true; - } - - // Done. - LeaveCriticalSection(&s_getProcAddressMutex); - return; -} - -void setGLContext(GLContext& glctx) -{ - if (!glctx.hglrc) - LOG(FATAL) << "setGLContext() called with null gltcx"; - if (!wglMakeCurrent(glctx.hdc, glctx.hglrc)) - LOG(FATAL) << "wglMakeCurrent() failed when setting GL context"; - - if (glctx.extInitialized) - return; - initializeGLExtensions(); - glctx.extInitialized = 1; -} - -void releaseGLContext(void) -{ - if (!wglMakeCurrent(NULL, NULL)) - LOG(FATAL) << "wglMakeCurrent() failed when releasing GL context"; -} - -extern "C" int set_gpu(const char*); // In setgpu.lib -GLContext createGLContext(int cudaDeviceIdx) -{ - if (cudaDeviceIdx >= 0) - { - char pciBusId[256] = ""; - LOG(INFO) << "Creating GL context for Cuda device " << cudaDeviceIdx; - if (cudaDeviceGetPCIBusId(pciBusId, 255, cudaDeviceIdx)) - { - LOG(INFO) << "PCI bus id query failed"; - } - else - { - int res = set_gpu(pciBusId); - LOG(INFO) << "Selecting device with PCI bus id " << pciBusId << " - " << (res ? 
"failed, expect crash or major slowdown" : "success"); - } - } - - HINSTANCE hInstance = GetModuleHandle(NULL); - WNDCLASS wc = {}; - wc.style = CS_OWNDC; - wc.lpfnWndProc = DefWindowProc; - wc.hInstance = hInstance; - wc.lpszClassName = "__DummyGLClassCPP"; - int res = RegisterClass(&wc); - - HWND hwnd = CreateWindow( - "__DummyGLClassCPP", // lpClassName - "__DummyGLWindowCPP", // lpWindowName - WS_OVERLAPPEDWINDOW, // dwStyle - CW_USEDEFAULT, // x - CW_USEDEFAULT, // y - 0, 0, // nWidth, nHeight - NULL, NULL, // hWndParent, hMenu - hInstance, // hInstance - NULL // lpParam - ); - - PIXELFORMATDESCRIPTOR pfd = {}; - pfd.dwFlags = PFD_SUPPORT_OPENGL; - pfd.iPixelType = PFD_TYPE_RGBA; - pfd.iLayerType = PFD_MAIN_PLANE; - pfd.cColorBits = 32; - pfd.cDepthBits = 24; - pfd.cStencilBits = 8; - - HDC hdc = GetDC(hwnd); - int pixelformat = ChoosePixelFormat(hdc, &pfd); - SetPixelFormat(hdc, pixelformat, &pfd); - - HGLRC hglrc = wglCreateContext(hdc); - LOG(INFO) << std::hex << std::setfill('0') - << "WGL OpenGL context created (hdc: 0x" << std::setw(8) << (uint32_t)(uintptr_t)hdc - << ", hglrc: 0x" << std::setw(8) << (uint32_t)(uintptr_t)hglrc << ")"; - - GLContext glctx = {hdc, hglrc, 0}; - return glctx; -} - -void destroyGLContext(GLContext& glctx) -{ - if (!glctx.hglrc) - LOG(FATAL) << "destroyGLContext() called with null gltcx"; - - // If this is the current context, release it. - if (wglGetCurrentContext() == glctx.hglrc) - releaseGLContext(); - - HWND hwnd = WindowFromDC(glctx.hdc); - if (!hwnd) - LOG(FATAL) << "WindowFromDC() failed"; - if (!ReleaseDC(hwnd, glctx.hdc)) - LOG(FATAL) << "ReleaseDC() failed"; - if (!wglDeleteContext(glctx.hglrc)) - LOG(FATAL) << "wglDeleteContext() failed"; - if (!DestroyWindow(hwnd)) - LOG(FATAL) << "DestroyWindow() failed"; - - LOG(INFO) << std::hex << std::setfill('0') - << "WGL OpenGL context destroyed (hdc: 0x" << std::setw(8) << (uint32_t)(uintptr_t)glctx.hdc - << ", hglrc: 0x" << std::setw(8) << (uint32_t)(uintptr_t)glctx.hglrc << ")"; - - memset(&glctx, 0, sizeof(GLContext)); -} - -#endif // _WIN32 - -//------------------------------------------------------------------------ -// Linux. -//------------------------------------------------------------------------ - -#ifdef __linux__ - -static pthread_mutex_t s_getProcAddressMutex; - -typedef void (*PROCFN)(); - -static void safeGetProcAddress(const char* name, PROCFN* pfn) -{ - PROCFN result = eglGetProcAddress(name); - if (!result) - { - pthread_mutex_unlock(&s_getProcAddressMutex); // Prepare for thread exit. - LOG(FATAL) << "wglGetProcAddress() failed for '" << name << "'"; - exit(1); // Should never get here but make sure we exit. - } - *pfn = result; -} - -static void initializeGLExtensions(void) -{ - pthread_mutex_lock(&s_getProcAddressMutex); - - // Only dig function pointers if not done already. - if (!s_glExtInitialized) - { - // Generate code to populate the function pointers. -#define GLUTIL_EXT(return_type, name, ...) safeGetProcAddress(#name, (PROCFN*)&name); -#include "glutil_extlist.h" -#undef GLUTIL_EXT - - // Mark as initialized. 
- s_glExtInitialized = true; - } - - pthread_mutex_unlock(&s_getProcAddressMutex); - return; -} - -void setGLContext(GLContext& glctx) -{ - if (!glctx.context) - LOG(FATAL) << "setGLContext() called with null gltcx"; - - if (!eglMakeCurrent(glctx.display, EGL_NO_SURFACE, EGL_NO_SURFACE, glctx.context)) - LOG(ERROR) << "eglMakeCurrent() failed when setting GL context"; - - if (glctx.extInitialized) - return; - initializeGLExtensions(); - glctx.extInitialized = 1; -} - -void releaseGLContext(void) -{ - EGLDisplay display = eglGetCurrentDisplay(); - if (display == EGL_NO_DISPLAY) - LOG(WARNING) << "releaseGLContext() called with no active display"; - if (!eglMakeCurrent(display, EGL_NO_SURFACE, EGL_NO_SURFACE, EGL_NO_CONTEXT)) - LOG(FATAL) << "eglMakeCurrent() failed when releasing GL context"; -} - -static EGLDisplay getCudaDisplay(int cudaDeviceIdx) -{ - typedef EGLBoolean (*eglQueryDevicesEXT_t)(EGLint, EGLDeviceEXT, EGLint*); - typedef EGLBoolean (*eglQueryDeviceAttribEXT_t)(EGLDeviceEXT, EGLint, EGLAttrib*); - typedef EGLDisplay (*eglGetPlatformDisplayEXT_t)(EGLenum, void*, const EGLint*); - - eglQueryDevicesEXT_t eglQueryDevicesEXT = (eglQueryDevicesEXT_t)eglGetProcAddress("eglQueryDevicesEXT"); - if (!eglQueryDevicesEXT) - { - LOG(INFO) << "eglGetProcAddress(\"eglQueryDevicesEXT\") failed"; - return 0; - } - - eglQueryDeviceAttribEXT_t eglQueryDeviceAttribEXT = (eglQueryDeviceAttribEXT_t)eglGetProcAddress("eglQueryDeviceAttribEXT"); - if (!eglQueryDeviceAttribEXT) - { - LOG(INFO) << "eglGetProcAddress(\"eglQueryDeviceAttribEXT\") failed"; - return 0; - } - - eglGetPlatformDisplayEXT_t eglGetPlatformDisplayEXT = (eglGetPlatformDisplayEXT_t)eglGetProcAddress("eglGetPlatformDisplayEXT"); - if (!eglGetPlatformDisplayEXT) - { - LOG(INFO) << "eglGetProcAddress(\"eglGetPlatformDisplayEXT\") failed"; - return 0; - } - - int num_devices = 0; - eglQueryDevicesEXT(0, 0, &num_devices); - if (!num_devices) - return 0; - - EGLDisplay display = 0; - EGLDeviceEXT* devices = (EGLDeviceEXT*)malloc(num_devices * sizeof(void*)); - eglQueryDevicesEXT(num_devices, devices, &num_devices); - for (int i=0; i < num_devices; i++) - { - EGLDeviceEXT device = devices[i]; - intptr_t value = -1; - if (eglQueryDeviceAttribEXT(device, EGL_CUDA_DEVICE_NV, &value) && value == cudaDeviceIdx) - { - display = eglGetPlatformDisplayEXT(EGL_PLATFORM_DEVICE_EXT, device, 0); - break; - } - } - - free(devices); - return display; -} - -GLContext createGLContext(int cudaDeviceIdx) -{ - EGLDisplay display = 0; - - if (cudaDeviceIdx >= 0) - { - char pciBusId[256] = ""; - LOG(INFO) << "Creating GL context for Cuda device " << cudaDeviceIdx; - display = getCudaDisplay(cudaDeviceIdx); - if (!display) - LOG(INFO) << "Failed, falling back to default display"; - } - - if (!display) - { - display = eglGetDisplay(EGL_DEFAULT_DISPLAY); - if (display == EGL_NO_DISPLAY) - LOG(FATAL) << "eglGetDisplay() failed"; - } - - EGLint major; - EGLint minor; - if (!eglInitialize(display, &major, &minor)) - LOG(FATAL) << "eglInitialize() failed"; - - // Choose configuration. - - const EGLint context_attribs[] = { - EGL_RED_SIZE, 8, - EGL_GREEN_SIZE, 8, - EGL_BLUE_SIZE, 8, - EGL_ALPHA_SIZE, 8, - EGL_DEPTH_SIZE, 24, - EGL_STENCIL_SIZE, 8, - EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT, - EGL_SURFACE_TYPE, EGL_PBUFFER_BIT, - EGL_NONE - }; - - EGLConfig config; - EGLint num_config; - if (!eglChooseConfig(display, context_attribs, &config, 1, &num_config)) - LOG(FATAL) << "eglChooseConfig() failed"; - - // Create GL context. 
- - if (!eglBindAPI(EGL_OPENGL_API)) - LOG(FATAL) << "eglBindAPI() failed"; - - EGLContext context = eglCreateContext(display, config, EGL_NO_CONTEXT, NULL); - if (context == EGL_NO_CONTEXT) - LOG(FATAL) << "eglCreateContext() failed"; - - // Done. - - LOG(INFO) << "EGL " << (int)minor << "." << (int)major << " OpenGL context created (disp: 0x" - << std::hex << std::setfill('0') - << std::setw(16) << (uintptr_t)display - << ", ctx: 0x" << std::setw(16) << (uintptr_t)context << ")"; - - GLContext glctx = {display, context, 0}; - return glctx; -} - -void destroyGLContext(GLContext& glctx) -{ - if (!glctx.context) - LOG(FATAL) << "destroyGLContext() called with null gltcx"; - - // If this is the current context, release it. - if (eglGetCurrentContext() == glctx.context) - releaseGLContext(); - - if (!eglDestroyContext(glctx.display, glctx.context)) - LOG(ERROR) << "eglDestroyContext() failed"; - - LOG(INFO) << "EGL OpenGL context destroyed (disp: 0x" - << std::hex << std::setfill('0') - << std::setw(16) << (uintptr_t)glctx.display - << ", ctx: 0x" << std::setw(16) << (uintptr_t)glctx.context << ")"; - - memset(&glctx, 0, sizeof(GLContext)); -} - -//------------------------------------------------------------------------ - -#endif // __linux__ - -//------------------------------------------------------------------------ diff --git a/spaces/h2oai/h2ogpt-chatbot/gradio_utils/grclient.py b/spaces/h2oai/h2ogpt-chatbot/gradio_utils/grclient.py deleted file mode 100644 index 8346a61cad99d492f8a10de17851454488364b83..0000000000000000000000000000000000000000 --- a/spaces/h2oai/h2ogpt-chatbot/gradio_utils/grclient.py +++ /dev/null @@ -1,82 +0,0 @@ -import traceback -from typing import Callable -import os - -from gradio_client.client import Job - -os.environ['HF_HUB_DISABLE_TELEMETRY'] = '1' - -from gradio_client import Client - - -class GradioClient(Client): - """ - Parent class of gradio client - To handle automatically refreshing client if detect gradio server changed - """ - - def __init__(self, *args, **kwargs): - self.args = args - self.kwargs = kwargs - super().__init__(*args, **kwargs) - self.server_hash = self.get_server_hash() - - def get_server_hash(self): - """ - Get server hash using super without any refresh action triggered - Returns: git hash of gradio server - """ - return super().submit(api_name='/system_hash').result() - - def refresh_client_if_should(self): - # get current hash in order to update api_name -> fn_index map in case gradio server changed - # FIXME: Could add cli api as hash - server_hash = self.get_server_hash() - if self.server_hash != server_hash: - self.refresh_client() - self.server_hash = server_hash - else: - self.reset_session() - - def refresh_client(self): - """ - Ensure every client call is independent - Also ensure map between api_name and fn_index is updated in case server changed (e.g. 
restarted with new code) - Returns: - """ - # need session hash to be new every time, to avoid "generator already executing" - self.reset_session() - - client = Client(*self.args, **self.kwargs) - for k, v in client.__dict__.items(): - setattr(self, k, v) - - def submit( - self, - *args, - api_name: str | None = None, - fn_index: int | None = None, - result_callbacks: Callable | list[Callable] | None = None, - ) -> Job: - # Note predict calls submit - try: - self.refresh_client_if_should() - job = super().submit(*args, api_name=api_name, fn_index=fn_index) - except Exception as e: - print("Hit e=%s" % str(e), flush=True) - # force reconfig in case only that - self.refresh_client() - job = super().submit(*args, api_name=api_name, fn_index=fn_index) - - # see if immediately failed - e = job.future._exception - if e is not None: - print("GR job failed: %s %s" % (str(e), ''.join(traceback.format_tb(e.__traceback__))), flush=True) - # force reconfig in case only that - self.refresh_client() - job = super().submit(*args, api_name=api_name, fn_index=fn_index) - e2 = job.future._exception - if e2 is not None: - print("GR job failed again: %s\n%s" % (str(e2), ''.join(traceback.format_tb(e2.__traceback__))), flush=True) - - return job diff --git a/spaces/haakohu/deep_privacy2_face/dp2/metrics/fid_clip.py b/spaces/haakohu/deep_privacy2_face/dp2/metrics/fid_clip.py deleted file mode 100644 index 43bde1bf74c69399308ed15ceda5aaeb59a69818..0000000000000000000000000000000000000000 --- a/spaces/haakohu/deep_privacy2_face/dp2/metrics/fid_clip.py +++ /dev/null @@ -1,84 +0,0 @@ -import pickle -import torch -import torchvision -from pathlib import Path -from dp2 import utils -import tops -try: - import clip -except ImportError: - print("Could not import clip.") -from torch_fidelity.metric_fid import fid_features_to_statistics, fid_statistics_to_metric -clip_model = None -clip_preprocess = None - - -@torch.no_grad() -def compute_fid_clip( - dataloader, generator, - cache_directory, - data_len=None, - **kwargs - ) -> dict: - """ - FID CLIP following the description in The Role of ImageNet Classes in Frechet Inception Distance, Thomas Kynkaamniemi et al. 
- Args: - n_samples (int): Creates N samples from same image to calculate stats - """ - global clip_model, clip_preprocess - if clip_model is None: - clip_model, preprocess = clip.load("ViT-B/32", device="cpu") - normalize_fn = preprocess.transforms[-1] - img_mean = normalize_fn.mean - img_std = normalize_fn.std - clip_model = tops.to_cuda(clip_model.visual) - clip_preprocess = tops.to_cuda(torch.nn.Sequential( - torchvision.transforms.Resize((224, 224), interpolation=torchvision.transforms.InterpolationMode.BICUBIC), - torchvision.transforms.Normalize(img_mean, img_std) - )) - cache_directory = Path(cache_directory) - if data_len is None: - data_len = len(dataloader)*dataloader.batch_size - fid_cache_path = cache_directory.joinpath("fid_stats_clip.pkl") - has_fid_cache = fid_cache_path.is_file() - if not has_fid_cache: - fid_features_real = torch.zeros(data_len, 512, dtype=torch.float32, device=tops.get_device()) - fid_features_fake = torch.zeros(data_len, 512, dtype=torch.float32, device=tops.get_device()) - eidx = 0 - n_samples_seen = 0 - for batch in utils.tqdm_(iter(dataloader), desc="Computing FID CLIP."): - sidx = eidx - eidx = sidx + batch["img"].shape[0] - n_samples_seen += batch["img"].shape[0] - with torch.cuda.amp.autocast(tops.AMP()): - fakes = generator(**batch)["img"] - real_data = batch["img"] - fakes = utils.denormalize_img(fakes) - real_data = utils.denormalize_img(real_data) - if not has_fid_cache: - real_data = clip_preprocess(real_data) - fid_features_real[sidx:eidx] = clip_model(real_data) - fakes = clip_preprocess(fakes) - fid_features_fake[sidx:eidx] = clip_model(fakes) - fid_features_fake = fid_features_fake[:n_samples_seen] - fid_features_fake = tops.all_gather_uneven(fid_features_fake).cpu() - if has_fid_cache: - if tops.rank() == 0: - with open(fid_cache_path, "rb") as fp: - fid_stat_real = pickle.load(fp) - else: - fid_features_real = fid_features_real[:n_samples_seen] - fid_features_real = tops.all_gather_uneven(fid_features_real).cpu() - assert fid_features_real.shape == fid_features_fake.shape - if tops.rank() == 0: - fid_stat_real = fid_features_to_statistics(fid_features_real) - cache_directory.mkdir(exist_ok=True, parents=True) - with open(fid_cache_path, "wb") as fp: - pickle.dump(fid_stat_real, fp) - - if tops.rank() == 0: - print("Starting calculation of fid from features of shape:", fid_features_fake.shape) - fid_stat_fake = fid_features_to_statistics(fid_features_fake) - fid_ = fid_statistics_to_metric(fid_stat_real, fid_stat_fake, verbose=False)["frechet_inception_distance"] - return dict(fid_clip=fid_) - return dict(fid_clip=-1) diff --git a/spaces/hahahafofo/vits-uma-genshin-honkai/commons.py b/spaces/hahahafofo/vits-uma-genshin-honkai/commons.py deleted file mode 100644 index 40fcc05364d4815971f5c6f9dbb8dcef8e3ec1e9..0000000000000000000000000000000000000000 --- a/spaces/hahahafofo/vits-uma-genshin-honkai/commons.py +++ /dev/null @@ -1,172 +0,0 @@ -import math -import torch -from torch.nn import functional as F -import torch.jit - - -def script_method(fn, _rcb=None): - return fn - - -def script(obj, optimize=True, _frames_up=0, _rcb=None): - return obj - - -torch.jit.script_method = script_method -torch.jit.script = script - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - dilation)/2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item 
for sublist in l for item in sublist] - return pad_shape - - -def intersperse(lst, item): - result = [item] * (len(lst) * 2 + 1) - result[1::2] = lst - return result - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += 0.5 * (torch.exp(2. * logs_p) + ((m_p - m_q)**2)) * torch.exp(-2. * logs_q) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d( - length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = ( - math.log(float(max_timescale) / float(min_timescale)) / - (num_timescales - 1)) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path 
= sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2,3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): - parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1. / norm_type) - return total_norm diff --git "a/spaces/hands012/gpt-academic/crazy_functions/\346\225\260\345\255\246\345\212\250\347\224\273\347\224\237\346\210\220manim.py" "b/spaces/hands012/gpt-academic/crazy_functions/\346\225\260\345\255\246\345\212\250\347\224\273\347\224\237\346\210\220manim.py" deleted file mode 100644 index 5851b9c67110ddcdb2ada0bb4d32e4c0154bb272..0000000000000000000000000000000000000000 --- "a/spaces/hands012/gpt-academic/crazy_functions/\346\225\260\345\255\246\345\212\250\347\224\273\347\224\237\346\210\220manim.py" +++ /dev/null @@ -1,187 +0,0 @@ -from toolbox import CatchException, update_ui, gen_time_str -from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive -from .crazy_utils import input_clipping - -def inspect_dependency(chatbot, history): - # 尝试导入依赖,如果缺少依赖,则给出安装建议 - try: - import manim - return True - except: - chatbot.append(["导入依赖失败", "使用该模块需要额外依赖,安装方法:```pip install manimgl```"]) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return False - -def eval_manim(code): - import subprocess, sys, os, shutil - - with open('gpt_log/MyAnimation.py', 'w', encoding='utf8') as f: - f.write(code) - - def get_class_name(class_string): - import re - # Use regex to extract the class name - class_name = re.search(r'class (\w+)\(', class_string).group(1) - return class_name - - class_name = get_class_name(code) - - try: - subprocess.check_output([sys.executable, '-c', f"from gpt_log.MyAnimation import {class_name}; {class_name}().render()"]) - shutil.move('media/videos/1080p60/{class_name}.mp4', f'gpt_log/{class_name}-{gen_time_str()}.mp4') - return f'gpt_log/{gen_time_str()}.mp4' - except subprocess.CalledProcessError as e: - output = e.output.decode() - print(f"Command returned non-zero exit status {e.returncode}: {output}.") - return f"Evaluating python script failed: {e.output}." - except: - print('generating mp4 failed') - return "Generating mp4 failed." - - -def get_code_block(reply): - import re - pattern = r"```([\s\S]*?)```" # regex pattern to match code blocks - matches = re.findall(pattern, reply) # find all code blocks in text - if len(matches) != 1: - raise RuntimeError("GPT is not generating proper code.") - return matches[0].strip('python') # code block - -@CatchException -def 动画生成(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - """ - txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径 - llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行 - plugin_kwargs 插件模型的参数,暂时没有用武之地 - chatbot 聊天显示框的句柄,用于显示给用户 - history 聊天历史,前情提要 - system_prompt 给gpt的静默提醒 - web_port 当前软件运行的端口号 - """ - # 清空历史,以免输入溢出 - history = [] - - # 基本信息:功能、贡献者 - chatbot.append([ - "函数插件功能?", - "生成数学动画, 此插件处于开发阶段, 建议暂时不要使用, 作者: binary-husky, 插件初始化中 ..." 
- ]) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - # 尝试导入依赖, 如果缺少依赖, 则给出安装建议 - dep_ok = yield from inspect_dependency(chatbot=chatbot, history=history) # 刷新界面 - if not dep_ok: return - - # 输入 - i_say = f'Generate a animation to show: ' + txt - demo = ["Here is some examples of manim", examples_of_manim()] - _, demo = input_clipping(inputs="", history=demo, max_token_limit=2560) - # 开始 - gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive( - inputs=i_say, inputs_show_user=i_say, - llm_kwargs=llm_kwargs, chatbot=chatbot, history=demo, - sys_prompt= - r"Write a animation script with 3blue1brown's manim. "+ - r"Please begin with `from manim import *`. " + - r"Answer me with a code block wrapped by ```." - ) - chatbot.append(["开始生成动画", "..."]) - history.extend([i_say, gpt_say]) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 界面更新 - - # 将代码转为动画 - code = get_code_block(gpt_say) - res = eval_manim(code) - - chatbot.append(("生成的视频文件路径", res)) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 界面更新 - -# 在这里放一些网上搜集的demo,辅助gpt生成代码 -def examples_of_manim(): - return r""" - - -``` - -class MovingGroupToDestination(Scene): - def construct(self): - group = VGroup(Dot(LEFT), Dot(ORIGIN), Dot(RIGHT, color=RED), Dot(2 * RIGHT)).scale(1.4) - dest = Dot([4, 3, 0], color=YELLOW) - self.add(group, dest) - self.play(group.animate.shift(dest.get_center() - group[2].get_center())) - self.wait(0.5) - -``` - - -``` - -class LatexWithMovingFramebox(Scene): - def construct(self): - text=MathTex( - "\\frac{d}{dx}f(x)g(x)=","f(x)\\frac{d}{dx}g(x)","+", - "g(x)\\frac{d}{dx}f(x)" - ) - self.play(Write(text)) - framebox1 = SurroundingRectangle(text[1], buff = .1) - framebox2 = SurroundingRectangle(text[3], buff = .1) - self.play( - Create(framebox1), - ) - self.wait() - self.play( - ReplacementTransform(framebox1,framebox2), - ) - self.wait() - -``` - - - -``` - -class PointWithTrace(Scene): - def construct(self): - path = VMobject() - dot = Dot() - path.set_points_as_corners([dot.get_center(), dot.get_center()]) - def update_path(path): - previous_path = path.copy() - previous_path.add_points_as_corners([dot.get_center()]) - path.become(previous_path) - path.add_updater(update_path) - self.add(path, dot) - self.play(Rotating(dot, radians=PI, about_point=RIGHT, run_time=2)) - self.wait() - self.play(dot.animate.shift(UP)) - self.play(dot.animate.shift(LEFT)) - self.wait() - -``` - -``` - -# do not use get_graph, this funciton is deprecated - -class ExampleFunctionGraph(Scene): - def construct(self): - cos_func = FunctionGraph( - lambda t: np.cos(t) + 0.5 * np.cos(7 * t) + (1 / 7) * np.cos(14 * t), - color=RED, - ) - - sin_func_1 = FunctionGraph( - lambda t: np.sin(t) + 0.5 * np.sin(7 * t) + (1 / 7) * np.sin(14 * t), - color=BLUE, - ) - - sin_func_2 = FunctionGraph( - lambda t: np.sin(t) + 0.5 * np.sin(7 * t) + (1 / 7) * np.sin(14 * t), - x_range=[-4, 4], - color=GREEN, - ).move_to([0, 1, 0]) - - self.add(cos_func, sin_func_1, sin_func_2) - -``` -""" \ No newline at end of file diff --git a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/dev/parse_results.sh b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/dev/parse_results.sh deleted file mode 100644 index 874b688889049e869854273c83182e5b019315b3..0000000000000000000000000000000000000000 --- 
a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/dev/parse_results.sh +++ /dev/null @@ -1,45 +0,0 @@ -#!/bin/bash -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved - -# A shell script that parses metrics from the log file. -# Make it easier for developers to track performance of models. - -LOG="$1" - -if [[ -z "$LOG" ]]; then - echo "Usage: $0 /path/to/log/file" - exit 1 -fi - -# [12/15 11:47:32] trainer INFO: Total training time: 12:15:04.446477 (0.4900 s / it) -# [12/15 11:49:03] inference INFO: Total inference time: 0:01:25.326167 (0.13652186737060548 s / demo per device, on 8 devices) -# [12/15 11:49:03] inference INFO: Total inference pure compute time: ..... - -# training time -trainspeed=$(grep -o 'Overall training.*' "$LOG" | grep -Eo '\(.*\)' | grep -o '[0-9\.]*') -echo "Training speed: $trainspeed s/it" - -# inference time: there could be multiple inference during training -inferencespeed=$(grep -o 'Total inference pure.*' "$LOG" | tail -n1 | grep -Eo '\(.*\)' | grep -o '[0-9\.]*' | head -n1) -echo "Inference speed: $inferencespeed s/it" - -# [12/15 11:47:18] trainer INFO: eta: 0:00:00 iter: 90000 loss: 0.5407 (0.7256) loss_classifier: 0.1744 (0.2446) loss_box_reg: 0.0838 (0.1160) loss_mask: 0.2159 (0.2722) loss_objectness: 0.0244 (0.0429) loss_rpn_box_reg: 0.0279 (0.0500) time: 0.4487 (0.4899) data: 0.0076 (0.0975) lr: 0.000200 max mem: 4161 -memory=$(grep -o 'max[_ ]mem: [0-9]*' "$LOG" | tail -n1 | grep -o '[0-9]*') -echo "Training memory: $memory MB" - -echo "Easy to copypaste:" -echo "$trainspeed","$inferencespeed","$memory" - -echo "------------------------------" - -# [12/26 17:26:32] engine.coco_evaluation: copypaste: Task: bbox -# [12/26 17:26:32] engine.coco_evaluation: copypaste: AP,AP50,AP75,APs,APm,APl -# [12/26 17:26:32] engine.coco_evaluation: copypaste: 0.0017,0.0024,0.0017,0.0005,0.0019,0.0011 -# [12/26 17:26:32] engine.coco_evaluation: copypaste: Task: segm -# [12/26 17:26:32] engine.coco_evaluation: copypaste: AP,AP50,AP75,APs,APm,APl -# [12/26 17:26:32] engine.coco_evaluation: copypaste: 0.0014,0.0021,0.0016,0.0005,0.0016,0.0011 - -echo "COCO Results:" -num_tasks=$(grep -o 'copypaste:.*Task.*' "$LOG" | sort -u | wc -l) -# each task has 3 lines -grep -o 'copypaste:.*' "$LOG" | cut -d ' ' -f 2- | tail -n $((num_tasks * 3)) diff --git a/spaces/hca97/Mosquito-Detection/my_models/torch_hub_cache/yolov5/utils/segment/augmentations.py b/spaces/hca97/Mosquito-Detection/my_models/torch_hub_cache/yolov5/utils/segment/augmentations.py deleted file mode 100644 index f8154b834869acd87f80c0152c870b7631a918ba..0000000000000000000000000000000000000000 --- a/spaces/hca97/Mosquito-Detection/my_models/torch_hub_cache/yolov5/utils/segment/augmentations.py +++ /dev/null @@ -1,104 +0,0 @@ -# YOLOv5 🚀 by Ultralytics, AGPL-3.0 license -""" -Image augmentation functions -""" - -import math -import random - -import cv2 -import numpy as np - -from ..augmentations import box_candidates -from ..general import resample_segments, segment2box - - -def mixup(im, labels, segments, im2, labels2, segments2): - # Applies MixUp augmentation https://arxiv.org/pdf/1710.09412.pdf - r = np.random.beta(32.0, 32.0) # mixup ratio, alpha=beta=32.0 - im = (im * r + im2 * (1 - r)).astype(np.uint8) - labels = np.concatenate((labels, labels2), 0) - segments = np.concatenate((segments, segments2), 0) - return im, labels, segments - - -def random_perspective(im, - targets=(), - segments=(), - degrees=10, - translate=.1, - scale=.1, - 
shear=10, - perspective=0.0, - border=(0, 0)): - # torchvision.transforms.RandomAffine(degrees=(-10, 10), translate=(.1, .1), scale=(.9, 1.1), shear=(-10, 10)) - # targets = [cls, xyxy] - - height = im.shape[0] + border[0] * 2 # shape(h,w,c) - width = im.shape[1] + border[1] * 2 - - # Center - C = np.eye(3) - C[0, 2] = -im.shape[1] / 2 # x translation (pixels) - C[1, 2] = -im.shape[0] / 2 # y translation (pixels) - - # Perspective - P = np.eye(3) - P[2, 0] = random.uniform(-perspective, perspective) # x perspective (about y) - P[2, 1] = random.uniform(-perspective, perspective) # y perspective (about x) - - # Rotation and Scale - R = np.eye(3) - a = random.uniform(-degrees, degrees) - # a += random.choice([-180, -90, 0, 90]) # add 90deg rotations to small rotations - s = random.uniform(1 - scale, 1 + scale) - # s = 2 ** random.uniform(-scale, scale) - R[:2] = cv2.getRotationMatrix2D(angle=a, center=(0, 0), scale=s) - - # Shear - S = np.eye(3) - S[0, 1] = math.tan(random.uniform(-shear, shear) * math.pi / 180) # x shear (deg) - S[1, 0] = math.tan(random.uniform(-shear, shear) * math.pi / 180) # y shear (deg) - - # Translation - T = np.eye(3) - T[0, 2] = (random.uniform(0.5 - translate, 0.5 + translate) * width) # x translation (pixels) - T[1, 2] = (random.uniform(0.5 - translate, 0.5 + translate) * height) # y translation (pixels) - - # Combined rotation matrix - M = T @ S @ R @ P @ C # order of operations (right to left) is IMPORTANT - if (border[0] != 0) or (border[1] != 0) or (M != np.eye(3)).any(): # image changed - if perspective: - im = cv2.warpPerspective(im, M, dsize=(width, height), borderValue=(114, 114, 114)) - else: # affine - im = cv2.warpAffine(im, M[:2], dsize=(width, height), borderValue=(114, 114, 114)) - - # Visualize - # import matplotlib.pyplot as plt - # ax = plt.subplots(1, 2, figsize=(12, 6))[1].ravel() - # ax[0].imshow(im[:, :, ::-1]) # base - # ax[1].imshow(im2[:, :, ::-1]) # warped - - # Transform label coordinates - n = len(targets) - new_segments = [] - if n: - new = np.zeros((n, 4)) - segments = resample_segments(segments) # upsample - for i, segment in enumerate(segments): - xy = np.ones((len(segment), 3)) - xy[:, :2] = segment - xy = xy @ M.T # transform - xy = (xy[:, :2] / xy[:, 2:3] if perspective else xy[:, :2]) # perspective rescale or affine - - # clip - new[i] = segment2box(xy, width, height) - new_segments.append(xy) - - # filter candidates - i = box_candidates(box1=targets[:, 1:5].T * s, box2=new.T, area_thr=0.01) - targets = targets[i] - targets[:, 1:5] = new[i] - new_segments = np.array(new_segments)[i] - - return im, targets, new_segments diff --git a/spaces/hdhzk/bingo/src/components/ui/icons.tsx b/spaces/hdhzk/bingo/src/components/ui/icons.tsx deleted file mode 100644 index 742b489b50437c5b64c86082f2ebc712eeb6a2b0..0000000000000000000000000000000000000000 --- a/spaces/hdhzk/bingo/src/components/ui/icons.tsx +++ /dev/null @@ -1,504 +0,0 @@ -'use client' - -import * as React from 'react' - -import { cn } from '@/lib/utils' - -function IconNextChat({ - className, - inverted, - ...props -}: React.ComponentProps<'svg'> & { inverted?: boolean }) { - const id = React.useId() - - return ( - - - - - - - - - - - - - - - - - - - - - - ) -} - -function IconOpenAI({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - OpenAI icon - - - ) -} - -function IconGitHub({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - GitHub - - - ) -} - -function IconSeparator({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - 
- ) -} - -function IconArrowDown({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconArrowRight({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconUser({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconPlus({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconArrowElbow({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconSpinner({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconMessage({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconTrash({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconMore({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconRefresh({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconStop({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconSidebar({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconMoon({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconSun({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconCopy({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconCheck({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconDownload({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconClose({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconEdit({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconShare({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconUsers({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconExternalLink({ - className, - ...props -}: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconChevronUpDown({ - className, - ...props -}: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -export { - IconEdit, - IconNextChat, - IconOpenAI, - IconGitHub, - IconSeparator, - IconArrowDown, - IconArrowRight, - IconUser, - IconPlus, - IconArrowElbow, - IconSpinner, - IconMessage, - IconTrash, - IconMore, - IconRefresh, - IconStop, - IconSidebar, - IconMoon, - IconSun, - IconCopy, - IconCheck, - IconDownload, - IconClose, - IconShare, - IconUsers, - IconExternalLink, - IconChevronUpDown -} diff --git a/spaces/heath1989/prompt-r-gen-sd/scripts/README.md b/spaces/heath1989/prompt-r-gen-sd/scripts/README.md deleted file mode 100644 index dd81e1c8f3e1c739a57ec2b8b8e5e94210575a06..0000000000000000000000000000000000000000 --- a/spaces/heath1989/prompt-r-gen-sd/scripts/README.md +++ /dev/null @@ -1,6 +0,0 @@ ---- -title: prompt-rp -app_file: prompt_rg.py -sdk: gradio -sdk_version: 3.40.1 ---- diff --git a/spaces/hebert2099/MusicGen/tests/utils/__init__.py b/spaces/hebert2099/MusicGen/tests/utils/__init__.py deleted file mode 100644 index 0952fcc3f57e34b3747962e9ebd6fc57aeea63fa..0000000000000000000000000000000000000000 --- a/spaces/hebert2099/MusicGen/tests/utils/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -# Copyright (c) Meta Platforms, 
Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. diff --git a/spaces/hekbobo/bingo/src/pages/api/proxy.ts b/spaces/hekbobo/bingo/src/pages/api/proxy.ts deleted file mode 100644 index 240b5fb5561d993c6381649bf4544ce12f3cdab2..0000000000000000000000000000000000000000 --- a/spaces/hekbobo/bingo/src/pages/api/proxy.ts +++ /dev/null @@ -1,24 +0,0 @@ -'use server' - -import { NextApiRequest, NextApiResponse } from 'next' -import { fetch } from '@/lib/isomorphic' - -export default async function handler(req: NextApiRequest, res: NextApiResponse) { - try { - const { url, headers, method = 'GET', body } = req.body - if (!url) { - return res.end('ok') - } - const response = await fetch(url, { headers, method, body, redirect: 'manual' }) - const text = await response.text() - res.writeHead(200, { - 'Content-Type': 'application/text', - 'x-url': response.url, - 'x-status': response.status, - }) - res.end(text) - } catch (e) { - console.log(e) - return res.end(e) - } -} diff --git a/spaces/higantest/openai-reverse-proxy/README.md b/spaces/higantest/openai-reverse-proxy/README.md deleted file mode 100644 index 57900858007dd192f8b9f651b020888bd12ecb6b..0000000000000000000000000000000000000000 --- a/spaces/higantest/openai-reverse-proxy/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Openai Reverse Proxy -emoji: 💻 -colorFrom: gray -colorTo: green -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ho11laqe/nnUNet_calvingfront_detection/documentation/common_problems_and_solutions.md b/spaces/ho11laqe/nnUNet_calvingfront_detection/documentation/common_problems_and_solutions.md deleted file mode 100644 index 442d92ce179859461330fe63e6a9d734667cc0fa..0000000000000000000000000000000000000000 --- a/spaces/ho11laqe/nnUNet_calvingfront_detection/documentation/common_problems_and_solutions.md +++ /dev/null @@ -1,104 +0,0 @@ -# Common Issues and their Solutions - -## RuntimeError: Expected scalar type half but found float - -This can happen when running inference (or training) with mixed precision enabled on older GPU hardware. It points -to some operation not being implemented in half precision for the type of GPU you are using. There are flags to enforce - the use of fp32 for both nnUNet_predict and nnUNet_train. If you run into this error, using these flags will probably - solve it. See `nnUNet_predict -h` and `nnUNet_train -h` for what the flags are. - -## nnU-Net gets 'stuck' during preprocessing, training or inference -nnU-Net uses python multiprocessing to leverage multiple CPU cores during preprocessing, background workers for data -augmentation in training, preprocessing of cases during inference as well as resampling and exporting the final -predictions during validation and inference. Unfortunately, python (or maybe it is just me as a programmer) is not -very good at communicating errors that happen in background workers, causing the main process to indefinitely wait for -them to return indefinitely. - -Whenever nnU-Net appears to be stuck, this is what you should do: - -1) There is almost always an error message which will give you an indication of what the problem is. This error message -is often not at the bottom of the text output, but further up. 
If you run nnU-Net on a GPU cluster (like we do), the
-error message may be WAYYYY off in the log file, sometimes at the very start of the training/inference. Locate the
-error message (if necessary, copy the stdout to a text editor and search for 'error').
-
-2) If there is no error message, this could mean that your OS silently killed a background worker because it was about
-to go out of memory. In this case, please rerun whatever command you have been running and closely monitor your system
-RAM (not GPU memory!) usage. If your RAM is full or close to full, you need to take action:
-   - reduce the number of background workers: use `-tl` and `-tf` in `nnUNet_plan_and_preprocess` (you may have to
-   go as low as 1!). Reduce the number of workers used by `nnUNet_predict` by reducing `--num_threads_preprocessing` and
-   `--num_threads_nifti_save`.
-   - If even `-tf 1` during preprocessing is not low enough, consider adding a swap partition located on an SSD.
-   - upgrade your RAM! (32 GB should get the job done)
-
-
-## nnU-Net training: RuntimeError: CUDA out of memory
-
-This section deals with error messages such as this:
-
-```
-RuntimeError: CUDA out of memory. Tried to allocate 4.16 GiB (GPU 0; 10.76 GiB total capacity; 2.82 GiB already allocated; 4.18 GiB free; 4.33 GiB reserved in total by PyTorch)
-```
-
-This message appears when the GPU memory is insufficient. For most datasets, nnU-Net uses about 8GB of video memory.
-To ensure that you can run all trainings, we recommend using a GPU with at least 11GB (this will have some headroom).
-If you are running other programs on the GPU you intend to train on (for example the GUI of your operating system),
-the amount of VRAM available to nnU-Net is less than whatever is on your GPU. Please close all unnecessary programs or
-invest in a second GPU. We for example like to use a low-cost GPU (GTX 1050 or slower) for the display outputs while
-having the 2080ti (or equivalent) handle the training.
-
-At the start of each training, cuDNN will run some benchmarks in order to figure out the fastest convolution algorithm
-for the current network architecture (we use `torch.backends.cudnn.benchmark=True`). VRAM consumption will jump all over
-the place while these benchmarks run and can briefly exceed the 8GB nnU-Net typically requires. If you keep running into
-`RuntimeError: CUDA out of memory` problems, you may want to consider disabling that. You can do so by setting the
-`--deterministic` flag when using `nnUNet_train`. Setting this flag can slow down your training, so it is recommended
-to only use it if necessary.
-
-## nnU-Net training in Docker container: RuntimeError: unable to write to file
-
-Nvidia NGC (https://ngc.nvidia.com/catalog/containers/nvidia:pytorch) is a great place to find Docker containers with
-the most recent software (pytorch, cuDNN, etc.) in them. When starting Docker containers with the command provided on the
-Nvidia website, the container will crash with errors like this when running nnU-Net: `RuntimeError: unable to write to
-file `. Please start the container with the `--ipc=host` flag to solve this.
-
-## Downloading pretrained models: unzip: cannot find zipfile directory in one of /home/isensee/.nnunetdownload_16031094034174126
-
-Sometimes downloading the large zip files containing our pretrained models can fail and cause the error above. Please
-make sure to use the most recent nnU-Net version (we constantly try to improve the downloading). If that does not fix it,
-you can always download the zip file from our zenodo (https://zenodo.org/record/4003545) and use the
-`nnUNet_install_pretrained_model_from_zip` command to install the model.
-
-## Downloading pre-trained models: `unzip: 'unzip' is not recognized as an internal or external command` OR `Command 'unzip' not found`
-
-On Windows systems and on a bare WSL2 system, the `unzip` command may not be present.
-Either install it, unzip the pre-trained model from the zenodo download, or update to a newer version of nnU-Net that uses
-the Python built-in zipfile module (https://docs.python.org/3/library/zipfile.html).
-
-## nnU-Net training (2D U-Net): High (and increasing) system RAM usage, OOM
-
-There was an issue with mixed precision causing a system RAM memory leak. This is fixed when using cuDNN 8.0.2 or newer,
-but the current pytorch master comes with cuDNN 7.6.5. If you encounter this problem, please consider using Nvidia's NGC
-pytorch container for training (the pytorch it comes with has a recent cuDNN version). You can also install the new
-cuDNN version on your system and compile pytorch yourself (instructions on the pytorch website!). This is what we do at DKFZ.
-
-
-## nnU-Net training of cascade: Error `seg from prev stage missing`
-You need to run all five folds of `3d_lowres`. Segmentations of the previous stage can only be generated from the
-validation set, otherwise we would overfit.
-
-## nnU-Net training: `RuntimeError: CUDA error: device-side assert triggered`
-This error often goes along with something like `void THCudaTensor_scatterFillKernel(TensorInfo,
-TensorInfo, Real, int, IndexType) [with IndexType = unsigned int, Real = float, Dims = -1]:
-block: [4770,0,0], thread: [374,0,0] Assertion indexValue >= 0 && indexValue < tensor.sizes[dim] failed.`.
-
-This means that your dataset contains unexpected values in the segmentations. nnU-Net expects all labels to be
-consecutive integers. So if your dataset has 4 classes (background and three foreground labels), then the labels
-must be 0, 1, 2, 3 (where 0 must be background!). There cannot be any other values in the ground truth segmentations.
-
-If you run `nnUNet_plan_and_preprocess` with the `--verify_dataset_integrity` option, this should never happen because
-it will check for wrong values in the label images.
-
-## nnU-Net training: Error: mmap length is greater than file size and EOFError
-Please delete all .npy files in the nnUNet_preprocessed folder of the task you were trying to train. Then try again.
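-
-For the label requirement described in the device-side assert section above, a quick sanity check is to print the
-unique values of every ground truth segmentation before preprocessing. The snippet below is only a minimal sketch and
-not part of nnU-Net: it assumes `nibabel` is installed, the `Task101_Example` path is a placeholder for your own task,
-and the maximum allowed label (3, i.e. a 4-class problem) should be replaced by your own highest label.
-
-```
-import glob
-
-import nibabel as nib
-import numpy as np
-
-# Placeholder path: point this at the labelsTr folder of your own task.
-label_files = sorted(glob.glob("nnUNet_raw_data/Task101_Example/labelsTr/*.nii.gz"))
-
-for f in label_files:
-    values = np.unique(nib.load(f).get_fdata())
-    # Allowed values for a 4-class problem are 0, 1, 2 and 3 (0 = background).
-    suspicious = [v for v in values if v < 0 or v != int(v) or v > 3]
-    if suspicious:
-        print(f, "contains unexpected label values:", suspicious)
-```
-
-If this prints anything, fix the offending segmentations (or your label mapping) before rerunning
-`nnUNet_plan_and_preprocess`.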
- -## running nnU-Net on Azure instances -see https://github.com/MIC-DKFZ/nnUNet/issues/437, thank you @Alaska47 \ No newline at end of file diff --git a/spaces/huggan/anime-face-generator/app.py b/spaces/huggan/anime-face-generator/app.py deleted file mode 100644 index e7bd4b68ffb1cb6cb68a11191cde0e2853328adc..0000000000000000000000000000000000000000 --- a/spaces/huggan/anime-face-generator/app.py +++ /dev/null @@ -1,27 +0,0 @@ -import gradio as gr -import matplotlib.pyplot as plt -import tensorflow as tf - -from huggingface_hub import from_pretrained_keras -seed = gr.inputs.Slider(step = 1) -number_of_examples = gr.inputs.Slider(minimum = 1, maximum = 4, step = 1, label = "Number of Examples to Generate") -image = gr.outputs.Image(type = "plot") - -model = from_pretrained_keras("merve/anime-faces-generator") -def generate_and_save_images(number_of_examples): - - seed = tf.random.normal([number_of_examples, 100]) - predictions = model(seed, training=False) - - fig = plt.figure(figsize=(80, 80)) - - for i in range(predictions.shape[0]): - plt.subplot(2, 2, i+1) - plt.imshow(predictions[i, :, :, :]) - plt.axis('off') - return fig - - -description = "Anime face generator made with DCGAN" -gr.Interface(generate_and_save_images, inputs = [number_of_examples], outputs = image, -title = "Anime Face Generator", description = description).launch() \ No newline at end of file diff --git a/spaces/huggan/sefa/models/stylegan_generator.py b/spaces/huggan/sefa/models/stylegan_generator.py deleted file mode 100644 index 650f074214472adec9a25312208a91d1db665647..0000000000000000000000000000000000000000 --- a/spaces/huggan/sefa/models/stylegan_generator.py +++ /dev/null @@ -1,916 +0,0 @@ -# python3.7 -"""Contains the implementation of generator described in StyleGAN. - -Paper: https://arxiv.org/pdf/1812.04948.pdf - -Official TensorFlow implementation: https://github.com/NVlabs/stylegan -""" -import os - -import numpy as np - -import torch -import torch.nn as nn -import torch.nn.functional as F - -from .sync_op import all_gather - -from huggingface_hub import PyTorchModelHubMixin, PYTORCH_WEIGHTS_NAME, hf_hub_download - -__all__ = ['StyleGANGenerator'] - -# Resolutions allowed. -_RESOLUTIONS_ALLOWED = [8, 16, 32, 64, 128, 256, 512, 1024] - -# Initial resolution. -_INIT_RES = 4 - -# Fused-scale options allowed. -_FUSED_SCALE_ALLOWED = [True, False, 'auto'] - -# Minimal resolution for `auto` fused-scale strategy. -_AUTO_FUSED_SCALE_MIN_RES = 128 - -# Default gain factor for weight scaling. -_WSCALE_GAIN = np.sqrt(2.0) -_STYLEMOD_WSCALE_GAIN = 1.0 - - -class StyleGANGenerator(nn.Module, PyTorchModelHubMixin): - """Defines the generator network in StyleGAN. - - NOTE: The synthesized images are with `RGB` channel order and pixel range - [-1, 1]. - - Settings for the mapping network: - - (1) z_space_dim: Dimension of the input latent space, Z. (default: 512) - (2) w_space_dim: Dimension of the outout latent space, W. (default: 512) - (3) label_size: Size of the additional label for conditional generation. - (default: 0) - (4)mapping_layers: Number of layers of the mapping network. (default: 8) - (5) mapping_fmaps: Number of hidden channels of the mapping network. - (default: 512) - (6) mapping_lr_mul: Learning rate multiplier for the mapping network. - (default: 0.01) - (7) repeat_w: Repeat w-code for different layers. - - Settings for the synthesis network: - - (1) resolution: The resolution of the output image. - (2) image_channels: Number of channels of the output image. 
(default: 3) - (3) final_tanh: Whether to use `tanh` to control the final pixel range. - (default: False) - (4) const_input: Whether to use a constant in the first convolutional layer. - (default: True) - (5) fused_scale: Whether to fused `upsample` and `conv2d` together, - resulting in `conv2d_transpose`. (default: `auto`) - (6) use_wscale: Whether to use weight scaling. (default: True) - (7) fmaps_base: Factor to control number of feature maps for each layer. - (default: 16 << 10) - (8) fmaps_max: Maximum number of feature maps in each layer. (default: 512) - """ - - def __init__(self, - resolution, - z_space_dim=512, - w_space_dim=512, - label_size=0, - mapping_layers=8, - mapping_fmaps=512, - mapping_lr_mul=0.01, - repeat_w=True, - image_channels=3, - final_tanh=False, - const_input=True, - fused_scale='auto', - use_wscale=True, - fmaps_base=16 << 10, - fmaps_max=512, - **kwargs): - """Initializes with basic settings. - - Raises: - ValueError: If the `resolution` is not supported, or `fused_scale` - is not supported. - """ - super().__init__() - - if resolution not in _RESOLUTIONS_ALLOWED: - raise ValueError(f'Invalid resolution: `{resolution}`!\n' - f'Resolutions allowed: {_RESOLUTIONS_ALLOWED}.') - if fused_scale not in _FUSED_SCALE_ALLOWED: - raise ValueError(f'Invalid fused-scale option: `{fused_scale}`!\n' - f'Options allowed: {_FUSED_SCALE_ALLOWED}.') - - self.init_res = _INIT_RES - self.resolution = resolution - self.z_space_dim = z_space_dim - self.w_space_dim = w_space_dim - self.label_size = label_size - self.mapping_layers = mapping_layers - self.mapping_fmaps = mapping_fmaps - self.mapping_lr_mul = mapping_lr_mul - self.repeat_w = repeat_w - self.image_channels = image_channels - self.final_tanh = final_tanh - self.const_input = const_input - self.fused_scale = fused_scale - self.use_wscale = use_wscale - self.fmaps_base = fmaps_base - self.fmaps_max = fmaps_max - - self.config = kwargs.pop("config", None) - - - self.num_layers = int(np.log2(self.resolution // self.init_res * 2)) * 2 - - if self.repeat_w: - self.mapping_space_dim = self.w_space_dim - else: - self.mapping_space_dim = self.w_space_dim * self.num_layers - self.mapping = MappingModule(input_space_dim=self.z_space_dim, - hidden_space_dim=self.mapping_fmaps, - final_space_dim=self.mapping_space_dim, - label_size=self.label_size, - num_layers=self.mapping_layers, - use_wscale=self.use_wscale, - lr_mul=self.mapping_lr_mul) - - self.truncation = TruncationModule(w_space_dim=self.w_space_dim, - num_layers=self.num_layers, - repeat_w=self.repeat_w) - - self.synthesis = SynthesisModule(resolution=self.resolution, - init_resolution=self.init_res, - w_space_dim=self.w_space_dim, - image_channels=self.image_channels, - final_tanh=self.final_tanh, - const_input=self.const_input, - fused_scale=self.fused_scale, - use_wscale=self.use_wscale, - fmaps_base=self.fmaps_base, - fmaps_max=self.fmaps_max) - - self.pth_to_tf_var_mapping = {} - for key, val in self.mapping.pth_to_tf_var_mapping.items(): - self.pth_to_tf_var_mapping[f'mapping.{key}'] = val - for key, val in self.truncation.pth_to_tf_var_mapping.items(): - self.pth_to_tf_var_mapping[f'truncation.{key}'] = val - for key, val in self.synthesis.pth_to_tf_var_mapping.items(): - self.pth_to_tf_var_mapping[f'synthesis.{key}'] = val - - def forward(self, - z, - label=None, - lod=None, - w_moving_decay=0.995, - style_mixing_prob=0.9, - trunc_psi=None, - trunc_layers=None, - randomize_noise=False, - **_unused_kwargs): - mapping_results = self.mapping(z, label) - w = 
mapping_results['w'] - - if self.training and w_moving_decay < 1: - batch_w_avg = all_gather(w).mean(dim=0) - self.truncation.w_avg.copy_( - self.truncation.w_avg * w_moving_decay + - batch_w_avg * (1 - w_moving_decay)) - - if self.training and style_mixing_prob > 0: - new_z = torch.randn_like(z) - new_w = self.mapping(new_z, label)['w'] - lod = self.synthesis.lod.cpu().tolist() if lod is None else lod - current_layers = self.num_layers - int(lod) * 2 - if np.random.uniform() < style_mixing_prob: - mixing_cutoff = np.random.randint(1, current_layers) - w = self.truncation(w) - new_w = self.truncation(new_w) - w[:, mixing_cutoff:] = new_w[:, mixing_cutoff:] - - wp = self.truncation(w, trunc_psi, trunc_layers) - synthesis_results = self.synthesis(wp, lod, randomize_noise) - - return {**mapping_results, **synthesis_results} - - @classmethod - def _from_pretrained( - cls, - model_id, - revision, - cache_dir, - force_download, - proxies, - resume_download, - local_files_only, - use_auth_token, - map_location="cpu", - strict=False, - **model_kwargs, - ): - """ - Overwrite this method in case you wish to initialize your model in a - different way. - """ - map_location = torch.device(map_location) - - if os.path.isdir(model_id): - print("Loading weights from local directory") - model_file = os.path.join(model_id, PYTORCH_WEIGHTS_NAME) - else: - model_file = hf_hub_download( - repo_id=model_id, - filename=PYTORCH_WEIGHTS_NAME, - revision=revision, - cache_dir=cache_dir, - force_download=force_download, - proxies=proxies, - resume_download=resume_download, - use_auth_token=use_auth_token, - local_files_only=local_files_only, - ) - - pretrained = torch.load(model_file, map_location=map_location) - return pretrained - - -class MappingModule(nn.Module): - """Implements the latent space mapping module. - - Basically, this module executes several dense layers in sequence. 
- """ - - def __init__(self, - input_space_dim=512, - hidden_space_dim=512, - final_space_dim=512, - label_size=0, - num_layers=8, - normalize_input=True, - use_wscale=True, - lr_mul=0.01): - super().__init__() - - self.input_space_dim = input_space_dim - self.hidden_space_dim = hidden_space_dim - self.final_space_dim = final_space_dim - self.label_size = label_size - self.num_layers = num_layers - self.normalize_input = normalize_input - self.use_wscale = use_wscale - self.lr_mul = lr_mul - - self.norm = PixelNormLayer() if self.normalize_input else nn.Identity() - - self.pth_to_tf_var_mapping = {} - for i in range(num_layers): - dim_mul = 2 if label_size else 1 - in_channels = (input_space_dim * dim_mul if i == 0 else - hidden_space_dim) - out_channels = (final_space_dim if i == (num_layers - 1) else - hidden_space_dim) - self.add_module(f'dense{i}', - DenseBlock(in_channels=in_channels, - out_channels=out_channels, - use_wscale=self.use_wscale, - lr_mul=self.lr_mul)) - self.pth_to_tf_var_mapping[f'dense{i}.weight'] = f'Dense{i}/weight' - self.pth_to_tf_var_mapping[f'dense{i}.bias'] = f'Dense{i}/bias' - if label_size: - self.label_weight = nn.Parameter( - torch.randn(label_size, input_space_dim)) - self.pth_to_tf_var_mapping[f'label_weight'] = f'LabelConcat/weight' - - def forward(self, z, label=None): - if z.ndim != 2 or z.shape[1] != self.input_space_dim: - raise ValueError(f'Input latent code should be with shape ' - f'[batch_size, input_dim], where ' - f'`input_dim` equals to {self.input_space_dim}!\n' - f'But `{z.shape}` is received!') - if self.label_size: - if label is None: - raise ValueError(f'Model requires an additional label ' - f'(with size {self.label_size}) as input, ' - f'but no label is received!') - if label.ndim != 2 or label.shape != (z.shape[0], self.label_size): - raise ValueError(f'Input label should be with shape ' - f'[batch_size, label_size], where ' - f'`batch_size` equals to that of ' - f'latent codes ({z.shape[0]}) and ' - f'`label_size` equals to {self.label_size}!\n' - f'But `{label.shape}` is received!') - embedding = torch.matmul(label, self.label_weight) - z = torch.cat((z, embedding), dim=1) - - z = self.norm(z) - w = z - for i in range(self.num_layers): - w = self.__getattr__(f'dense{i}')(w) - results = { - 'z': z, - 'label': label, - 'w': w, - } - if self.label_size: - results['embedding'] = embedding - return results - - -class TruncationModule(nn.Module): - """Implements the truncation module. - - Truncation is executed as follows: - - For layers in range [0, truncation_layers), the truncated w-code is computed - as - - w_new = w_avg + (w - w_avg) * truncation_psi - - To disable truncation, please set - (1) truncation_psi = 1.0 (None) OR - (2) truncation_layers = 0 (None) - - NOTE: The returned tensor is layer-wise style codes. 
- """ - - def __init__(self, w_space_dim, num_layers, repeat_w=True): - super().__init__() - - self.num_layers = num_layers - self.w_space_dim = w_space_dim - self.repeat_w = repeat_w - - if self.repeat_w: - self.register_buffer('w_avg', torch.zeros(w_space_dim)) - else: - self.register_buffer('w_avg', torch.zeros(num_layers * w_space_dim)) - self.pth_to_tf_var_mapping = {'w_avg': 'dlatent_avg'} - - def forward(self, w, trunc_psi=None, trunc_layers=None): - if w.ndim == 2: - if self.repeat_w and w.shape[1] == self.w_space_dim: - w = w.view(-1, 1, self.w_space_dim) - wp = w.repeat(1, self.num_layers, 1) - else: - assert w.shape[1] == self.w_space_dim * self.num_layers - wp = w.view(-1, self.num_layers, self.w_space_dim) - else: - wp = w - assert wp.ndim == 3 - assert wp.shape[1:] == (self.num_layers, self.w_space_dim) - - trunc_psi = 1.0 if trunc_psi is None else trunc_psi - trunc_layers = 0 if trunc_layers is None else trunc_layers - if trunc_psi < 1.0 and trunc_layers > 0: - layer_idx = np.arange(self.num_layers).reshape(1, -1, 1) - coefs = np.ones_like(layer_idx, dtype=np.float32) - coefs[layer_idx < trunc_layers] *= trunc_psi - coefs = torch.from_numpy(coefs).to(wp) - w_avg = self.w_avg.view(1, -1, self.w_space_dim) - wp = w_avg + (wp - w_avg) * coefs - return wp - - -class SynthesisModule(nn.Module): - """Implements the image synthesis module. - - Basically, this module executes several convolutional layers in sequence. - """ - - def __init__(self, - resolution=1024, - init_resolution=4, - w_space_dim=512, - image_channels=3, - final_tanh=False, - const_input=True, - fused_scale='auto', - use_wscale=True, - fmaps_base=16 << 10, - fmaps_max=512): - super().__init__() - - self.init_res = init_resolution - self.init_res_log2 = int(np.log2(self.init_res)) - self.resolution = resolution - self.final_res_log2 = int(np.log2(self.resolution)) - self.w_space_dim = w_space_dim - self.image_channels = image_channels - self.final_tanh = final_tanh - self.const_input = const_input - self.fused_scale = fused_scale - self.use_wscale = use_wscale - self.fmaps_base = fmaps_base - self.fmaps_max = fmaps_max - - self.num_layers = (self.final_res_log2 - self.init_res_log2 + 1) * 2 - - # Level of detail (used for progressive training). - self.register_buffer('lod', torch.zeros(())) - self.pth_to_tf_var_mapping = {'lod': 'lod'} - - for res_log2 in range(self.init_res_log2, self.final_res_log2 + 1): - res = 2 ** res_log2 - block_idx = res_log2 - self.init_res_log2 - - # First convolution layer for each resolution. 
- layer_name = f'layer{2 * block_idx}' - if res == self.init_res: - if self.const_input: - self.add_module(layer_name, - ConvBlock(in_channels=self.get_nf(res), - out_channels=self.get_nf(res), - resolution=self.init_res, - w_space_dim=self.w_space_dim, - position='const_init', - use_wscale=self.use_wscale)) - tf_layer_name = 'Const' - self.pth_to_tf_var_mapping[f'{layer_name}.const'] = ( - f'{res}x{res}/{tf_layer_name}/const') - else: - self.add_module(layer_name, - ConvBlock(in_channels=self.w_space_dim, - out_channels=self.get_nf(res), - resolution=self.init_res, - w_space_dim=self.w_space_dim, - kernel_size=self.init_res, - padding=self.init_res - 1, - use_wscale=self.use_wscale)) - tf_layer_name = 'Dense' - self.pth_to_tf_var_mapping[f'{layer_name}.weight'] = ( - f'{res}x{res}/{tf_layer_name}/weight') - else: - if self.fused_scale == 'auto': - fused_scale = (res >= _AUTO_FUSED_SCALE_MIN_RES) - else: - fused_scale = self.fused_scale - self.add_module(layer_name, - ConvBlock(in_channels=self.get_nf(res // 2), - out_channels=self.get_nf(res), - resolution=res, - w_space_dim=self.w_space_dim, - upsample=True, - fused_scale=fused_scale, - use_wscale=self.use_wscale)) - tf_layer_name = 'Conv0_up' - self.pth_to_tf_var_mapping[f'{layer_name}.weight'] = ( - f'{res}x{res}/{tf_layer_name}/weight') - self.pth_to_tf_var_mapping[f'{layer_name}.bias'] = ( - f'{res}x{res}/{tf_layer_name}/bias') - self.pth_to_tf_var_mapping[f'{layer_name}.style.weight'] = ( - f'{res}x{res}/{tf_layer_name}/StyleMod/weight') - self.pth_to_tf_var_mapping[f'{layer_name}.style.bias'] = ( - f'{res}x{res}/{tf_layer_name}/StyleMod/bias') - self.pth_to_tf_var_mapping[f'{layer_name}.apply_noise.weight'] = ( - f'{res}x{res}/{tf_layer_name}/Noise/weight') - self.pth_to_tf_var_mapping[f'{layer_name}.apply_noise.noise'] = ( - f'noise{2 * block_idx}') - - # Second convolution layer for each resolution. - layer_name = f'layer{2 * block_idx + 1}' - self.add_module(layer_name, - ConvBlock(in_channels=self.get_nf(res), - out_channels=self.get_nf(res), - resolution=res, - w_space_dim=self.w_space_dim, - use_wscale=self.use_wscale)) - tf_layer_name = 'Conv' if res == self.init_res else 'Conv1' - self.pth_to_tf_var_mapping[f'{layer_name}.weight'] = ( - f'{res}x{res}/{tf_layer_name}/weight') - self.pth_to_tf_var_mapping[f'{layer_name}.bias'] = ( - f'{res}x{res}/{tf_layer_name}/bias') - self.pth_to_tf_var_mapping[f'{layer_name}.style.weight'] = ( - f'{res}x{res}/{tf_layer_name}/StyleMod/weight') - self.pth_to_tf_var_mapping[f'{layer_name}.style.bias'] = ( - f'{res}x{res}/{tf_layer_name}/StyleMod/bias') - self.pth_to_tf_var_mapping[f'{layer_name}.apply_noise.weight'] = ( - f'{res}x{res}/{tf_layer_name}/Noise/weight') - self.pth_to_tf_var_mapping[f'{layer_name}.apply_noise.noise'] = ( - f'noise{2 * block_idx + 1}') - - # Output convolution layer for each resolution. 
- self.add_module(f'output{block_idx}', - ConvBlock(in_channels=self.get_nf(res), - out_channels=self.image_channels, - resolution=res, - w_space_dim=self.w_space_dim, - position='last', - kernel_size=1, - padding=0, - use_wscale=self.use_wscale, - wscale_gain=1.0, - activation_type='linear')) - self.pth_to_tf_var_mapping[f'output{block_idx}.weight'] = ( - f'ToRGB_lod{self.final_res_log2 - res_log2}/weight') - self.pth_to_tf_var_mapping[f'output{block_idx}.bias'] = ( - f'ToRGB_lod{self.final_res_log2 - res_log2}/bias') - - self.upsample = UpsamplingLayer() - self.final_activate = nn.Tanh() if final_tanh else nn.Identity() - - def get_nf(self, res): - """Gets number of feature maps according to current resolution.""" - return min(self.fmaps_base // res, self.fmaps_max) - - def forward(self, wp, lod=None, randomize_noise=False): - if wp.ndim != 3 or wp.shape[1:] != (self.num_layers, self.w_space_dim): - raise ValueError(f'Input tensor should be with shape ' - f'[batch_size, num_layers, w_space_dim], where ' - f'`num_layers` equals to {self.num_layers}, and ' - f'`w_space_dim` equals to {self.w_space_dim}!\n' - f'But `{wp.shape}` is received!') - - lod = self.lod.cpu().tolist() if lod is None else lod - if lod + self.init_res_log2 > self.final_res_log2: - raise ValueError(f'Maximum level-of-detail (lod) is ' - f'{self.final_res_log2 - self.init_res_log2}, ' - f'but `{lod}` is received!') - - results = {'wp': wp} - for res_log2 in range(self.init_res_log2, self.final_res_log2 + 1): - current_lod = self.final_res_log2 - res_log2 - if lod < current_lod + 1: - block_idx = res_log2 - self.init_res_log2 - if block_idx == 0: - if self.const_input: - x, style = self.layer0(None, wp[:, 0], randomize_noise) - else: - x = wp[:, 0].view(-1, self.w_space_dim, 1, 1) - x, style = self.layer0(x, wp[:, 0], randomize_noise) - else: - x, style = self.__getattr__(f'layer{2 * block_idx}')( - x, wp[:, 2 * block_idx]) - results[f'style{2 * block_idx:02d}'] = style - x, style = self.__getattr__(f'layer{2 * block_idx + 1}')( - x, wp[:, 2 * block_idx + 1]) - results[f'style{2 * block_idx + 1:02d}'] = style - if current_lod - 1 < lod <= current_lod: - image = self.__getattr__(f'output{block_idx}')(x, None) - elif current_lod < lod < current_lod + 1: - alpha = np.ceil(lod) - lod - image = (self.__getattr__(f'output{block_idx}')(x, None) * alpha - + self.upsample(image) * (1 - alpha)) - elif lod >= current_lod + 1: - image = self.upsample(image) - results['image'] = self.final_activate(image) - return results - - -class PixelNormLayer(nn.Module): - """Implements pixel-wise feature vector normalization layer.""" - - def __init__(self, epsilon=1e-8): - super().__init__() - self.eps = epsilon - - def forward(self, x): - norm = torch.sqrt(torch.mean(x ** 2, dim=1, keepdim=True) + self.eps) - return x / norm - - -class InstanceNormLayer(nn.Module): - """Implements instance normalization layer.""" - - def __init__(self, epsilon=1e-8): - super().__init__() - self.eps = epsilon - - def forward(self, x): - if x.ndim != 4: - raise ValueError(f'The input tensor should be with shape ' - f'[batch_size, channel, height, width], ' - f'but `{x.shape}` is received!') - x = x - torch.mean(x, dim=[2, 3], keepdim=True) - norm = torch.sqrt( - torch.mean(x ** 2, dim=[2, 3], keepdim=True) + self.eps) - return x / norm - - -class UpsamplingLayer(nn.Module): - """Implements the upsampling layer. - - Basically, this layer can be used to upsample feature maps with nearest - neighbor interpolation. 
- """ - - def __init__(self, scale_factor=2): - super().__init__() - self.scale_factor = scale_factor - - def forward(self, x): - if self.scale_factor <= 1: - return x - return F.interpolate(x, scale_factor=self.scale_factor, mode='nearest') - - -class Blur(torch.autograd.Function): - """Defines blur operation with customized gradient computation.""" - - @staticmethod - def forward(ctx, x, kernel): - ctx.save_for_backward(kernel) - y = F.conv2d(input=x, - weight=kernel, - bias=None, - stride=1, - padding=1, - groups=x.shape[1]) - return y - - @staticmethod - def backward(ctx, dy): - kernel, = ctx.saved_tensors - dx = F.conv2d(input=dy, - weight=kernel.flip((2, 3)), - bias=None, - stride=1, - padding=1, - groups=dy.shape[1]) - return dx, None, None - - -class BlurLayer(nn.Module): - """Implements the blur layer.""" - - def __init__(self, - channels, - kernel=(1, 2, 1), - normalize=True): - super().__init__() - kernel = np.array(kernel, dtype=np.float32).reshape(1, -1) - kernel = kernel.T.dot(kernel) - if normalize: - kernel /= np.sum(kernel) - kernel = kernel[np.newaxis, np.newaxis] - kernel = np.tile(kernel, [channels, 1, 1, 1]) - self.register_buffer('kernel', torch.from_numpy(kernel)) - - def forward(self, x): - return Blur.apply(x, self.kernel) - - -class NoiseApplyingLayer(nn.Module): - """Implements the noise applying layer.""" - - def __init__(self, resolution, channels): - super().__init__() - self.res = resolution - self.register_buffer('noise', torch.randn(1, 1, self.res, self.res)) - self.weight = nn.Parameter(torch.zeros(channels)) - - def forward(self, x, randomize_noise=False): - if x.ndim != 4: - raise ValueError(f'The input tensor should be with shape ' - f'[batch_size, channel, height, width], ' - f'but `{x.shape}` is received!') - if randomize_noise: - noise = torch.randn(x.shape[0], 1, self.res, self.res).to(x) - else: - noise = self.noise - return x + noise * self.weight.view(1, -1, 1, 1) - - -class StyleModLayer(nn.Module): - """Implements the style modulation layer.""" - - def __init__(self, - w_space_dim, - out_channels, - use_wscale=True): - super().__init__() - self.w_space_dim = w_space_dim - self.out_channels = out_channels - - weight_shape = (self.out_channels * 2, self.w_space_dim) - wscale = _STYLEMOD_WSCALE_GAIN / np.sqrt(self.w_space_dim) - if use_wscale: - self.weight = nn.Parameter(torch.randn(*weight_shape)) - self.wscale = wscale - else: - self.weight = nn.Parameter(torch.randn(*weight_shape) * wscale) - self.wscale = 1.0 - - self.bias = nn.Parameter(torch.zeros(self.out_channels * 2)) - - def forward(self, x, w): - if w.ndim != 2 or w.shape[1] != self.w_space_dim: - raise ValueError(f'The input tensor should be with shape ' - f'[batch_size, w_space_dim], where ' - f'`w_space_dim` equals to {self.w_space_dim}!\n' - f'But `{w.shape}` is received!') - style = F.linear(w, weight=self.weight * self.wscale, bias=self.bias) - style_split = style.view(-1, 2, self.out_channels, 1, 1) - x = x * (style_split[:, 0] + 1) + style_split[:, 1] - return x, style - - -class ConvBlock(nn.Module): - """Implements the normal convolutional block. - - Basically, this block executes upsampling layer (if needed), convolutional - layer, blurring layer, noise applying layer, activation layer, instance - normalization layer, and style modulation layer in sequence. 
- """ - - def __init__(self, - in_channels, - out_channels, - resolution, - w_space_dim, - position=None, - kernel_size=3, - stride=1, - padding=1, - add_bias=True, - upsample=False, - fused_scale=False, - use_wscale=True, - wscale_gain=_WSCALE_GAIN, - lr_mul=1.0, - activation_type='lrelu'): - """Initializes with block settings. - - Args: - in_channels: Number of channels of the input tensor. - out_channels: Number of channels of the output tensor. - resolution: Resolution of the output tensor. - w_space_dim: Dimension of W space for style modulation. - position: Position of the layer. `const_init`, `last` would lead to - different behavior. (default: None) - kernel_size: Size of the convolutional kernels. (default: 3) - stride: Stride parameter for convolution operation. (default: 1) - padding: Padding parameter for convolution operation. (default: 1) - add_bias: Whether to add bias onto the convolutional result. - (default: True) - upsample: Whether to upsample the input tensor before convolution. - (default: False) - fused_scale: Whether to fused `upsample` and `conv2d` together, - resulting in `conv2d_transpose`. (default: False) - use_wscale: Whether to use weight scaling. (default: True) - wscale_gain: Gain factor for weight scaling. (default: _WSCALE_GAIN) - lr_mul: Learning multiplier for both weight and bias. (default: 1.0) - activation_type: Type of activation. Support `linear` and `lrelu`. - (default: `lrelu`) - - Raises: - NotImplementedError: If the `activation_type` is not supported. - """ - super().__init__() - - self.position = position - - if add_bias: - self.bias = nn.Parameter(torch.zeros(out_channels)) - self.bscale = lr_mul - else: - self.bias = None - - if activation_type == 'linear': - self.activate = nn.Identity() - elif activation_type == 'lrelu': - self.activate = nn.LeakyReLU(negative_slope=0.2, inplace=True) - else: - raise NotImplementedError(f'Not implemented activation function: ' - f'`{activation_type}`!') - - if self.position != 'last': - self.apply_noise = NoiseApplyingLayer(resolution, out_channels) - self.normalize = InstanceNormLayer() - self.style = StyleModLayer(w_space_dim, out_channels, use_wscale) - - if self.position == 'const_init': - self.const = nn.Parameter( - torch.ones(1, in_channels, resolution, resolution)) - return - - self.blur = BlurLayer(out_channels) if upsample else nn.Identity() - - if upsample and not fused_scale: - self.upsample = UpsamplingLayer() - else: - self.upsample = nn.Identity() - - if upsample and fused_scale: - self.use_conv2d_transpose = True - self.stride = 2 - self.padding = 1 - else: - self.use_conv2d_transpose = False - self.stride = stride - self.padding = padding - - weight_shape = (out_channels, in_channels, kernel_size, kernel_size) - fan_in = kernel_size * kernel_size * in_channels - wscale = wscale_gain / np.sqrt(fan_in) - if use_wscale: - self.weight = nn.Parameter(torch.randn(*weight_shape) / lr_mul) - self.wscale = wscale * lr_mul - else: - self.weight = nn.Parameter( - torch.randn(*weight_shape) * wscale / lr_mul) - self.wscale = lr_mul - - def forward(self, x, w, randomize_noise=False): - if self.position != 'const_init': - x = self.upsample(x) - weight = self.weight * self.wscale - if self.use_conv2d_transpose: - weight = F.pad(weight, (1, 1, 1, 1, 0, 0, 0, 0), 'constant', 0) - weight = (weight[:, :, 1:, 1:] + weight[:, :, :-1, 1:] + - weight[:, :, 1:, :-1] + weight[:, :, :-1, :-1]) - weight = weight.permute(1, 0, 2, 3) - x = F.conv_transpose2d(x, - weight=weight, - bias=None, - stride=self.stride, - 
padding=self.padding) - else: - x = F.conv2d(x, - weight=weight, - bias=None, - stride=self.stride, - padding=self.padding) - x = self.blur(x) - else: - x = self.const.repeat(w.shape[0], 1, 1, 1) - - bias = self.bias * self.bscale if self.bias is not None else None - - if self.position == 'last': - if bias is not None: - x = x + bias.view(1, -1, 1, 1) - return x - - x = self.apply_noise(x, randomize_noise) - if bias is not None: - x = x + bias.view(1, -1, 1, 1) - x = self.activate(x) - x = self.normalize(x) - x, style = self.style(x, w) - return x, style - - -class DenseBlock(nn.Module): - """Implements the dense block. - - Basically, this block executes fully-connected layer and activation layer. - """ - - def __init__(self, - in_channels, - out_channels, - add_bias=True, - use_wscale=True, - wscale_gain=_WSCALE_GAIN, - lr_mul=1.0, - activation_type='lrelu'): - """Initializes with block settings. - - Args: - in_channels: Number of channels of the input tensor. - out_channels: Number of channels of the output tensor. - add_bias: Whether to add bias onto the fully-connected result. - (default: True) - use_wscale: Whether to use weight scaling. (default: True) - wscale_gain: Gain factor for weight scaling. (default: _WSCALE_GAIN) - lr_mul: Learning multiplier for both weight and bias. (default: 1.0) - activation_type: Type of activation. Support `linear` and `lrelu`. - (default: `lrelu`) - - Raises: - NotImplementedError: If the `activation_type` is not supported. - """ - super().__init__() - weight_shape = (out_channels, in_channels) - wscale = wscale_gain / np.sqrt(in_channels) - if use_wscale: - self.weight = nn.Parameter(torch.randn(*weight_shape) / lr_mul) - self.wscale = wscale * lr_mul - else: - self.weight = nn.Parameter( - torch.randn(*weight_shape) * wscale / lr_mul) - self.wscale = lr_mul - - if add_bias: - self.bias = nn.Parameter(torch.zeros(out_channels)) - self.bscale = lr_mul - else: - self.bias = None - - if activation_type == 'linear': - self.activate = nn.Identity() - elif activation_type == 'lrelu': - self.activate = nn.LeakyReLU(negative_slope=0.2, inplace=True) - else: - raise NotImplementedError(f'Not implemented activation function: ' - f'`{activation_type}`!') - - def forward(self, x): - if x.ndim != 2: - x = x.view(x.shape[0], -1) - bias = self.bias * self.bscale if self.bias is not None else None - x = F.linear(x, weight=self.weight * self.wscale, bias=bias) - x = self.activate(x) - return x diff --git a/spaces/huggingface/library-metrics/app.py b/spaces/huggingface/library-metrics/app.py deleted file mode 100644 index e6ce1449ca22aaab5915ef3d658f8179a043c58e..0000000000000000000000000000000000000000 --- a/spaces/huggingface/library-metrics/app.py +++ /dev/null @@ -1,34 +0,0 @@ -import gradio as gr -import pypistats -from datetime import date -from dateutil.relativedelta import relativedelta -import pandas as pd - -pd.options.plotting.backend = "plotly" - -def get_plot(lib, time): - data = pypistats.overall(lib, total=True, format="pandas") - data = data.groupby("category").get_group("with_mirrors").sort_values("date") - start_date = date.today() - relativedelta(months=int(time.split(" ")[0])) - data = data[(data['date'] > str(start_date))] - chart = data.plot(x="date", y="downloads") - return chart - -with gr.Blocks() as demo: - - gr.Markdown( - """ - ## Pypi Download Stats 📈 - See live download stats for all of Hugging Face's open-source libraries 🤗 - """) - with gr.Row(): - lib = gr.Dropdown(["transformers", "datasets", "huggingface-hub", "gradio", "accelerate", 
"optimum", "evaluate", "diffusers", "timm"], label="Library") - time = gr.Dropdown(["3 months", "6 months", "9 months", "12 months"], label="Downloads over the last...") - - plt = gr.Plot() - - lib.change(get_plot, [lib, time], plt) - time.change(get_plot, [lib, time], plt) - demo.load(get_plot, [lib, time], plt) - -demo.launch() \ No newline at end of file diff --git a/spaces/huggingface/transformers-chat/ingest.sh b/spaces/huggingface/transformers-chat/ingest.sh deleted file mode 100644 index aa5c68d9610a867433071f7cde8a49b09ae032b3..0000000000000000000000000000000000000000 --- a/spaces/huggingface/transformers-chat/ingest.sh +++ /dev/null @@ -1,6 +0,0 @@ -# Bash script to ingest data -# This involves scraping the data from the web and then cleaning up and putting in Weaviate. -!set -eu -wget -r -A.html https://langchain.readthedocs.io/en/latest/ -python3 ingest.py -python3 ingest_examples.py diff --git a/spaces/hush1/White-box-Cartoonization/README.md b/spaces/hush1/White-box-Cartoonization/README.md deleted file mode 100644 index 9860239cf42c94e385faaaa75a85311e010d64f7..0000000000000000000000000000000000000000 --- a/spaces/hush1/White-box-Cartoonization/README.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -python_version: 3.7 -title: White Box Cartoonization -emoji: 📚 -colorFrom: purple -colorTo: green -sdk: gradio -sdk_version: 2.9.4 -app_file: app.py -pinned: false -license: apache-2.0 -duplicated_from: hylee/White-box-Cartoonization ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/hyxue/HiFiFace-inference-demo/arcface_torch/configs/wf12m_mbf.py b/spaces/hyxue/HiFiFace-inference-demo/arcface_torch/configs/wf12m_mbf.py deleted file mode 100644 index d1cb93b2f168e3a64e65d1f8d6cf058e41676c6a..0000000000000000000000000000000000000000 --- a/spaces/hyxue/HiFiFace-inference-demo/arcface_torch/configs/wf12m_mbf.py +++ /dev/null @@ -1,28 +0,0 @@ -from easydict import EasyDict as edict - -# make training faster -# our RAM is 256G -# mount -t tmpfs -o size=140G tmpfs /train_tmp - -config = edict() -config.margin_list = (1.0, 0.0, 0.4) -config.network = "mbf" -config.resume = False -config.output = None -config.embedding_size = 512 -config.sample_rate = 1.0 -config.interclass_filtering_threshold = 0 -config.fp16 = True -config.weight_decay = 1e-4 -config.batch_size = 128 -config.optimizer = "sgd" -config.lr = 0.1 -config.verbose = 2000 -config.dali = False - -config.rec = "/train_tmp/WebFace12M" -config.num_classes = 617970 -config.num_image = 12720066 -config.num_epoch = 20 -config.warmup_epoch = 0 -config.val_targets = [] diff --git a/spaces/hyxue/HiFiFace-inference-demo/entry/train.py b/spaces/hyxue/HiFiFace-inference-demo/entry/train.py deleted file mode 100644 index 79b9e6f6e2ef5915938ffa83ed60d8444dba9dfa..0000000000000000000000000000000000000000 --- a/spaces/hyxue/HiFiFace-inference-demo/entry/train.py +++ /dev/null @@ -1,96 +0,0 @@ -import os -import sys - -import torch -from loguru import logger - -from configs.train_config import TrainConfig -from data.dataset import TrainDatasetDataLoader -from models.model import HifiFace -from utils.visualizer import Visualizer - -use_ddp = TrainConfig().use_ddp -if use_ddp: - - import torch.distributed as dist - - def setup(): - # os.environ["MASTER_ADDR"] = "localhost" - # os.environ["MASTER_PORT"] = "12345" - dist.init_process_group("nccl") # , rank=rank, world_size=world_size) - return dist.get_rank() - - def cleanup(): - dist.destroy_process_group() - - -def train(): - rank = 0 - if 
use_ddp: - rank = setup() - device = torch.device(f"cuda:{rank}") - logger.info(f"use device {device}") - - opt = TrainConfig() - dataloader = TrainDatasetDataLoader() - dataset_length = len(dataloader) - logger.info(f"Dataset length: {dataset_length}") - - model = HifiFace( - opt.identity_extractor_config, is_training=True, device=device, load_checkpoint=opt.load_checkpoint - ) - model.train() - - logger.info("model initialized") - visualizer = None - ckpt = False - if not opt.use_ddp or rank == 0: - visualizer = Visualizer(opt) - ckpt = True - - total_iter = 0 - epoch = 0 - while True: - if opt.use_ddp: - dataloader.train_sampler.set_epoch(epoch) - for data in dataloader: - source_image = data["source_image"].to(device) - target_image = data["target_image"].to(device) - targe_mask = data["target_mask"].to(device) - same = data["same"].to(device) - loss_dict, visual_dict = model.optimize(source_image, target_image, targe_mask, same) - - total_iter += 1 - - if total_iter % opt.visualize_interval == 0 and visualizer is not None: - visualizer.display_current_results(total_iter, visual_dict) - - if total_iter % opt.plot_interval == 0 and visualizer is not None: - visualizer.plot_current_losses(total_iter, loss_dict) - logger.info(f"Iter: {total_iter}") - for k, v in loss_dict.items(): - logger.info(f" {k}: {v}") - logger.info("=" * 20) - - if total_iter % opt.checkpoint_interval == 0 and ckpt: - logger.info(f"Saving model at iter {total_iter}") - model.save(opt.checkpoint_dir, total_iter) - - if total_iter > opt.max_iters: - logger.info(f"Maximum iterations exceeded. Stopping training.") - if ckpt: - model.save(opt.checkpoint_dir, total_iter) - if use_ddp: - cleanup() - sys.exit(0) - epoch += 1 - - -if __name__ == "__main__": - if use_ddp: - # CUDA_VISIBLE_DEVICES=2,3 torchrun --nnodes=1 --nproc_per_node=2 --rdzv_id=100 --rdzv_backend=c10d --rdzv_endpoint=127.0.0.1:29400 -m entry.train - os.environ["OMP_NUM_THREADS"] = "1" - n_gpus = torch.cuda.device_count() - train() - else: - train() diff --git a/spaces/hzwluoye/gpt4/client/css/buttons.css b/spaces/hzwluoye/gpt4/client/css/buttons.css deleted file mode 100644 index e13f52d9a0414daaa80518bd205913a645a29563..0000000000000000000000000000000000000000 --- a/spaces/hzwluoye/gpt4/client/css/buttons.css +++ /dev/null @@ -1,4 +0,0 @@ -.buttons { - display: flex; - justify-content: left; -} diff --git a/spaces/ibm-nasa-geospatial/Prithvi-100M-Burn-scars-demo/app.py b/spaces/ibm-nasa-geospatial/Prithvi-100M-Burn-scars-demo/app.py deleted file mode 100644 index b3d750a612aa11f75806b0f2bf40fa3da76b4cbf..0000000000000000000000000000000000000000 --- a/spaces/ibm-nasa-geospatial/Prithvi-100M-Burn-scars-demo/app.py +++ /dev/null @@ -1,218 +0,0 @@ -######### pull files -import os -from huggingface_hub import hf_hub_download -config_path=hf_hub_download(repo_id="ibm-nasa-geospatial/Prithvi-100M-burn-scar", filename="burn_scars_Prithvi_100M.py", token=os.environ.get("token")) -ckpt=hf_hub_download(repo_id="ibm-nasa-geospatial/Prithvi-100M-burn-scar", filename='burn_scars_Prithvi_100M.pth', token=os.environ.get("token")) -########## - - -import argparse -from mmcv import Config - -from mmseg.models import build_segmentor - -from mmseg.datasets.pipelines import Compose, LoadImageFromFile - -import rasterio -import torch - -from mmseg.apis import init_segmentor - -from mmcv.parallel import collate, scatter - -import numpy as np -import glob -import os - -import time - -import numpy as np -import gradio as gr -from functools import partial - -import pdb - -import 
matplotlib.pyplot as plt - - -def open_tiff(fname): - - with rasterio.open(fname, "r") as src: - - data = src.read() - - return data - -def write_tiff(img_wrt, filename, metadata): - - """ - It writes a raster image to file. - - :param img_wrt: numpy array containing the data (can be 2D for single band or 3D for multiple bands) - :param filename: file path to the output file - :param metadata: metadata to use to write the raster to disk - :return: - """ - - with rasterio.open(filename, "w", **metadata) as dest: - - if len(img_wrt.shape) == 2: - - img_wrt = img_wrt[None] - - for i in range(img_wrt.shape[0]): - dest.write(img_wrt[i, :, :], i + 1) - - return filename - - -def get_meta(fname): - - with rasterio.open(fname, "r") as src: - - meta = src.meta - - return meta - -def preprocess_example(example_list): - - example_list = [os.path.join(os.path.abspath(''), x) for x in example_list] - - return example_list - - -def inference_segmentor(model, imgs, custom_test_pipeline=None): - """Inference image(s) with the segmentor. - - Args: - model (nn.Module): The loaded segmentor. - imgs (str/ndarray or list[str/ndarray]): Either image files or loaded - images. - - Returns: - (list[Tensor]): The segmentation result. - """ - cfg = model.cfg - device = next(model.parameters()).device # model device - # build the data pipeline - test_pipeline = [LoadImageFromFile()] + cfg.data.test.pipeline[1:] if custom_test_pipeline == None else custom_test_pipeline - test_pipeline = Compose(test_pipeline) - # prepare data - data = [] - imgs = imgs if isinstance(imgs, list) else [imgs] - for img in imgs: - img_data = {'img_info': {'filename': img}} - img_data = test_pipeline(img_data) - data.append(img_data) - # print(data.shape) - - data = collate(data, samples_per_gpu=len(imgs)) - if next(model.parameters()).is_cuda: - # data = collate(data, samples_per_gpu=len(imgs)) - # scatter to specified GPU - data = scatter(data, [device])[0] - else: - # img_metas = scatter(data['img_metas'],'cpu') - # data['img_metas'] = [i.data[0] for i in data['img_metas']] - - img_metas = data['img_metas'].data[0] - img = data['img'] - data = {'img': img, 'img_metas':img_metas} - - with torch.no_grad(): - result = model(return_loss=False, rescale=True, **data) - return result - - -def inference_on_file(target_image, model, custom_test_pipeline): - - target_image = target_image.name - # print(type(target_image)) - - # output_image = target_image.replace('.tif', '_pred.tif') - time_taken=-1 - - st = time.time() - print('Running inference...') - result = inference_segmentor(model, target_image, custom_test_pipeline) - print("Output has shape: " + str(result[0].shape)) - - # prep outputs - mask = open_tiff(target_image) - rgb = mask[[5, 3, 2], :, :].transpose((1,2,0)) - meta = get_meta(target_image) - mask = np.where(mask == meta['nodata'], 1, 0) - mask = np.max(mask, axis=0)[None] - rgb = np.where(mask.transpose((1,2,0)) == 1, 0, rgb) - rgb = np.where(rgb < 0, 0, rgb) - rgb = np.where(rgb > 1, 1, rgb) - - prediction = np.where(mask == 1, 0, result[0]*255) - et = time.time() - time_taken = np.round(et - st, 1) - print(f'Inference completed in {str(time_taken)} seconds') - - return rgb, prediction[0] - - -def process_test_pipeline(custom_test_pipeline, bands=None): - - # change extracted bands if necessary - if bands is not None: - - extract_index = [i for i, x in enumerate(custom_test_pipeline) if x['type'] == 'BandsExtract' ] - - if len(extract_index) > 0: - - custom_test_pipeline[extract_index[0]]['bands'] = eval(bands) - - collect_index 
= [i for i, x in enumerate(custom_test_pipeline) if x['type'].find('Collect') > -1] - - # adapt collected keys if necessary - if len(collect_index) > 0: - - keys = ['img_info', 'filename', 'ori_filename', 'img', 'img_shape', 'ori_shape', 'pad_shape', 'scale_factor', 'img_norm_cfg'] - custom_test_pipeline[collect_index[0]]['meta_keys'] = keys - - return custom_test_pipeline - -config = Config.fromfile(config_path) -config.model.backbone.pretrained=None -model = init_segmentor(config, ckpt, device='cpu') -custom_test_pipeline=process_test_pipeline(model.cfg.data.test.pipeline, None) - -func = partial(inference_on_file, model=model, custom_test_pipeline=custom_test_pipeline) - -with gr.Blocks() as demo: - - gr.Markdown(value='# Prithvi burn scars detection') - gr.Markdown(value='''Prithvi is a first-of-its-kind temporal Vision transformer pretrained by the IBM and NASA team on continental US Harmonised Landsat Sentinel 2 (HLS) data. This demo showcases how the model was finetuned to detect burn scars. More detailes can be found [here](https://huggingface.co/ibm-nasa-geospatial/Prithvi-100M-burn-scar).\n - The user needs to provide an HLS geotiff image, including the following channels in reflectance units (e.g. 0-1): Blue, Green, Red, Narrow NIR, SWIR, SWIR 2. - ''') - with gr.Row(): - with gr.Column(): - inp = gr.File() - btn = gr.Button("Submit") - - with gr.Row(): - gr.Markdown(value='### Input color composite (SWIR, Narrow NIR, Red)') - gr.Markdown(value='### Model prediction (Black: No burn scar; White: Burn scar)') - - with gr.Row(): - out1=gr.Image(image_mode='RGB') - out2 = gr.Image(image_mode='L') - - btn.click(fn=func, inputs=inp, outputs=[out1, out2]) - - with gr.Row(): - gr.Examples(examples=["subsetted_512x512_HLS.S30.T10TGS.2020245.v1.4_merged.tif", - "subsetted_512x512_HLS.S30.T10TGS.2018285.v1.4_merged.tif", - "subsetted_512x512_HLS.S30.T10UGV.2020218.v1.4_merged.tif"], - inputs=inp, - outputs=[out1, out2], - preprocess=preprocess_example, - fn=func, - cache_examples=True, - ) - -demo.launch() \ No newline at end of file diff --git a/spaces/iccv23-diffusers-demo/Shap-E/app.py b/spaces/iccv23-diffusers-demo/Shap-E/app.py deleted file mode 100644 index f9ef78fe4bd364fc1fefe89214e615005ed19905..0000000000000000000000000000000000000000 --- a/spaces/iccv23-diffusers-demo/Shap-E/app.py +++ /dev/null @@ -1,33 +0,0 @@ -#!/usr/bin/env python - -import os - -import gradio as gr -import torch - -from app_image_to_3d import create_demo as create_demo_image_to_3d -from app_text_to_3d import create_demo as create_demo_text_to_3d -from model import Model - -DESCRIPTION = "# [Shap-E](https://github.com/openai/shap-e)" - -if not torch.cuda.is_available(): - DESCRIPTION += "\n
<br>
Running on CPU 🥶 This demo does not work on CPU.
<br>
        " - -model = Model() - -with gr.Blocks(css="style.css") as demo: - gr.Markdown(DESCRIPTION) - gr.DuplicateButton( - value="Duplicate Space for private use", - elem_id="duplicate-button", - visible=os.getenv("SHOW_DUPLICATE_BUTTON") == "1", - ) - with gr.Tabs(): - with gr.Tab(label="Text to 3D"): - create_demo_text_to_3d(model) - with gr.Tab(label="Image to 3D"): - create_demo_image_to_3d(model) - -if __name__ == "__main__": - demo.queue(max_size=10).launch() diff --git a/spaces/icehelmetminer/runwayml-stable-diffusion-v1-5/README.md b/spaces/icehelmetminer/runwayml-stable-diffusion-v1-5/README.md deleted file mode 100644 index 2361258e6be024d301a0967538e613c26a866434..0000000000000000000000000000000000000000 --- a/spaces/icehelmetminer/runwayml-stable-diffusion-v1-5/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Runwayml Stable Diffusion V1 5 -emoji: ⚡ -colorFrom: gray -colorTo: yellow -sdk: streamlit -sdk_version: 1.21.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ieftimov/confusingflags/app.py b/spaces/ieftimov/confusingflags/app.py deleted file mode 100644 index 9241d61045934b1bc5a126a9742a66ba4c880746..0000000000000000000000000000000000000000 --- a/spaces/ieftimov/confusingflags/app.py +++ /dev/null @@ -1,37 +0,0 @@ -from fastai.vision.all import * -import gradio as gr - -learn = load_learner("model.pkl") - -labels = learn.dls.vocab -def classify_image(img): - pred, idx, probs = learn.predict(img) - return {labels[i]: float(probs[i]) for i in range(len(labels))} - -image = gr.inputs.Image(shape=(192,192)) -label = gr.outputs.Label() -examples = ['flag_australia.jpg', 'flag_chad.jpg', 'flag_ecuador.jpg', 'flag_monaco.jpg'] - -title = "Confusing flags" -description = "A pet breed classifier trained on the Oxford Pets dataset with fastai. Created as a demo for Gradio and HuggingFace Spaces." - -description = """ -There are too many countries in the world, and even though it'd be interesting to cover all of them, there are a few sets of flags \[0] that look _very_ similar. Namely: - -* Chad and Romania -* Senegal and Mali -* Indoneasia and Monaco -* New Zealand and Australia -* Ireland and Côte d’Ivoire -* Norway and Iceland -* Venezuela, Ecuador, and Colombia -* Luxembourg and the Netherlands -* Slovenia, Russia, and Slovakia - -This is where this space helps. - -\[0]: https://www.britannica.com/list/flags-that-look-alike -""" - -iface = gr.Interface(fn=classify_image, inputs=image, outputs=gr.outputs.Label(num_top_classes=3), examples=examples, title=title, description=description) -iface.launch(inline=False) diff --git a/spaces/inreVtussa/clothingai/Examples/Cypheros Ts Doctor Crack Download Free 9 Fixed.md b/spaces/inreVtussa/clothingai/Examples/Cypheros Ts Doctor Crack Download Free 9 Fixed.md deleted file mode 100644 index a078f7dac5e93e8d838ec889b9fc6bd1696780ff..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Cypheros Ts Doctor Crack Download Free 9 Fixed.md +++ /dev/null @@ -1,32 +0,0 @@ -
cypheros ts doctor crack download free 9

DOWNLOAD: https://tiurll.com/2uCli6

        - -But if this opinion fails to satisfy me, then what we see 10 - - But if there is no cause that can be shown, then again 11 - - But though I should hesitate to place a man in suspense, 12 - - But which of the two is least likely to be true? 13 - - But Zeus has no sting 14 - - But while there is no chance that all will be well, yet I 15 - - But where does this come from? 16 - - But were I called upon to judge, 17 - - But you see I am not expected to give any opinion 18 - - But you will have one, will you not? 19 - - But who is most likely to be right? 20 - - But who is sure to be right? 21 - - But when you were taking the sun-dials 22 - - But you must not ask me to 4fefd39f24
        -
        -
        -

        diff --git a/spaces/ismot/1702t1/dataset/communal/base_dataset.py b/spaces/ismot/1702t1/dataset/communal/base_dataset.py deleted file mode 100644 index a4256581b25518957066f3b4e3c343bbcdc6f9a1..0000000000000000000000000000000000000000 --- a/spaces/ismot/1702t1/dataset/communal/base_dataset.py +++ /dev/null @@ -1,127 +0,0 @@ -""" -@Date: 2021/07/26 -@description: -""" -import numpy as np -import torch - -from utils.boundary import corners2boundary, visibility_corners, get_heat_map -from utils.conversion import xyz2depth, uv2xyz, uv2pixel -from dataset.communal.data_augmentation import PanoDataAugmentation - - -class BaseDataset(torch.utils.data.Dataset): - def __init__(self, mode, shape=None, max_wall_num=999, aug=None, camera_height=1.6, patch_num=256, keys=None): - if keys is None or len(keys) == 0: - keys = ['image', 'depth', 'ratio', 'id', 'corners'] - if shape is None: - shape = [512, 1024] - - assert mode == 'train' or mode == 'val' or mode == 'test' or mode is None, 'unknown mode!' - self.mode = mode - self.keys = keys - self.shape = shape - self.pano_aug = None if aug is None or mode == 'val' else PanoDataAugmentation(aug) - self.camera_height = camera_height - self.max_wall_num = max_wall_num - self.patch_num = patch_num - self.data = None - - def __len__(self): - return len(self.data) - - @staticmethod - def get_depth(corners, plan_y=1, length=256, visible=True): - visible_floor_boundary = corners2boundary(corners, length=length, visible=visible) - # The horizon-depth relative to plan_y - visible_depth = xyz2depth(uv2xyz(visible_floor_boundary, plan_y), plan_y) - return visible_depth - - def process_data(self, label, image, patch_num): - """ - :param label: - :param image: - :param patch_num: - :return: - """ - corners = label['corners'] - if self.pano_aug is not None: - corners, image = self.pano_aug.execute_aug(corners, image if 'image' in self.keys else None) - eps = 1e-3 - corners[:, 1] = np.clip(corners[:, 1], 0.5+eps, 1-eps) - - output = {} - if 'image' in self.keys: - image = image.transpose(2, 0, 1) - output['image'] = image - - visible_corners = None - if 'corner_class' in self.keys or 'depth' in self.keys: - visible_corners = visibility_corners(corners) - - if 'depth' in self.keys: - depth = self.get_depth(visible_corners, length=patch_num, visible=False) - assert len(depth) == patch_num, f"{label['id']}, {len(depth)}, {self.pano_aug.parameters}, {corners}" - output['depth'] = depth - - if 'ratio' in self.keys: - # Why use ratio? Because when floor_height =y_plan=1, we only need to predict ceil_height(ratio). 
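# Added clarifying note (an interpretation, not part of the original code): with the floor
# plane fixed at y_plan = 1, the floor boundary is fully described by its horizon-depth, so
# the ceiling only needs one extra scalar per panorama, the ratio of the ceiling height to
# the (unit) camera-to-floor height, rather than a second full boundary.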
- output['ratio'] = label['ratio'] - - if 'id' in self.keys: - output['id'] = label['id'] - - if 'corners' in self.keys: - # all corners for evaluating Full_IoU - assert len(label['corners']) <= 32, "len(label['corners']):"+len(label['corners']) - output['corners'] = np.zeros((32, 2), dtype=np.float32) - output['corners'][:len(label['corners'])] = label['corners'] - - if 'corner_heat_map' in self.keys: - output['corner_heat_map'] = get_heat_map(visible_corners[..., 0]) - - if 'object' in self.keys and 'objects' in label: - output[f'object_heat_map'] = np.zeros((3, patch_num), dtype=np.float32) - output['object_size'] = np.zeros((3, patch_num), dtype=np.float32) # width, height, bottom_height - for i, type in enumerate(label['objects']): - if len(label['objects'][type]) == 0: - continue - - u_s = [] - for obj in label['objects'][type]: - center_u = obj['center_u'] - u_s.append(center_u) - center_pixel_u = uv2pixel(np.array([center_u]), w=patch_num, axis=0)[0] - output['object_size'][0, center_pixel_u] = obj['width_u'] - output['object_size'][1, center_pixel_u] = obj['height_v'] - output['object_size'][2, center_pixel_u] = obj['boundary_v'] - output[f'object_heat_map'][i] = get_heat_map(np.array(u_s)) - - return output - - -if __name__ == '__main__': - from dataset.communal.read import read_image, read_label - from visualization.boundary import draw_boundaries - from utils.boundary import depth2boundaries - from tqdm import trange - - # np.random.seed(0) - dataset = BaseDataset() - dataset.pano_aug = PanoDataAugmentation(aug={ - 'STRETCH': True, - 'ROTATE': True, - 'FLIP': True, - }) - # pano_img = read_image("../src/demo.png") - # label = read_label("../src/demo.json") - pano_img_path = "../../src/dataset/mp3d/image/yqstnuAEVhm_6589ad7a5a0444b59adbf501c0f0fe53.png" - label_path = "../../src/dataset/mp3d/label/yqstnuAEVhm_6589ad7a5a0444b59adbf501c0f0fe53.json" - pano_img = read_image(pano_img_path) - label = read_label(label_path) - - # batch test - for i in trange(1): - output = dataset.process_data(label, pano_img, 256) - boundary_list = depth2boundaries(output['ratio'], output['depth'], step=None) - draw_boundaries(output['image'].transpose(1, 2, 0), boundary_list=boundary_list, show=True) diff --git a/spaces/jackli888/stable-diffusion-webui/modules/hypernetworks/ui.py b/spaces/jackli888/stable-diffusion-webui/modules/hypernetworks/ui.py deleted file mode 100644 index be2fd77cc76a24d0e7932c6b1fb26efcb18edcc5..0000000000000000000000000000000000000000 --- a/spaces/jackli888/stable-diffusion-webui/modules/hypernetworks/ui.py +++ /dev/null @@ -1,40 +0,0 @@ -import html -import os -import re - -import gradio as gr -import modules.hypernetworks.hypernetwork -from modules import devices, sd_hijack, shared - -not_available = ["hardswish", "multiheadattention"] -keys = list(x for x in modules.hypernetworks.hypernetwork.HypernetworkModule.activation_dict.keys() if x not in not_available) - - -def create_hypernetwork(name, enable_sizes, overwrite_old, layer_structure=None, activation_func=None, weight_init=None, add_layer_norm=False, use_dropout=False, dropout_structure=None): - filename = modules.hypernetworks.hypernetwork.create_hypernetwork(name, enable_sizes, overwrite_old, layer_structure, activation_func, weight_init, add_layer_norm, use_dropout, dropout_structure) - - return gr.Dropdown.update(choices=sorted([x for x in shared.hypernetworks.keys()])), f"Created: {filename}", "" - - -def train_hypernetwork(*args): - shared.loaded_hypernetworks = [] - - assert not shared.cmd_opts.lowvram, 
'Training models with lowvram is not possible' - - try: - sd_hijack.undo_optimizations() - - hypernetwork, filename = modules.hypernetworks.hypernetwork.train_hypernetwork(*args) - - res = f""" -Training {'interrupted' if shared.state.interrupted else 'finished'} at {hypernetwork.step} steps. -Hypernetwork saved to {html.escape(filename)} -""" - return res, "" - except Exception: - raise - finally: - shared.sd_model.cond_stage_model.to(devices.device) - shared.sd_model.first_stage_model.to(devices.device) - sd_hijack.apply_optimizations() - diff --git a/spaces/jackyccl/segment-anything/groundingdino/util/visualizer.py b/spaces/jackyccl/segment-anything/groundingdino/util/visualizer.py deleted file mode 100644 index 7a1b7b101e9b73f75f9136bc67f2063c7c1cf1c1..0000000000000000000000000000000000000000 --- a/spaces/jackyccl/segment-anything/groundingdino/util/visualizer.py +++ /dev/null @@ -1,318 +0,0 @@ -# -*- coding: utf-8 -*- -""" -@File : visualizer.py -@Time : 2022/04/05 11:39:33 -@Author : Shilong Liu -@Contact : slongliu86@gmail.com -""" - -import datetime -import os - -import cv2 -import matplotlib.pyplot as plt -import numpy as np -import torch -from matplotlib import transforms -from matplotlib.collections import PatchCollection -from matplotlib.patches import Polygon -from pycocotools import mask as maskUtils - - -def renorm( - img: torch.FloatTensor, mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225] -) -> torch.FloatTensor: - # img: tensor(3,H,W) or tensor(B,3,H,W) - # return: same as img - assert img.dim() == 3 or img.dim() == 4, "img.dim() should be 3 or 4 but %d" % img.dim() - if img.dim() == 3: - assert img.size(0) == 3, 'img.size(0) shoule be 3 but "%d". (%s)' % ( - img.size(0), - str(img.size()), - ) - img_perm = img.permute(1, 2, 0) - mean = torch.Tensor(mean) - std = torch.Tensor(std) - img_res = img_perm * std + mean - return img_res.permute(2, 0, 1) - else: # img.dim() == 4 - assert img.size(1) == 3, 'img.size(1) shoule be 3 but "%d". (%s)' % ( - img.size(1), - str(img.size()), - ) - img_perm = img.permute(0, 2, 3, 1) - mean = torch.Tensor(mean) - std = torch.Tensor(std) - img_res = img_perm * std + mean - return img_res.permute(0, 3, 1, 2) - - -class ColorMap: - def __init__(self, basergb=[255, 255, 0]): - self.basergb = np.array(basergb) - - def __call__(self, attnmap): - # attnmap: h, w. np.uint8. - # return: h, w, 4. np.uint8. - assert attnmap.dtype == np.uint8 - h, w = attnmap.shape - res = self.basergb.copy() - res = res[None][None].repeat(h, 0).repeat(w, 1) # h, w, 3 - attn1 = attnmap.copy()[..., None] # h, w, 1 - res = np.concatenate((res, attn1), axis=-1).astype(np.uint8) - return res - - -def rainbow_text(x, y, ls, lc, **kw): - """ - Take a list of strings ``ls`` and colors ``lc`` and place them next to each - other, with text ls[i] being shown in color lc[i]. - - This example shows how to do both vertical and horizontal text, and will - pass all keyword arguments to plt.text, so you can set the font size, - family, etc. 
- """ - t = plt.gca().transData - fig = plt.gcf() - plt.show() - - # horizontal version - for s, c in zip(ls, lc): - text = plt.text(x, y, " " + s + " ", color=c, transform=t, **kw) - text.draw(fig.canvas.get_renderer()) - ex = text.get_window_extent() - t = transforms.offset_copy(text._transform, x=ex.width, units="dots") - - # #vertical version - # for s,c in zip(ls,lc): - # text = plt.text(x,y," "+s+" ",color=c, transform=t, - # rotation=90,va='bottom',ha='center',**kw) - # text.draw(fig.canvas.get_renderer()) - # ex = text.get_window_extent() - # t = transforms.offset_copy(text._transform, y=ex.height, units='dots') - - -class COCOVisualizer: - def __init__(self, coco=None, tokenlizer=None) -> None: - self.coco = coco - - def visualize(self, img, tgt, caption=None, dpi=180, savedir="vis"): - """ - img: tensor(3, H, W) - tgt: make sure they are all on cpu. - must have items: 'image_id', 'boxes', 'size' - """ - plt.figure(dpi=dpi) - plt.rcParams["font.size"] = "5" - ax = plt.gca() - img = renorm(img).permute(1, 2, 0) - # if os.environ.get('IPDB_SHILONG_DEBUG', None) == 'INFO': - # import ipdb; ipdb.set_trace() - ax.imshow(img) - - self.addtgt(tgt) - - if tgt is None: - image_id = 0 - elif "image_id" not in tgt: - image_id = 0 - else: - image_id = tgt["image_id"] - - if caption is None: - savename = "{}/{}-{}.png".format( - savedir, int(image_id), str(datetime.datetime.now()).replace(" ", "-") - ) - else: - savename = "{}/{}-{}-{}.png".format( - savedir, caption, int(image_id), str(datetime.datetime.now()).replace(" ", "-") - ) - print("savename: {}".format(savename)) - os.makedirs(os.path.dirname(savename), exist_ok=True) - plt.savefig(savename) - plt.close() - - def addtgt(self, tgt): - """ """ - if tgt is None or not "boxes" in tgt: - ax = plt.gca() - - if "caption" in tgt: - ax.set_title(tgt["caption"], wrap=True) - - ax.set_axis_off() - return - - ax = plt.gca() - H, W = tgt["size"] - numbox = tgt["boxes"].shape[0] - - color = [] - polygons = [] - boxes = [] - for box in tgt["boxes"].cpu(): - unnormbbox = box * torch.Tensor([W, H, W, H]) - unnormbbox[:2] -= unnormbbox[2:] / 2 - [bbox_x, bbox_y, bbox_w, bbox_h] = unnormbbox.tolist() - boxes.append([bbox_x, bbox_y, bbox_w, bbox_h]) - poly = [ - [bbox_x, bbox_y], - [bbox_x, bbox_y + bbox_h], - [bbox_x + bbox_w, bbox_y + bbox_h], - [bbox_x + bbox_w, bbox_y], - ] - np_poly = np.array(poly).reshape((4, 2)) - polygons.append(Polygon(np_poly)) - c = (np.random.random((1, 3)) * 0.6 + 0.4).tolist()[0] - color.append(c) - - p = PatchCollection(polygons, facecolor=color, linewidths=0, alpha=0.1) - ax.add_collection(p) - p = PatchCollection(polygons, facecolor="none", edgecolors=color, linewidths=2) - ax.add_collection(p) - - if "strings_positive" in tgt and len(tgt["strings_positive"]) > 0: - assert ( - len(tgt["strings_positive"]) == numbox - ), f"{len(tgt['strings_positive'])} = {numbox}, " - for idx, strlist in enumerate(tgt["strings_positive"]): - cate_id = int(tgt["labels"][idx]) - _string = str(cate_id) + ":" + " ".join(strlist) - bbox_x, bbox_y, bbox_w, bbox_h = boxes[idx] - # ax.text(bbox_x, bbox_y, _string, color='black', bbox={'facecolor': 'yellow', 'alpha': 1.0, 'pad': 1}) - ax.text( - bbox_x, - bbox_y, - _string, - color="black", - bbox={"facecolor": color[idx], "alpha": 0.6, "pad": 1}, - ) - - if "box_label" in tgt: - assert len(tgt["box_label"]) == numbox, f"{len(tgt['box_label'])} = {numbox}, " - for idx, bl in enumerate(tgt["box_label"]): - _string = str(bl) - bbox_x, bbox_y, bbox_w, bbox_h = boxes[idx] - # ax.text(bbox_x, 
bbox_y, _string, color='black', bbox={'facecolor': 'yellow', 'alpha': 1.0, 'pad': 1}) - ax.text( - bbox_x, - bbox_y, - _string, - color="black", - bbox={"facecolor": color[idx], "alpha": 0.6, "pad": 1}, - ) - - if "caption" in tgt: - ax.set_title(tgt["caption"], wrap=True) - # plt.figure() - # rainbow_text(0.0,0.0,"all unicorns poop rainbows ! ! !".split(), - # ['red', 'orange', 'brown', 'green', 'blue', 'purple', 'black']) - - if "attn" in tgt: - # if os.environ.get('IPDB_SHILONG_DEBUG', None) == 'INFO': - # import ipdb; ipdb.set_trace() - if isinstance(tgt["attn"], tuple): - tgt["attn"] = [tgt["attn"]] - for item in tgt["attn"]: - attn_map, basergb = item - attn_map = (attn_map - attn_map.min()) / (attn_map.max() - attn_map.min() + 1e-3) - attn_map = (attn_map * 255).astype(np.uint8) - cm = ColorMap(basergb) - heatmap = cm(attn_map) - ax.imshow(heatmap) - ax.set_axis_off() - - def showAnns(self, anns, draw_bbox=False): - """ - Display the specified annotations. - :param anns (array of object): annotations to display - :return: None - """ - if len(anns) == 0: - return 0 - if "segmentation" in anns[0] or "keypoints" in anns[0]: - datasetType = "instances" - elif "caption" in anns[0]: - datasetType = "captions" - else: - raise Exception("datasetType not supported") - if datasetType == "instances": - ax = plt.gca() - ax.set_autoscale_on(False) - polygons = [] - color = [] - for ann in anns: - c = (np.random.random((1, 3)) * 0.6 + 0.4).tolist()[0] - if "segmentation" in ann: - if type(ann["segmentation"]) == list: - # polygon - for seg in ann["segmentation"]: - poly = np.array(seg).reshape((int(len(seg) / 2), 2)) - polygons.append(Polygon(poly)) - color.append(c) - else: - # mask - t = self.imgs[ann["image_id"]] - if type(ann["segmentation"]["counts"]) == list: - rle = maskUtils.frPyObjects( - [ann["segmentation"]], t["height"], t["width"] - ) - else: - rle = [ann["segmentation"]] - m = maskUtils.decode(rle) - img = np.ones((m.shape[0], m.shape[1], 3)) - if ann["iscrowd"] == 1: - color_mask = np.array([2.0, 166.0, 101.0]) / 255 - if ann["iscrowd"] == 0: - color_mask = np.random.random((1, 3)).tolist()[0] - for i in range(3): - img[:, :, i] = color_mask[i] - ax.imshow(np.dstack((img, m * 0.5))) - if "keypoints" in ann and type(ann["keypoints"]) == list: - # turn skeleton into zero-based index - sks = np.array(self.loadCats(ann["category_id"])[0]["skeleton"]) - 1 - kp = np.array(ann["keypoints"]) - x = kp[0::3] - y = kp[1::3] - v = kp[2::3] - for sk in sks: - if np.all(v[sk] > 0): - plt.plot(x[sk], y[sk], linewidth=3, color=c) - plt.plot( - x[v > 0], - y[v > 0], - "o", - markersize=8, - markerfacecolor=c, - markeredgecolor="k", - markeredgewidth=2, - ) - plt.plot( - x[v > 1], - y[v > 1], - "o", - markersize=8, - markerfacecolor=c, - markeredgecolor=c, - markeredgewidth=2, - ) - - if draw_bbox: - [bbox_x, bbox_y, bbox_w, bbox_h] = ann["bbox"] - poly = [ - [bbox_x, bbox_y], - [bbox_x, bbox_y + bbox_h], - [bbox_x + bbox_w, bbox_y + bbox_h], - [bbox_x + bbox_w, bbox_y], - ] - np_poly = np.array(poly).reshape((4, 2)) - polygons.append(Polygon(np_poly)) - color.append(c) - - # p = PatchCollection(polygons, facecolor=color, linewidths=0, alpha=0.4) - # ax.add_collection(p) - p = PatchCollection(polygons, facecolor="none", edgecolors=color, linewidths=2) - ax.add_collection(p) - elif datasetType == "captions": - for ann in anns: - print(ann["caption"]) diff --git a/spaces/james-oldfield/PandA/networks/genforce/utils/logger_test.py b/spaces/james-oldfield/PandA/networks/genforce/utils/logger_test.py 
deleted file mode 100644 index 338a7f4c283943ab04ac368adab1e2bc069d880e..0000000000000000000000000000000000000000 --- a/spaces/james-oldfield/PandA/networks/genforce/utils/logger_test.py +++ /dev/null @@ -1,36 +0,0 @@ -# python3.7 -"""Unit test for logger.""" - -import time - -from .logger import build_logger - - -def test_logger(): - """Test function.""" - - for logger_type in ['normal', 'rich', 'dumb']: - if logger_type == 'normal': - class_name = 'Logger' - elif logger_type == 'rich': - class_name = 'RichLogger' - elif logger_type == 'dumb': - class_name = 'DumbLogger' - - print(f'===== Test `utils.logger.{class_name}` =====') - logger = build_logger(logger_type, - logger_name=logger_type, - logfile_name=f'test_{logger_type}_logger.log') - logger.print('print log') - logger.debug('debug log') - logger.info('info log') - logger.warning('warning log') - logger.init_pbar() - task1 = logger.add_pbar_task('Task 1', 500) - task2 = logger.add_pbar_task('Task 2', 1000) - for _ in range(1000): - logger.update_pbar(task1, 1) - logger.update_pbar(task2, 1) - time.sleep(0.005) - logger.close_pbar() - print('Success!') diff --git a/spaces/jbetker/tortoise/tortoise/models/xtransformers.py b/spaces/jbetker/tortoise/tortoise/models/xtransformers.py deleted file mode 100644 index df9ee25131ffae50047fe4bbc7659c67de6537a3..0000000000000000000000000000000000000000 --- a/spaces/jbetker/tortoise/tortoise/models/xtransformers.py +++ /dev/null @@ -1,1252 +0,0 @@ -import functools -import math -import torch -from torch import nn, einsum -import torch.nn.functional as F -from functools import partial -from inspect import isfunction -from collections import namedtuple - -from einops import rearrange, repeat, reduce -from einops.layers.torch import Rearrange - -from torch.utils.checkpoint import checkpoint - -DEFAULT_DIM_HEAD = 64 - -Intermediates = namedtuple('Intermediates', [ - 'pre_softmax_attn', - 'post_softmax_attn' -]) - -LayerIntermediates = namedtuple('Intermediates', [ - 'hiddens', - 'attn_intermediates', - 'past_key_values', -]) - - -# helpers - -def exists(val): - return val is not None - - -def default(val, d): - if exists(val): - return val - return d() if isfunction(d) else d - - -def cast_tuple(val, depth): - return val if isinstance(val, tuple) else (val,) * depth - - -class always(): - def __init__(self, val): - self.val = val - - def __call__(self, *args, **kwargs): - return self.val - - -class not_equals(): - def __init__(self, val): - self.val = val - - def __call__(self, x, *args, **kwargs): - return x != self.val - - -class equals(): - def __init__(self, val): - self.val = val - - def __call__(self, x, *args, **kwargs): - return x == self.val - - -def max_neg_value(tensor): - return -torch.finfo(tensor.dtype).max - - -def l2norm(t): - return F.normalize(t, p=2, dim=-1) - - -# init helpers - -def init_zero_(layer): - nn.init.constant_(layer.weight, 0.) - if exists(layer.bias): - nn.init.constant_(layer.bias, 0.) 
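# Editor's sketch (not part of the original file): the small helpers above carry most of
# the configuration plumbing in this module, so a quick illustration of their behaviour,
# assuming only the torch imports already present here:
#
#   default(None, 64)           # -> 64
#   default(None, lambda: 64)   # -> 64, callables are evaluated lazily
#   default(32, 64)             # -> 32
#   cast_tuple(0, 3)            # -> (0, 0, 0), broadcast one setting per layer
#   layer = nn.Linear(16, 16)
#   init_zero_(layer)           # weight and bias are now all zeros; used later for
#                               # zero-initialising output projections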
- - -# keyword argument helpers - -def pick_and_pop(keys, d): - values = list(map(lambda key: d.pop(key), keys)) - return dict(zip(keys, values)) - - -def group_dict_by_key(cond, d): - return_val = [dict(), dict()] - for key in d.keys(): - match = bool(cond(key)) - ind = int(not match) - return_val[ind][key] = d[key] - return (*return_val,) - - -def string_begins_with(prefix, str): - return str.startswith(prefix) - - -def group_by_key_prefix(prefix, d): - return group_dict_by_key(partial(string_begins_with, prefix), d) - - -def groupby_prefix_and_trim(prefix, d): - kwargs_with_prefix, kwargs = group_dict_by_key(partial(string_begins_with, prefix), d) - kwargs_without_prefix = dict(map(lambda x: (x[0][len(prefix):], x[1]), tuple(kwargs_with_prefix.items()))) - return kwargs_without_prefix, kwargs - - -# activations - -class ReluSquared(nn.Module): - def forward(self, x): - return F.relu(x) ** 2 - - -# positional embeddings - -class AbsolutePositionalEmbedding(nn.Module): - def __init__(self, dim, max_seq_len): - super().__init__() - self.scale = dim ** -0.5 - self.emb = nn.Embedding(max_seq_len, dim) - - def forward(self, x): - n = torch.arange(x.shape[1], device=x.device) - pos_emb = self.emb(n) - pos_emb = rearrange(pos_emb, 'n d -> () n d') - return pos_emb * self.scale - - -class FixedPositionalEmbedding(nn.Module): - def __init__(self, dim): - super().__init__() - inv_freq = 1. / (10000 ** (torch.arange(0, dim, 2).float() / dim)) - self.register_buffer('inv_freq', inv_freq) - - def forward(self, x, seq_dim=1, offset=0): - t = torch.arange(x.shape[seq_dim], device=x.device).type_as(self.inv_freq) + offset - sinusoid_inp = torch.einsum('i , j -> i j', t, self.inv_freq) - emb = torch.cat((sinusoid_inp.sin(), sinusoid_inp.cos()), dim=-1) - return rearrange(emb, 'n d -> () n d') - - -class RelativePositionBias(nn.Module): - def __init__(self, scale, causal=False, num_buckets=32, max_distance=128, heads=8): - super().__init__() - self.scale = scale - self.causal = causal - self.num_buckets = num_buckets - self.max_distance = max_distance - self.relative_attention_bias = nn.Embedding(num_buckets, heads) - - @staticmethod - def _relative_position_bucket(relative_position, causal=True, num_buckets=32, max_distance=128): - ret = 0 - n = -relative_position - if not causal: - num_buckets //= 2 - ret += (n < 0).long() * num_buckets - n = torch.abs(n) - else: - n = torch.max(n, torch.zeros_like(n)) - - max_exact = num_buckets // 2 - is_small = n < max_exact - - val_if_large = max_exact + ( - torch.log(n.float() / max_exact) / math.log(max_distance / max_exact) * (num_buckets - max_exact) - ).long() - val_if_large = torch.min(val_if_large, torch.full_like(val_if_large, num_buckets - 1)) - - ret += torch.where(is_small, n, val_if_large) - return ret - - def forward(self, qk_dots): - i, j, device = *qk_dots.shape[-2:], qk_dots.device - q_pos = torch.arange(i, dtype=torch.long, device=device) - k_pos = torch.arange(j, dtype=torch.long, device=device) - rel_pos = k_pos[None, :] - q_pos[:, None] - rp_bucket = self._relative_position_bucket(rel_pos, causal=self.causal, num_buckets=self.num_buckets, - max_distance=self.max_distance) - values = self.relative_attention_bias(rp_bucket) - bias = rearrange(values, 'i j h -> () h i j') - return qk_dots + (bias * self.scale) - - -class AlibiPositionalBias(nn.Module): - def __init__(self, heads, **kwargs): - super().__init__() - self.heads = heads - slopes = torch.Tensor(self._get_slopes(heads)) - slopes = rearrange(slopes, 'h -> () h () ()') - 
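# Editor's note (added comment, not in the original file): for a power-of-two head count n,
# _get_slopes below returns the geometric sequence 2**(-8/n), 2**(-16/n), ..., 2**(-8);
# e.g. heads=8 gives 1/2, 1/4, 1/8, ..., 1/256. For other head counts it takes the slopes of
# the closest smaller power of two and appends every other slope from the doubled sequence.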
self.register_buffer('slopes', slopes, persistent=False) - self.register_buffer('bias', None, persistent=False) - - @staticmethod - def _get_slopes(heads): - def get_slopes_power_of_2(n): - start = (2 ** (-2 ** -(math.log2(n) - 3))) - ratio = start - return [start * ratio ** i for i in range(n)] - - if math.log2(heads).is_integer(): - return get_slopes_power_of_2(heads) - - closest_power_of_2 = 2 ** math.floor(math.log2(heads)) - return get_slopes_power_of_2(closest_power_of_2) + get_slopes_power_of_2(2 * closest_power_of_2)[0::2][ - :heads - closest_power_of_2] - - def forward(self, qk_dots): - h, i, j, device = *qk_dots.shape[-3:], qk_dots.device - - if exists(self.bias) and self.bias.shape[-1] >= j: - return qk_dots + self.bias[..., :j] - - bias = torch.arange(j, device=device) - bias = rearrange(bias, 'j -> () () () j') - bias = bias * self.slopes - - num_heads_unalibied = h - bias.shape[1] - bias = F.pad(bias, (0, 0, 0, 0, 0, num_heads_unalibied)) - - self.register_buffer('bias', bias, persistent=False) - return qk_dots + self.bias - - -class LearnedAlibiPositionalBias(AlibiPositionalBias): - def __init__(self, heads, bidirectional=False): - super().__init__(heads) - los_slopes = torch.log(self.slopes) - self.learned_logslopes = nn.Parameter(los_slopes) - - self.bidirectional = bidirectional - if self.bidirectional: - self.learned_logslopes_future = nn.Parameter(los_slopes) - - def forward(self, qk_dots): - h, i, j, device = *qk_dots.shape[-3:], qk_dots.device - - def get_slopes(param): - return F.pad(param.exp(), (0, 0, 0, 0, 0, h - param.shape[1])) - - if exists(self.bias) and self.bias.shape[-1] >= j: - bias = self.bias[..., :i, :j] - else: - i_arange = torch.arange(i, device=device) - j_arange = torch.arange(j, device=device) - bias = rearrange(j_arange, 'j -> 1 1 1 j') - rearrange(i_arange, 'i -> 1 1 i 1') - self.register_buffer('bias', bias, persistent=False) - - if self.bidirectional: - past_slopes = get_slopes(self.learned_logslopes) - future_slopes = get_slopes(self.learned_logslopes_future) - bias = torch.tril(bias * past_slopes) + torch.triu(bias * future_slopes) - else: - slopes = get_slopes(self.learned_logslopes) - bias = bias * slopes - - return qk_dots + bias - - -class RotaryEmbedding(nn.Module): - def __init__(self, dim): - super().__init__() - inv_freq = 1. / (10000 ** (torch.arange(0, dim, 2).float() / dim)) - self.register_buffer('inv_freq', inv_freq) - - def forward(self, max_seq_len, device): - t = torch.arange(max_seq_len, device=device).type_as(self.inv_freq) - freqs = torch.einsum('i , j -> i j', t, self.inv_freq) - emb = torch.cat((freqs, freqs), dim=-1) - return rearrange(emb, 'n d -> () () n d') - - -def rotate_half(x): - x = rearrange(x, '... (j d) -> ... 
j d', j=2) - x1, x2 = x.unbind(dim=-2) - return torch.cat((-x2, x1), dim=-1) - - -def apply_rotary_pos_emb(t, freqs): - seq_len = t.shape[-2] - freqs = freqs[:, :, -seq_len:] - return (t * freqs.cos()) + (rotate_half(t) * freqs.sin()) - - -# norms - -class Scale(nn.Module): - def __init__(self, value, fn): - super().__init__() - self.value = value - self.fn = fn - - def forward(self, x, **kwargs): - out = self.fn(x, **kwargs) - scale_fn = lambda t: t * self.value - - if not isinstance(out, tuple): - return scale_fn(out) - - return (scale_fn(out[0]), *out[1:]) - - -class Rezero(nn.Module): - def __init__(self, fn): - super().__init__() - self.fn = fn - self.g = nn.Parameter(torch.zeros(1)) - - def forward(self, x, **kwargs): - out = self.fn(x, **kwargs) - rezero_fn = lambda t: t * self.g - - if not isinstance(out, tuple): - return rezero_fn(out) - - return (rezero_fn(out[0]), *out[1:]) - - -class ScaleNorm(nn.Module): - def __init__(self, dim, eps=1e-5): - super().__init__() - self.scale = dim ** -0.5 - self.eps = eps - self.g = nn.Parameter(torch.ones(1)) - - def forward(self, x): - norm = torch.norm(x, dim=-1, keepdim=True) * self.scale - return x / norm.clamp(min=self.eps) * self.g - - -class RMSNorm(nn.Module): - def __init__(self, dim, eps=1e-8): - super().__init__() - self.scale = dim ** -0.5 - self.eps = eps - self.g = nn.Parameter(torch.ones(dim)) - - def forward(self, x): - norm = torch.norm(x, dim=-1, keepdim=True) * self.scale - return x / norm.clamp(min=self.eps) * self.g - - -class RMSScaleShiftNorm(nn.Module): - def __init__(self, dim, eps=1e-8): - super().__init__() - self.scale = dim ** -0.5 - self.eps = eps - self.g = nn.Parameter(torch.ones(dim)) - self.scale_shift_process = nn.Linear(dim * 2, dim * 2) - - def forward(self, x, norm_scale_shift_inp): - norm = torch.norm(x, dim=-1, keepdim=True) * self.scale - norm = x / norm.clamp(min=self.eps) * self.g - - ss_emb = self.scale_shift_process(norm_scale_shift_inp) - scale, shift = torch.chunk(ss_emb, 2, dim=1) - h = norm * (1 + scale.unsqueeze(1)) + shift.unsqueeze(1) - return h - - -# residual and residual gates - -class Residual(nn.Module): - def __init__(self, dim, scale_residual=False): - super().__init__() - self.residual_scale = nn.Parameter(torch.ones(dim)) if scale_residual else None - - def forward(self, x, residual): - if exists(self.residual_scale): - residual = residual * self.residual_scale - - return x + residual - - -class GRUGating(nn.Module): - def __init__(self, dim, scale_residual=False): - super().__init__() - self.gru = nn.GRUCell(dim, dim) - self.residual_scale = nn.Parameter(torch.ones(dim)) if scale_residual else None - - def forward(self, x, residual): - if exists(self.residual_scale): - residual = residual * self.residual_scale - - gated_output = self.gru( - rearrange(x, 'b n d -> (b n) d'), - rearrange(residual, 'b n d -> (b n) d') - ) - - return gated_output.reshape_as(x) - - -# token shifting - -def shift(t, amount, mask=None): - if amount == 0: - return t - - if exists(mask): - t = t.masked_fill(~mask[..., None], 0.) - - return F.pad(t, (0, 0, amount, -amount), value=0.) 
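# Editor's sketch (not part of the original file): what shift() does, using only the torch /
# torch.nn.functional imports at the top of this module. F.pad(t, (0, 0, amount, -amount))
# inserts `amount` zero rows at the front of the sequence dimension and trims the same number
# from the back, so each position ends up holding the features of the token `amount` steps earlier.
#
#   t = torch.arange(1., 4.).view(1, 3, 1)   # sequence [1, 2, 3]
#   shift(t, 1).flatten().tolist()           # -> [0.0, 1.0, 2.0]
#   shift(t, -1).flatten().tolist()          # -> [2.0, 3.0, 0.0]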
- - -class ShiftTokens(nn.Module): - def __init__(self, shifts, fn): - super().__init__() - self.fn = fn - self.shifts = tuple(shifts) - - def forward(self, x, **kwargs): - mask = kwargs.get('mask', None) - shifts = self.shifts - segments = len(shifts) - feats_per_shift = x.shape[-1] // segments - splitted = x.split(feats_per_shift, dim=-1) - segments_to_shift, rest = splitted[:segments], splitted[segments:] - segments_to_shift = list(map(lambda args: shift(*args, mask=mask), zip(segments_to_shift, shifts))) - x = torch.cat((*segments_to_shift, *rest), dim=-1) - return self.fn(x, **kwargs) - - -# feedforward - -class GLU(nn.Module): - def __init__(self, dim_in, dim_out, activation): - super().__init__() - self.act = activation - self.proj = nn.Linear(dim_in, dim_out * 2) - - def forward(self, x): - x, gate = self.proj(x).chunk(2, dim=-1) - return x * self.act(gate) - - -class FeedForward(nn.Module): - def __init__( - self, - dim, - dim_out=None, - mult=4, - glu=False, - relu_squared=False, - post_act_ln=False, - dropout=0., - zero_init_output=False - ): - super().__init__() - inner_dim = int(dim * mult) - dim_out = default(dim_out, dim) - activation = ReluSquared() if relu_squared else nn.GELU() - - project_in = nn.Sequential( - nn.Linear(dim, inner_dim), - activation - ) if not glu else GLU(dim, inner_dim, activation) - - self.net = nn.Sequential( - project_in, - nn.LayerNorm(inner_dim) if post_act_ln else nn.Identity(), - nn.Dropout(dropout), - nn.Linear(inner_dim, dim_out) - ) - - # init last linear layer to 0 - if zero_init_output: - init_zero_(self.net[-1]) - - def forward(self, x): - return self.net(x) - - -# attention. - -class Attention(nn.Module): - def __init__( - self, - dim, - dim_head=DEFAULT_DIM_HEAD, - heads=8, - causal=False, - talking_heads=False, - head_scale=False, - collab_heads=False, - collab_compression=.3, - sparse_topk=None, - use_entmax15=False, - num_mem_kv=0, - dropout=0., - on_attn=False, - gate_values=False, - zero_init_output=False, - max_attend_past=None, - qk_norm=False, - scale_init_value=None, - rel_pos_bias=False, - rel_pos_num_buckets=32, - rel_pos_max_distance=128, - ): - super().__init__() - self.scale = dim_head ** -0.5 - - self.heads = heads - self.causal = causal - self.max_attend_past = max_attend_past - - qk_dim = v_dim = dim_head * heads - - # collaborative heads - self.collab_heads = collab_heads - if self.collab_heads: - qk_dim = int(collab_compression * qk_dim) - self.collab_mixing = nn.Parameter(torch.randn(heads, qk_dim)) - - self.to_q = nn.Linear(dim, qk_dim, bias=False) - self.to_k = nn.Linear(dim, qk_dim, bias=False) - self.to_v = nn.Linear(dim, v_dim, bias=False) - - self.dropout = nn.Dropout(dropout) - - # add GLU gating for aggregated values, from alphafold2 - self.to_v_gate = None - if gate_values: - self.to_v_gate = nn.Linear(dim, v_dim) - nn.init.constant_(self.to_v_gate.weight, 0) - nn.init.constant_(self.to_v_gate.bias, 1) - - # cosine sim attention - self.qk_norm = qk_norm - if qk_norm: - scale_init_value = default(scale_init_value, - -3) # if not provided, initialize as though it were sequence length of 1024 - self.scale = nn.Parameter(torch.ones(1, heads, 1, 1) * scale_init_value) - - # talking heads - self.talking_heads = talking_heads - if talking_heads: - self.pre_softmax_proj = nn.Parameter(torch.randn(heads, heads)) - self.post_softmax_proj = nn.Parameter(torch.randn(heads, heads)) - - # head scaling - self.head_scale = head_scale - if head_scale: - self.head_scale_params = nn.Parameter(torch.ones(1, heads, 1, 1)) - - # 
explicit topk sparse attention - self.sparse_topk = sparse_topk - - # entmax - self.attn_fn = F.softmax - - # add memory key / values - self.num_mem_kv = num_mem_kv - if num_mem_kv > 0: - self.mem_k = nn.Parameter(torch.randn(heads, num_mem_kv, dim_head)) - self.mem_v = nn.Parameter(torch.randn(heads, num_mem_kv, dim_head)) - - # attention on attention - self.attn_on_attn = on_attn - self.to_out = nn.Sequential(nn.Linear(v_dim, dim * 2), nn.GLU()) if on_attn else nn.Linear(v_dim, dim) - - self.rel_pos_bias = rel_pos_bias - if rel_pos_bias: - assert rel_pos_num_buckets <= rel_pos_max_distance, 'number of relative position buckets must be less than the relative position max distance' - self.rel_pos = RelativePositionBias(scale=dim_head ** 0.5, causal=causal, heads=heads, - num_buckets=rel_pos_num_buckets, max_distance=rel_pos_max_distance) - - # init output projection 0 - if zero_init_output: - init_zero_(self.to_out) - - def forward( - self, - x, - context=None, - mask=None, - context_mask=None, - attn_mask=None, - sinusoidal_emb=None, - rotary_pos_emb=None, - prev_attn=None, - mem=None, - layer_past=None, - ): - b, n, _, h, talking_heads, collab_heads, head_scale, scale, device, has_context = *x.shape, self.heads, self.talking_heads, self.collab_heads, self.head_scale, self.scale, x.device, exists( - context) - kv_input = default(context, x) - - q_input = x - k_input = kv_input - v_input = kv_input - - if exists(mem): - k_input = torch.cat((mem, k_input), dim=-2) - v_input = torch.cat((mem, v_input), dim=-2) - - if exists(sinusoidal_emb): - # in shortformer, the query would start at a position offset depending on the past cached memory - offset = k_input.shape[-2] - q_input.shape[-2] - q_input = q_input + sinusoidal_emb(q_input, offset=offset) - k_input = k_input + sinusoidal_emb(k_input) - - q = self.to_q(q_input) - k = self.to_k(k_input) - v = self.to_v(v_input) - - if not collab_heads: - q, k, v = map(lambda t: rearrange(t, 'b n (h d) -> b h n d', h=h), (q, k, v)) - else: - q = einsum('b i d, h d -> b h i d', q, self.collab_mixing) - k = rearrange(k, 'b n d -> b () n d') - v = rearrange(v, 'b n (h d) -> b h n d', h=h) - - if layer_past is not None: - past_key, past_value = layer_past - k = torch.cat([past_key, k], dim=-2) - v = torch.cat([past_value, v], dim=-2) - k_cache = k - v_cache = v - - if exists(rotary_pos_emb) and not has_context: - l = rotary_pos_emb.shape[-1] - (ql, qr), (kl, kr), (vl, vr) = map(lambda t: (t[..., :l], t[..., l:]), (q, k, v)) - ql, kl, vl = map(lambda t: apply_rotary_pos_emb(t, rotary_pos_emb), (ql, kl, vl)) - q, k, v = map(lambda t: torch.cat(t, dim=-1), ((ql, qr), (kl, kr), (vl, vr))) - - input_mask = None - if any(map(exists, (mask, context_mask))): - q_mask = default(mask, lambda: torch.ones((b, n), device=device).bool()) - k_mask = q_mask if not exists(context) else context_mask - k_mask = default(k_mask, lambda: torch.ones((b, k.shape[-2]), device=device).bool()) - q_mask = rearrange(q_mask, 'b i -> b () i ()') - k_mask = rearrange(k_mask, 'b j -> b () () j') - input_mask = q_mask * k_mask - - if self.num_mem_kv > 0: - mem_k, mem_v = map(lambda t: repeat(t, 'h n d -> b h n d', b=b), (self.mem_k, self.mem_v)) - k = torch.cat((mem_k, k), dim=-2) - v = torch.cat((mem_v, v), dim=-2) - if exists(input_mask): - input_mask = F.pad(input_mask, (self.num_mem_kv, 0), value=True) - - if collab_heads: - k = k.expand(-1, h, -1, -1) - - if self.qk_norm: - q, k = map(l2norm, (q, k)) - scale = 1 / (self.scale.exp().clamp(min=1e-2)) - - dots = einsum('b h i d, b h j d -> 
b h i j', q, k) * scale - mask_value = max_neg_value(dots) - - if exists(prev_attn): - dots = dots + prev_attn - - pre_softmax_attn = dots.clone() - - if talking_heads: - dots = einsum('b h i j, h k -> b k i j', dots, self.pre_softmax_proj).contiguous() - - if self.rel_pos_bias: - dots = self.rel_pos(dots) - - if exists(input_mask): - dots.masked_fill_(~input_mask, mask_value) - del input_mask - - if exists(attn_mask): - assert 2 <= attn_mask.ndim <= 4, 'attention mask must have greater than 2 dimensions but less than or equal to 4' - if attn_mask.ndim == 2: - attn_mask = rearrange(attn_mask, 'i j -> () () i j') - elif attn_mask.ndim == 3: - attn_mask = rearrange(attn_mask, 'h i j -> () h i j') - dots.masked_fill_(~attn_mask, mask_value) - - if exists(self.max_attend_past): - i, j = dots.shape[-2:] - range_q = torch.arange(j - i, j, device=device) - range_k = torch.arange(j, device=device) - dist = rearrange(range_q, 'i -> () () i ()') - rearrange(range_k, 'j -> () () () j') - mask = dist > self.max_attend_past - dots.masked_fill_(mask, mask_value) - del mask - - if self.causal: - i, j = dots.shape[-2:] - r = torch.arange(i, device=device) - mask = rearrange(r, 'i -> () () i ()') < rearrange(r, 'j -> () () () j') - mask = F.pad(mask, (j - i, 0), value=False) - dots.masked_fill_(mask, mask_value) - del mask - - if exists(self.sparse_topk) and self.sparse_topk < dots.shape[-1]: - top, _ = dots.topk(self.sparse_topk, dim=-1) - vk = top[..., -1].unsqueeze(-1).expand_as(dots) - mask = dots < vk - dots.masked_fill_(mask, mask_value) - del mask - - attn = self.attn_fn(dots, dim=-1) - post_softmax_attn = attn.clone() - - attn = self.dropout(attn) - - if talking_heads: - attn = einsum('b h i j, h k -> b k i j', attn, self.post_softmax_proj).contiguous() - - out = einsum('b h i j, b h j d -> b h i d', attn, v) - - if head_scale: - out = out * self.head_scale_params - - out = rearrange(out, 'b h n d -> b n (h d)') - - if exists(self.to_v_gate): - gates = self.to_v_gate(x) - out = out * gates.sigmoid() - - intermediates = Intermediates( - pre_softmax_attn=pre_softmax_attn, - post_softmax_attn=post_softmax_attn - ) - - return self.to_out(out), intermediates, k_cache, v_cache - - -class AttentionLayers(nn.Module): - def __init__( - self, - dim, - depth, - heads=8, - causal=False, - cross_attend=False, - only_cross=False, - use_scalenorm=False, - use_rms_scaleshift_norm=False, - use_rmsnorm=False, - use_rezero=False, - alibi_pos_bias=False, - alibi_num_heads=None, - alibi_learned=False, - position_infused_attn=False, - rotary_pos_emb=False, - rotary_emb_dim=None, - custom_layers=None, - sandwich_coef=None, - par_ratio=None, - residual_attn=False, - cross_residual_attn=False, - macaron=False, - pre_norm=True, - gate_residual=False, - scale_residual=False, - shift_tokens=0, - sandwich_norm=False, - use_qk_norm_attn=False, - qk_norm_attn_seq_len=None, - zero_init_branch_output=False, - **kwargs - ): - super().__init__() - ff_kwargs, kwargs = groupby_prefix_and_trim('ff_', kwargs) - attn_kwargs, _ = groupby_prefix_and_trim('attn_', kwargs) - - dim_head = attn_kwargs.get('dim_head', DEFAULT_DIM_HEAD) - - self.dim = dim - self.depth = depth - self.layers = nn.ModuleList([]) - self.causal = causal - - rel_pos_bias = 'rel_pos_bias' in attn_kwargs - self.has_pos_emb = position_infused_attn or rel_pos_bias or rotary_pos_emb - self.pia_pos_emb = FixedPositionalEmbedding(dim) if position_infused_attn else None - - rotary_emb_dim = max(default(rotary_emb_dim, dim_head // 2), 32) - self.rotary_pos_emb = 
RotaryEmbedding(rotary_emb_dim) if rotary_pos_emb else None - - assert not ( - alibi_pos_bias and rel_pos_bias), 'you can only choose Alibi positional bias or T5 relative positional bias, not both' - - if alibi_pos_bias: - alibi_num_heads = default(alibi_num_heads, heads) - assert alibi_num_heads <= heads, 'number of ALiBi heads must be less than the total number of heads' - alibi_pos_klass = LearnedAlibiPositionalBias if alibi_learned or not causal else AlibiPositionalBias - self.rel_pos = alibi_pos_klass(heads=alibi_num_heads, bidirectional=not causal) - else: - self.rel_pos = None - - assert not (not pre_norm and sandwich_norm), 'sandwich norm cannot be used when not using prenorm' - self.pre_norm = pre_norm - self.sandwich_norm = sandwich_norm - - self.residual_attn = residual_attn - self.cross_residual_attn = cross_residual_attn - self.cross_attend = cross_attend - - norm_class = ScaleNorm if use_scalenorm else nn.LayerNorm - norm_class = RMSNorm if use_rmsnorm else norm_class - norm_class = RMSScaleShiftNorm if use_rms_scaleshift_norm else norm_class - norm_fn = partial(norm_class, dim) - - norm_fn = nn.Identity if use_rezero else norm_fn - branch_fn = Rezero if use_rezero else None - - if cross_attend and not only_cross: - default_block = ('a', 'c', 'f') - elif cross_attend and only_cross: - default_block = ('c', 'f') - else: - default_block = ('a', 'f') - - if macaron: - default_block = ('f',) + default_block - - # qk normalization - - if use_qk_norm_attn: - attn_scale_init_value = -math.log(math.log2(qk_norm_attn_seq_len ** 2 - qk_norm_attn_seq_len)) if exists( - qk_norm_attn_seq_len) else None - attn_kwargs = {**attn_kwargs, 'qk_norm': True, 'scale_init_value': attn_scale_init_value} - - # zero init - - if zero_init_branch_output: - attn_kwargs = {**attn_kwargs, 'zero_init_output': True} - ff_kwargs = {**ff_kwargs, 'zero_init_output': True} - - # calculate layer block order - - if exists(custom_layers): - layer_types = custom_layers - elif exists(par_ratio): - par_depth = depth * len(default_block) - assert 1 < par_ratio <= par_depth, 'par ratio out of range' - default_block = tuple(filter(not_equals('f'), default_block)) - par_attn = par_depth // par_ratio - depth_cut = par_depth * 2 // 3 # 2 / 3 attention layer cutoff suggested by PAR paper - par_width = (depth_cut + depth_cut // par_attn) // par_attn - assert len(default_block) <= par_width, 'default block is too large for par_ratio' - par_block = default_block + ('f',) * (par_width - len(default_block)) - par_head = par_block * par_attn - layer_types = par_head + ('f',) * (par_depth - len(par_head)) - elif exists(sandwich_coef): - assert sandwich_coef > 0 and sandwich_coef <= depth, 'sandwich coefficient should be less than the depth' - layer_types = ('a',) * sandwich_coef + default_block * (depth - sandwich_coef) + ('f',) * sandwich_coef - else: - layer_types = default_block * depth - - self.layer_types = layer_types - self.num_attn_layers = len(list(filter(equals('a'), layer_types))) - - # calculate token shifting - - shift_tokens = cast_tuple(shift_tokens, len(layer_types)) - - # iterate and construct layers - - for ind, (layer_type, layer_shift_tokens) in enumerate(zip(self.layer_types, shift_tokens)): - is_last_layer = ind == (len(self.layer_types) - 1) - - if layer_type == 'a': - layer = Attention(dim, heads=heads, causal=causal, **attn_kwargs) - elif layer_type == 'c': - layer = Attention(dim, heads=heads, **attn_kwargs) - elif layer_type == 'f': - layer = FeedForward(dim, **ff_kwargs) - layer = layer if not macaron 
else Scale(0.5, layer) - else: - raise Exception(f'invalid layer type {layer_type}') - - if layer_shift_tokens > 0: - shift_range_upper = layer_shift_tokens + 1 - shift_range_lower = -layer_shift_tokens if not causal else 0 - layer = ShiftTokens(range(shift_range_lower, shift_range_upper), layer) - - if exists(branch_fn): - layer = branch_fn(layer) - - residual_fn = GRUGating if gate_residual else Residual - residual = residual_fn(dim, scale_residual=scale_residual) - - layer_uses_qk_norm = use_qk_norm_attn and layer_type in ('a', 'c') - - pre_branch_norm = norm_fn() if pre_norm and not layer_uses_qk_norm else None - post_branch_norm = norm_fn() if sandwich_norm or layer_uses_qk_norm else None - post_main_norm = norm_fn() if not pre_norm and not is_last_layer else None - - norms = nn.ModuleList([ - pre_branch_norm, - post_branch_norm, - post_main_norm - ]) - - self.layers.append(nn.ModuleList([ - norms, - layer, - residual - ])) - - def forward( - self, - x, - context=None, - full_context=None, # for passing a list of hidden states from an encoder - mask=None, - context_mask=None, - attn_mask=None, - mems=None, - return_hiddens=False, - norm_scale_shift_inp=None, - past_key_values=None, - expected_seq_len=None, - ): - - assert not (self.cross_attend ^ (exists(context) or exists( - full_context))), 'context must be passed in if cross_attend is set to True' - assert context is None or full_context is None, 'only one of full_context or context can be provided' - - hiddens = [] - intermediates = [] - prev_attn = None - prev_cross_attn = None - - mems = mems.copy() if exists(mems) else [None] * self.num_attn_layers - norm_args = {} - if exists(norm_scale_shift_inp): - norm_args['norm_scale_shift_inp'] = norm_scale_shift_inp - - rotary_pos_emb = None - if exists(self.rotary_pos_emb): - if not self.training and self.causal: - assert expected_seq_len is not None, "To decode a transformer with rotary embeddings, you must specify an `expected_seq_len`" - elif expected_seq_len is None: - expected_seq_len = 0 - seq_len = x.shape[1] - if past_key_values is not None: - seq_len += past_key_values[0][0].shape[-2] - max_rotary_emb_length = max(list(map(lambda m: (m.shape[1] if exists(m) else 0) + seq_len, mems)) + [expected_seq_len]) - rotary_pos_emb = self.rotary_pos_emb(max_rotary_emb_length, x.device) - - present_key_values = [] - cross_attn_count = 0 - for ind, (layer_type, (norm, block, residual_fn)) in enumerate(zip(self.layer_types, self.layers)): - if layer_type == 'a': - layer_mem = mems.pop(0) if mems else None - - residual = x - - pre_branch_norm, post_branch_norm, post_main_norm = norm - - if exists(pre_branch_norm): - x = pre_branch_norm(x, **norm_args) - - if layer_type == 'a' or layer_type == 'c': - if past_key_values is not None: - layer_kv = past_key_values.pop(0) - layer_past = tuple(s.to(x.device) for s in layer_kv) - else: - layer_past = None - - if layer_type == 'a': - out, inter, k, v = checkpoint(block, x, None, mask, None, attn_mask, self.pia_pos_emb, rotary_pos_emb, - prev_attn, layer_mem, layer_past) - elif layer_type == 'c': - if exists(full_context): - out, inter, k, v = checkpoint(block, x, full_context[cross_attn_count], mask, context_mask, None, None, - None, prev_attn, None, layer_past) - else: - out, inter, k, v = checkpoint(block, x, context, mask, context_mask, None, None, None, prev_attn, None, layer_past) - elif layer_type == 'f': - out = checkpoint(block, x) - - if layer_type == 'a' or layer_type == 'c' and present_key_values is not None: - 
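# Editor's note (added comment, not in the original file): because `and` binds tighter than `or`,
# the condition above parses as `layer_type == 'a' or (layer_type == 'c' and present_key_values
# is not None)`; since present_key_values is initialised to a list before the loop, the key/value
# cache is appended for both self-attention ('a') and cross-attention ('c') layers.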
present_key_values.append((k.detach(), v.detach())) - - if exists(post_branch_norm): - out = post_branch_norm(out, **norm_args) - - x = residual_fn(out, residual) - - if layer_type in ('a', 'c'): - intermediates.append(inter) - - if layer_type == 'a' and self.residual_attn: - prev_attn = inter.pre_softmax_attn - elif layer_type == 'c' and self.cross_residual_attn: - prev_cross_attn = inter.pre_softmax_attn - - if exists(post_main_norm): - x = post_main_norm(x, **norm_args) - - if layer_type == 'c': - cross_attn_count += 1 - - if layer_type == 'f': - hiddens.append(x) - - if return_hiddens: - intermediates = LayerIntermediates( - hiddens=hiddens, - attn_intermediates=intermediates, - past_key_values=present_key_values - ) - - return x, intermediates - - return x - - -class Encoder(AttentionLayers): - def __init__(self, **kwargs): - assert 'causal' not in kwargs, 'cannot set causality on encoder' - super().__init__(causal=False, **kwargs) - - -class Decoder(AttentionLayers): - def __init__(self, **kwargs): - assert 'causal' not in kwargs, 'cannot set causality on decoder' - super().__init__(causal=True, **kwargs) - - -class CrossAttender(AttentionLayers): - def __init__(self, **kwargs): - super().__init__(cross_attend=True, only_cross=True, **kwargs) - - -class ViTransformerWrapper(nn.Module): - def __init__( - self, - *, - image_size, - patch_size, - attn_layers, - num_classes=None, - dropout=0., - emb_dropout=0. - ): - super().__init__() - assert isinstance(attn_layers, Encoder), 'attention layers must be an Encoder' - assert image_size % patch_size == 0, 'image dimensions must be divisible by the patch size' - dim = attn_layers.dim - num_patches = (image_size // patch_size) ** 2 - patch_dim = 3 * patch_size ** 2 - - self.patch_size = patch_size - - self.pos_embedding = nn.Parameter(torch.randn(1, num_patches + 1, dim)) - self.patch_to_embedding = nn.Linear(patch_dim, dim) - self.cls_token = nn.Parameter(torch.randn(1, 1, dim)) - self.dropout = nn.Dropout(emb_dropout) - - self.attn_layers = attn_layers - self.norm = nn.LayerNorm(dim) - self.mlp_head = FeedForward(dim, dim_out=num_classes, dropout=dropout) if exists(num_classes) else None - - def forward( - self, - img, - return_embeddings=False - ): - p = self.patch_size - - x = rearrange(img, 'b c (h p1) (w p2) -> b (h w) (p1 p2 c)', p1=p, p2=p) - x = self.patch_to_embedding(x) - b, n, _ = x.shape - - cls_tokens = repeat(self.cls_token, '() n d -> b n d', b=b) - x = torch.cat((cls_tokens, x), dim=1) - x = x + self.pos_embedding[:, :(n + 1)] - x = self.dropout(x) - - x = self.attn_layers(x) - x = self.norm(x) - - if not exists(self.mlp_head) or return_embeddings: - return x - - return self.mlp_head(x[:, 0]) - - -class TransformerWrapper(nn.Module): - def __init__( - self, - *, - num_tokens, - max_seq_len, - attn_layers, - emb_dim=None, - max_mem_len=0., - shift_mem_down=0, - emb_dropout=0., - num_memory_tokens=None, - tie_embedding=False, - use_pos_emb=True - ): - super().__init__() - assert isinstance(attn_layers, AttentionLayers), 'attention layers must be one of Encoder or Decoder' - - dim = attn_layers.dim - emb_dim = default(emb_dim, dim) - - self.max_seq_len = max_seq_len - self.max_mem_len = max_mem_len - self.shift_mem_down = shift_mem_down - - self.token_emb = nn.Embedding(num_tokens, emb_dim) - self.pos_emb = AbsolutePositionalEmbedding(emb_dim, max_seq_len) if ( - use_pos_emb and not attn_layers.has_pos_emb) else always(0) - self.emb_dropout = nn.Dropout(emb_dropout) - - self.project_emb = nn.Linear(emb_dim, dim) if emb_dim != 
dim else nn.Identity() - self.attn_layers = attn_layers - self.norm = nn.LayerNorm(dim) - - self.init_() - - self.to_logits = nn.Linear(dim, num_tokens) if not tie_embedding else lambda t: t @ self.token_emb.weight.t() - - # memory tokens (like [cls]) from Memory Transformers paper - num_memory_tokens = default(num_memory_tokens, 0) - self.num_memory_tokens = num_memory_tokens - if num_memory_tokens > 0: - self.memory_tokens = nn.Parameter(torch.randn(num_memory_tokens, dim)) - - def init_(self): - nn.init.kaiming_normal_(self.token_emb.weight) - - def forward( - self, - x, - return_embeddings=False, - mask=None, - return_hiddens=False, - return_attn=False, - mems=None, - use_cache=False, - **kwargs - ): - b, n, device, num_mem = *x.shape, x.device, self.num_memory_tokens - x = self.token_emb(x) - x = x + self.pos_emb(x) - x = self.emb_dropout(x) - - x = self.project_emb(x) - - if num_mem > 0: - mem = repeat(self.memory_tokens, 'n d -> b n d', b=b) - x = torch.cat((mem, x), dim=1) - - # auto-handle masking after appending memory tokens - if exists(mask): - mask = F.pad(mask, (num_mem, 0), value=True) - - if self.shift_mem_down and exists(mems): - mems_l, mems_r = mems[:self.shift_mem_down], mems[self.shift_mem_down:] - mems = [*mems_r, *mems_l] - - x, intermediates = self.attn_layers(x, mask=mask, mems=mems, return_hiddens=True, **kwargs) - x = self.norm(x) - - mem, x = x[:, :num_mem], x[:, num_mem:] - - out = self.to_logits(x) if not return_embeddings else x - - if return_hiddens: - hiddens = intermediates.hiddens - return out, hiddens - - res = [out] - if return_attn: - attn_maps = list(map(lambda t: t.post_softmax_attn, intermediates.attn_intermediates)) - res.append(attn_maps) - if use_cache: - res.append(intermediates.past_key_values) - - if len(res) > 1: - return tuple(res) - return res[0] - - -class ContinuousTransformerWrapper(nn.Module): - def __init__( - self, - *, - max_seq_len, - attn_layers, - dim_in=None, - dim_out=None, - emb_dim=None, - emb_dropout=0., - use_pos_emb=True - ): - super().__init__() - assert isinstance(attn_layers, AttentionLayers), 'attention layers must be one of Encoder or Decoder' - - dim = attn_layers.dim - - self.max_seq_len = max_seq_len - - self.pos_emb = AbsolutePositionalEmbedding(dim, max_seq_len) if ( - use_pos_emb and not attn_layers.has_pos_emb) else always(0) - self.emb_dropout = nn.Dropout(emb_dropout) - - self.project_in = nn.Linear(dim_in, dim) if exists(dim_in) else nn.Identity() - - self.attn_layers = attn_layers - self.norm = nn.LayerNorm(dim) - - self.project_out = nn.Linear(dim, dim_out) if exists(dim_out) else nn.Identity() - - def forward( - self, - x, - return_embeddings=False, - mask=None, - return_attn=False, - mems=None, - use_cache=False, - **kwargs - ): - b, n, _, device = *x.shape, x.device - - x = self.project_in(x) - x = x + self.pos_emb(x) - x = self.emb_dropout(x) - - x, intermediates = self.attn_layers(x, mask=mask, mems=mems, return_hiddens=True, **kwargs) - x = self.norm(x) - - out = self.project_out(x) if not return_embeddings else x - - res = [out] - if return_attn: - attn_maps = list(map(lambda t: t.post_softmax_attn, intermediates.attn_intermediates)) - res.append(attn_maps) - if use_cache: - res.append(intermediates.past_key_values) - - if len(res) > 1: - return tuple(res) - return res[0] - diff --git a/spaces/jbochi/Candle-CoEdIT-Wasm/index.html b/spaces/jbochi/Candle-CoEdIT-Wasm/index.html deleted file mode 100644 index 4f153b0da28c77c26490f80325105b55520faeac..0000000000000000000000000000000000000000 --- 
a/spaces/jbochi/Candle-CoEdIT-Wasm/index.html +++ /dev/null @@ -1,286 +0,0 @@ - - - - Grammar correction in the browser using CoEdIT and Candle - - - - - - - - - - - - - - - - - - -
[Placeholder: the index.html markup was not preserved in this dump; only the page's surviving text content is kept below.]
🕯️ Grammar correction in the browser using CoEdIT and Candle.
CoEdIT Rust/WASM Demo
This demo showcases CoEdIT models right in your browser, thanks to the Candle ML framework and rust/wasm. The models are loaded from huggingface jbochi/candle-coedit-quantized, which are based on the original models (coedit-large, coedit-xl, and coedit-xxl). Note that larger models may fail to load in your browser. This space is a fork of radames/Candle-T5-Generation-Wasm.
Task Prefix: (prompt and generation controls showing the values 0.00, 1.00, and 1.10)
Generation: No output yet
        - - diff --git a/spaces/jeonchangbin49/De-limiter/utils/__init__.py b/spaces/jeonchangbin49/De-limiter/utils/__init__.py deleted file mode 100644 index d8fd5d53b0a777fe70fa30265b040bc17fc009e6..0000000000000000000000000000000000000000 --- a/spaces/jeonchangbin49/De-limiter/utils/__init__.py +++ /dev/null @@ -1,19 +0,0 @@ -from .read_wave_utils import ( - load_wav_arbitrary_position_mono, - load_wav_specific_position_mono, - load_wav_arbitrary_position_stereo, - load_wav_specific_position_stereo, -) -from .loudness_utils import ( - linear2db, - db2linear, - normalize_mag_spec, - denormalize_mag_spec, - loudness_match_and_norm, - loudness_normal_match_and_norm, - loudness_normal_match_and_norm_output_louder_first, - loudnorm, -) -from .logging import save_img_and_npy, save_checkpoint, AverageMeter, EarlyStopping -from .lr_scheduler import CosineAnnealingWarmUpRestarts -from .train_utils import worker_init_fn, str2bool, get_config diff --git a/spaces/jgerbscheid/dpa-example/dijkprofile_annotator/preprocessing/preprocessing.py b/spaces/jgerbscheid/dpa-example/dijkprofile_annotator/preprocessing/preprocessing.py deleted file mode 100644 index ce0f0e91746285999c666574906e4d64578b0ec2..0000000000000000000000000000000000000000 --- a/spaces/jgerbscheid/dpa-example/dijkprofile_annotator/preprocessing/preprocessing.py +++ /dev/null @@ -1,355 +0,0 @@ -import csv -import os -from operator import itemgetter - -import numpy as np -from dijkprofile_annotator.config import (CLASS_DICT_FULL, CLASS_DICT_REGIONAL, - CLASS_DICT_SIMPLE, - CLASS_DICT_SIMPLE_BERM, - CLASS_DICT_SIMPLE_SLOOT) -from dijkprofile_annotator.dataset import DijkprofileDataset -from sklearn.model_selection import train_test_split - - -def read_surfaceline_file(surfaceline_fp): - """Read surfaceline file and convert to dict. - - Args: - surfaceline_fp (string): path to the surfacelines file. - - Returns: - dict: dict containing list of points per location. - """ - # read the coordinates and collect to surfaceline_dict - surfacelines = {} - with open(surfaceline_fp) as csvfile: - surfacereader = csv.reader(csvfile, delimiter=';', quotechar='|') - next(surfacereader) # skip header - # print("header: {}".format(header)) # not very useful - stop_exec = False - for row in surfacereader: - if stop_exec: - break - location = row[0] - surfacelines[location] = [] - for i in range(1, len(row)-2, 3): - # some files have empty points - if row[i] == '' or row[i+1] == '' or row[i+2] == '': - continue - try: - - x = _parse_coordinate(row[i].replace('"', '')) - y = _parse_coordinate(row[i+1].replace('"', '')) - z = _parse_coordinate(row[i+2].replace('"', '')) - surfacelines[location].append((x, y, z)) - except ValueError as e: - print(f"error reading point from surfaceline at location: {location} (index: {i}), error: {e}") - stop_exec = True - break - return surfacelines - - -def read_charpoints_file(charlines_fp): - """Read characteristicpoints file and convert to dict. - - Args: - charlines_fp (string): path to characteristicpoints file. - - Returns: - dict: dict containing list of points per location. 
- """ - charpoints = {} - with open(charlines_fp) as csvfile: - cpointsreader = csv.reader(csvfile, delimiter=';', quotechar='|') - header = next(cpointsreader) - stop_exec = False - for idx, row in enumerate(cpointsreader): - if stop_exec: - break - try: - location = row[0] - except IndexError as e: - print(f"couldn't read location in row: {row} at {idx}, file: {charlines_fp}") - point_dict = {} - for i in range(1, len(row)-2, 3): - if row[i] == '' or row[i+1] == '' or row[i+2] == '': - continue - try: - x = _parse_coordinate(row[i].replace('"', '')) - y = _parse_coordinate(row[i+1].replace('"', '')) - z = _parse_coordinate(row[i+2].replace('"', '')) - - point_dict[header[i][2:]] = (x, y, z) - except ValueError as e: - print( - f"error reading point from characteristicpoints at location: {location} (index: {i}), error: {e}") - stop_exec = True - - charpoints[location] = point_dict - return charpoints - - -def _parse_coordinate(coord): - """Convert string point coordinate to float, remove double dots if needed. - Some of the coordinates contain multiple dots, probably because someone - opened the file in excel and it formatted it weird. In all examples I've - seen the first point is only to indicate 1000's and can savely be removed - - Args: - point (str): string representation of the number to parse - - Returns: - float: float representation of the coordinate - """ - try: - return float(coord) - except: - parts = coord.split(".") - return float("".join(parts[:-1]) + "." + parts[-1]) - - -def make_height_profiles(surfaceline_dict, max_profile_size): - """Make height arrays from surfacelines dict. - - Args: - surfaceline_dict (dict): dict of surfacelines by location. - max_profile_size (int): fixed max size for the height profile. - - Returns: - dict: dict containing height profiles by location. - """ - profile_dict = {} - for location in surfaceline_dict.keys(): - heights = np.array(surfaceline_dict[location])[:, 2].astype(np.float32) - - # we'll fit whole profile in a fixed length so that multiple profiles can be used as samples - z_tmp = np.zeros(max_profile_size) - profile_length = heights.shape[0] - if profile_length < max_profile_size: - z_tmp[:profile_length] = np.array(heights, dtype=np.float32)[:profile_length] - z_tmp[profile_length:] = heights[profile_length-1] - heights = z_tmp - else: - heights = heights[:max_profile_size] - profile_dict[location] = {"profile": heights} - return profile_dict - - -def make_labeled_height_profiles(surfaceline_dict, cpoints_dict, max_profile_size, class_list='simple', require_all_points=True): - """Make height profile and labels from surfacelines and cpoints. - - Args: - surfaceline_dict (dict): dict of surfacelines by location. - cpoints_dict (dict): dict of characteristic points by location. - max_profile_size (int): fixed max size for the height profile. - class_list (bool): selection of classes to use, see config. - require_all_points: filter profiles that do not contain all the points in the class_list. - - Returns: - dict: dict containing height profiles and their labels by location. 
- """ - profile_label_dict = {} - - class_list = class_list.lower() - class_dict = {} - if class_list == 'regional': - class_dict = CLASS_DICT_REGIONAL - elif class_list == 'simple': - class_dict = CLASS_DICT_SIMPLE - elif class_list == 'berm': - class_dict = CLASS_DICT_SIMPLE_BERM - elif class_list == 'sloot': - class_dict = CLASS_DICT_SIMPLE_SLOOT - elif class_list == 'full': - class_dict = CLASS_DICT_FULL - else: - raise NotImplementedError(f"No class list available of type: {class_list}") - - required_point_types = list(class_dict.keys()) - required_point_types.remove('leeg') # we don't want to require check for the empty class - - for location in surfaceline_dict.keys(): - heights = np.array(surfaceline_dict[location])[:, 2].astype(np.float32) - labels = np.zeros(len(heights)) - - # if no labels were given for this location, skip it - if not location in cpoints_dict.keys(): - # print(f"location not in cpoints dict, {location}") - continue - - # skip the location if the required points are not all present - if require_all_points: - labeled_point_types = [key for key, value in cpoints_dict[location].items() if value != (-1.0, -1.0, -1.0)] - if not all([point_type in labeled_point_types for point_type in required_point_types]): - # print(f"not all point types present, missing {set(required_point_types) - set(labeled_point_types)}") - continue - - for i, (key, point) in enumerate(cpoints_dict[location].items()): - # if the point is not empty, find the nearest point in the surface file, - # problems with rounding errors require matching by distance per point - if point == (-1.0, -1.0, -1.0): - continue - - distances = [] - for idx, surfacepoint in enumerate(surfaceline_dict[location]): - dist = np.linalg.norm(np.array(surfacepoint)-np.array(point)) - distances.append((idx, dist)) - (idx, dist) = sorted(distances, key=itemgetter(1))[0] - if key in class_dict: - labels[idx] = class_dict[key] - - # forward fill the labels - for i in range(1, len(labels)): - if labels[i] == 0.0: - labels[i] = labels[i-1] - - # we'll fit whole profile in a fixed length so that multiple profiles can be used as samples - z_tmp = np.zeros(max_profile_size) - labels_tmp = np.zeros(max_profile_size) - profile_length = labels.shape[0] - if profile_length < max_profile_size: - z_tmp[:profile_length] = np.array(heights, dtype=np.float32)[:profile_length] - labels_tmp[:profile_length] = np.array(labels)[:profile_length] - z_tmp[profile_length:] = heights[profile_length-1] - labels_tmp[profile_length:] = labels[profile_length-1] - heights = z_tmp - labels = labels_tmp - else: - heights = heights[:max_profile_size] - labels = labels[:max_profile_size] - - # rescale every profile to between -1 and 1 - # scaler = MinMaxScaler(feature_range=(-1, 1)) - # heights = scaler.fit_transform(heights.reshape(-1, 1)) - - profile_label_dict[location] = {} - profile_label_dict[location]['profile'] = heights.astype(np.float32) - profile_label_dict[location]['label'] = labels.astype(np.int32) - return profile_label_dict - - -def filepath_pair_to_labeled_sample(source_surfacelines, source_characteristicpoints, max_profile_size=352, class_list='simple', require_all_points=True): - """Convert pair of surfacelines and characteristicpoints filepaths to format suited for machine learning. - - Args: - source_surfacelines (string): path to the surfacelines file. - source_characteristicpoints (string): path to the characteristicpoints file. - max_profile_size (int, optional): max size for the profile. Defaults to 352. 
- regional (bool): use regional point labelset, see config. Defaults to False. - - Returns: - dict: dict containing height profile and labels by location. - """ - surfaceline_dict = read_surfaceline_file(source_surfacelines) - cpoints_dict = read_charpoints_file(source_characteristicpoints) - - profile_label_dict = make_labeled_height_profiles( - surfaceline_dict, - cpoints_dict, - max_profile_size, - class_list=class_list, - require_all_points=require_all_points) - return profile_label_dict - - -def file_pairs_to_tensor_profiles(filepair_list, max_profile_size=352, class_list='simple', require_all_points=True): - """Convert list of pairs of surfacelines and characteristicpoints to format suited for machine learning. - - Args: - filepair_list (list): list of tuples containing the paths to the surfacelines and characteristicpoints files. - max_profile_size (int, optional): max size for the profile. Defaults to 352. - regional (bool): use regional point labelset, see config. Defaults to False. - - Returns: - dict: Dict containing all the height profiles and labels by location. - """ - all_profiles = {} - for source_surfacelines, source_characteristicpoints in filepair_list: - profile_label_dict = filepath_pair_to_labeled_sample( - source_surfacelines, - source_characteristicpoints, - max_profile_size, - class_list, - require_all_points=require_all_points) - for key, value in profile_label_dict.items(): - all_profiles[key] = value - return all_profiles - - -def get_file_pairs_from_dir(path, krp_format=False): - """Recursively get all pairs of lines and points files in a directory. - - Args: - path (str): path to the root directory containing the lines and points csv files, - directory is searched recursively for pairs. - krp (bool): Indicates that the folder contains csv files in the naming convention used by - waterschap Vallei en Veluwe. - - Returns: - list: list of tuples where the first item is the path to the surfacelines.csv and the second - the path to the characteristicpoints.csv - """ - if krp_format: - return _get_file_pairs_from_dir_krp(path) - list_of_files = [] - for (dirpath, _, filenames) in os.walk(path): - for filename in filenames: - if filename.endswith('lines.csv'): - if os.path.exists(os.sep.join([dirpath, filename])) and \ - os.path.exists(os.sep.join([dirpath, 'characteristicpoints.csv'])): - - list_of_files.append(( - os.sep.join([dirpath, filename]), - os.sep.join([dirpath, 'characteristicpoints.csv']))) - return list_of_files - - -def _get_file_pairs_from_dir_krp(path): - """Recursively get all pairs of lines and points files in a directory but in the format used - by Waterschap Vallei en Veluwe, same functionality as get_file_pairs_from_dir. 
- - Args: - path (str): path to the root directory containing the lines and points csv files, - directory is searched recursively for pairs - - Returns: - list: list of tuples where the first item is the path to the surfacelines.csv and the second - the path to the characteristicpoints.csv - """ - list_of_files = [] - for (dirpath, _, filenames) in os.walk(path): - for filename in filenames: - if filename.endswith('.krp.csv'): - if os.path.exists(os.sep.join([dirpath, filename])) and \ - os.path.exists(os.sep.join([dirpath, filename.split(".krp")[0] + ".csv"])): - - list_of_files.append(( - os.sep.join([dirpath, filename.split(".krp")[0] + ".csv"]), - os.sep.join([dirpath, filename]))) - return list_of_files - - -def load_datasets(annotation_tuples, custom_scaler_path=None, test_size=0.2, max_profile_size=512, class_list='simple', require_all_points=True): - """Load datasets given list of annotation tuples. - - Args: - annotation_tuples ([(str,str)]): list of tuples of filepaths to the lines and points files. - custom_scaler_path (str, optional): path to a custom scaler to rescale the data. Defaults to None. - test_size (float, optional): Test size for the training. Defaults to 0.2. - max_profile_size (int, optional): max profile size. Defaults to 512. - class_list (str, optional): class_mapping/class_list to use. Defaults to 'simple'. - require_all_points (bool, optional): wether to drop profiles that don't contain all points in the mapping. Defaults to True. - - Returns: - DijkprofileDataset, DijkprofileDataset: train and test dataset classes - """ - profile_dict = file_pairs_to_tensor_profiles(annotation_tuples, max_profile_size=max_profile_size, class_list=class_list, require_all_points=require_all_points) - - # construct dataloaders - id_list = list(profile_dict.keys()) - [train, test] = train_test_split(id_list, shuffle=True, test_size=test_size) - - dataset_train = DijkprofileDataset(profile_dict, train, custom_scaler_path=custom_scaler_path) - dataset_validation = DijkprofileDataset(profile_dict, test, custom_scaler_path=custom_scaler_path) - - return dataset_train, dataset_validation diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/__init__.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/__init__.py deleted file mode 100644 index 76a690d3d34be570842a65533fe1233b703f9879..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/__init__.py +++ /dev/null @@ -1,8 +0,0 @@ -import logging -from fontTools.misc.loggingTools import configLogger - -log = logging.getLogger(__name__) - -version = __version__ = "4.43.0" - -__all__ = ["version", "log", "configLogger"] diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fsspec/implementations/ftp.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fsspec/implementations/ftp.py deleted file mode 100644 index 7e79877ebdd287e0ab2938345d448f52ab92dc90..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fsspec/implementations/ftp.py +++ /dev/null @@ -1,380 +0,0 @@ -import os -import sys -import uuid -import warnings -from ftplib import FTP, Error, error_perm -from typing import Any - -from ..spec import AbstractBufferedFile, AbstractFileSystem -from ..utils import infer_storage_options, isfilelike - - -class FTPFileSystem(AbstractFileSystem): - """A filesystem over classic FTP""" - - root_marker 
= "/" - cachable = False - protocol = "ftp" - - def __init__( - self, - host, - port=21, - username=None, - password=None, - acct=None, - block_size=None, - tempdir=None, - timeout=30, - encoding="utf-8", - **kwargs, - ): - """ - You can use _get_kwargs_from_urls to get some kwargs from - a reasonable FTP url. - - Authentication will be anonymous if username/password are not - given. - - Parameters - ---------- - host: str - The remote server name/ip to connect to - port: int - Port to connect with - username: str or None - If authenticating, the user's identifier - password: str of None - User's password on the server, if using - acct: str or None - Some servers also need an "account" string for auth - block_size: int or None - If given, the read-ahead or write buffer size. - tempdir: str - Directory on remote to put temporary files when in a transaction - timeout: int - Timeout of the ftp connection in seconds - encoding: str - Encoding to use for directories and filenames in FTP connection - """ - super(FTPFileSystem, self).__init__(**kwargs) - self.host = host - self.port = port - self.tempdir = tempdir or "/tmp" - self.cred = username, password, acct - self.timeout = timeout - self.encoding = encoding - if block_size is not None: - self.blocksize = block_size - else: - self.blocksize = 2**16 - self._connect() - - def _connect(self): - if sys.version_info >= (3, 9): - self.ftp = FTP(timeout=self.timeout, encoding=self.encoding) - elif self.encoding: - warnings.warn("`encoding` not supported for python<3.9, ignoring") - self.ftp = FTP(timeout=self.timeout) - else: - self.ftp = FTP(timeout=self.timeout) - self.ftp.connect(self.host, self.port) - self.ftp.login(*self.cred) - - @classmethod - def _strip_protocol(cls, path): - return "/" + infer_storage_options(path)["path"].lstrip("/").rstrip("/") - - @staticmethod - def _get_kwargs_from_urls(urlpath): - out = infer_storage_options(urlpath) - out.pop("path", None) - out.pop("protocol", None) - return out - - def ls(self, path, detail=True, **kwargs): - path = self._strip_protocol(path) - out = [] - if path not in self.dircache: - try: - try: - out = [ - (fn, details) - for (fn, details) in self.ftp.mlsd(path) - if fn not in [".", ".."] - and details["type"] not in ["pdir", "cdir"] - ] - except error_perm: - out = _mlsd2(self.ftp, path) # Not platform independent - for fn, details in out: - if path == "/": - path = "" # just for forming the names, below - details["name"] = "/".join([path, fn.lstrip("/")]) - if details["type"] == "file": - details["size"] = int(details["size"]) - else: - details["size"] = 0 - if details["type"] == "dir": - details["type"] = "directory" - self.dircache[path] = out - except Error: - try: - info = self.info(path) - if info["type"] == "file": - out = [(path, info)] - except (Error, IndexError): - raise FileNotFoundError(path) - files = self.dircache.get(path, out) - if not detail: - return sorted([fn for fn, details in files]) - return [details for fn, details in files] - - def info(self, path, **kwargs): - # implement with direct method - path = self._strip_protocol(path) - if path == "/": - # special case, since this dir has no real entry - return {"name": "/", "size": 0, "type": "directory"} - files = self.ls(self._parent(path).lstrip("/"), True) - try: - out = [f for f in files if f["name"] == path][0] - except IndexError: - raise FileNotFoundError(path) - return out - - def get_file(self, rpath, lpath, **kwargs): - if self.isdir(rpath): - if not os.path.exists(lpath): - os.mkdir(lpath) - return - if 
isfilelike(lpath): - outfile = lpath - else: - outfile = open(lpath, "wb") - - def cb(x): - outfile.write(x) - - self.ftp.retrbinary( - "RETR %s" % rpath, - blocksize=self.blocksize, - callback=cb, - ) - if not isfilelike(lpath): - outfile.close() - - def cat_file(self, path, start=None, end=None, **kwargs): - if end is not None: - return super().cat_file(path, start, end, **kwargs) - out = [] - - def cb(x): - out.append(x) - - self.ftp.retrbinary( - "RETR %s" % path, - blocksize=self.blocksize, - rest=start, - callback=cb, - ) - return b"".join(out) - - def _open( - self, - path, - mode="rb", - block_size=None, - cache_options=None, - autocommit=True, - **kwargs, - ): - path = self._strip_protocol(path) - block_size = block_size or self.blocksize - return FTPFile( - self, - path, - mode=mode, - block_size=block_size, - tempdir=self.tempdir, - autocommit=autocommit, - cache_options=cache_options, - ) - - def _rm(self, path): - path = self._strip_protocol(path) - self.ftp.delete(path) - self.invalidate_cache(self._parent(path)) - - def rm(self, path, recursive=False, maxdepth=None): - paths = self.expand_path(path, recursive=recursive, maxdepth=maxdepth) - for p in reversed(paths): - if self.isfile(p): - self.rm_file(p) - else: - self.rmdir(p) - - def mkdir(self, path: str, create_parents: bool = True, **kwargs: Any) -> None: - path = self._strip_protocol(path) - parent = self._parent(path) - if parent != self.root_marker and not self.exists(parent) and create_parents: - self.mkdir(parent, create_parents=create_parents) - - self.ftp.mkd(path) - self.invalidate_cache(self._parent(path)) - - def makedirs(self, path: str, exist_ok: bool = False) -> None: - path = self._strip_protocol(path) - if self.exists(path): - # NB: "/" does not "exist" as it has no directory entry - if not exist_ok: - raise FileExistsError(f"{path} exists without `exist_ok`") - # exists_ok=True -> no-op - else: - self.mkdir(path, create_parents=True) - - def rmdir(self, path): - path = self._strip_protocol(path) - self.ftp.rmd(path) - self.invalidate_cache(self._parent(path)) - - def mv(self, path1, path2, **kwargs): - path1 = self._strip_protocol(path1) - path2 = self._strip_protocol(path2) - self.ftp.rename(path1, path2) - self.invalidate_cache(self._parent(path1)) - self.invalidate_cache(self._parent(path2)) - - def __del__(self): - self.ftp.close() - - def invalidate_cache(self, path=None): - if path is None: - self.dircache.clear() - else: - self.dircache.pop(path, None) - super(FTPFileSystem, self).invalidate_cache(path) - - -class TransferDone(Exception): - """Internal exception to break out of transfer""" - - pass - - -class FTPFile(AbstractBufferedFile): - """Interact with a remote FTP file with read/write buffering""" - - def __init__( - self, - fs, - path, - mode="rb", - block_size="default", - autocommit=True, - cache_type="readahead", - cache_options=None, - **kwargs, - ): - super().__init__( - fs, - path, - mode=mode, - block_size=block_size, - autocommit=autocommit, - cache_type=cache_type, - cache_options=cache_options, - **kwargs, - ) - if not autocommit: - self.target = self.path - self.path = "/".join([kwargs["tempdir"], str(uuid.uuid4())]) - - def commit(self): - self.fs.mv(self.path, self.target) - - def discard(self): - self.fs.rm(self.path) - - def _fetch_range(self, start, end): - """Get bytes between given byte limits - - Implemented by raising an exception in the fetch callback when the - number of bytes received reaches the requested amount. 
- - Will fail if the server does not respect the REST command on - retrieve requests. - """ - out = [] - total = [0] - - def callback(x): - total[0] += len(x) - if total[0] > end - start: - out.append(x[: (end - start) - total[0]]) - if end < self.size: - raise TransferDone - else: - out.append(x) - - if total[0] == end - start and end < self.size: - raise TransferDone - - try: - self.fs.ftp.retrbinary( - "RETR %s" % self.path, - blocksize=self.blocksize, - rest=start, - callback=callback, - ) - except TransferDone: - try: - # stop transfer, we got enough bytes for this block - self.fs.ftp.abort() - self.fs.ftp.getmultiline() - except Error: - self.fs._connect() - - return b"".join(out) - - def _upload_chunk(self, final=False): - self.buffer.seek(0) - self.fs.ftp.storbinary( - "STOR " + self.path, self.buffer, blocksize=self.blocksize, rest=self.offset - ) - return True - - -def _mlsd2(ftp, path="."): - """ - Fall back to using `dir` instead of `mlsd` if not supported. - - This parses a Linux style `ls -l` response to `dir`, but the response may - be platform dependent. - - Parameters - ---------- - ftp: ftplib.FTP - path: str - Expects to be given path, but defaults to ".". - """ - lines = [] - minfo = [] - ftp.dir(path, lines.append) - for line in lines: - line = line.split() - this = ( - line[-1], - { - "modify": " ".join(line[5:8]), - "unix.owner": line[2], - "unix.group": line[3], - "unix.mode": line[0], - "size": line[4], - }, - ) - if "d" == this[1]["unix.mode"][0]: - this[1]["type"] = "dir" - else: - this[1]["type"] = "file" - minfo.append(this) - return minfo diff --git a/spaces/johnyang/ChatPaper111/config.py b/spaces/johnyang/ChatPaper111/config.py deleted file mode 100644 index 6534f1df5bfe72ebce438dd4b41a35330610c57d..0000000000000000000000000000000000000000 --- a/spaces/johnyang/ChatPaper111/config.py +++ /dev/null @@ -1,15 +0,0 @@ - - -MAX_TOKEN_MODEL_MAP = { - "gpt-3.5-turbo": 4096, -} - -PDF_SAVE_DIR = "/app/files/" - - -DEFAULT_ENGINE = "gpt-3.5-turbo" -DEFAULT_TEMPERATURE = 0.5 -DEFAULT_TOP_P = 1 -DEFAULT_PRESENCE_PENALTY = 0 -DEFAULT_FREQUENCY_PENALTY = 0 -DEFAULT_REPLY_COUNT = 1 \ No newline at end of file diff --git a/spaces/jone/Music_Source_Separation/bytesep/utils.py b/spaces/jone/Music_Source_Separation/bytesep/utils.py deleted file mode 100644 index 4a38928bd5b00521d32b67c484e5561ff2ead439..0000000000000000000000000000000000000000 --- a/spaces/jone/Music_Source_Separation/bytesep/utils.py +++ /dev/null @@ -1,189 +0,0 @@ -import datetime -import logging -import os -import pickle -from typing import Dict, NoReturn - -import librosa -import numpy as np -import yaml - - -def create_logging(log_dir: str, filemode: str) -> logging: - r"""Create logging to write out log files. 
- - Args: - logs_dir, str, directory to write out logs - filemode: str, e.g., "w" - - Returns: - logging - """ - os.makedirs(log_dir, exist_ok=True) - i1 = 0 - - while os.path.isfile(os.path.join(log_dir, "{:04d}.log".format(i1))): - i1 += 1 - - log_path = os.path.join(log_dir, "{:04d}.log".format(i1)) - logging.basicConfig( - level=logging.DEBUG, - format="%(asctime)s %(filename)s[line:%(lineno)d] %(levelname)s %(message)s", - datefmt="%a, %d %b %Y %H:%M:%S", - filename=log_path, - filemode=filemode, - ) - - # Print to console - console = logging.StreamHandler() - console.setLevel(logging.INFO) - formatter = logging.Formatter("%(name)-12s: %(levelname)-8s %(message)s") - console.setFormatter(formatter) - logging.getLogger("").addHandler(console) - - return logging - - -def load_audio( - audio_path: str, - mono: bool, - sample_rate: float, - offset: float = 0.0, - duration: float = None, -) -> np.array: - r"""Load audio. - - Args: - audio_path: str - mono: bool - sample_rate: float - """ - audio, _ = librosa.core.load( - audio_path, sr=sample_rate, mono=mono, offset=offset, duration=duration - ) - # (audio_samples,) | (channels_num, audio_samples) - - if audio.ndim == 1: - audio = audio[None, :] - # (1, audio_samples,) - - return audio - - -def load_random_segment( - audio_path: str, random_state, segment_seconds: float, mono: bool, sample_rate: int -) -> np.array: - r"""Randomly select an audio segment from a recording.""" - - duration = librosa.get_duration(filename=audio_path) - - start_time = random_state.uniform(0.0, duration - segment_seconds) - - audio = load_audio( - audio_path=audio_path, - mono=mono, - sample_rate=sample_rate, - offset=start_time, - duration=segment_seconds, - ) - # (channels_num, audio_samples) - - return audio - - -def float32_to_int16(x: np.float32) -> np.int16: - - x = np.clip(x, a_min=-1, a_max=1) - - return (x * 32767.0).astype(np.int16) - - -def int16_to_float32(x: np.int16) -> np.float32: - - return (x / 32767.0).astype(np.float32) - - -def read_yaml(config_yaml: str): - - with open(config_yaml, "r") as fr: - configs = yaml.load(fr, Loader=yaml.FullLoader) - - return configs - - -def check_configs_gramma(configs: Dict) -> NoReturn: - r"""Check if the gramma of the config dictionary for training is legal.""" - input_source_types = configs['train']['input_source_types'] - - for augmentation_type in configs['train']['augmentations'].keys(): - augmentation_dict = configs['train']['augmentations'][augmentation_type] - - for source_type in augmentation_dict.keys(): - if source_type not in input_source_types: - error_msg = ( - "The source type '{}'' in configs['train']['augmentations']['{}'] " - "must be one of input_source_types {}".format( - source_type, augmentation_type, input_source_types - ) - ) - raise Exception(error_msg) - - -def magnitude_to_db(x: float) -> float: - eps = 1e-10 - return 20.0 * np.log10(max(x, eps)) - - -def db_to_magnitude(x: float) -> float: - return 10.0 ** (x / 20) - - -def get_pitch_shift_factor(shift_pitch: float) -> float: - r"""The factor of the audio length to be scaled.""" - return 2 ** (shift_pitch / 12) - - -class StatisticsContainer(object): - def __init__(self, statistics_path): - self.statistics_path = statistics_path - - self.backup_statistics_path = "{}_{}.pkl".format( - os.path.splitext(self.statistics_path)[0], - datetime.datetime.now().strftime("%Y-%m-%d_%H-%M-%S"), - ) - - self.statistics_dict = {"train": [], "test": []} - - def append(self, steps, statistics, split): - statistics["steps"] = steps - 
self.statistics_dict[split].append(statistics) - - def dump(self): - pickle.dump(self.statistics_dict, open(self.statistics_path, "wb")) - pickle.dump(self.statistics_dict, open(self.backup_statistics_path, "wb")) - logging.info(" Dump statistics to {}".format(self.statistics_path)) - logging.info(" Dump statistics to {}".format(self.backup_statistics_path)) - - ''' - def load_state_dict(self, resume_steps): - self.statistics_dict = pickle.load(open(self.statistics_path, "rb")) - - resume_statistics_dict = {"train": [], "test": []} - - for key in self.statistics_dict.keys(): - for statistics in self.statistics_dict[key]: - if statistics["steps"] <= resume_steps: - resume_statistics_dict[key].append(statistics) - - self.statistics_dict = resume_statistics_dict - ''' - - -def calculate_sdr(ref: np.array, est: np.array) -> float: - s_true = ref - s_artif = est - ref - sdr = 10.0 * ( - np.log10(np.clip(np.mean(s_true ** 2), 1e-8, np.inf)) - - np.log10(np.clip(np.mean(s_artif ** 2), 1e-8, np.inf)) - ) - return sdr diff --git a/spaces/justest/gpt4free/g4f/.v1/Dockerfile b/spaces/justest/gpt4free/g4f/.v1/Dockerfile deleted file mode 100644 index 5b2a1f7a38f482a5ecec0932af9a537d67cd7185..0000000000000000000000000000000000000000 --- a/spaces/justest/gpt4free/g4f/.v1/Dockerfile +++ /dev/null @@ -1,19 +0,0 @@ -FROM python:3.11.3-slim - -RUN apt-get update \ - && apt-get install -y --no-install-recommends ffmpeg \ - && apt-get -y clean \ - && rm -rf /var/lib/apt/lists/* - -COPY requirements.txt /tmp -RUN pip install --upgrade pip \ - && pip install -r /tmp/requirements.txt \ - && rm /tmp/requirements.txt - -COPY . /root/gpt4free - -WORKDIR /root/gpt4free - -CMD ["streamlit", "run", "./gui/streamlit_app.py"] - -EXPOSE 8501 diff --git a/spaces/justin-zk/Personalize-SAM/per_segment_anything/modeling/common.py b/spaces/justin-zk/Personalize-SAM/per_segment_anything/modeling/common.py deleted file mode 100644 index 2bf15236a3eb24d8526073bc4fa2b274cccb3f96..0000000000000000000000000000000000000000 --- a/spaces/justin-zk/Personalize-SAM/per_segment_anything/modeling/common.py +++ /dev/null @@ -1,43 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
- -import torch -import torch.nn as nn - -from typing import Type - - -class MLPBlock(nn.Module): - def __init__( - self, - embedding_dim: int, - mlp_dim: int, - act: Type[nn.Module] = nn.GELU, - ) -> None: - super().__init__() - self.lin1 = nn.Linear(embedding_dim, mlp_dim) - self.lin2 = nn.Linear(mlp_dim, embedding_dim) - self.act = act() - - def forward(self, x: torch.Tensor) -> torch.Tensor: - return self.lin2(self.act(self.lin1(x))) - - -# From https://github.com/facebookresearch/detectron2/blob/main/detectron2/layers/batch_norm.py # noqa -# Itself from https://github.com/facebookresearch/ConvNeXt/blob/d1fa8f6fef0a165b27399986cc2bdacc92777e40/models/convnext.py#L119 # noqa -class LayerNorm2d(nn.Module): - def __init__(self, num_channels: int, eps: float = 1e-6) -> None: - super().__init__() - self.weight = nn.Parameter(torch.ones(num_channels)) - self.bias = nn.Parameter(torch.zeros(num_channels)) - self.eps = eps - - def forward(self, x: torch.Tensor) -> torch.Tensor: - u = x.mean(1, keepdim=True) - s = (x - u).pow(2).mean(1, keepdim=True) - x = (x - u) / torch.sqrt(s + self.eps) - x = self.weight[:, None, None] * x + self.bias[:, None, None] - return x diff --git a/spaces/jvde/sovits-webui/app.py b/spaces/jvde/sovits-webui/app.py deleted file mode 100644 index 977d863b4bd011fa5d950045803d8df1e2349340..0000000000000000000000000000000000000000 --- a/spaces/jvde/sovits-webui/app.py +++ /dev/null @@ -1,371 +0,0 @@ -import matplotlib.pyplot as plt -import IPython.display as ipd -import os -import json -import math -import torch -import commons -import utils -from models import SynthesizerTrn -from text.symbols import symbols -from text import text_to_sequence -from scipy.io.wavfile import write -import gradio as gr -import numpy as np -from PIL import Image -import numpy as np -import os -from pathlib import Path - - - - -LANGUAGES = ['EN','CN','JP'] -SPEAKER_ID = 0 -COVER = "models/Yuuka/cover.png" -speaker_choice = "Yuuka" -MODEL_ZH_NAME = "早濑优香" -EXAMPLE_TEXT = "先生。今日も全力であなたをアシストしますね。" -#USER_INPUT_TEXT = "" - -CONFIG_PATH = "configs/config2.json" -MODEL_PATH = "models/parappa/path.pth" - -hps = utils.get_hparams_from_file(CONFIG_PATH) -net_g = SynthesizerTrn( - len(hps.symbols), - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - n_speakers=hps.data.n_speakers, - **hps.model) - -model = net_g.eval() -model = utils.load_checkpoint(MODEL_PATH, net_g, None) - -def load_model(): - global hps,net_g,model - - hps = utils.get_hparams_from_file(CONFIG_PATH) - net_g = SynthesizerTrn( - len(hps.symbols), - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - n_speakers=hps.data.n_speakers, - **hps.model) - - model = net_g.eval() - model = utils.load_checkpoint(MODEL_PATH, net_g, None) - -def get_text(text, hps): - text_norm = text_to_sequence(text, hps.data.text_cleaners) - if hps.data.add_blank: - text_norm = commons.intersperse(text_norm, 0) - text_norm = torch.LongTensor(text_norm) - return text_norm - -def tts_fn(text, noise_scale, noise_scale_w, length_scale): - stn_tst = get_text(text, hps) - with torch.no_grad(): - x_tst = stn_tst.unsqueeze(0) - x_tst_lengths = torch.LongTensor([stn_tst.size(0)]) - sid = torch.LongTensor([SPEAKER_ID]) - audio = net_g.infer(x_tst, x_tst_lengths, sid=sid, noise_scale=noise_scale, noise_scale_w=noise_scale_w, length_scale=length_scale)[0][0,0].data.cpu().float().numpy() - return (22050, audio) - -def add_model_fn(example_text, cover, speakerID, name_en, name_cn, language): - - # 检查必填字段是否为空 - 
if not speakerID or not name_en or not language: - raise gr.Error("Please fill in all required fields!") - return "Failed to add model" - - ### Save the uploaded files - - # Generate the file paths - model_save_dir = Path("models") - model_save_dir = model_save_dir / name_en - img_save_dir = model_save_dir - model_save_dir.mkdir(parents=True, exist_ok=True) - - Model_name = name_en + ".pth" - model_save_dir = model_save_dir / Model_name - - # Save the uploaded cover image - if cover is not None: - img = np.array(cover) - img = Image.fromarray(img) - img.save(os.path.join(img_save_dir, 'cover.png')) - - # Get the user input - new_model = { - "name_en": name_en, - "name_zh": name_cn, - "cover": img_save_dir / "cover.png", - "sid": speakerID, - "example": example_text, - "language": language, - "type": "single", - "model_path": model_save_dir - } - - # Write to the model info json - with open("models/model_info.json", "r", encoding="utf-8") as f: - models_info = json.load(f) - - models_info[name_en] = new_model - with open("models/model_info.json", "w") as f: - json.dump(models_info, f, cls=CustomEncoder) - - - return "Success" - -def clear_input_text(): - return "" - -def clear_add_model_info(): - return "",None,"","","","" - -def get_options(): - with open("models/model_info.json", "r", encoding="utf-8") as f: - global models_info - models_info = json.load(f) - - for i,model_info in models_info.items(): - global name_en - name_en = model_info['name_en'] - -def reset_options(): - value_model_choice = models_info['Yuuka']['name_en'] - value_speaker_id = models_info['Yuuka']['sid'] - return value_model_choice,value_speaker_id - -def refresh_options(): - get_options() - value_model_choice = models_info[speaker_choice]['name_en'] - value_speaker_id = models_info[speaker_choice]['sid'] - return value_model_choice,value_speaker_id - -def change_dropdown(choice): - global speaker_choice - speaker_choice = choice - global COVER - COVER = str(models_info[speaker_choice]['cover']) - global MODEL_PATH - MODEL_PATH = str(models_info[speaker_choice]['model_path']) - global MODEL_ZH_NAME - MODEL_ZH_NAME = str(models_info[speaker_choice]['name_zh']) - global EXAMPLE_TEXT - EXAMPLE_TEXT = str(models_info[speaker_choice]['example']) - - speaker_id_change = gr.update(value=str(models_info[speaker_choice]['sid'])) - cover_change = gr.update(value='
        ' - f'' if COVER else "" - f'{speaker_choice}' - '
        ') - title_change = gr.update(value= - '
        ' - f'

        {"语音名称: "}{MODEL_ZH_NAME}' - f'

        {"checkpoint: "}{speaker_choice}' - '

        ') - - - lan_change = gr.update(value=str(models_info[speaker_choice]['language'])) - - example_change = gr.update(value=EXAMPLE_TEXT) - - load_model() - - return [speaker_id_change,cover_change,title_change,lan_change,example_change] - -class CustomEncoder(json.JSONEncoder): - def default(self, obj): - if isinstance(obj, Path): - return str(obj) - return super().default(obj) - -download_audio_js = """ -() =>{{ - let root = document.querySelector("body > gradio-app"); - if (root.shadowRoot != null) - root = root.shadowRoot; - let audio = root.querySelector("#tts-audio-{audio_id}").querySelector("audio"); - let text = root.querySelector("#input-text-{audio_id}").querySelector("textarea"); - if (audio == undefined) - return; - text = text.value; - if (text == undefined) - text = Math.floor(Math.random()*100000000); - audio = audio.src; - let oA = document.createElement("a"); - oA.download = text.substr(0, 20)+'.wav'; - oA.href = audio; - document.body.appendChild(oA); - oA.click(); - oA.remove(); -}} -""" - - - - - - - - -if __name__ == '__main__': - - with open("models/model_info.json", "r", encoding="utf-8") as f: - models_info = json.load(f) - - for i, model_info in models_info.items(): - name_en = model_info['name_en'] - - - theme = gr.themes.Base() - - with gr.Blocks(theme=theme) as interface: - with gr.Tab("Text to Speech"): - with gr.Column(): - cover_markdown = gr.Markdown( - '
        ' - f'' if COVER else "" - '
        ') - title_markdown = gr.Markdown( - '
        ' - f'

        {"语音名称: "}{MODEL_ZH_NAME}' - f'

        {"checkpoint: "}{speaker_choice}' - '

        ') - - with gr.Row(): - with gr.Column(scale=4): - input_text = gr.Textbox( - label="Input", - lines=2, - placeholder="Enter the text you want to process here", - elem_id=f"input-text-en-{name_en.replace(' ', '')}", - scale=2 - ) - with gr.Column(scale=1): - gen_button = gr.Button("Generate", variant="primary") - clear_input_button = gr.Button("Clear") - - with gr.Row(): - with gr.Column(scale=2): - lan = gr.Radio(label="Language", choices=LANGUAGES, value="JP") - noise_scale = gr.Slider(minimum=0.1, maximum=1.0, step=0.1, label="Noise Scale (情感变化程度)", - value=0.6) - noise_scale_w = gr.Slider(minimum=0.1, maximum=1.0, step=0.1, label="Noise Scale w (发音长度)", - value=0.668) - length_scale = gr.Slider(minimum=0.1, maximum=2.0, step=0.1, label="Length Scale (语速)", - value=1.0) - - with gr.Column(scale=1): - example_text_box = gr.Textbox(label="Example:", - value=EXAMPLE_TEXT) - - output_audio = gr.Audio(label="Output", elem_id=f"tts-audio-en-{name_en.replace(' ', '')}") - download_button = gr.Button("Download") - - # example = gr.Examples( - # examples = [EXAMPLE_TEXT], - # inputs=input_text, - # outputs = output_audio, - # fn=example_tts_fn, - # cache_examples=True - # ) - - gen_button.click( - tts_fn, - inputs=[input_text, noise_scale, noise_scale_w, length_scale], - outputs=output_audio) - clear_input_button.click( - clear_input_text, - outputs=input_text - ) - download_button.click(None, [], [], _js=download_audio_js.format(audio_id=f"en-{name_en.replace(' ', '')}")) - - # ------------------------------------------------------------------------------------------------------------------------ - with gr.Tab("AI Singer"): - input_text_singer = gr.Textbox() - - # ------------------------------------------------------------------------------------------------------------------------ - with gr.Tab("TTS with ChatGPT"): - input_text_gpt = gr.Textbox() - - # ------------------------------------------------------------------------------------------------------------------------ - with gr.Tab("Settings"): - with gr.Box(): - gr.Markdown("""# Select Model""") - with gr.Row(): - with gr.Column(scale=5): - model_choice = gr.Dropdown(label="Model", - choices=[(model["name_en"]) for name, model in models_info.items()], - interactive=True, - value=models_info['Yuuka']['name_en'] - ) - with gr.Column(scale=5): - speaker_id_choice = gr.Dropdown(label="Speaker ID", - choices=[(str(model["sid"])) for name, model in - models_info.items()], - interactive=True, - value=str(models_info['Yuuka']['sid']) - ) - - with gr.Column(scale=1): - refresh_button = gr.Button("Refresh", variant="primary") - reset_button = gr.Button("Reset") - - model_choice.change(fn=change_dropdown, inputs=model_choice, - outputs=[speaker_id_choice, cover_markdown, title_markdown, lan, example_text_box]) - - refresh_button.click(fn=refresh_options, outputs=[model_choice, speaker_id_choice]) - reset_button.click(reset_options, outputs=[model_choice, speaker_id_choice]) - - with gr.Box(): - gr.Markdown("# Add Model\n" - "> *为必填选项\n" - "> 添加完成后将**checkpoints**文件放到对应生成的文件夹中" - ) - - with gr.Row(): - # file = gr.Files(label = "VITS Model*", file_types=[".pth"]) - example_text = gr.Textbox(label="Example Text", - lines=16, - placeholder="Enter the example text here", ) - model_cover = gr.Image(label="Cover") - - with gr.Column(): - model_speaker_id = gr.Textbox(label="Speaker List*", - placeholder="Single speaker model default=0") - model_name_en = gr.Textbox(label="name_en*") - model_name_cn = gr.Textbox(label="name_cn") - model_language = 
gr.Dropdown(label="Language*", - choices=LANGUAGES, - interactive=True) - with gr.Row(): - add_model_button = gr.Button("Add Model", variant="primary") - clear_add_model_button = gr.Button("Clear") - with gr.Box(): - with gr.Row(): - message_box = gr.Textbox(label="Message") - - add_model_button.click(add_model_fn, - inputs=[example_text, model_cover, model_speaker_id, model_name_en, model_name_cn, - model_language], - outputs=message_box - ) - clear_add_model_button.click(clear_add_model_info, - outputs=[example_text, model_cover, model_speaker_id, model_name_en, - model_name_cn, model_language] - ) - - interface.queue(concurrency_count=1).launch(debug=True) - - - - - - - - - diff --git a/spaces/k1ngtai/MMS/vits/text/__init__.py b/spaces/k1ngtai/MMS/vits/text/__init__.py deleted file mode 100644 index 4ac41f9025755d8ffd74068af14c6cfc8e5a4173..0000000000000000000000000000000000000000 --- a/spaces/k1ngtai/MMS/vits/text/__init__.py +++ /dev/null @@ -1,54 +0,0 @@ -""" from https://github.com/keithito/tacotron """ -from text import cleaners -from text.symbols import symbols - - -# Mappings from symbol to numeric ID and vice versa: -_symbol_to_id = {s: i for i, s in enumerate(symbols)} -_id_to_symbol = {i: s for i, s in enumerate(symbols)} - - -def text_to_sequence(text, cleaner_names): - '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text. - Args: - text: string to convert to a sequence - cleaner_names: names of the cleaner functions to run the text through - Returns: - List of integers corresponding to the symbols in the text - ''' - sequence = [] - - clean_text = _clean_text(text, cleaner_names) - for symbol in clean_text: - symbol_id = _symbol_to_id[symbol] - sequence += [symbol_id] - return sequence - - -def cleaned_text_to_sequence(cleaned_text): - '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text. 
- - Args: - text: string to convert to a sequence - Returns: - List of integers corresponding to the symbols in the text - ''' - sequence = [_symbol_to_id[symbol] for symbol in cleaned_text] - return sequence - - -def sequence_to_text(sequence): - '''Converts a sequence of IDs back to a string''' - result = '' - for symbol_id in sequence: - s = _id_to_symbol[symbol_id] - result += s - return result - - -def _clean_text(text, cleaner_names): - for name in cleaner_names: - cleaner = getattr(cleaners, name) - if not cleaner: - raise Exception('Unknown cleaner: %s' % name) - text = cleaner(text) - return text diff --git a/spaces/kangvcar/RealChar/realtime_ai_character/restful_routes.py b/spaces/kangvcar/RealChar/realtime_ai_character/restful_routes.py deleted file mode 100644 index 51d5b0d7ed5ac3ec89f3e4afa99b9f83ab32d876..0000000000000000000000000000000000000000 --- a/spaces/kangvcar/RealChar/realtime_ai_character/restful_routes.py +++ /dev/null @@ -1,49 +0,0 @@ -import os - -from fastapi import APIRouter, Depends, HTTPException, Request, status -from fastapi.responses import HTMLResponse -from fastapi.templating import Jinja2Templates -import firebase_admin -from firebase_admin import auth, credentials -from firebase_admin.exceptions import FirebaseError - - -router = APIRouter() - -templates = Jinja2Templates(directory=os.path.join( - os.path.dirname(os.path.abspath(__file__)), 'static')) - -if os.getenv('USE_AUTH', ''): - cred = credentials.Certificate(os.environ.get('FIREBASE_CONFIG_PATH')) - firebase_admin.initialize_app(cred) - -async def get_current_user(request: Request): - """Helper function for auth with Firebase.""" - if os.getenv('USE_AUTH', ''): - # Extracts the token from the Authorization header - if 'Authorization' not in request.headers: - # Anonymous users. - return "" - token = request.headers.get('Authorization').split("Bearer ")[1] - try: - # Verify the token against the Firebase Auth API. 
- decoded_token = auth.verify_id_token(token) - except FirebaseError: - raise HTTPException( - status_code=status.HTTP_401_UNAUTHORIZED, - detail='Invalid authentication credentials', - headers={'WWW-Authenticate': 'Bearer'}, - ) - - return decoded_token - else: - return "" - -@router.get("/status") -async def status(): - return {"status": "ok"} - - -@router.get("/", response_class=HTMLResponse) -async def index(request: Request, user=Depends(get_current_user)): - return templates.TemplateResponse("index.html", {"request": request}) diff --git a/spaces/kcagle/AutoGPT/ui/utils.py b/spaces/kcagle/AutoGPT/ui/utils.py deleted file mode 100644 index 71703e2009afac0582300f5d99a91ddec4119e04..0000000000000000000000000000000000000000 --- a/spaces/kcagle/AutoGPT/ui/utils.py +++ /dev/null @@ -1,31 +0,0 @@ -import os -import re - -def format_directory(directory): - output = [] - def helper(directory, level, output): - files = os.listdir(directory) - for i, item in enumerate(files): - is_folder = os.path.isdir(os.path.join(directory, item)) - joiner = "├── " if i < len(files) - 1 else "└── " - item_html = item + "/" if is_folder else f"{item}" - output.append("│ " * level + joiner + item_html) - if is_folder: - helper(os.path.join(directory, item), level + 1, output) - output.append(os.path.basename(directory) + "/") - helper(directory, 1, output) - return "\n".join(output) - -DOWNLOAD_OUTPUTS_JS = """ -() => { - const a = document.createElement('a'); - a.href = 'file=outputs.zip'; - a.download = 'outputs.zip'; - document.body.appendChild(a); - a.click(); - document.body.removeChild(a); -}""" - -def remove_color(text): - ansi_escape = re.compile(r'\x1B(?:[@-Z\\-_]|\[[0-?]*[ -/]*[@-~])') - return ansi_escape.sub('', text) \ No newline at end of file diff --git a/spaces/kepl/gpt/g4f/Provider/Providers/Ezcht.py b/spaces/kepl/gpt/g4f/Provider/Providers/Ezcht.py deleted file mode 100644 index baec214f7e0e936ea06bffa357e1bd2b77cd4089..0000000000000000000000000000000000000000 --- a/spaces/kepl/gpt/g4f/Provider/Providers/Ezcht.py +++ /dev/null @@ -1,35 +0,0 @@ -import requests -import os -import json -from ...typing import sha256, Dict, get_type_hints - -url = 'https://gpt4.ezchat.top' -model = ['gpt-3.5-turbo', 'gpt-3.5-turbo-16k', 'gpt-3.5-turbo-16k-0613', 'gpt-3.5-turbo-0613'] -supports_stream = True -needs_auth = False - -def _create_completion(model: str, messages: list, stream: bool, temperature: float = 0.7, **kwargs): - headers = { - 'Content-Type': 'application/json', - } - data = { - 'model': model, - 'temperature': 0.7, - 'presence_penalty': 0, - 'messages': messages, - } - response = requests.post(url + '/api/openai/v1/chat/completions', - json=data, stream=True) - - if stream: - for chunk in response.iter_content(chunk_size=None): - chunk = chunk.decode('utf-8') - if chunk.strip(): - message = json.loads(chunk)['choices'][0]['message']['content'] - yield message - else: - message = response.json()['choices'][0]['message']['content'] - yield message - -params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \ - '(%s)' % ', '.join([f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]]) \ No newline at end of file diff --git a/spaces/kevinwang676/VoiceChangers/src/face3d/models/arcface_torch/torch2onnx.py b/spaces/kevinwang676/VoiceChangers/src/face3d/models/arcface_torch/torch2onnx.py deleted file mode 100644 index 
fc26ab82e552331bc8d75b34e81000418f4d38ec..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/VoiceChangers/src/face3d/models/arcface_torch/torch2onnx.py +++ /dev/null @@ -1,59 +0,0 @@ -import numpy as np -import onnx -import torch - - -def convert_onnx(net, path_module, output, opset=11, simplify=False): - assert isinstance(net, torch.nn.Module) - img = np.random.randint(0, 255, size=(112, 112, 3), dtype=np.int32) - img = img.astype(np.float) - img = (img / 255. - 0.5) / 0.5 # torch style norm - img = img.transpose((2, 0, 1)) - img = torch.from_numpy(img).unsqueeze(0).float() - - weight = torch.load(path_module) - net.load_state_dict(weight) - net.eval() - torch.onnx.export(net, img, output, keep_initializers_as_inputs=False, verbose=False, opset_version=opset) - model = onnx.load(output) - graph = model.graph - graph.input[0].type.tensor_type.shape.dim[0].dim_param = 'None' - if simplify: - from onnxsim import simplify - model, check = simplify(model) - assert check, "Simplified ONNX model could not be validated" - onnx.save(model, output) - - -if __name__ == '__main__': - import os - import argparse - from backbones import get_model - - parser = argparse.ArgumentParser(description='ArcFace PyTorch to onnx') - parser.add_argument('input', type=str, help='input backbone.pth file or path') - parser.add_argument('--output', type=str, default=None, help='output onnx path') - parser.add_argument('--network', type=str, default=None, help='backbone network') - parser.add_argument('--simplify', type=bool, default=False, help='onnx simplify') - args = parser.parse_args() - input_file = args.input - if os.path.isdir(input_file): - input_file = os.path.join(input_file, "backbone.pth") - assert os.path.exists(input_file) - model_name = os.path.basename(os.path.dirname(input_file)).lower() - params = model_name.split("_") - if len(params) >= 3 and params[1] in ('arcface', 'cosface'): - if args.network is None: - args.network = params[2] - assert args.network is not None - print(args) - backbone_onnx = get_model(args.network, dropout=0) - - output_path = args.output - if output_path is None: - output_path = os.path.join(os.path.dirname(__file__), 'onnx') - if not os.path.exists(output_path): - os.makedirs(output_path) - assert os.path.isdir(output_path) - output_file = os.path.join(output_path, "%s.onnx" % model_name) - convert_onnx(backbone_onnx, input_file, output_file, simplify=args.simplify) diff --git a/spaces/kornia/kornia-augmentations-tester/kornia_aug.py b/spaces/kornia/kornia-augmentations-tester/kornia_aug.py deleted file mode 100644 index 7dfdd7e0e9fae6bd95a2afb4446610c67bec2bf2..0000000000000000000000000000000000000000 --- a/spaces/kornia/kornia-augmentations-tester/kornia_aug.py +++ /dev/null @@ -1,142 +0,0 @@ -import streamlit as st -import kornia -from torch import nn -import torch -from torchvision.transforms import functional as F -from torchvision.utils import make_grid -from streamlit_ace import st_ace -from PIL import Image - -IS_LOCAL = False #Change this - -@st.cache(suppress_st_warning=True) -def set_transform(content): - # st.write("set transform") - try: - transform = eval(content, {"kornia": kornia, "nn": nn}, None) - except Exception as e: - st.write(f"There was an error: {e}") - transform = nn.Sequential() - return transform - -st.markdown("# Kornia Augmentations Demo") -st.sidebar.markdown( - "[Kornia](https://github.com/kornia/kornia) is a *differentiable* computer vision library for PyTorch." 
-) -uploaded_file = st.sidebar.file_uploader("Choose a file") -if uploaded_file is not None: - im = Image.open(uploaded_file) -else: - im = Image.open("./images/pretty_bird.jpg") -scaler = int(im.height / 2) -st.sidebar.image(im, caption="Input Image", width=256) -image = F.pil_to_tensor(im).float() / 255 - - -# batch size is just for show -batch_size = st.sidebar.slider("batch_size", min_value=4, max_value=16,value=8) -gpu = st.sidebar.checkbox("Use GPU!", value=True) -if not gpu: - st.sidebar.markdown("With Kornia you do ops on the GPU!") - device = torch.device("cpu") -else: - if not IS_LOCAL: - st.sidebar.markdown("(GPU Not available on hosted demo, try on your local!)") - # Credits - st.sidebar.caption("Demo made by [Ceyda Cinarel](https://linktr.ee/ceydai)") - st.sidebar.markdown("Clone [Code](https://github.com/cceyda/kornia-demo)") - device = torch.device("cpu") - else: - st.sidebar.markdown("Running on GPU~") - device = torch.device("cuda:0") - -predefined_transforms = [ - """ -nn.Sequential( - kornia.augmentation.RandomAffine(degrees=360,p=0.5), - kornia.augmentation.ColorJitter(brightness=0.2, contrast=0.3, saturation=0.2, hue=0.3, p=1) -) -# p=0.5 is the probability of applying the transformation -""", - """ -nn.Sequential( - kornia.augmentation.RandomErasing(scale=(.4, .8), ratio=(.3, 1/.3), p=0.5), -) -""", - """ -nn.Sequential( - kornia.augmentation.RandomErasing(scale=(.4, .8), ratio=(.3, 1/.3), p=1, same_on_batch=True), -) -#By setting same_on_batch=True you can apply the same transform across the batch -""", - f""" -nn.Sequential( - kornia.augmentation.RandomResizedCrop(size=({scaler}, {scaler}), scale=(3., 3.), ratio=(2., 2.), p=1.), - kornia.augmentation.RandomHorizontalFlip(p=0.7), - kornia.augmentation.RandomGrayscale(p=0.5), -) -""", -] - -selected_transform = st.selectbox( - "Pick an augmentation pipeline example:", predefined_transforms -) - -st.write("Transform to apply:") -readonly = False -content = st_ace( - value=selected_transform, - height=150, - language="python", - keybinding="vscode", - show_gutter=True, - show_print_margin=True, - wrap=False, - auto_update=False, - readonly=readonly, -) -if content: - # st.write(content) - transform = set_transform(content) - -# st.write(transform) - -# with st.echo(): -# transform = nn.Sequential( -# K.RandomAffine(360), -# K.ColorJitter(0.2, 0.3, 0.2, 0.3) -# ) - -process = st.button("Next Batch") - -# Fake dataloader -image_batch = torch.stack(batch_size * [image]) - - -image_batch.to(device) -transformeds = None -try: - transformeds = transform(image_batch) -except Exception as e: - st.write(f"There was an error: {e}") - - - - -cols = st.columns(4) - -# st.image(F.to_pil_image(make_grid(transformeds))) -if transformeds is not None: - for i, x in enumerate(transformeds): - i = i % 4 - cols[i].image(F.to_pil_image(x), use_column_width=True) - -st.markdown( - "There are a lot more transformations available: [Documentation](https://kornia.readthedocs.io/en/latest/augmentation.module.html)" -) -st.markdown( - "Kornia can do a lot more than augmentations~ [Check it out](https://kornia.readthedocs.io/en/latest/introduction.html#highlighted-features)" -) -# if process: -# pass - diff --git a/spaces/kquote03/lama-video-watermark-remover/saicinpainting/training/losses/feature_matching.py b/spaces/kquote03/lama-video-watermark-remover/saicinpainting/training/losses/feature_matching.py deleted file mode 100644 index c019895c9178817837d1a6773367b178a861dc61..0000000000000000000000000000000000000000 --- 
a/spaces/kquote03/lama-video-watermark-remover/saicinpainting/training/losses/feature_matching.py +++ /dev/null @@ -1,33 +0,0 @@ -from typing import List - -import torch -import torch.nn.functional as F - - -def masked_l2_loss(pred, target, mask, weight_known, weight_missing): - per_pixel_l2 = F.mse_loss(pred, target, reduction='none') - pixel_weights = mask * weight_missing + (1 - mask) * weight_known - return (pixel_weights * per_pixel_l2).mean() - - -def masked_l1_loss(pred, target, mask, weight_known, weight_missing): - per_pixel_l1 = F.l1_loss(pred, target, reduction='none') - pixel_weights = mask * weight_missing + (1 - mask) * weight_known - return (pixel_weights * per_pixel_l1).mean() - - -def feature_matching_loss(fake_features: List[torch.Tensor], target_features: List[torch.Tensor], mask=None): - if mask is None: - res = torch.stack([F.mse_loss(fake_feat, target_feat) - for fake_feat, target_feat in zip(fake_features, target_features)]).mean() - else: - res = 0 - norm = 0 - for fake_feat, target_feat in zip(fake_features, target_features): - cur_mask = F.interpolate(mask, size=fake_feat.shape[-2:], mode='bilinear', align_corners=False) - error_weights = 1 - cur_mask - cur_val = ((fake_feat - target_feat).pow(2) * error_weights).mean() - res = res + cur_val - norm += 1 - res = res / norm - return res diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/backends/backend_qt5.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/backends/backend_qt5.py deleted file mode 100644 index d94062b723f49aa1ff2fb0621748232684feef72..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/backends/backend_qt5.py +++ /dev/null @@ -1,28 +0,0 @@ -from .. import backends - -backends._QT_FORCE_QT5_BINDING = True - - -from .backend_qt import ( # noqa - SPECIAL_KEYS, - # Public API - cursord, _create_qApp, _BackendQT, TimerQT, MainWindow, FigureCanvasQT, - FigureManagerQT, ToolbarQt, NavigationToolbar2QT, SubplotToolQt, - SaveFigureQt, ConfigureSubplotsQt, RubberbandQt, - HelpQt, ToolCopyToClipboardQT, - # internal re-exports - FigureCanvasBase, FigureManagerBase, MouseButton, NavigationToolbar2, - TimerBase, ToolContainerBase, figureoptions, Gcf -) -from . import backend_qt as _backend_qt # noqa - - -@_BackendQT.export -class _BackendQT5(_BackendQT): - pass - - -def __getattr__(name): - if name == 'qApp': - return _backend_qt.qApp - raise AttributeError(f"module {__name__!r} has no attribute {name!r}") diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/dates.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/dates.py deleted file mode 100644 index 2c2293e039860cf0402c01cd0299591b40eb07df..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/dates.py +++ /dev/null @@ -1,1942 +0,0 @@ -""" -Matplotlib provides sophisticated date plotting capabilities, standing on the -shoulders of python :mod:`datetime` and the add-on module dateutil_. - -By default, Matplotlib uses the units machinery described in -`~matplotlib.units` to convert `datetime.datetime`, and `numpy.datetime64` -objects when plotted on an x- or y-axis. The user does not -need to do anything for dates to be formatted, but dates often have strict -formatting needs, so this module provides many axis locators and formatters. 
-A basic example using `numpy.datetime64` is:: - - import numpy as np - - times = np.arange(np.datetime64('2001-01-02'), - np.datetime64('2002-02-03'), np.timedelta64(75, 'm')) - y = np.random.randn(len(times)) - - fig, ax = plt.subplots() - ax.plot(times, y) - -.. seealso:: - - - :doc:`/gallery/text_labels_and_annotations/date` - - :doc:`/gallery/ticks/date_concise_formatter` - - :doc:`/gallery/ticks/date_demo_convert` - -.. _date-format: - -Matplotlib date format ----------------------- - -Matplotlib represents dates using floating point numbers specifying the number -of days since a default epoch of 1970-01-01 UTC; for example, -1970-01-01, 06:00 is the floating point number 0.25. The formatters and -locators require the use of `datetime.datetime` objects, so only dates between -year 0001 and 9999 can be represented. Microsecond precision -is achievable for (approximately) 70 years on either side of the epoch, and -20 microseconds for the rest of the allowable range of dates (year 0001 to -9999). The epoch can be changed at import time via `.dates.set_epoch` or -:rc:`dates.epoch` to other dates if necessary; see -:doc:`/gallery/ticks/date_precision_and_epochs` for a discussion. - -.. note:: - - Before Matplotlib 3.3, the epoch was 0000-12-31 which lost modern - microsecond precision and also made the default axis limit of 0 an invalid - datetime. In 3.3 the epoch was changed as above. To convert old - ordinal floats to the new epoch, users can do:: - - new_ordinal = old_ordinal + mdates.date2num(np.datetime64('0000-12-31')) - - -There are a number of helper functions to convert between :mod:`datetime` -objects and Matplotlib dates: - -.. currentmodule:: matplotlib.dates - -.. autosummary:: - :nosignatures: - - datestr2num - date2num - num2date - num2timedelta - drange - set_epoch - get_epoch - -.. note:: - - Like Python's `datetime.datetime`, Matplotlib uses the Gregorian calendar - for all conversions between dates and floating point numbers. This practice - is not universal, and calendar differences can cause confusing - differences between what Python and Matplotlib give as the number of days - since 0001-01-01 and what other software and databases yield. For - example, the US Naval Observatory uses a calendar that switches - from Julian to Gregorian in October, 1582. Hence, using their - calculator, the number of days between 0001-01-01 and 2006-04-01 is - 732403, whereas using the Gregorian calendar via the datetime - module we find:: - - In [1]: date(2006, 4, 1).toordinal() - date(1, 1, 1).toordinal() - Out[1]: 732401 - -All the Matplotlib date converters, tickers and formatters are timezone aware. -If no explicit timezone is provided, :rc:`timezone` is assumed, provided as a -string. If you want to use a different timezone, pass the *tz* keyword -argument of `num2date` to any date tickers or locators you create. This can -be either a `datetime.tzinfo` instance or a string with the timezone name that -can be parsed by `~dateutil.tz.gettz`. - -A wide range of specific and general purpose date tick locators and -formatters are provided in this module. See -:mod:`matplotlib.ticker` for general information on tick locators -and formatters. These are described below. - -The dateutil_ module provides additional code to handle date ticking, making it -easy to place ticks on any kinds of dates. See examples below. - -.. _dateutil: https://dateutil.readthedocs.io - -Date tickers ------------- - -Most of the date tickers can locate single or multiple values. 
For example:: - - # import constants for the days of the week - from matplotlib.dates import MO, TU, WE, TH, FR, SA, SU - - # tick on Mondays every week - loc = WeekdayLocator(byweekday=MO, tz=tz) - - # tick on Mondays and Saturdays - loc = WeekdayLocator(byweekday=(MO, SA)) - -In addition, most of the constructors take an interval argument:: - - # tick on Mondays every second week - loc = WeekdayLocator(byweekday=MO, interval=2) - -The rrule locator allows completely general date ticking:: - - # tick every 5th easter - rule = rrulewrapper(YEARLY, byeaster=1, interval=5) - loc = RRuleLocator(rule) - -The available date tickers are: - -* `MicrosecondLocator`: Locate microseconds. - -* `SecondLocator`: Locate seconds. - -* `MinuteLocator`: Locate minutes. - -* `HourLocator`: Locate hours. - -* `DayLocator`: Locate specified days of the month. - -* `WeekdayLocator`: Locate days of the week, e.g., MO, TU. - -* `MonthLocator`: Locate months, e.g., 7 for July. - -* `YearLocator`: Locate years that are multiples of base. - -* `RRuleLocator`: Locate using a `rrulewrapper`. - `rrulewrapper` is a simple wrapper around dateutil_'s `dateutil.rrule` - which allow almost arbitrary date tick specifications. - See :doc:`rrule example `. - -* `AutoDateLocator`: On autoscale, this class picks the best `DateLocator` - (e.g., `RRuleLocator`) to set the view limits and the tick locations. If - called with ``interval_multiples=True`` it will make ticks line up with - sensible multiples of the tick intervals. For example, if the interval is - 4 hours, it will pick hours 0, 4, 8, etc. as ticks. This behaviour is not - guaranteed by default. - -Date formatters ---------------- - -The available date formatters are: - -* `AutoDateFormatter`: attempts to figure out the best format to use. This is - most useful when used with the `AutoDateLocator`. - -* `ConciseDateFormatter`: also attempts to figure out the best format to use, - and to make the format as compact as possible while still having complete - date information. This is most useful when used with the `AutoDateLocator`. - -* `DateFormatter`: use `~datetime.datetime.strftime` format strings. -""" - -import datetime -import functools -import logging -import math -import re - -from dateutil.rrule import (rrule, MO, TU, WE, TH, FR, SA, SU, YEARLY, - MONTHLY, WEEKLY, DAILY, HOURLY, MINUTELY, - SECONDLY) -from dateutil.relativedelta import relativedelta -import dateutil.parser -import dateutil.tz -import numpy as np - -import matplotlib as mpl -from matplotlib import _api, cbook, ticker, units - -__all__ = ('datestr2num', 'date2num', 'num2date', 'num2timedelta', 'drange', - 'set_epoch', 'get_epoch', 'DateFormatter', 'ConciseDateFormatter', - 'AutoDateFormatter', 'DateLocator', 'RRuleLocator', - 'AutoDateLocator', 'YearLocator', 'MonthLocator', 'WeekdayLocator', - 'DayLocator', 'HourLocator', 'MinuteLocator', - 'SecondLocator', 'MicrosecondLocator', - 'rrule', 'MO', 'TU', 'WE', 'TH', 'FR', 'SA', 'SU', - 'YEARLY', 'MONTHLY', 'WEEKLY', 'DAILY', - 'HOURLY', 'MINUTELY', 'SECONDLY', 'MICROSECONDLY', 'relativedelta', - 'DateConverter', 'ConciseDateConverter', 'rrulewrapper') - - -_log = logging.getLogger(__name__) -UTC = datetime.timezone.utc - - -@_api.caching_module_getattr -class __getattr__: - JULIAN_OFFSET = _api.deprecated("3.7")(property(lambda self: 1721424.5)) - # Julian date at 0000-12-31 - # note that the Julian day epoch is achievable w/ - # np.datetime64('-4713-11-24T12:00:00'); datetime64 is proleptic - # Gregorian and BC has a one-year offset. 
So - # np.datetime64('0000-12-31') - np.datetime64('-4713-11-24T12:00') = - # 1721424.5 - # Ref: https://en.wikipedia.org/wiki/Julian_day - - -def _get_tzinfo(tz=None): - """ - Generate `~datetime.tzinfo` from a string or return `~datetime.tzinfo`. - If None, retrieve the preferred timezone from the rcParams dictionary. - """ - if tz is None: - tz = mpl.rcParams['timezone'] - if tz == 'UTC': - return UTC - if isinstance(tz, str): - tzinfo = dateutil.tz.gettz(tz) - if tzinfo is None: - raise ValueError(f"{tz} is not a valid timezone as parsed by" - " dateutil.tz.gettz.") - return tzinfo - if isinstance(tz, datetime.tzinfo): - return tz - raise TypeError("tz must be string or tzinfo subclass.") - - -# Time-related constants. -EPOCH_OFFSET = float(datetime.datetime(1970, 1, 1).toordinal()) -# EPOCH_OFFSET is not used by matplotlib -MICROSECONDLY = SECONDLY + 1 -HOURS_PER_DAY = 24. -MIN_PER_HOUR = 60. -SEC_PER_MIN = 60. -MONTHS_PER_YEAR = 12. - -DAYS_PER_WEEK = 7. -DAYS_PER_MONTH = 30. -DAYS_PER_YEAR = 365.0 - -MINUTES_PER_DAY = MIN_PER_HOUR * HOURS_PER_DAY - -SEC_PER_HOUR = SEC_PER_MIN * MIN_PER_HOUR -SEC_PER_DAY = SEC_PER_HOUR * HOURS_PER_DAY -SEC_PER_WEEK = SEC_PER_DAY * DAYS_PER_WEEK - -MUSECONDS_PER_DAY = 1e6 * SEC_PER_DAY - -MONDAY, TUESDAY, WEDNESDAY, THURSDAY, FRIDAY, SATURDAY, SUNDAY = ( - MO, TU, WE, TH, FR, SA, SU) -WEEKDAYS = (MONDAY, TUESDAY, WEDNESDAY, THURSDAY, FRIDAY, SATURDAY, SUNDAY) - -# default epoch: passed to np.datetime64... -_epoch = None - - -def _reset_epoch_test_example(): - """ - Reset the Matplotlib date epoch so it can be set again. - - Only for use in tests and examples. - """ - global _epoch - _epoch = None - - -def set_epoch(epoch): - """ - Set the epoch (origin for dates) for datetime calculations. - - The default epoch is :rc:`dates.epoch` (by default 1970-01-01T00:00). - - If microsecond accuracy is desired, the date being plotted needs to be - within approximately 70 years of the epoch. Matplotlib internally - represents dates as days since the epoch, so floating point dynamic - range needs to be within a factor of 2^52. - - `~.dates.set_epoch` must be called before any dates are converted - (i.e. near the import section) or a RuntimeError will be raised. - - See also :doc:`/gallery/ticks/date_precision_and_epochs`. - - Parameters - ---------- - epoch : str - valid UTC date parsable by `numpy.datetime64` (do not include - timezone). - - """ - global _epoch - if _epoch is not None: - raise RuntimeError('set_epoch must be called before dates plotted.') - _epoch = epoch - - -def get_epoch(): - """ - Get the epoch used by `.dates`. - - Returns - ------- - epoch : str - String for the epoch (parsable by `numpy.datetime64`). - """ - global _epoch - - if _epoch is None: - _epoch = mpl.rcParams['date.epoch'] - return _epoch - - -def _dt64_to_ordinalf(d): - """ - Convert `numpy.datetime64` or an `numpy.ndarray` of those types to - Gregorian date as UTC float relative to the epoch (see `.get_epoch`). - Roundoff is float64 precision. Practically: microseconds for dates - between 290301 BC, 294241 AD, milliseconds for larger dates - (see `numpy.datetime64`). - """ - - # the "extra" ensures that we at least allow the dynamic range out to - # seconds. That should get out to +/-2e11 years. 
- dseconds = d.astype('datetime64[s]') - extra = (d - dseconds).astype('timedelta64[ns]') - t0 = np.datetime64(get_epoch(), 's') - dt = (dseconds - t0).astype(np.float64) - dt += extra.astype(np.float64) / 1.0e9 - dt = dt / SEC_PER_DAY - - NaT_int = np.datetime64('NaT').astype(np.int64) - d_int = d.astype(np.int64) - dt[d_int == NaT_int] = np.nan - return dt - - -def _from_ordinalf(x, tz=None): - """ - Convert Gregorian float of the date, preserving hours, minutes, - seconds and microseconds. Return value is a `.datetime`. - - The input date *x* is a float in ordinal days at UTC, and the output will - be the specified `.datetime` object corresponding to that time in - timezone *tz*, or if *tz* is ``None``, in the timezone specified in - :rc:`timezone`. - """ - - tz = _get_tzinfo(tz) - - dt = (np.datetime64(get_epoch()) + - np.timedelta64(int(np.round(x * MUSECONDS_PER_DAY)), 'us')) - if dt < np.datetime64('0001-01-01') or dt >= np.datetime64('10000-01-01'): - raise ValueError(f'Date ordinal {x} converts to {dt} (using ' - f'epoch {get_epoch()}), but Matplotlib dates must be ' - 'between year 0001 and 9999.') - # convert from datetime64 to datetime: - dt = dt.tolist() - - # datetime64 is always UTC: - dt = dt.replace(tzinfo=dateutil.tz.gettz('UTC')) - # but maybe we are working in a different timezone so move. - dt = dt.astimezone(tz) - # fix round off errors - if np.abs(x) > 70 * 365: - # if x is big, round off to nearest twenty microseconds. - # This avoids floating point roundoff error - ms = round(dt.microsecond / 20) * 20 - if ms == 1000000: - dt = dt.replace(microsecond=0) + datetime.timedelta(seconds=1) - else: - dt = dt.replace(microsecond=ms) - - return dt - - -# a version of _from_ordinalf that can operate on numpy arrays -_from_ordinalf_np_vectorized = np.vectorize(_from_ordinalf, otypes="O") - - -# a version of dateutil.parser.parse that can operate on numpy arrays -_dateutil_parser_parse_np_vectorized = np.vectorize(dateutil.parser.parse) - - -def datestr2num(d, default=None): - """ - Convert a date string to a datenum using `dateutil.parser.parse`. - - Parameters - ---------- - d : str or sequence of str - The dates to convert. - - default : datetime.datetime, optional - The default date to use when fields are missing in *d*. - """ - if isinstance(d, str): - dt = dateutil.parser.parse(d, default=default) - return date2num(dt) - else: - if default is not None: - d = [date2num(dateutil.parser.parse(s, default=default)) - for s in d] - return np.asarray(d) - d = np.asarray(d) - if not d.size: - return d - return date2num(_dateutil_parser_parse_np_vectorized(d)) - - -def date2num(d): - """ - Convert datetime objects to Matplotlib dates. - - Parameters - ---------- - d : `datetime.datetime` or `numpy.datetime64` or sequences of these - - Returns - ------- - float or sequence of floats - Number of days since the epoch. See `.get_epoch` for the - epoch, which can be changed by :rc:`date.epoch` or `.set_epoch`. If - the epoch is "1970-01-01T00:00:00" (default) then noon Jan 1 1970 - ("1970-01-01T12:00:00") returns 0.5. - - Notes - ----- - The Gregorian calendar is assumed; this is not universal practice. - For details see the module docstring. - """ - # Unpack in case of e.g. 
Pandas or xarray object - d = cbook._unpack_to_numpy(d) - - # make an iterable, but save state to unpack later: - iterable = np.iterable(d) - if not iterable: - d = [d] - - masked = np.ma.is_masked(d) - mask = np.ma.getmask(d) - d = np.asarray(d) - - # convert to datetime64 arrays, if not already: - if not np.issubdtype(d.dtype, np.datetime64): - # datetime arrays - if not d.size: - # deals with an empty array... - return d - tzi = getattr(d[0], 'tzinfo', None) - if tzi is not None: - # make datetime naive: - d = [dt.astimezone(UTC).replace(tzinfo=None) for dt in d] - d = np.asarray(d) - d = d.astype('datetime64[us]') - - d = np.ma.masked_array(d, mask=mask) if masked else d - d = _dt64_to_ordinalf(d) - - return d if iterable else d[0] - - -@_api.deprecated("3.7") -def julian2num(j): - """ - Convert a Julian date (or sequence) to a Matplotlib date (or sequence). - - Parameters - ---------- - j : float or sequence of floats - Julian dates (days relative to 4713 BC Jan 1, 12:00:00 Julian - calendar or 4714 BC Nov 24, 12:00:00, proleptic Gregorian calendar). - - Returns - ------- - float or sequence of floats - Matplotlib dates (days relative to `.get_epoch`). - """ - ep = np.datetime64(get_epoch(), 'h').astype(float) / 24. - ep0 = np.datetime64('0000-12-31T00:00:00', 'h').astype(float) / 24. - # Julian offset defined above is relative to 0000-12-31, but we need - # relative to our current epoch: - dt = __getattr__("JULIAN_OFFSET") - ep0 + ep - return np.subtract(j, dt) # Handles both scalar & nonscalar j. - - -@_api.deprecated("3.7") -def num2julian(n): - """ - Convert a Matplotlib date (or sequence) to a Julian date (or sequence). - - Parameters - ---------- - n : float or sequence of floats - Matplotlib dates (days relative to `.get_epoch`). - - Returns - ------- - float or sequence of floats - Julian dates (days relative to 4713 BC Jan 1, 12:00:00). - """ - ep = np.datetime64(get_epoch(), 'h').astype(float) / 24. - ep0 = np.datetime64('0000-12-31T00:00:00', 'h').astype(float) / 24. - # Julian offset defined above is relative to 0000-12-31, but we need - # relative to our current epoch: - dt = __getattr__("JULIAN_OFFSET") - ep0 + ep - return np.add(n, dt) # Handles both scalar & nonscalar j. - - -def num2date(x, tz=None): - """ - Convert Matplotlib dates to `~datetime.datetime` objects. - - Parameters - ---------- - x : float or sequence of floats - Number of days (fraction part represents hours, minutes, seconds) - since the epoch. See `.get_epoch` for the - epoch, which can be changed by :rc:`date.epoch` or `.set_epoch`. - tz : str or `~datetime.tzinfo`, default: :rc:`timezone` - Timezone of *x*. If a string, *tz* is passed to `dateutil.tz`. - - Returns - ------- - `~datetime.datetime` or sequence of `~datetime.datetime` - Dates are returned in timezone *tz*. - - If *x* is a sequence, a sequence of `~datetime.datetime` objects will - be returned. - - Notes - ----- - The Gregorian calendar is assumed; this is not universal practice. - For details, see the module docstring. - """ - tz = _get_tzinfo(tz) - return _from_ordinalf_np_vectorized(x, tz).tolist() - - -_ordinalf_to_timedelta_np_vectorized = np.vectorize( - lambda x: datetime.timedelta(days=x), otypes="O") - - -def num2timedelta(x): - """ - Convert number of days to a `~datetime.timedelta` object. - - If *x* is a sequence, a sequence of `~datetime.timedelta` objects will - be returned. - - Parameters - ---------- - x : float, sequence of floats - Number of days. The fraction part represents hours, minutes, seconds. 
- - Returns - ------- - `datetime.timedelta` or list[`datetime.timedelta`] - """ - return _ordinalf_to_timedelta_np_vectorized(x).tolist() - - -def drange(dstart, dend, delta): - """ - Return a sequence of equally spaced Matplotlib dates. - - The dates start at *dstart* and reach up to, but not including *dend*. - They are spaced by *delta*. - - Parameters - ---------- - dstart, dend : `~datetime.datetime` - The date limits. - delta : `datetime.timedelta` - Spacing of the dates. - - Returns - ------- - `numpy.array` - A list floats representing Matplotlib dates. - - """ - f1 = date2num(dstart) - f2 = date2num(dend) - step = delta.total_seconds() / SEC_PER_DAY - - # calculate the difference between dend and dstart in times of delta - num = int(np.ceil((f2 - f1) / step)) - - # calculate end of the interval which will be generated - dinterval_end = dstart + num * delta - - # ensure, that an half open interval will be generated [dstart, dend) - if dinterval_end >= dend: - # if the endpoint is greater than or equal to dend, - # just subtract one delta - dinterval_end -= delta - num -= 1 - - f2 = date2num(dinterval_end) # new float-endpoint - return np.linspace(f1, f2, num + 1) - - -def _wrap_in_tex(text): - p = r'([a-zA-Z]+)' - ret_text = re.sub(p, r'}$\1$\\mathdefault{', text) - - # Braces ensure symbols are not spaced like binary operators. - ret_text = ret_text.replace('-', '{-}').replace(':', '{:}') - # To not concatenate space between numbers. - ret_text = ret_text.replace(' ', r'\;') - ret_text = '$\\mathdefault{' + ret_text + '}$' - ret_text = ret_text.replace('$\\mathdefault{}$', '') - return ret_text - - -## date tickers and formatters ### - - -class DateFormatter(ticker.Formatter): - """ - Format a tick (in days since the epoch) with a - `~datetime.datetime.strftime` format string. - """ - - def __init__(self, fmt, tz=None, *, usetex=None): - """ - Parameters - ---------- - fmt : str - `~datetime.datetime.strftime` format string - tz : str or `~datetime.tzinfo`, default: :rc:`timezone` - Ticks timezone. If a string, *tz* is passed to `dateutil.tz`. - usetex : bool, default: :rc:`text.usetex` - To enable/disable the use of TeX's math mode for rendering the - results of the formatter. - """ - self.tz = _get_tzinfo(tz) - self.fmt = fmt - self._usetex = (usetex if usetex is not None else - mpl.rcParams['text.usetex']) - - def __call__(self, x, pos=0): - result = num2date(x, self.tz).strftime(self.fmt) - return _wrap_in_tex(result) if self._usetex else result - - def set_tzinfo(self, tz): - self.tz = _get_tzinfo(tz) - - -class ConciseDateFormatter(ticker.Formatter): - """ - A `.Formatter` which attempts to figure out the best format to use for the - date, and to make it as compact as possible, but still be complete. This is - most useful when used with the `AutoDateLocator`:: - - >>> locator = AutoDateLocator() - >>> formatter = ConciseDateFormatter(locator) - - Parameters - ---------- - locator : `.ticker.Locator` - Locator that this axis is using. - - tz : str or `~datetime.tzinfo`, default: :rc:`timezone` - Ticks timezone, passed to `.dates.num2date`. - - formats : list of 6 strings, optional - Format strings for 6 levels of tick labelling: mostly years, - months, days, hours, minutes, and seconds. Strings use - the same format codes as `~datetime.datetime.strftime`. Default is - ``['%Y', '%b', '%d', '%H:%M', '%H:%M', '%S.%f']`` - - zero_formats : list of 6 strings, optional - Format strings for tick labels that are "zeros" for a given tick - level. 
For instance, if most ticks are months, ticks around 1 Jan 2005 - will be labeled "Dec", "2005", "Feb". The default is - ``['', '%Y', '%b', '%b-%d', '%H:%M', '%H:%M']`` - - offset_formats : list of 6 strings, optional - Format strings for the 6 levels that is applied to the "offset" - string found on the right side of an x-axis, or top of a y-axis. - Combined with the tick labels this should completely specify the - date. The default is:: - - ['', '%Y', '%Y-%b', '%Y-%b-%d', '%Y-%b-%d', '%Y-%b-%d %H:%M'] - - show_offset : bool, default: True - Whether to show the offset or not. - - usetex : bool, default: :rc:`text.usetex` - To enable/disable the use of TeX's math mode for rendering the results - of the formatter. - - Examples - -------- - See :doc:`/gallery/ticks/date_concise_formatter` - - .. plot:: - - import datetime - import matplotlib.dates as mdates - - base = datetime.datetime(2005, 2, 1) - dates = np.array([base + datetime.timedelta(hours=(2 * i)) - for i in range(732)]) - N = len(dates) - np.random.seed(19680801) - y = np.cumsum(np.random.randn(N)) - - fig, ax = plt.subplots(constrained_layout=True) - locator = mdates.AutoDateLocator() - formatter = mdates.ConciseDateFormatter(locator) - ax.xaxis.set_major_locator(locator) - ax.xaxis.set_major_formatter(formatter) - - ax.plot(dates, y) - ax.set_title('Concise Date Formatter') - - """ - - def __init__(self, locator, tz=None, formats=None, offset_formats=None, - zero_formats=None, show_offset=True, *, usetex=None): - """ - Autoformat the date labels. The default format is used to form an - initial string, and then redundant elements are removed. - """ - self._locator = locator - self._tz = tz - self.defaultfmt = '%Y' - # there are 6 levels with each level getting a specific format - # 0: mostly years, 1: months, 2: days, - # 3: hours, 4: minutes, 5: seconds - if formats: - if len(formats) != 6: - raise ValueError('formats argument must be a list of ' - '6 format strings (or None)') - self.formats = formats - else: - self.formats = ['%Y', # ticks are mostly years - '%b', # ticks are mostly months - '%d', # ticks are mostly days - '%H:%M', # hrs - '%H:%M', # min - '%S.%f', # secs - ] - # fmt for zeros ticks at this level. These are - # ticks that should be labeled w/ info the level above. - # like 1 Jan can just be labelled "Jan". 02:02:00 can - # just be labeled 02:02. 
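# An illustrative sketch (not part of this class) of overriding the per-level
# format lists handled in this __init__; the sample data and Axes below are
# placeholders chosen only for the example.
import datetime
import matplotlib.pyplot as plt
import matplotlib.dates as mdates

fig, ax = plt.subplots()
ax.plot([datetime.datetime(2005, 2, 1) + datetime.timedelta(hours=2 * i)
         for i in range(100)], range(100))
locator = mdates.AutoDateLocator()
formatter = mdates.ConciseDateFormatter(
    locator,
    formats=['%Y', '%b', '%d', '%H:%M', '%H:%M', '%S.%f'],      # per-level tick formats
    zero_formats=['', '%Y', '%b', '%b-%d', '%H:%M', '%H:%M'])   # formats for "zero" ticks
ax.xaxis.set_major_locator(locator)
ax.xaxis.set_major_formatter(formatter)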
- if zero_formats: - if len(zero_formats) != 6: - raise ValueError('zero_formats argument must be a list of ' - '6 format strings (or None)') - self.zero_formats = zero_formats - elif formats: - # use the users formats for the zero tick formats - self.zero_formats = [''] + self.formats[:-1] - else: - # make the defaults a bit nicer: - self.zero_formats = [''] + self.formats[:-1] - self.zero_formats[3] = '%b-%d' - - if offset_formats: - if len(offset_formats) != 6: - raise ValueError('offset_formats argument must be a list of ' - '6 format strings (or None)') - self.offset_formats = offset_formats - else: - self.offset_formats = ['', - '%Y', - '%Y-%b', - '%Y-%b-%d', - '%Y-%b-%d', - '%Y-%b-%d %H:%M'] - self.offset_string = '' - self.show_offset = show_offset - self._usetex = (usetex if usetex is not None else - mpl.rcParams['text.usetex']) - - def __call__(self, x, pos=None): - formatter = DateFormatter(self.defaultfmt, self._tz, - usetex=self._usetex) - return formatter(x, pos=pos) - - def format_ticks(self, values): - tickdatetime = [num2date(value, tz=self._tz) for value in values] - tickdate = np.array([tdt.timetuple()[:6] for tdt in tickdatetime]) - - # basic algorithm: - # 1) only display a part of the date if it changes over the ticks. - # 2) don't display the smaller part of the date if: - # it is always the same or if it is the start of the - # year, month, day etc. - # fmt for most ticks at this level - fmts = self.formats - # format beginnings of days, months, years, etc. - zerofmts = self.zero_formats - # offset fmt are for the offset in the upper left of the - # or lower right of the axis. - offsetfmts = self.offset_formats - show_offset = self.show_offset - - # determine the level we will label at: - # mostly 0: years, 1: months, 2: days, - # 3: hours, 4: minutes, 5: seconds, 6: microseconds - for level in range(5, -1, -1): - unique = np.unique(tickdate[:, level]) - if len(unique) > 1: - # if 1 is included in unique, the year is shown in ticks - if level < 2 and np.any(unique == 1): - show_offset = False - break - elif level == 0: - # all tickdate are the same, so only micros might be different - # set to the most precise (6: microseconds doesn't exist...) - level = 5 - - # level is the basic level we will label at. - # now loop through and decide the actual ticklabels - zerovals = [0, 1, 1, 0, 0, 0, 0] - labels = [''] * len(tickdate) - for nn in range(len(tickdate)): - if level < 5: - if tickdate[nn][level] == zerovals[level]: - fmt = zerofmts[level] - else: - fmt = fmts[level] - else: - # special handling for seconds + microseconds - if (tickdatetime[nn].second == tickdatetime[nn].microsecond - == 0): - fmt = zerofmts[level] - else: - fmt = fmts[level] - labels[nn] = tickdatetime[nn].strftime(fmt) - - # special handling of seconds and microseconds: - # strip extra zeros and decimal if possible. - # this is complicated by two factors. 1) we have some level-4 strings - # here (i.e. 03:00, '0.50000', '1.000') 2) we would like to have the - # same number of decimals for each string (i.e. 0.5 and 1.0). - if level >= 5: - trailing_zeros = min( - (len(s) - len(s.rstrip('0')) for s in labels if '.' in s), - default=None) - if trailing_zeros: - for nn in range(len(labels)): - if '.' 
in labels[nn]: - labels[nn] = labels[nn][:-trailing_zeros].rstrip('.') - - if show_offset: - # set the offset string: - self.offset_string = tickdatetime[-1].strftime(offsetfmts[level]) - if self._usetex: - self.offset_string = _wrap_in_tex(self.offset_string) - else: - self.offset_string = '' - - if self._usetex: - return [_wrap_in_tex(l) for l in labels] - else: - return labels - - def get_offset(self): - return self.offset_string - - def format_data_short(self, value): - return num2date(value, tz=self._tz).strftime('%Y-%m-%d %H:%M:%S') - - -class AutoDateFormatter(ticker.Formatter): - """ - A `.Formatter` which attempts to figure out the best format to use. This - is most useful when used with the `AutoDateLocator`. - - `.AutoDateFormatter` has a ``.scale`` dictionary that maps tick scales (the - interval in days between one major tick) to format strings; this dictionary - defaults to :: - - self.scaled = { - DAYS_PER_YEAR: rcParams['date.autoformatter.year'], - DAYS_PER_MONTH: rcParams['date.autoformatter.month'], - 1: rcParams['date.autoformatter.day'], - 1 / HOURS_PER_DAY: rcParams['date.autoformatter.hour'], - 1 / MINUTES_PER_DAY: rcParams['date.autoformatter.minute'], - 1 / SEC_PER_DAY: rcParams['date.autoformatter.second'], - 1 / MUSECONDS_PER_DAY: rcParams['date.autoformatter.microsecond'], - } - - The formatter uses the format string corresponding to the lowest key in - the dictionary that is greater or equal to the current scale. Dictionary - entries can be customized:: - - locator = AutoDateLocator() - formatter = AutoDateFormatter(locator) - formatter.scaled[1/(24*60)] = '%M:%S' # only show min and sec - - Custom callables can also be used instead of format strings. The following - example shows how to use a custom format function to strip trailing zeros - from decimal seconds and adds the date to the first ticklabel:: - - def my_format_function(x, pos=None): - x = matplotlib.dates.num2date(x) - if pos == 0: - fmt = '%D %H:%M:%S.%f' - else: - fmt = '%H:%M:%S.%f' - label = x.strftime(fmt) - label = label.rstrip("0") - label = label.rstrip(".") - return label - - formatter.scaled[1/(24*60)] = my_format_function - """ - - # This can be improved by providing some user-level direction on - # how to choose the best format (precedence, etc.). - - # Perhaps a 'struct' that has a field for each time-type where a - # zero would indicate "don't show" and a number would indicate - # "show" with some sort of priority. Same priorities could mean - # show all with the same priority. - - # Or more simply, perhaps just a format string for each - # possibility... - - def __init__(self, locator, tz=None, defaultfmt='%Y-%m-%d', *, - usetex=None): - """ - Autoformat the date labels. - - Parameters - ---------- - locator : `.ticker.Locator` - Locator that this axis is using. - - tz : str or `~datetime.tzinfo`, default: :rc:`timezone` - Ticks timezone. If a string, *tz* is passed to `dateutil.tz`. - - defaultfmt : str - The default format to use if none of the values in ``self.scaled`` - are greater than the unit returned by ``locator._get_unit()``. - - usetex : bool, default: :rc:`text.usetex` - To enable/disable the use of TeX's math mode for rendering the - results of the formatter. If any entries in ``self.scaled`` are set - as functions, then it is up to the customized function to enable or - disable TeX's math mode itself. 
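# An illustrative sketch (not from the original source) of wiring
# AutoDateFormatter to an axis and adding one entry to the `scaled` mapping
# described above; the Axes is a placeholder.
import matplotlib.pyplot as plt
import matplotlib.dates as mdates

fig, ax = plt.subplots()
locator = mdates.AutoDateLocator()
formatter = mdates.AutoDateFormatter(locator)
formatter.scaled[1 / mdates.SEC_PER_DAY] = '%H:%M:%S'   # second-scale ticks: time only
ax.xaxis.set_major_locator(locator)
ax.xaxis.set_major_formatter(formatter)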
- """ - self._locator = locator - self._tz = tz - self.defaultfmt = defaultfmt - self._formatter = DateFormatter(self.defaultfmt, tz) - rcParams = mpl.rcParams - self._usetex = (usetex if usetex is not None else - mpl.rcParams['text.usetex']) - self.scaled = { - DAYS_PER_YEAR: rcParams['date.autoformatter.year'], - DAYS_PER_MONTH: rcParams['date.autoformatter.month'], - 1: rcParams['date.autoformatter.day'], - 1 / HOURS_PER_DAY: rcParams['date.autoformatter.hour'], - 1 / MINUTES_PER_DAY: rcParams['date.autoformatter.minute'], - 1 / SEC_PER_DAY: rcParams['date.autoformatter.second'], - 1 / MUSECONDS_PER_DAY: rcParams['date.autoformatter.microsecond'] - } - - def _set_locator(self, locator): - self._locator = locator - - def __call__(self, x, pos=None): - try: - locator_unit_scale = float(self._locator._get_unit()) - except AttributeError: - locator_unit_scale = 1 - # Pick the first scale which is greater than the locator unit. - fmt = next((fmt for scale, fmt in sorted(self.scaled.items()) - if scale >= locator_unit_scale), - self.defaultfmt) - - if isinstance(fmt, str): - self._formatter = DateFormatter(fmt, self._tz, usetex=self._usetex) - result = self._formatter(x, pos) - elif callable(fmt): - result = fmt(x, pos) - else: - raise TypeError('Unexpected type passed to {0!r}.'.format(self)) - - return result - - -class rrulewrapper: - """ - A simple wrapper around a `dateutil.rrule` allowing flexible - date tick specifications. - """ - def __init__(self, freq, tzinfo=None, **kwargs): - """ - Parameters - ---------- - freq : {YEARLY, MONTHLY, WEEKLY, DAILY, HOURLY, MINUTELY, SECONDLY} - Tick frequency. These constants are defined in `dateutil.rrule`, - but they are accessible from `matplotlib.dates` as well. - tzinfo : `datetime.tzinfo`, optional - Time zone information. The default is None. - **kwargs - Additional keyword arguments are passed to the `dateutil.rrule`. - """ - kwargs['freq'] = freq - self._base_tzinfo = tzinfo - - self._update_rrule(**kwargs) - - def set(self, **kwargs): - """Set parameters for an existing wrapper.""" - self._construct.update(kwargs) - - self._update_rrule(**self._construct) - - def _update_rrule(self, **kwargs): - tzinfo = self._base_tzinfo - - # rrule does not play nicely with timezones - especially pytz time - # zones, it's best to use naive zones and attach timezones once the - # datetimes are returned - if 'dtstart' in kwargs: - dtstart = kwargs['dtstart'] - if dtstart.tzinfo is not None: - if tzinfo is None: - tzinfo = dtstart.tzinfo - else: - dtstart = dtstart.astimezone(tzinfo) - - kwargs['dtstart'] = dtstart.replace(tzinfo=None) - - if 'until' in kwargs: - until = kwargs['until'] - if until.tzinfo is not None: - if tzinfo is not None: - until = until.astimezone(tzinfo) - else: - raise ValueError('until cannot be aware if dtstart ' - 'is naive and tzinfo is None') - - kwargs['until'] = until.replace(tzinfo=None) - - self._construct = kwargs.copy() - self._tzinfo = tzinfo - self._rrule = rrule(**self._construct) - - def _attach_tzinfo(self, dt, tzinfo): - # pytz zones are attached by "localizing" the datetime - if hasattr(tzinfo, 'localize'): - return tzinfo.localize(dt, is_dst=True) - - return dt.replace(tzinfo=tzinfo) - - def _aware_return_wrapper(self, f, returns_list=False): - """Decorator function that allows rrule methods to handle tzinfo.""" - # This is only necessary if we're actually attaching a tzinfo - if self._tzinfo is None: - return f - - # All datetime arguments must be naive. 
If they are not naive, they are - # converted to the _tzinfo zone before dropping the zone. - def normalize_arg(arg): - if isinstance(arg, datetime.datetime) and arg.tzinfo is not None: - if arg.tzinfo is not self._tzinfo: - arg = arg.astimezone(self._tzinfo) - - return arg.replace(tzinfo=None) - - return arg - - def normalize_args(args, kwargs): - args = tuple(normalize_arg(arg) for arg in args) - kwargs = {kw: normalize_arg(arg) for kw, arg in kwargs.items()} - - return args, kwargs - - # There are two kinds of functions we care about - ones that return - # dates and ones that return lists of dates. - if not returns_list: - def inner_func(*args, **kwargs): - args, kwargs = normalize_args(args, kwargs) - dt = f(*args, **kwargs) - return self._attach_tzinfo(dt, self._tzinfo) - else: - def inner_func(*args, **kwargs): - args, kwargs = normalize_args(args, kwargs) - dts = f(*args, **kwargs) - return [self._attach_tzinfo(dt, self._tzinfo) for dt in dts] - - return functools.wraps(f)(inner_func) - - def __getattr__(self, name): - if name in self.__dict__: - return self.__dict__[name] - - f = getattr(self._rrule, name) - - if name in {'after', 'before'}: - return self._aware_return_wrapper(f) - elif name in {'xafter', 'xbefore', 'between'}: - return self._aware_return_wrapper(f, returns_list=True) - else: - return f - - def __setstate__(self, state): - self.__dict__.update(state) - - -class DateLocator(ticker.Locator): - """ - Determines the tick locations when plotting dates. - - This class is subclassed by other Locators and - is not meant to be used on its own. - """ - hms0d = {'byhour': 0, 'byminute': 0, 'bysecond': 0} - - def __init__(self, tz=None): - """ - Parameters - ---------- - tz : str or `~datetime.tzinfo`, default: :rc:`timezone` - Ticks timezone. If a string, *tz* is passed to `dateutil.tz`. - """ - self.tz = _get_tzinfo(tz) - - def set_tzinfo(self, tz): - """ - Set timezone info. - - Parameters - ---------- - tz : str or `~datetime.tzinfo`, default: :rc:`timezone` - Ticks timezone. If a string, *tz* is passed to `dateutil.tz`. - """ - self.tz = _get_tzinfo(tz) - - def datalim_to_dt(self): - """Convert axis data interval to datetime objects.""" - dmin, dmax = self.axis.get_data_interval() - if dmin > dmax: - dmin, dmax = dmax, dmin - - return num2date(dmin, self.tz), num2date(dmax, self.tz) - - def viewlim_to_dt(self): - """Convert the view interval to datetime objects.""" - vmin, vmax = self.axis.get_view_interval() - if vmin > vmax: - vmin, vmax = vmax, vmin - return num2date(vmin, self.tz), num2date(vmax, self.tz) - - def _get_unit(self): - """ - Return how many days a unit of the locator is; used for - intelligent autoscaling. - """ - return 1 - - def _get_interval(self): - """ - Return the number of units for each tick. - """ - return 1 - - def nonsingular(self, vmin, vmax): - """ - Given the proposed upper and lower extent, adjust the range - if it is too close to being singular (i.e. a range of ~0). - """ - if not np.isfinite(vmin) or not np.isfinite(vmax): - # Except if there is no data, then use 1970 as default. 
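# An illustrative sketch (not part of DateLocator) of driving ticks from a
# dateutil rule via the rrulewrapper documented above and the RRuleLocator
# defined just below; the Axes is a placeholder.
import matplotlib.pyplot as plt
import matplotlib.dates as mdates

fig, ax = plt.subplots()
rule = mdates.rrulewrapper(mdates.WEEKLY, byweekday=mdates.TU, interval=2)
ax.xaxis.set_major_locator(mdates.RRuleLocator(rule))           # tick every other Tuesday
ax.xaxis.set_major_formatter(mdates.DateFormatter('%Y-%m-%d'))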
- return (date2num(datetime.date(1970, 1, 1)), - date2num(datetime.date(1970, 1, 2))) - if vmax < vmin: - vmin, vmax = vmax, vmin - unit = self._get_unit() - interval = self._get_interval() - if abs(vmax - vmin) < 1e-6: - vmin -= 2 * unit * interval - vmax += 2 * unit * interval - return vmin, vmax - - -class RRuleLocator(DateLocator): - # use the dateutil rrule instance - - def __init__(self, o, tz=None): - super().__init__(tz) - self.rule = o - - def __call__(self): - # if no data have been set, this will tank with a ValueError - try: - dmin, dmax = self.viewlim_to_dt() - except ValueError: - return [] - - return self.tick_values(dmin, dmax) - - def tick_values(self, vmin, vmax): - start, stop = self._create_rrule(vmin, vmax) - dates = self.rule.between(start, stop, True) - if len(dates) == 0: - return date2num([vmin, vmax]) - return self.raise_if_exceeds(date2num(dates)) - - def _create_rrule(self, vmin, vmax): - # set appropriate rrule dtstart and until and return - # start and end - delta = relativedelta(vmax, vmin) - - # We need to cap at the endpoints of valid datetime - try: - start = vmin - delta - except (ValueError, OverflowError): - # cap - start = datetime.datetime(1, 1, 1, 0, 0, 0, - tzinfo=datetime.timezone.utc) - - try: - stop = vmax + delta - except (ValueError, OverflowError): - # cap - stop = datetime.datetime(9999, 12, 31, 23, 59, 59, - tzinfo=datetime.timezone.utc) - - self.rule.set(dtstart=start, until=stop) - - return vmin, vmax - - def _get_unit(self): - # docstring inherited - freq = self.rule._rrule._freq - return self.get_unit_generic(freq) - - @staticmethod - def get_unit_generic(freq): - if freq == YEARLY: - return DAYS_PER_YEAR - elif freq == MONTHLY: - return DAYS_PER_MONTH - elif freq == WEEKLY: - return DAYS_PER_WEEK - elif freq == DAILY: - return 1.0 - elif freq == HOURLY: - return 1.0 / HOURS_PER_DAY - elif freq == MINUTELY: - return 1.0 / MINUTES_PER_DAY - elif freq == SECONDLY: - return 1.0 / SEC_PER_DAY - else: - # error - return -1 # or should this just return '1'? - - def _get_interval(self): - return self.rule._rrule._interval - - -class AutoDateLocator(DateLocator): - """ - On autoscale, this class picks the best `DateLocator` to set the view - limits and the tick locations. - - Attributes - ---------- - intervald : dict - - Mapping of tick frequencies to multiples allowed for that ticking. - The default is :: - - self.intervald = { - YEARLY : [1, 2, 4, 5, 10, 20, 40, 50, 100, 200, 400, 500, - 1000, 2000, 4000, 5000, 10000], - MONTHLY : [1, 2, 3, 4, 6], - DAILY : [1, 2, 3, 7, 14, 21], - HOURLY : [1, 2, 3, 4, 6, 12], - MINUTELY: [1, 5, 10, 15, 30], - SECONDLY: [1, 5, 10, 15, 30], - MICROSECONDLY: [1, 2, 5, 10, 20, 50, 100, 200, 500, - 1000, 2000, 5000, 10000, 20000, 50000, - 100000, 200000, 500000, 1000000], - } - - where the keys are defined in `dateutil.rrule`. - - The interval is used to specify multiples that are appropriate for - the frequency of ticking. For instance, every 7 days is sensible - for daily ticks, but for minutes/seconds, 15 or 30 make sense. - - When customizing, you should only modify the values for the existing - keys. You should not add or delete entries. - - Example for forcing ticks every 3 hours:: - - locator = AutoDateLocator() - locator.intervald[HOURLY] = [3] # only show every 3 hours - """ - - def __init__(self, tz=None, minticks=5, maxticks=None, - interval_multiples=True): - """ - Parameters - ---------- - tz : str or `~datetime.tzinfo`, default: :rc:`timezone` - Ticks timezone. 
If a string, *tz* is passed to `dateutil.tz`. - minticks : int - The minimum number of ticks desired; controls whether ticks occur - yearly, monthly, etc. - maxticks : int - The maximum number of ticks desired; controls the interval between - ticks (ticking every other, every 3, etc.). For fine-grained - control, this can be a dictionary mapping individual rrule - frequency constants (YEARLY, MONTHLY, etc.) to their own maximum - number of ticks. This can be used to keep the number of ticks - appropriate to the format chosen in `AutoDateFormatter`. Any - frequency not specified in this dictionary is given a default - value. - interval_multiples : bool, default: True - Whether ticks should be chosen to be multiple of the interval, - locking them to 'nicer' locations. For example, this will force - the ticks to be at hours 0, 6, 12, 18 when hourly ticking is done - at 6 hour intervals. - """ - super().__init__(tz=tz) - self._freq = YEARLY - self._freqs = [YEARLY, MONTHLY, DAILY, HOURLY, MINUTELY, - SECONDLY, MICROSECONDLY] - self.minticks = minticks - - self.maxticks = {YEARLY: 11, MONTHLY: 12, DAILY: 11, HOURLY: 12, - MINUTELY: 11, SECONDLY: 11, MICROSECONDLY: 8} - if maxticks is not None: - try: - self.maxticks.update(maxticks) - except TypeError: - # Assume we were given an integer. Use this as the maximum - # number of ticks for every frequency and create a - # dictionary for this - self.maxticks = dict.fromkeys(self._freqs, maxticks) - self.interval_multiples = interval_multiples - self.intervald = { - YEARLY: [1, 2, 4, 5, 10, 20, 40, 50, 100, 200, 400, 500, - 1000, 2000, 4000, 5000, 10000], - MONTHLY: [1, 2, 3, 4, 6], - DAILY: [1, 2, 3, 7, 14, 21], - HOURLY: [1, 2, 3, 4, 6, 12], - MINUTELY: [1, 5, 10, 15, 30], - SECONDLY: [1, 5, 10, 15, 30], - MICROSECONDLY: [1, 2, 5, 10, 20, 50, 100, 200, 500, 1000, 2000, - 5000, 10000, 20000, 50000, 100000, 200000, 500000, - 1000000], - } - if interval_multiples: - # Swap "3" for "4" in the DAILY list; If we use 3 we get bad - # tick loc for months w/ 31 days: 1, 4, ..., 28, 31, 1 - # If we use 4 then we get: 1, 5, ... 25, 29, 1 - self.intervald[DAILY] = [1, 2, 4, 7, 14] - - self._byranges = [None, range(1, 13), range(1, 32), - range(0, 24), range(0, 60), range(0, 60), None] - - def __call__(self): - # docstring inherited - dmin, dmax = self.viewlim_to_dt() - locator = self.get_locator(dmin, dmax) - return locator() - - def tick_values(self, vmin, vmax): - return self.get_locator(vmin, vmax).tick_values(vmin, vmax) - - def nonsingular(self, vmin, vmax): - # whatever is thrown at us, we can scale the unit. - # But default nonsingular date plots at an ~4 year period. - if not np.isfinite(vmin) or not np.isfinite(vmax): - # Except if there is no data, then use 1970 as default. - return (date2num(datetime.date(1970, 1, 1)), - date2num(datetime.date(1970, 1, 2))) - if vmax < vmin: - vmin, vmax = vmax, vmin - if vmin == vmax: - vmin = vmin - DAYS_PER_YEAR * 2 - vmax = vmax + DAYS_PER_YEAR * 2 - return vmin, vmax - - def _get_unit(self): - if self._freq in [MICROSECONDLY]: - return 1. 
/ MUSECONDS_PER_DAY - else: - return RRuleLocator.get_unit_generic(self._freq) - - def get_locator(self, dmin, dmax): - """Pick the best locator based on a distance.""" - delta = relativedelta(dmax, dmin) - tdelta = dmax - dmin - - # take absolute difference - if dmin > dmax: - delta = -delta - tdelta = -tdelta - # The following uses a mix of calls to relativedelta and timedelta - # methods because there is incomplete overlap in the functionality of - # these similar functions, and it's best to avoid doing our own math - # whenever possible. - numYears = float(delta.years) - numMonths = numYears * MONTHS_PER_YEAR + delta.months - numDays = tdelta.days # Avoids estimates of days/month, days/year. - numHours = numDays * HOURS_PER_DAY + delta.hours - numMinutes = numHours * MIN_PER_HOUR + delta.minutes - numSeconds = np.floor(tdelta.total_seconds()) - numMicroseconds = np.floor(tdelta.total_seconds() * 1e6) - - nums = [numYears, numMonths, numDays, numHours, numMinutes, - numSeconds, numMicroseconds] - - use_rrule_locator = [True] * 6 + [False] - - # Default setting of bymonth, etc. to pass to rrule - # [unused (for year), bymonth, bymonthday, byhour, byminute, - # bysecond, unused (for microseconds)] - byranges = [None, 1, 1, 0, 0, 0, None] - - # Loop over all the frequencies and try to find one that gives at - # least a minticks tick positions. Once this is found, look for - # an interval from a list specific to that frequency that gives no - # more than maxticks tick positions. Also, set up some ranges - # (bymonth, etc.) as appropriate to be passed to rrulewrapper. - for i, (freq, num) in enumerate(zip(self._freqs, nums)): - # If this particular frequency doesn't give enough ticks, continue - if num < self.minticks: - # Since we're not using this particular frequency, set - # the corresponding by_ to None so the rrule can act as - # appropriate - byranges[i] = None - continue - - # Find the first available interval that doesn't give too many - # ticks - for interval in self.intervald[freq]: - if num <= interval * (self.maxticks[freq] - 1): - break - else: - if not (self.interval_multiples and freq == DAILY): - _api.warn_external( - f"AutoDateLocator was unable to pick an appropriate " - f"interval for this date range. It may be necessary " - f"to add an interval value to the AutoDateLocator's " - f"intervald dictionary. Defaulting to {interval}.") - - # Set some parameters as appropriate - self._freq = freq - - if self._byranges[i] and self.interval_multiples: - byranges[i] = self._byranges[i][::interval] - if i in (DAILY, WEEKLY): - if interval == 14: - # just make first and 15th. Avoids 30th. - byranges[i] = [1, 15] - elif interval == 7: - byranges[i] = [1, 8, 15, 22] - - interval = 1 - else: - byranges[i] = self._byranges[i] - break - else: - interval = 1 - - if (freq == YEARLY) and self.interval_multiples: - locator = YearLocator(interval, tz=self.tz) - elif use_rrule_locator[i]: - _, bymonth, bymonthday, byhour, byminute, bysecond, _ = byranges - rrule = rrulewrapper(self._freq, interval=interval, - dtstart=dmin, until=dmax, - bymonth=bymonth, bymonthday=bymonthday, - byhour=byhour, byminute=byminute, - bysecond=bysecond) - - locator = RRuleLocator(rrule, tz=self.tz) - else: - locator = MicrosecondLocator(interval, tz=self.tz) - if date2num(dmin) > 70 * 365 and interval < 1000: - _api.warn_external( - 'Plotting microsecond time intervals for dates far from ' - f'the epoch (time origin: {get_epoch()}) is not well-' - 'supported. 
See matplotlib.dates.set_epoch to change the ' - 'epoch.') - - locator.set_axis(self.axis) - return locator - - -class YearLocator(RRuleLocator): - """ - Make ticks on a given day of each year that is a multiple of base. - - Examples:: - - # Tick every year on Jan 1st - locator = YearLocator() - - # Tick every 5 years on July 4th - locator = YearLocator(5, month=7, day=4) - """ - def __init__(self, base=1, month=1, day=1, tz=None): - """ - Parameters - ---------- - base : int, default: 1 - Mark ticks every *base* years. - month : int, default: 1 - The month on which to place the ticks, starting from 1. Default is - January. - day : int, default: 1 - The day on which to place the ticks. - tz : str or `~datetime.tzinfo`, default: :rc:`timezone` - Ticks timezone. If a string, *tz* is passed to `dateutil.tz`. - """ - rule = rrulewrapper(YEARLY, interval=base, bymonth=month, - bymonthday=day, **self.hms0d) - super().__init__(rule, tz=tz) - self.base = ticker._Edge_integer(base, 0) - - def _create_rrule(self, vmin, vmax): - # 'start' needs to be a multiple of the interval to create ticks on - # interval multiples when the tick frequency is YEARLY - ymin = max(self.base.le(vmin.year) * self.base.step, 1) - ymax = min(self.base.ge(vmax.year) * self.base.step, 9999) - - c = self.rule._construct - replace = {'year': ymin, - 'month': c.get('bymonth', 1), - 'day': c.get('bymonthday', 1), - 'hour': 0, 'minute': 0, 'second': 0} - - start = vmin.replace(**replace) - stop = start.replace(year=ymax) - self.rule.set(dtstart=start, until=stop) - - return start, stop - - -class MonthLocator(RRuleLocator): - """ - Make ticks on occurrences of each month, e.g., 1, 3, 12. - """ - def __init__(self, bymonth=None, bymonthday=1, interval=1, tz=None): - """ - Parameters - ---------- - bymonth : int or list of int, default: all months - Ticks will be placed on every month in *bymonth*. Default is - ``range(1, 13)``, i.e. every month. - bymonthday : int, default: 1 - The day on which to place the ticks. - interval : int, default: 1 - The interval between each iteration. For example, if - ``interval=2``, mark every second occurrence. - tz : str or `~datetime.tzinfo`, default: :rc:`timezone` - Ticks timezone. If a string, *tz* is passed to `dateutil.tz`. - """ - if bymonth is None: - bymonth = range(1, 13) - - rule = rrulewrapper(MONTHLY, bymonth=bymonth, bymonthday=bymonthday, - interval=interval, **self.hms0d) - super().__init__(rule, tz=tz) - - -class WeekdayLocator(RRuleLocator): - """ - Make ticks on occurrences of each weekday. - """ - - def __init__(self, byweekday=1, interval=1, tz=None): - """ - Parameters - ---------- - byweekday : int or list of int, default: all days - Ticks will be placed on every weekday in *byweekday*. Default is - every day. - - Elements of *byweekday* must be one of MO, TU, WE, TH, FR, SA, - SU, the constants from :mod:`dateutil.rrule`, which have been - imported into the :mod:`matplotlib.dates` namespace. - interval : int, default: 1 - The interval between each iteration. For example, if - ``interval=2``, mark every second occurrence. - tz : str or `~datetime.tzinfo`, default: :rc:`timezone` - Ticks timezone. If a string, *tz* is passed to `dateutil.tz`. - """ - rule = rrulewrapper(DAILY, byweekday=byweekday, - interval=interval, **self.hms0d) - super().__init__(rule, tz=tz) - - -class DayLocator(RRuleLocator): - """ - Make ticks on occurrences of each day of the month. For example, - 1, 15, 30. 
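# An illustrative sketch (not part of DayLocator) combining the calendar
# locators defined in this section on a placeholder Axes.
import matplotlib.pyplot as plt
import matplotlib.dates as mdates

fig, ax = plt.subplots()
ax.xaxis.set_major_locator(mdates.YearLocator(5, month=7, day=4))   # every 5 years on Jul 4
ax.xaxis.set_minor_locator(mdates.MonthLocator(bymonth=(1, 7)))     # minor ticks in Jan and Jul
ax.xaxis.set_major_formatter(mdates.DateFormatter('%Y'))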
- """ - def __init__(self, bymonthday=None, interval=1, tz=None): - """ - Parameters - ---------- - bymonthday : int or list of int, default: all days - Ticks will be placed on every day in *bymonthday*. Default is - ``bymonthday=range(1, 32)``, i.e., every day of the month. - interval : int, default: 1 - The interval between each iteration. For example, if - ``interval=2``, mark every second occurrence. - tz : str or `~datetime.tzinfo`, default: :rc:`timezone` - Ticks timezone. If a string, *tz* is passed to `dateutil.tz`. - """ - if interval != int(interval) or interval < 1: - raise ValueError("interval must be an integer greater than 0") - if bymonthday is None: - bymonthday = range(1, 32) - - rule = rrulewrapper(DAILY, bymonthday=bymonthday, - interval=interval, **self.hms0d) - super().__init__(rule, tz=tz) - - -class HourLocator(RRuleLocator): - """ - Make ticks on occurrences of each hour. - """ - def __init__(self, byhour=None, interval=1, tz=None): - """ - Parameters - ---------- - byhour : int or list of int, default: all hours - Ticks will be placed on every hour in *byhour*. Default is - ``byhour=range(24)``, i.e., every hour. - interval : int, default: 1 - The interval between each iteration. For example, if - ``interval=2``, mark every second occurrence. - tz : str or `~datetime.tzinfo`, default: :rc:`timezone` - Ticks timezone. If a string, *tz* is passed to `dateutil.tz`. - """ - if byhour is None: - byhour = range(24) - - rule = rrulewrapper(HOURLY, byhour=byhour, interval=interval, - byminute=0, bysecond=0) - super().__init__(rule, tz=tz) - - -class MinuteLocator(RRuleLocator): - """ - Make ticks on occurrences of each minute. - """ - def __init__(self, byminute=None, interval=1, tz=None): - """ - Parameters - ---------- - byminute : int or list of int, default: all minutes - Ticks will be placed on every minute in *byminute*. Default is - ``byminute=range(60)``, i.e., every minute. - interval : int, default: 1 - The interval between each iteration. For example, if - ``interval=2``, mark every second occurrence. - tz : str or `~datetime.tzinfo`, default: :rc:`timezone` - Ticks timezone. If a string, *tz* is passed to `dateutil.tz`. - """ - if byminute is None: - byminute = range(60) - - rule = rrulewrapper(MINUTELY, byminute=byminute, interval=interval, - bysecond=0) - super().__init__(rule, tz=tz) - - -class SecondLocator(RRuleLocator): - """ - Make ticks on occurrences of each second. - """ - def __init__(self, bysecond=None, interval=1, tz=None): - """ - Parameters - ---------- - bysecond : int or list of int, default: all seconds - Ticks will be placed on every second in *bysecond*. Default is - ``bysecond = range(60)``, i.e., every second. - interval : int, default: 1 - The interval between each iteration. For example, if - ``interval=2``, mark every second occurrence. - tz : str or `~datetime.tzinfo`, default: :rc:`timezone` - Ticks timezone. If a string, *tz* is passed to `dateutil.tz`. - """ - if bysecond is None: - bysecond = range(60) - - rule = rrulewrapper(SECONDLY, bysecond=bysecond, interval=interval) - super().__init__(rule, tz=tz) - - -class MicrosecondLocator(DateLocator): - """ - Make ticks on regular intervals of one or more microsecond(s). - - .. note:: - - By default, Matplotlib uses a floating point representation of time in - days since the epoch, so plotting data with - microsecond time resolution does not work well for - dates that are far (about 70 years) from the epoch (check with - `~.dates.get_epoch`). 
- - If you want sub-microsecond resolution time plots, it is strongly - recommended to use floating point seconds, not datetime-like - time representation. - - If you really must use datetime.datetime() or similar and still - need microsecond precision, change the time origin via - `.dates.set_epoch` to something closer to the dates being plotted. - See :doc:`/gallery/ticks/date_precision_and_epochs`. - - """ - def __init__(self, interval=1, tz=None): - """ - Parameters - ---------- - interval : int, default: 1 - The interval between each iteration. For example, if - ``interval=2``, mark every second occurrence. - tz : str or `~datetime.tzinfo`, default: :rc:`timezone` - Ticks timezone. If a string, *tz* is passed to `dateutil.tz`. - """ - super().__init__(tz=tz) - self._interval = interval - self._wrapped_locator = ticker.MultipleLocator(interval) - - def set_axis(self, axis): - self._wrapped_locator.set_axis(axis) - return super().set_axis(axis) - - def __call__(self): - # if no data have been set, this will tank with a ValueError - try: - dmin, dmax = self.viewlim_to_dt() - except ValueError: - return [] - - return self.tick_values(dmin, dmax) - - def tick_values(self, vmin, vmax): - nmin, nmax = date2num((vmin, vmax)) - t0 = np.floor(nmin) - nmax = nmax - t0 - nmin = nmin - t0 - nmin *= MUSECONDS_PER_DAY - nmax *= MUSECONDS_PER_DAY - - ticks = self._wrapped_locator.tick_values(nmin, nmax) - - ticks = ticks / MUSECONDS_PER_DAY + t0 - return ticks - - def _get_unit(self): - # docstring inherited - return 1. / MUSECONDS_PER_DAY - - def _get_interval(self): - # docstring inherited - return self._interval - - -@_api.deprecated("3.6", alternative="`AutoDateLocator` and `AutoDateFormatter`" - " or vendor the code") -def date_ticker_factory(span, tz=None, numticks=5): - """ - Create a date locator with *numticks* (approx) and a date formatter - for *span* in days. Return value is (locator, formatter). - """ - - if span == 0: - span = 1 / HOURS_PER_DAY - - mins = span * MINUTES_PER_DAY - hrs = span * HOURS_PER_DAY - days = span - wks = span / DAYS_PER_WEEK - months = span / DAYS_PER_MONTH # Approx - years = span / DAYS_PER_YEAR # Approx - - if years > numticks: - locator = YearLocator(int(years / numticks), tz=tz) # define - fmt = '%Y' - elif months > numticks: - locator = MonthLocator(tz=tz) - fmt = '%b %Y' - elif wks > numticks: - locator = WeekdayLocator(tz=tz) - fmt = '%a, %b %d' - elif days > numticks: - locator = DayLocator(interval=math.ceil(days / numticks), tz=tz) - fmt = '%b %d' - elif hrs > numticks: - locator = HourLocator(interval=math.ceil(hrs / numticks), tz=tz) - fmt = '%H:%M\n%b %d' - elif mins > numticks: - locator = MinuteLocator(interval=math.ceil(mins / numticks), tz=tz) - fmt = '%H:%M:%S' - else: - locator = MinuteLocator(tz=tz) - fmt = '%H:%M:%S' - - formatter = DateFormatter(fmt, tz=tz) - return locator, formatter - - -class DateConverter(units.ConversionInterface): - """ - Converter for `datetime.date` and `datetime.datetime` data, or for - date/time data represented as it would be converted by `date2num`. - - The 'unit' tag for such data is None or a `~datetime.tzinfo` instance. - """ - - def __init__(self, *, interval_multiples=True): - self._interval_multiples = interval_multiples - super().__init__() - - def axisinfo(self, unit, axis): - """ - Return the `~matplotlib.units.AxisInfo` for *unit*. - - *unit* is a `~datetime.tzinfo` instance or None. - The *axis* argument is required but not used. 
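# An illustrative sketch (not part of DateConverter) of what the converter
# machinery above enables in practice: datetime data can be passed straight to
# plot(), and a date locator/formatter are installed automatically through the
# units registry.
import datetime
import matplotlib.pyplot as plt

days = [datetime.date(2024, 1, 1) + datetime.timedelta(days=i) for i in range(30)]
fig, ax = plt.subplots()
ax.plot(days, range(30))      # x-values go through date2num via the converter
fig.autofmt_xdate()           # rotate tick labels for readability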
- """ - tz = unit - - majloc = AutoDateLocator(tz=tz, - interval_multiples=self._interval_multiples) - majfmt = AutoDateFormatter(majloc, tz=tz) - datemin = datetime.date(1970, 1, 1) - datemax = datetime.date(1970, 1, 2) - - return units.AxisInfo(majloc=majloc, majfmt=majfmt, label='', - default_limits=(datemin, datemax)) - - @staticmethod - def convert(value, unit, axis): - """ - If *value* is not already a number or sequence of numbers, convert it - with `date2num`. - - The *unit* and *axis* arguments are not used. - """ - return date2num(value) - - @staticmethod - def default_units(x, axis): - """ - Return the `~datetime.tzinfo` instance of *x* or of its first element, - or None - """ - if isinstance(x, np.ndarray): - x = x.ravel() - - try: - x = cbook._safe_first_finite(x) - except (TypeError, StopIteration): - pass - - try: - return x.tzinfo - except AttributeError: - pass - return None - - -class ConciseDateConverter(DateConverter): - # docstring inherited - - def __init__(self, formats=None, zero_formats=None, offset_formats=None, - show_offset=True, *, interval_multiples=True): - self._formats = formats - self._zero_formats = zero_formats - self._offset_formats = offset_formats - self._show_offset = show_offset - self._interval_multiples = interval_multiples - super().__init__() - - def axisinfo(self, unit, axis): - # docstring inherited - tz = unit - majloc = AutoDateLocator(tz=tz, - interval_multiples=self._interval_multiples) - majfmt = ConciseDateFormatter(majloc, tz=tz, formats=self._formats, - zero_formats=self._zero_formats, - offset_formats=self._offset_formats, - show_offset=self._show_offset) - datemin = datetime.date(1970, 1, 1) - datemax = datetime.date(1970, 1, 2) - return units.AxisInfo(majloc=majloc, majfmt=majfmt, label='', - default_limits=(datemin, datemax)) - - -class _SwitchableDateConverter: - """ - Helper converter-like object that generates and dispatches to - temporary ConciseDateConverter or DateConverter instances based on - :rc:`date.converter` and :rc:`date.interval_multiples`. 
- """ - - @staticmethod - def _get_converter(): - converter_cls = { - "concise": ConciseDateConverter, "auto": DateConverter}[ - mpl.rcParams["date.converter"]] - interval_multiples = mpl.rcParams["date.interval_multiples"] - return converter_cls(interval_multiples=interval_multiples) - - def axisinfo(self, *args, **kwargs): - return self._get_converter().axisinfo(*args, **kwargs) - - def default_units(self, *args, **kwargs): - return self._get_converter().default_units(*args, **kwargs) - - def convert(self, *args, **kwargs): - return self._get_converter().convert(*args, **kwargs) - - -units.registry[np.datetime64] = \ - units.registry[datetime.date] = \ - units.registry[datetime.datetime] = \ - _SwitchableDateConverter() diff --git a/spaces/leilevy/bingo/src/app/layout.tsx b/spaces/leilevy/bingo/src/app/layout.tsx deleted file mode 100644 index 8b5122759987177b8dc4e4356d1d06cea25c15ea..0000000000000000000000000000000000000000 --- a/spaces/leilevy/bingo/src/app/layout.tsx +++ /dev/null @@ -1,47 +0,0 @@ -import { Metadata } from 'next' -import { Toaster } from 'react-hot-toast' -import { TailwindIndicator } from '@/components/tailwind-indicator' -import { Providers } from '@/components/providers' -import { Header } from '@/components/header' - -import '@/app/globals.scss' - - -export const metadata: Metadata = { - title: { - default: 'Bing AI Chatbot', - template: `%s - Bing AI Chatbot` - }, - description: 'Bing AI Chatbot Web App.', - themeColor: [ - { media: '(prefers-color-scheme: light)', color: 'white' }, - { media: '(prefers-color-scheme: dark)', color: 'dark' } - ], - icons: { - icon: '/favicon.ico', - shortcut: '../assets/images/logo.svg', - apple: '../assets/images/logo.svg' - } -} - -interface RootLayoutProps { - children: React.ReactNode -} - -export default function RootLayout({ children }: RootLayoutProps) { - return ( - - - - -
        - {/* @ts-ignore */} -
        -
        {children}
        -
        - -
        - - - ) -} diff --git a/spaces/lewiswu1209/MockingBird/ppg_extractor/frontend.py b/spaces/lewiswu1209/MockingBird/ppg_extractor/frontend.py deleted file mode 100644 index 32549ed050655d79be1793a9cf04d9d52644794a..0000000000000000000000000000000000000000 --- a/spaces/lewiswu1209/MockingBird/ppg_extractor/frontend.py +++ /dev/null @@ -1,115 +0,0 @@ -import copy -from typing import Tuple -import numpy as np -import torch -from torch_complex.tensor import ComplexTensor - -from .log_mel import LogMel -from .stft import Stft - - -class DefaultFrontend(torch.nn.Module): - """Conventional frontend structure for ASR - - Stft -> WPE -> MVDR-Beamformer -> Power-spec -> Mel-Fbank -> CMVN - """ - - def __init__( - self, - fs: 16000, - n_fft: int = 1024, - win_length: int = 800, - hop_length: int = 160, - center: bool = True, - pad_mode: str = "reflect", - normalized: bool = False, - onesided: bool = True, - n_mels: int = 80, - fmin: int = None, - fmax: int = None, - htk: bool = False, - norm=1, - frontend_conf=None, #Optional[dict] = get_default_kwargs(Frontend), - kaldi_padding_mode=False, - downsample_rate: int = 1, - ): - super().__init__() - self.downsample_rate = downsample_rate - - # Deepcopy (In general, dict shouldn't be used as default arg) - frontend_conf = copy.deepcopy(frontend_conf) - - self.stft = Stft( - n_fft=n_fft, - win_length=win_length, - hop_length=hop_length, - center=center, - pad_mode=pad_mode, - normalized=normalized, - onesided=onesided, - kaldi_padding_mode=kaldi_padding_mode - ) - if frontend_conf is not None: - self.frontend = Frontend(idim=n_fft // 2 + 1, **frontend_conf) - else: - self.frontend = None - - self.logmel = LogMel( - fs=fs, n_fft=n_fft, n_mels=n_mels, fmin=fmin, fmax=fmax, htk=htk, norm=norm, - ) - self.n_mels = n_mels - - def output_size(self) -> int: - return self.n_mels - - def forward( - self, input: torch.Tensor, input_lengths: torch.Tensor - ) -> Tuple[torch.Tensor, torch.Tensor]: - # 1. Domain-conversion: e.g. Stft: time -> time-freq - input_stft, feats_lens = self.stft(input, input_lengths) - - assert input_stft.dim() >= 4, input_stft.shape - # "2" refers to the real/imag parts of Complex - assert input_stft.shape[-1] == 2, input_stft.shape - - # Change torch.Tensor to ComplexTensor - # input_stft: (..., F, 2) -> (..., F) - input_stft = ComplexTensor(input_stft[..., 0], input_stft[..., 1]) - - # 2. [Option] Speech enhancement - if self.frontend is not None: - assert isinstance(input_stft, ComplexTensor), type(input_stft) - # input_stft: (Batch, Length, [Channel], Freq) - input_stft, _, mask = self.frontend(input_stft, feats_lens) - - # 3. [Multi channel case]: Select a channel - if input_stft.dim() == 4: - # h: (B, T, C, F) -> h: (B, T, F) - if self.training: - # Select 1ch randomly - ch = np.random.randint(input_stft.size(2)) - input_stft = input_stft[:, :, ch, :] - else: - # Use the first channel - input_stft = input_stft[:, :, 0, :] - - # 4. STFT -> Power spectrum - # h: ComplexTensor(B, T, F) -> torch.Tensor(B, T, F) - input_power = input_stft.real ** 2 + input_stft.imag ** 2 - - # 5. Feature transform e.g. 
Stft -> Log-Mel-Fbank - # input_power: (Batch, [Channel,] Length, Freq) - # -> input_feats: (Batch, Length, Dim) - input_feats, _ = self.logmel(input_power, feats_lens) - - # NOTE(sx): pad - max_len = input_feats.size(1) - if self.downsample_rate > 1 and max_len % self.downsample_rate != 0: - padding = self.downsample_rate - max_len % self.downsample_rate - # print("Logmel: ", input_feats.size()) - input_feats = torch.nn.functional.pad(input_feats, (0, 0, 0, padding), - "constant", 0) - # print("Logmel(after padding): ",input_feats.size()) - feats_lens[torch.argmax(feats_lens)] = max_len + padding - - return input_feats, feats_lens diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Contemporary Topics 1 Pdf Free Download.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Contemporary Topics 1 Pdf Free Download.md deleted file mode 100644 index d98fe6a6a0950dbccb2de47ac703d4a17697e287..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Contemporary Topics 1 Pdf Free Download.md +++ /dev/null @@ -1,7 +0,0 @@ - -

        An adventure away from a recession, a new road and an old game. The three most important book reviews of the year: a literary review of the year, the Taschen/Random House business books of the year, and the Newsweek/Entrepreneur best business book of the year. This book contains only those works that might be considered memorable in terms of writing, subject and structure. It also introduces readers to a set of works that will probably have a lasting effect on the way we think and feel.

        -

        In this book, Suresh Ramaswamy, a neuroscientist and the founding director of the Blue Brain Project, explains how current neuroscience research is helping to unlock the mysteries of the human brain and body, informing our understanding of perception, cognition, emotion, and motivation. In this often quirky book, he explores how discoveries in neuroscience are affecting not only basic science but also our understanding of the brain's potential to improve human health and to serve as a model for medical and computational science.

        In addition, the book includes a glossary, a section on the tools of neuroscience, and a list of websites for further reading on neuroscience topics.

        -

        contemporary topics 1 pdf free download


        DOWNLOAD ○○○ https://bytlly.com/2uGyDo



        -

        Millions of Americans consider it their civil right to own firearms of any sort, regardless of whether a person has demonstrated a propensity to use them illegally or irresponsibly. Gun owners and gun-control advocates are hotly debating the potential harm of allowing additional guns into the United States. From the gun-control perspective, the argument for more regulation rests on the benefits of a safe and secure society; from the gun owners' perspective, the argument for allowing more guns rests on preventing further gun-control legislation. In this book, an independent scholar takes an objective, academic view of both sides and presents three alternative programs for preventing gun violence.

        899543212b
        -
        -
        \ No newline at end of file diff --git a/spaces/lizhen30/LangChainGo/chatgpt-next-web/ApiResponse.py b/spaces/lizhen30/LangChainGo/chatgpt-next-web/ApiResponse.py deleted file mode 100644 index 1a777ec250f532d5da907be801359e99ec8bbaa0..0000000000000000000000000000000000000000 --- a/spaces/lizhen30/LangChainGo/chatgpt-next-web/ApiResponse.py +++ /dev/null @@ -1,12 +0,0 @@ -class ApiResponse: - def __init__(self, code, message, data=None): - self.code = code - self.message = message - self.data = data - - def to_json(self): - return { - 'code': self.code, - 'message': self.message, - 'data': self.data - } \ No newline at end of file diff --git a/spaces/lojban/text-to-speech/vits/preprocess.py b/spaces/lojban/text-to-speech/vits/preprocess.py deleted file mode 100644 index 2472c0199637ea48e08607fad3fefb63b41437ca..0000000000000000000000000000000000000000 --- a/spaces/lojban/text-to-speech/vits/preprocess.py +++ /dev/null @@ -1,25 +0,0 @@ -import argparse -import vits.text as text -from vits.utils import load_filepaths_and_text - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument("--out_extension", default="cleaned") - parser.add_argument("--text_index", default=1, type=int) - parser.add_argument("--filelists", nargs="+", default=["filelists/ljs_audio_text_val_filelist.txt", "filelists/ljs_audio_text_test_filelist.txt"]) - parser.add_argument("--text_cleaners", nargs="+", default=["english_cleaners2"]) - - args = parser.parse_args() - - - for filelist in args.filelists: - print("START:", filelist) - filepaths_and_text = load_filepaths_and_text(filelist) - for i in range(len(filepaths_and_text)): - original_text = filepaths_and_text[i][args.text_index] - cleaned_text = text._clean_text(original_text, args.text_cleaners) - filepaths_and_text[i][args.text_index] = cleaned_text - - new_filelist = filelist + "." + args.out_extension - with open(new_filelist, "w", encoding="utf-8") as f: - f.writelines(["|".join(x) + "\n" for x in filepaths_and_text]) diff --git a/spaces/lojban/text-to-speech/vits/text/cleaners.py b/spaces/lojban/text-to-speech/vits/text/cleaners.py deleted file mode 100644 index 2658f667a7d59ca99a3e16ba0c157d2ab5d795eb..0000000000000000000000000000000000000000 --- a/spaces/lojban/text-to-speech/vits/text/cleaners.py +++ /dev/null @@ -1,100 +0,0 @@ -""" from https://github.com/keithito/tacotron """ - -''' -Cleaners are transformations that run over the input text at both training and eval time. - -Cleaners can be selected by passing a comma-delimited list of cleaner names as the "cleaners" -hyperparameter. Some cleaners are English-specific. You'll typically want to use: - 1. "english_cleaners" for English text - 2. "transliteration_cleaners" for non-English text that can be transliterated to ASCII using - the Unidecode library (https://pypi.python.org/pypi/Unidecode) - 3. "basic_cleaners" if you do not want to transliterate (in this case, you should also update - the symbols in symbols.py to match your data). -''' - -import re -from unidecode import unidecode -from phonemizer import phonemize - - -# Regular expression matching whitespace: -_whitespace_re = re.compile(r'\s+') - -# List of (regular expression, replacement) pairs for abbreviations: -_abbreviations = [(re.compile('\\b%s\\.' 
% x[0], re.IGNORECASE), x[1]) for x in [ - ('mrs', 'misess'), - ('mr', 'mister'), - ('dr', 'doctor'), - ('st', 'saint'), - ('co', 'company'), - ('jr', 'junior'), - ('maj', 'major'), - ('gen', 'general'), - ('drs', 'doctors'), - ('rev', 'reverend'), - ('lt', 'lieutenant'), - ('hon', 'honorable'), - ('sgt', 'sergeant'), - ('capt', 'captain'), - ('esq', 'esquire'), - ('ltd', 'limited'), - ('col', 'colonel'), - ('ft', 'fort'), -]] - - -def expand_abbreviations(text): - for regex, replacement in _abbreviations: - text = re.sub(regex, replacement, text) - return text - - -def expand_numbers(text): - return normalize_numbers(text) - - -def lowercase(text): - return text.lower() - - -def collapse_whitespace(text): - return re.sub(_whitespace_re, ' ', text) - - -def convert_to_ascii(text): - return unidecode(text) - - -def basic_cleaners(text): - '''Basic pipeline that lowercases and collapses whitespace without transliteration.''' - text = lowercase(text) - text = collapse_whitespace(text) - return text - - -def transliteration_cleaners(text): - '''Pipeline for non-English text that transliterates to ASCII.''' - text = convert_to_ascii(text) - text = lowercase(text) - text = collapse_whitespace(text) - return text - - -def english_cleaners(text): - '''Pipeline for English text, including abbreviation expansion.''' - text = convert_to_ascii(text) - text = lowercase(text) - text = expand_abbreviations(text) - phonemes = phonemize(text, language='en-us', backend='espeak', strip=True) - phonemes = collapse_whitespace(phonemes) - return phonemes - - -def english_cleaners2(text): - '''Pipeline for English text, including abbreviation expansion. + punctuation + stress''' - text = convert_to_ascii(text) - text = lowercase(text) - text = expand_abbreviations(text) - phonemes = phonemize(text, language='en-us', backend='espeak', strip=True, preserve_punctuation=True, with_stress=True) - phonemes = collapse_whitespace(phonemes) - return phonemes diff --git a/spaces/lwchen/CodeFormer/CodeFormer/scripts/crop_align_face.py b/spaces/lwchen/CodeFormer/CodeFormer/scripts/crop_align_face.py deleted file mode 100644 index 31e66266ac0e5f818fa18b6409993151086bbc8b..0000000000000000000000000000000000000000 --- a/spaces/lwchen/CodeFormer/CodeFormer/scripts/crop_align_face.py +++ /dev/null @@ -1,192 +0,0 @@ -""" -brief: face alignment with FFHQ method (https://github.com/NVlabs/ffhq-dataset) -author: lzhbrian (https://lzhbrian.me) -link: https://gist.github.com/lzhbrian/bde87ab23b499dd02ba4f588258f57d5 -date: 2020.1.5 -note: code is heavily borrowed from - https://github.com/NVlabs/ffhq-dataset - http://dlib.net/face_landmark_detection.py.html -requirements: - conda install Pillow numpy scipy - conda install -c conda-forge dlib - # download face landmark model from: - # http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2 -""" - -import cv2 -import dlib -import glob -import numpy as np -import os -import PIL -import PIL.Image -import scipy -import scipy.ndimage -import sys -import argparse - -# download model from: http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2 -predictor = dlib.shape_predictor('weights/dlib/shape_predictor_68_face_landmarks-fbdc2cb8.dat') - - -def get_landmark(filepath, only_keep_largest=True): - """get landmark with dlib - :return: np.array shape=(68, 2) - """ - detector = dlib.get_frontal_face_detector() - - img = dlib.load_rgb_image(filepath) - dets = detector(img, 1) - - # Shangchen modified - print("Number of faces detected: {}".format(len(dets))) - if only_keep_largest: 
- print('Detect several faces and only keep the largest.') - face_areas = [] - for k, d in enumerate(dets): - face_area = (d.right() - d.left()) * (d.bottom() - d.top()) - face_areas.append(face_area) - - largest_idx = face_areas.index(max(face_areas)) - d = dets[largest_idx] - shape = predictor(img, d) - print("Part 0: {}, Part 1: {} ...".format( - shape.part(0), shape.part(1))) - else: - for k, d in enumerate(dets): - print("Detection {}: Left: {} Top: {} Right: {} Bottom: {}".format( - k, d.left(), d.top(), d.right(), d.bottom())) - # Get the landmarks/parts for the face in box d. - shape = predictor(img, d) - print("Part 0: {}, Part 1: {} ...".format( - shape.part(0), shape.part(1))) - - t = list(shape.parts()) - a = [] - for tt in t: - a.append([tt.x, tt.y]) - lm = np.array(a) - # lm is a shape=(68,2) np.array - return lm - -def align_face(filepath, out_path): - """ - :param filepath: str - :return: PIL Image - """ - try: - lm = get_landmark(filepath) - except: - print('No landmark ...') - return - - lm_chin = lm[0:17] # left-right - lm_eyebrow_left = lm[17:22] # left-right - lm_eyebrow_right = lm[22:27] # left-right - lm_nose = lm[27:31] # top-down - lm_nostrils = lm[31:36] # top-down - lm_eye_left = lm[36:42] # left-clockwise - lm_eye_right = lm[42:48] # left-clockwise - lm_mouth_outer = lm[48:60] # left-clockwise - lm_mouth_inner = lm[60:68] # left-clockwise - - # Calculate auxiliary vectors. - eye_left = np.mean(lm_eye_left, axis=0) - eye_right = np.mean(lm_eye_right, axis=0) - eye_avg = (eye_left + eye_right) * 0.5 - eye_to_eye = eye_right - eye_left - mouth_left = lm_mouth_outer[0] - mouth_right = lm_mouth_outer[6] - mouth_avg = (mouth_left + mouth_right) * 0.5 - eye_to_mouth = mouth_avg - eye_avg - - # Choose oriented crop rectangle. - x = eye_to_eye - np.flipud(eye_to_mouth) * [-1, 1] - x /= np.hypot(*x) - x *= max(np.hypot(*eye_to_eye) * 2.0, np.hypot(*eye_to_mouth) * 1.8) - y = np.flipud(x) * [-1, 1] - c = eye_avg + eye_to_mouth * 0.1 - quad = np.stack([c - x - y, c - x + y, c + x + y, c + x - y]) - qsize = np.hypot(*x) * 2 - - # read image - img = PIL.Image.open(filepath) - - output_size = 512 - transform_size = 4096 - enable_padding = False - - # Shrink. - shrink = int(np.floor(qsize / output_size * 0.5)) - if shrink > 1: - rsize = (int(np.rint(float(img.size[0]) / shrink)), - int(np.rint(float(img.size[1]) / shrink))) - img = img.resize(rsize, PIL.Image.ANTIALIAS) - quad /= shrink - qsize /= shrink - - # Crop. - border = max(int(np.rint(qsize * 0.1)), 3) - crop = (int(np.floor(min(quad[:, 0]))), int(np.floor(min(quad[:, 1]))), - int(np.ceil(max(quad[:, 0]))), int(np.ceil(max(quad[:, 1])))) - crop = (max(crop[0] - border, 0), max(crop[1] - border, 0), - min(crop[2] + border, - img.size[0]), min(crop[3] + border, img.size[1])) - if crop[2] - crop[0] < img.size[0] or crop[3] - crop[1] < img.size[1]: - img = img.crop(crop) - quad -= crop[0:2] - - # Pad. 
- pad = (int(np.floor(min(quad[:, 0]))), int(np.floor(min(quad[:, 1]))), - int(np.ceil(max(quad[:, 0]))), int(np.ceil(max(quad[:, 1])))) - pad = (max(-pad[0] + border, - 0), max(-pad[1] + border, - 0), max(pad[2] - img.size[0] + border, - 0), max(pad[3] - img.size[1] + border, 0)) - if enable_padding and max(pad) > border - 4: - pad = np.maximum(pad, int(np.rint(qsize * 0.3))) - img = np.pad( - np.float32(img), ((pad[1], pad[3]), (pad[0], pad[2]), (0, 0)), - 'reflect') - h, w, _ = img.shape - y, x, _ = np.ogrid[:h, :w, :1] - mask = np.maximum( - 1.0 - - np.minimum(np.float32(x) / pad[0], - np.float32(w - 1 - x) / pad[2]), 1.0 - - np.minimum(np.float32(y) / pad[1], - np.float32(h - 1 - y) / pad[3])) - blur = qsize * 0.02 - img += (scipy.ndimage.gaussian_filter(img, [blur, blur, 0]) - - img) * np.clip(mask * 3.0 + 1.0, 0.0, 1.0) - img += (np.median(img, axis=(0, 1)) - img) * np.clip(mask, 0.0, 1.0) - img = PIL.Image.fromarray( - np.uint8(np.clip(np.rint(img), 0, 255)), 'RGB') - quad += pad[:2] - - img = img.transform((transform_size, transform_size), PIL.Image.QUAD, - (quad + 0.5).flatten(), PIL.Image.BILINEAR) - - if output_size < transform_size: - img = img.resize((output_size, output_size), PIL.Image.ANTIALIAS) - - # Save aligned image. - print('saveing: ', out_path) - img.save(out_path) - - return img, np.max(quad[:, 0]) - np.min(quad[:, 0]) - - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--in_dir', type=str, default='./inputs/whole_imgs') - parser.add_argument('--out_dir', type=str, default='./inputs/cropped_faces') - args = parser.parse_args() - - img_list = sorted(glob.glob(f'{args.in_dir}/*.png')) - img_list = sorted(img_list) - - for in_path in img_list: - out_path = os.path.join(args.out_dir, in_path.split("/")[-1]) - out_path = out_path.replace('.jpg', '.png') - size_ = align_face(in_path, out_path) \ No newline at end of file diff --git a/spaces/lyf/faster-whisper-webui/src/conversion/hf_converter.py b/spaces/lyf/faster-whisper-webui/src/conversion/hf_converter.py deleted file mode 100644 index 6da4f0fd672d63b099f21d0498ba4001d23356f7..0000000000000000000000000000000000000000 --- a/spaces/lyf/faster-whisper-webui/src/conversion/hf_converter.py +++ /dev/null @@ -1,67 +0,0 @@ -# https://github.com/bayartsogt-ya/whisper-multiple-hf-datasets - -from copy import deepcopy -import torch - -WHISPER_MAPPING = { - "layers": "blocks", - "fc1": "mlp.0", - "fc2": "mlp.2", - "final_layer_norm": "mlp_ln", - "layers": "blocks", - ".self_attn.q_proj": ".attn.query", - ".self_attn.k_proj": ".attn.key", - ".self_attn.v_proj": ".attn.value", - ".self_attn_layer_norm": ".attn_ln", - ".self_attn.out_proj": ".attn.out", - ".encoder_attn.q_proj": ".cross_attn.query", - ".encoder_attn.k_proj": ".cross_attn.key", - ".encoder_attn.v_proj": ".cross_attn.value", - ".encoder_attn_layer_norm": ".cross_attn_ln", - ".encoder_attn.out_proj": ".cross_attn.out", - "decoder.layer_norm.": "decoder.ln.", - "encoder.layer_norm.": "encoder.ln_post.", - "embed_tokens": "token_embedding", - "encoder.embed_positions.weight": "encoder.positional_embedding", - "decoder.embed_positions.weight": "decoder.positional_embedding", - "layer_norm": "ln_post", -} - - -def rename_keys(s_dict): - keys = list(s_dict.keys()) - for key in keys: - new_key = key - for k, v in WHISPER_MAPPING.items(): - if k in key: - new_key = new_key.replace(k, v) - - print(f"{key} -> {new_key}") - - s_dict[new_key] = s_dict.pop(key) - return s_dict - - -def convert_hf_whisper(hf_model_name_or_path: str, 
whisper_state_path: str): - from transformers import WhisperForConditionalGeneration - transformer_model = WhisperForConditionalGeneration.from_pretrained(hf_model_name_or_path) - config = transformer_model.config - - # first build dims - dims = { - 'n_mels': config.num_mel_bins, - 'n_vocab': config.vocab_size, - 'n_audio_ctx': config.max_source_positions, - 'n_audio_state': config.d_model, - 'n_audio_head': config.encoder_attention_heads, - 'n_audio_layer': config.encoder_layers, - 'n_text_ctx': config.max_target_positions, - 'n_text_state': config.d_model, - 'n_text_head': config.decoder_attention_heads, - 'n_text_layer': config.decoder_layers - } - - state_dict = deepcopy(transformer_model.model.state_dict()) - state_dict = rename_keys(state_dict) - - torch.save({"dims": dims, "model_state_dict": state_dict}, whisper_state_path) \ No newline at end of file diff --git a/spaces/ma-xu/LIVE/thrust/thrust/system/detail/sequential/copy.h b/spaces/ma-xu/LIVE/thrust/thrust/system/detail/sequential/copy.h deleted file mode 100644 index 80853f670020fe3926c38f716cc359e8a94f5e70..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/system/detail/sequential/copy.h +++ /dev/null @@ -1,63 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -/*! \file copy.h - * \brief Sequential implementations of copy algorithms. - */ - -#pragma once - -#include -#include - -namespace thrust -{ -namespace system -{ -namespace detail -{ -namespace sequential -{ - - -template -__host__ __device__ - OutputIterator copy(sequential::execution_policy &exec, - InputIterator first, - InputIterator last, - OutputIterator result); - - -template -__host__ __device__ - OutputIterator copy_n(sequential::execution_policy &exec, - InputIterator first, - Size n, - OutputIterator result); - - -} // end namespace sequential -} // end namespace detail -} // end namespace system -} // end namespace thrust - -#include - diff --git a/spaces/ma-xu/LIVE/thrust/thrust/type_traits/logical_metafunctions.h b/spaces/ma-xu/LIVE/thrust/thrust/type_traits/logical_metafunctions.h deleted file mode 100644 index 5f86ee6a820d5dd4e5c98d0f9ba21ffd3b287b45..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/type_traits/logical_metafunctions.h +++ /dev/null @@ -1,179 +0,0 @@ -/////////////////////////////////////////////////////////////////////////////// -// Copyright (c) 2018 NVIDIA Corporation -// Copyright (c) 2015-2018 Bryce Adelstein Lelbach aka wash -// -// Distributed under the Boost Software License, Version 1.0. (See accompanying -// file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt) -/////////////////////////////////////////////////////////////////////////////// - -/*! \file logical_metafunctions.h - * \brief C++17's \c conjunction, \c disjunction, and \c negation metafunctions. 
- */ - -#pragma once - -#include -#include - -#if THRUST_CPP_DIALECT >= 2011 - -#include - -namespace thrust -{ - -#if THRUST_CPP_DIALECT >= 2017 - -/// An \c integral_constant whose value is (... && Ts::value). -template -using conjunction = std::conjunction; - -/// A constexpr bool whose value is (... && Ts::value). -template -constexpr bool conjunction_v = conjunction::value; - -/// An \c integral_constant whose value is (... || Ts::value). -template -using disjunction = std::disjunction; - -/// A constexpr bool whose value is (... || Ts::value). -template -constexpr bool disjunction_v = disjunction::value; - -/// An \c integral_constant whose value is !Ts::value. -template -using negation = std::negation; - -/// A constexpr bool whose value is !Ts::value. -template -constexpr bool negation_v = negation::value; - -/////////////////////////////////////////////////////////////////////////////// - -#else // Older than C++17. - -/// An \c integral_constant whose value is (... && Ts::value). -template -struct conjunction; - -#if THRUST_CPP_DIALECT >= 2014 -/// A constexpr bool whose value is (... && Ts::value). -template -constexpr bool conjunction_v = conjunction::value; -#endif - -template <> -struct conjunction<> : std::true_type {}; - -template -struct conjunction : T {}; - -template -struct conjunction : std::conditional::type {}; - -template -struct conjunction - : std::conditional, T0>::type {}; - -/////////////////////////////////////////////////////////////////////////////// - -/// An \c integral_constant whose value is (... || Ts::value). -template -struct disjunction; - -#if THRUST_CPP_DIALECT >= 2014 -/// A constexpr bool whose value is (... || Ts::value). -template -constexpr bool disjunction_v = disjunction::value; -#endif - -template <> -struct disjunction<> : std::false_type {}; - -template -struct disjunction : T {}; - -template -struct disjunction - : std::conditional >::type {}; - -/////////////////////////////////////////////////////////////////////////////// - -/// An \c integral_constant whose value is !T::value. -template -struct negation; - -#if THRUST_CPP_DIALECT >= 2014 -/// A constexpr bool whose value is !T::value. -template -constexpr bool negation_v = negation::value; -#endif - -template -struct negation : std::integral_constant {}; - -#endif // THRUST_CPP_DIALECT >= 2017 - -/////////////////////////////////////////////////////////////////////////////// - -/// An \c integral_constant whose value is (... && Bs). -template -struct conjunction_value; - -#if THRUST_CPP_DIALECT >= 2014 -/// A constexpr bool whose value is (... && Bs). -template -constexpr bool conjunction_value_v = conjunction_value::value; -#endif - -template <> -struct conjunction_value<> : std::true_type {}; - -template -struct conjunction_value : std::integral_constant {}; - -template -struct conjunction_value - : std::integral_constant::value> {}; - -/////////////////////////////////////////////////////////////////////////////// - -/// An \c integral_constant whose value is (... || Bs). -template -struct disjunction_value; - -#if THRUST_CPP_DIALECT >= 2014 -/// A constexpr bool whose value is (... || Bs). 
-template -constexpr bool disjunction_value_v = disjunction_value::value; -#endif - -template <> -struct disjunction_value<> : std::false_type {}; - -template -struct disjunction_value : std::integral_constant {}; - -template -struct disjunction_value - : std::integral_constant::value> {}; - -/////////////////////////////////////////////////////////////////////////////// - -/// An \c integral_constant whose value is !B. -template -struct negation_value; - -#if THRUST_CPP_DIALECT >= 2014 -/// A constexpr bool whose value is !B. -template -constexpr bool negation_value_v = negation_value::value; -#endif - -template -struct negation_value : std::integral_constant {}; - -} // end namespace thrust - -#endif // THRUST_CPP_DIALECT >= 2011 - diff --git a/spaces/matthoffner/AudioCraft_Plus/audiocraft/grids/musicgen/musicgen_base_cached_32khz.py b/spaces/matthoffner/AudioCraft_Plus/audiocraft/grids/musicgen/musicgen_base_cached_32khz.py deleted file mode 100644 index d9a43f37d7369b5de4542fba87c4c8739d58b1e8..0000000000000000000000000000000000000000 --- a/spaces/matthoffner/AudioCraft_Plus/audiocraft/grids/musicgen/musicgen_base_cached_32khz.py +++ /dev/null @@ -1,67 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from ._explorers import LMExplorer -from ...environment import AudioCraftEnvironment - - -@LMExplorer -def explorer(launcher): - partitions = AudioCraftEnvironment.get_slurm_partitions(['team', 'global']) - launcher.slurm_(gpus=32, partition=partitions) - launcher.bind_(solver='musicgen/musicgen_base_32khz') - # replace this by the desired music dataset - launcher.bind_(dset='internal/music_400k_32khz') - - fsdp = {'autocast': False, 'fsdp.use': True} - medium = {'model/lm/model_scale': 'medium'} - large = {'model/lm/model_scale': 'large'} - - cfg_low = {'classifier_free_guidance.training_dropout': 0.2} - wd_low = {'conditioners.description.t5.word_dropout': 0.2} - - adam = {'optim.optimizer': 'adamw', 'optim.lr': 1e-4} - - # BEGINNING OF CACHE WRITING JOBS. - cache_write = { - 'cache.path': '/fsx-codegen/defossez/cache/interleave_stereo_nv_32k', - 'cache.write': True, - 'generate.every': 500, - 'evaluate.every': 500, - 'logging.log_updates': 50, - } - - cache_sub = launcher.bind({'model/lm/model_scale': 'xsmall', 'conditioner': 'none'}) - cache_sub.bind_({'deadlock.use': True}) - cache_sub.slurm_(gpus=8) - with launcher.job_array(): - num_shards = 10 # total number of jobs running in parallel. - for shard in range(0, num_shards): - launcher(cache_write, {'cache.write_num_shards': num_shards, 'cache.write_shard': shard}) - - # REMOVE THE FOLLOWING RETURN STATEMENT ONCE THE ABOVE JOBS ARE DONE, - # OR SUFFICIENTLY AHEAD. 
- return - - cache = { - 'cache.path': '/fsx-codegen/defossez/cache/interleave_stereo_nv_32k', - } - launcher.bind_(fsdp, cache) - - launcher.slurm_(gpus=32).bind_(label='32gpus') - with launcher.job_array(): - sub = launcher.bind() - sub() - - launcher.slurm_(gpus=64).bind_(label='64gpus') - with launcher.job_array(): - sub = launcher.bind() - sub(medium, adam) - - launcher.slurm_(gpus=96).bind_(label='96gpus') - with launcher.job_array(): - sub = launcher.bind() - sub(large, cfg_low, wd_low, adam, {'optim.max_norm': 3}) diff --git a/spaces/mega-snowman/combine-images/README.md b/spaces/mega-snowman/combine-images/README.md deleted file mode 100644 index 4b90c9f3d2c462b936e72f5013ded9e62a2893bd..0000000000000000000000000000000000000000 --- a/spaces/mega-snowman/combine-images/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Combine Images -emoji: ⚡ -colorFrom: indigo -colorTo: indigo -sdk: gradio -sdk_version: 3.43.2 -app_file: app.py -pinned: false -license: openrail ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/merve/anonymization/public/third_party/mobilenet@1.0.0.js b/spaces/merve/anonymization/public/third_party/mobilenet@1.0.0.js deleted file mode 100644 index d50ffe68663e1aabfc07faec02e8a3cb41b5dfe5..0000000000000000000000000000000000000000 --- a/spaces/merve/anonymization/public/third_party/mobilenet@1.0.0.js +++ /dev/null @@ -1,2 +0,0 @@ -// @tensorflow/tfjs-models Copyright 2019 Google -!function(e,a){"object"==typeof exports&&"undefined"!=typeof module?a(exports,require("@tensorflow/tfjs")):"function"==typeof define&&define.amd?define(["exports","@tensorflow/tfjs"],a):a((e=e||self).mobilenet={},e.tf)}(this,function(e,a){"use strict";function r(e,a,r,o){return new(r||(r=Promise))(function(i,t){function n(e){try{l(o.next(e))}catch(e){t(e)}}function s(e){try{l(o.throw(e))}catch(e){t(e)}}function l(e){e.done?i(e.value):new r(function(a){a(e.value)}).then(n,s)}l((o=o.apply(e,a||[])).next())})}function o(e,a){var r,o,i,t,n={label:0,sent:function(){if(1&i[0])throw i[1];return i[1]},trys:[],ops:[]};return t={next:s(0),throw:s(1),return:s(2)},"function"==typeof Symbol&&(t[Symbol.iterator]=function(){return this}),t;function s(t){return function(s){return function(t){if(r)throw new TypeError("Generator is already executing.");for(;n;)try{if(r=1,o&&(i=2&t[0]?o.return:t[0]?o.throw||((i=o.return)&&i.call(o),0):o.next)&&!(i=i.call(o,t[1])).done)return i;switch(o=0,i&&(t=[2&t[0],i.value]),t[0]){case 0:case 1:i=t;break;case 4:return n.label++,{value:t[1],done:!1};case 5:n.label++,o=t[1],t=[0];continue;case 7:t=n.ops.pop(),n.trys.pop();continue;default:if(!(i=(i=n.trys).length>0&&i[i.length-1])&&(6===t[0]||2===t[0])){n=0;continue}if(3===t[0]&&(!i||t[1]>i[0]&&t[1] tag, please also include @tensorflow/tfjs on the page before using this model.");if(r=e.toFixed(2),t=i.toFixed(2),!(r in n))throw new Error("Invalid version of MobileNet. Valid versions are: "+Object.keys(n));if(!(t in n[r]))throw new Error("MobileNet constructed with invalid alpha "+i+". 
Valid multipliers for this version are: "+Object.keys(n[r])+".");return[4,(l=new s(r,t)).load()];case 1:return o.sent(),[2,l]}})})},e.MobileNet=s,Object.defineProperty(e,"__esModule",{value:!0})}); \ No newline at end of file diff --git a/spaces/merve/fill-in-the-blank/server-side/fill-in-the-blank/gender-over-time-colab/style.css b/spaces/merve/fill-in-the-blank/server-side/fill-in-the-blank/gender-over-time-colab/style.css deleted file mode 100644 index 8165ac5b403d085f7013b25cefc267a6639a0d79..0000000000000000000000000000000000000000 --- a/spaces/merve/fill-in-the-blank/server-side/fill-in-the-blank/gender-over-time-colab/style.css +++ /dev/null @@ -1,70 +0,0 @@ -body{ - font-family: menlo, Consolas, 'Lucida Console', monospace; - margin: 10px; - margin-left: 20px; - width: 1130px; - background: #fff; -} - -.tooltip { - top: -1000px; - position: fixed; - padding: 10px; - background: rgba(255, 255, 255, .90); - border: 1px solid lightgray; - pointer-events: none; -} -.tooltip-hidden{ - opacity: 0; - transition: all .3s; - transition-delay: .1s; -} - -@media (max-width: 590px){ - div.tooltip{ - bottom: -1px; - width: calc(100%); - left: -1px !important; - right: -1px !important; - top: auto !important; - width: auto !important; - } -} - -svg{ - overflow: visible; -} - -.domain{ - display: none; -} - -.axis{ - opacity: .7; -} - -text{ - /*pointer-events: none;*/ - text-shadow: 0 1.5px 0 #fff, 1.5px 0 0 #fff, 0 -1.5px 0 #fff, -1.5px 0 0 #fff; -} - - -#graph > div{ - /*display: inline-block;*/ -} - -.active path{ - stroke: #f0f; - /*stroke-width: 2;*/ - opacity: 1; -} -.active text{ - fill: #f0f; - opacity: 1 !important; - font-size: 14px; - -} - -p{ - max-width: 650px; -} \ No newline at end of file diff --git a/spaces/merve/hidden-bias/source/fill-in-the-blank/init-diff.js b/spaces/merve/hidden-bias/source/fill-in-the-blank/init-diff.js deleted file mode 100644 index e0bb76f70a4d3ff6689b493236b5da93150746da..0000000000000000000000000000000000000000 --- a/spaces/merve/hidden-bias/source/fill-in-the-blank/init-diff.js +++ /dev/null @@ -1,525 +0,0 @@ -/* Copyright 2021 Google LLC. All Rights Reserved. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -==============================================================================*/ - - -window.initDiff = function(pair){ - var sel = d3.select('.' + pair.class).html('') - .at({role: 'graphics-document', 'aria-label': pair.ariaLabel}) - .on('keydown', function(){ - sel.classed('changed', 1) - if (d3.event.keyCode != 13) return - d3.event.preventDefault() - - pair.str0 = '' - - updateChart() - }) - - if (!sel.node()) return - - var isMobile = innerWidth <= 1100 - - var optionSel = sel.append('div.options') - .classed('wide', !isMobile) - .st({marginBottom: isMobile ? 
20 : ''}) - - var input0Sel = optionSel.append('div.flex-row').append('textarea.input-0') - .st({marginBottom: 10}) - if (isMobile){ - input0Sel.on('change', updateChart) - } - - input0Sel.node().value = pair.s0.replace('[MASK]', '_') - - var countSel = optionSel.append('div.option-tokens') - .append('b').text('Number of Tokens') - .parent() - .append('div.flex-row') - .appendMany('div.button', [30, 200, 1000, 5000, 99999]) - .text(d => d > 5000 ? 'All' : d) - .st({width: 34, textAlign: 'center'}) - .on('click', d => { - pair.count = d - updateChart() - }) - - var typeSel = optionSel.append('div.option-type') - .append('b').text('Chart Type') - .parent() - .append('div.flex-row') - .appendMany('div.button', ['Likelihoods', 'Differences']) - .text(d => d) - .st({width: 116, textAlign: 'center'}) - .on('click', d => { - pair.type = d - updateChart() - }) - - var modelSel = optionSel.append('div.option-model') - .st({display: 'none'}) - .append('b').text('Model') - .parent() - .append('div.flex-row') - .appendMany('div.button', ['BERT', 'Zari']) - .text(d => d) - .st({width: 116, textAlign: 'center'}) - .on('click', d => { - pair.model = d - updateChart() - }) - - var updateSel = optionSel.append('div.button.update').on('click', updateChart) - .text('Update') - .st({display: isMobile ? 'none' : ''}) - - var resetSel = optionSel.append('div.reset') - .html(' Reset') - .on('click', () => { - pair = JSON.parse(pair.pairStr) - pair.pairStr = JSON.stringify(pair) - input0Sel.node().value = pair.s0 - updateChart(true) - }) - .st({display: 'none'}) - - if (pair.alts){ - d3.select('.' + pair.class + '-alts').html('') - .classed('alt-block', 1).st({display: 'block'}) - .appendMany('span.p-button-link', pair.alts) - .html(d => d.str) - .on('click', d => { - input0Sel.node().value = d.rawStr - - updateChart() - }) - } - - var scatters = [] - var scatterSel = sel.append('div.pair-container-overflow').append('div.pair-container') - .st({width: 940}) - .appendMany('div', 'p0 p1 c0 p2 p3 c1'.split(' ')) - .each(function(id){ - var c = d3.conventions({ - sel: d3.select(this).append('div.graph.diff').st({marginTop: -5}), - height: 250, - width: 250, - margin: {bottom: 40, right: 60, top: 5, left: 0}, - layers: 'sdds', - }) - - var [type, i] = id.split('') - - if (type == 'p'){ - c.sel - .st({pointer: 'cursor'}) - .on('click', () => { - pair.colorByIndex = +i - updateChart() - }) - } - - var nTicks = 4 - var tickScale = d3.scaleLinear().range([0, c.width]) - c.svg.appendMany('path.bg-tick', d3.range(nTicks + 1)) - .at({d: d => `M ${.5 + Math.round(tickScale(d/nTicks))} 0 V ${c.height}`}) - c.svg.appendMany('path.bg-tick', d3.range(nTicks + 1)) - .at({d: d => `M 0 ${.5 + Math.round(tickScale(d/nTicks))} H ${c.width}`}) - - - c.type = type - c.scatters = scatters - c.scatter = window.initScatter(c) - c.scatters.push(c.scatter) - - - d3.select(this).datum({c, type, i}) - }) - - - updateChart(true) - - - async function updateChart(isFirst){ - // warningSel.st({opacity: isFirst ? 0 : 1}) - // resetSel.st({opacity: isFirst ? 0 : 1}) - sel.classed('changed', 0) - - countSel.classed('active', d => d == pair.count) - typeSel.classed('active', d => d == pair.type) - modelSel.classed('active', d => d == pair.model) - - function getStr(sel){ - return sel.node().value.replace('_', '[MASK]') - } - - - pair.s0 = input0Sel.node().value.replace('_', '[MASK]') - var str = pair.s0.replace('[MASK]', '{MASK}') - var sentences = str.split('|').length == 2 ? 
getZariSenteces() : getTwoPairSentences() - - function getTwoPairSentences(){ - var start = str.split('[')[0] - var mid = str.split(']')[1].split('[')[0] - var last = str.split(']')[2] - - var pairA = str.split('[')[1].split(']')[0].split('|') - var pairB = str.split('[')[2].split(']')[0].split('|') - - return [ - {i: 0, j: 0}, - {i: 0, j: 1}, - {i: 1, j: 0}, - {i: 1, j: 1}, - ].map(word => { - var strA = pairA[word.i] - var strB = pairB[word.j] - - var sentence = [start, strA, mid, strB, last] - .join('') - .replace('{MASK}', '[MASK]') - - var modelPath = pair.model == 'Zari' ? 'embed_zari_cda' : 'embed' - - return {word, strA, strB, sentence, modelPath} - }) - } - - function getZariSenteces(){ - var start = str.split('[')[0] - var last = str.split(']')[1] - var pairB = str.split('[')[1].split(']')[0].split('|') - - return [ - {i: 0, j: 0}, - {i: 0, j: 1}, - {i: 1, j: 0}, - {i: 1, j: 1}, - ].map(word => { - var strA = word.i ? 'Zari' : 'BERT' - var strB = pairB[word.j] - - var sentence = [start, strB, last] - .join('') - .replace('{MASK}', '[MASK]') - - var modelPath = strA == 'Zari' ? 'embed_zari_cda' : 'embed' - - return {word, strA, strB, sentence, modelPath} - }) - } - - - updateSel.classed('loading', 1) - // TODO parallel? - for (var d of sentences){ - d.maskVals = await post(d.modelPath, {sentence: d.sentence}) - } - updateSel.classed('loading', 0) - - - var allTokens = sentences[0].maskVals.map((v0, i) => { - var word = tokenizer.vocab[i] - var v = sentences.map(d => d.maskVals[i]) - - return {word, i, v, isVisible: false} - }) - - _.sortBy(allTokens, d => -d.v[0]).forEach((d, i) => d.v0i = i) - _.sortBy(allTokens, d => -d.v[1]).forEach((d, i) => d.v1i = i) - _.sortBy(allTokens, d => -d.v[2]).forEach((d, i) => d.v2i = i) - _.sortBy(allTokens, d => -d.v[3]).forEach((d, i) => d.v3i = i) - - allTokens - .filter(d => - d.v0i <= pair.count || - d.v1i <= pair.count || - d.v2i <= pair.count || - d.v3i <= pair.count - ) - .forEach(d => { - d.isTop = true - d.isVisible = true - }) - - var pairs = [ - [0, 1], - [2, 3], - - // [1, 2], - // [3, 0], - - [0, 2], - [1, 3], - - ].map((d, i) => { - var sentA = sentences[d[0]] - var sentB = sentences[d[1]] - - var allPairTokens = allTokens.map((t, i) => { - return {word: t.word, v0: t.v[d[0]], i, v1: t.v[d[1]], t} - }) - - allPairTokens.forEach(d => { - d.dif = d.v0 - d.v1 - d.meanV = (d.v0 + d.v1) / 2 - }) - var i0key = 'v' + d[0] + 'i' - var i1key = 'v' + d[1] + 'i' - - // TODO should this be done per chart or globally? 
- var topTokens = allPairTokens.filter(d => d.t.isTop) - // var topTokens = allPairTokens.filter(d => d.t[i0key] <= pair.count || d.t[i1key] <= pair.count) - var logitExtent = d3.extent(topTokens.map(d => d.v0).concat(topTokens.map(d => d.v1))) - - var tokens = allPairTokens - .filter(d => logitExtent[0] <= d.v0 && logitExtent[0] <= d.v1) - - var mag = logitExtent[1] - logitExtent[0] - logitExtent = [logitExtent[0] - mag*.002, logitExtent[1] + mag*.002] - - if (pair.type == 'Differences') tokens = _.sortBy(allPairTokens, d => -d.meanV).slice(0, pair.count) - - tokens.forEach(d => { - d.isVisible = true - }) - - var maxDif = d3.max(d3.extent(tokens, d => d.dif).map(Math.abs)) - var color = palette(-maxDif*.5, maxDif*.5) - - label0 = sentA.strA + ' / ' + sentA.strB - label1 = sentB.strA + ' / ' + sentB.strB - - - return {i, sentA, sentB, allPairTokens, logitExtent, tokens, maxDif, color, label0, label1} - }) - - var compares = [[0, 1], [2, 3]].map((d, i) => { - var pairA = pairs[d[0]] - var pairB = pairs[d[1]] - - var allTokensA = pairA.allPairTokens - var allTokensB = pairB.allPairTokens - - var allPairTokens = allTokens.map((t, i) => { - return {word: t.word, t, difA: allTokensA[i].dif, meanA: allTokensA[i].meanV, difB: allTokensB[i].dif, meanB: allTokensB[i].meanV} - }) - - _.sortBy(allPairTokens, d => -d.meanA) - .slice(0, pair.count) - .forEach(d => d.isVisible = true) - - _.sortBy(allPairTokens, d => -d.meanB) - .slice(0, pair.count) - .forEach(d => d.isVisible = true) - - var tokens = allPairTokens.filter(d => d.isVisible) - - return {pairA, pairB, tokens, allPairTokens} - }) - - if (!pair.colorByIndex) pair.colorByIndex = 1 - var color = pairs[pair.colorByIndex].color - pairs[pair.colorByIndex].allPairTokens.forEach(d => { - d.t.color = color(d.dif) - }) - - scatterSel.each(function({c, i, type}){ - updatePairChart(c, type == 'p' ? 
pairs[i] : compares[i]) - }) - } - - function updatePairChart(c, p){ - var {logitExtent, tokens, maxDif, color} = p - var allTokens = p.allPairTokens - - if (c.type == 'c'){ - drawDifDif() - } else { - if (pair.type == 'Likelihoods'){ - drawXY() - } else{ - drawRotated() - } - - sel.classed('is-xy', pair.type == 'Likelihoods') - sel.classed('is-rotate', pair.type != 'Likelihoods') - c.sel.classed('is-color-by', p.i == pair.colorByIndex) - c.sel.classed('not-is-color-by', p.i != pair.colorByIndex) - } - - function drawXY(){ - c.x.domain(logitExtent) - c.y.domain(logitExtent) - - d3.drawAxis(c) - - var s = {30: 4, 200: 3, 1000: 3}[pair.count] || 2 - var scatterData = allTokens.map(d => { - var x = c.x(d.v0) - var y = c.y(d.v1) - var fill = d.t.color - var dif = d.dif - var word = d.word - var show = '' - var isVisible = d.isVisible - - return {x, y, s, dif, fill, word, show, isVisible} - }) - - - var textCandidates = _.sortBy(scatterData.filter(d => d.isVisible), d => d.dif) - d3.nestBy(textCandidates.slice(0, 1000), d => Math.round(d.y/10)) - .forEach(d => d[0].show = 'uf') - d3.nestBy(textCandidates.reverse().slice(0, 1000), d => Math.round(d.y/10)) - .forEach(d => d[0].show = 'lr') - - logitExtent.pair = pair - c.scatter.draw(c, scatterData, true) - c.svg.selectAppend('text.x-axis-label.xy-only') - .translate([c.width/2, c.height + 24]) - .text(p.label0 + ' →') - .at({fill: util.colors[0], textAnchor: 'middle'}) - - c.svg.selectAppend('g.y-axis-label.xy-only') - .translate([c.width + 20, c.height/2]) - .selectAppend('text') - .text(p.label1 + ' →') - .at({fill: util.colors[1], textAnchor: 'middle', transform: 'rotate(-90)'}) - } - - function drawRotated(){ - c.x.domain(d3.extent(tokens, d => d.meanV)) - c.y.domain([maxDif, -maxDif]) - - d3.drawAxis(c) - - var scatterData = allTokens.map(d => { - var x = c.x(d.meanV) - var y = c.y(d.dif) - var fill = d.t.color - var word = d.word - var show = '' - var isVisible = d.isVisible - - return {x, y, s: 2, fill, word, show, isVisible} - }) - - scatterData.forEach(d => { - d.dx = d.x - c.width/2 - d.dy = d.y - c.height/2 - }) - - var textCandidates = _.sortBy(scatterData, d => -d.dx*d.dx - d.dy*d.dy) - .filter(d => d.isVisible) - .slice(0, 5000) - d3.nestBy(textCandidates, d => Math.round(12*Math.atan2(d.dx, d.dy))) - .map(d => d[0]) - .forEach(d => d.show = (d.dy < 0 ? 'u' : 'l') + (d.dx < 0 ? 
'l' : 'r')) - - c.scatter.draw(c, scatterData, false) - c.svg.selectAppend('text.rotate-only.x-axis-label') - .translate([c.width/2, c.height + 24]) - .text(p.label0 + ' + ' + p.label1 + ' →') - .at({textAnchor: 'middle'}) - .st({fill: '#000', fontWeight: 300}) - - c.svg.select('g.rotate-only.sent-1').html('') - - c.svg.selectAppend('g.rotate-only.sent-1') - .translate([c.width + 20, c.height/2]) - .append('text') - .text(p.label1 + ' →') - .at({textAnchor: 'start', transform: 'rotate(-90)', x: 10}) - .st({fill: util.colors[1]}) - - c.svg.selectAppend('g.rotate-only.sent-1') - .translate([c.width + 20, c.height/2 + 0]) - .append('text') - .text('← ' + p.label0) - .at({textAnchor: 'end', transform: 'rotate(-90)', x: -10}) - .st({fill: util.colors[0]}) - } - - function drawDifDif(){ - var maxDifA = d3.max(d3.extent(tokens, d => d.difA).map(Math.abs)) - var maxDifB = d3.max(d3.extent(tokens, d => d.difB).map(Math.abs)) - var maxDif = d3.max([maxDifA, maxDifB]) - - c.x.domain([maxDif, -maxDif]) - c.y.domain([maxDif, -maxDif]) - - d3.drawAxis(c) - - var scatterData = allTokens.map(d => { - var x = c.x(d.difA) - var y = c.y(d.difB) - var fill = d.t.color - var word = d.word - var show = '' - var isVisible = d.isVisible - return {x, y, s: 2, fill, word, show, isVisible} - }) - - scatterData.forEach(d => { - d.dx = d.x - c.width/2 - d.dy = d.y - c.height/2 - }) - - var textCandidates = _.sortBy(scatterData.filter(d => d.isVisible), d => d.x - d.y) - d3.nestBy(textCandidates, d => Math.round(d.y/10)) - .forEach(d => d[0].show = 'uf') - d3.nestBy(textCandidates.reverse(), d => Math.round(d.y/10)) - .forEach(d => d[0].show = 'lr') - - c.scatter.draw(c, scatterData, true) - - var isColor = pair.colorByIndex == p.pairA.i - - var labelSel = c.svg.selectAppend('g.sent-0') - .html('') - .translate([c.width/2, c.height + 24]) - - labelSel.append('text') - .text(p.pairA.label1 + ' →') - .at({textAnchor: 'start', x: 10}) - .st({fill: isColor ? util.colors[1] : '#444', fontWeight: isColor ? 400 : ''}) - - labelSel.append('text') - .text('← ' + p.pairA.label0) - .at({textAnchor: 'end', x: -10}) - .st({fill: isColor ? util.colors[0] : '#444', fontWeight: isColor ? 400 : ''}) - - - var isColor = pair.colorByIndex == p.pairB.i - - var labelSel = c.svg.selectAppend('g.sent-1') - .html('') - .translate([c.width + 20, c.height/2]) - - labelSel.append('text') - .text(p.pairB.label1 + ' →') - .at({textAnchor: 'start', transform: 'rotate(-90)', x: 10}) - .st({fill: isColor ? util.colors[1] : '#444', fontWeight: isColor ? 400 : ''}) - - labelSel.append('text') - .text('← ' + p.pairB.label0) - .at({textAnchor: 'end', transform: 'rotate(-90)', x: -10}) - .st({fill: isColor ? util.colors[0] : '#444', fontWeight: isColor ? 400 : ''}) - } - - } -} - -if (window.init) init() diff --git a/spaces/mfrashad/ClothingGAN/models/biggan/pytorch_biggan/pytorch_pretrained_biggan/config.py b/spaces/mfrashad/ClothingGAN/models/biggan/pytorch_biggan/pytorch_pretrained_biggan/config.py deleted file mode 100644 index 454236a4bfa0d11fda0d52e0ce9b2926f8c32d30..0000000000000000000000000000000000000000 --- a/spaces/mfrashad/ClothingGAN/models/biggan/pytorch_biggan/pytorch_pretrained_biggan/config.py +++ /dev/null @@ -1,70 +0,0 @@ -# coding: utf-8 -""" -BigGAN config. -""" -from __future__ import (absolute_import, division, print_function, unicode_literals) - -import copy -import json - -class BigGANConfig(object): - """ Configuration class to store the configuration of a `BigGAN`. - Defaults are for the 128x128 model. 
- layers tuple are (up-sample in the layer ?, input channels, output channels) - """ - def __init__(self, - output_dim=128, - z_dim=128, - class_embed_dim=128, - channel_width=128, - num_classes=1000, - layers=[(False, 16, 16), - (True, 16, 16), - (False, 16, 16), - (True, 16, 8), - (False, 8, 8), - (True, 8, 4), - (False, 4, 4), - (True, 4, 2), - (False, 2, 2), - (True, 2, 1)], - attention_layer_position=8, - eps=1e-4, - n_stats=51): - """Constructs BigGANConfig. """ - self.output_dim = output_dim - self.z_dim = z_dim - self.class_embed_dim = class_embed_dim - self.channel_width = channel_width - self.num_classes = num_classes - self.layers = layers - self.attention_layer_position = attention_layer_position - self.eps = eps - self.n_stats = n_stats - - @classmethod - def from_dict(cls, json_object): - """Constructs a `BigGANConfig` from a Python dictionary of parameters.""" - config = BigGANConfig() - for key, value in json_object.items(): - config.__dict__[key] = value - return config - - @classmethod - def from_json_file(cls, json_file): - """Constructs a `BigGANConfig` from a json file of parameters.""" - with open(json_file, "r", encoding='utf-8') as reader: - text = reader.read() - return cls.from_dict(json.loads(text)) - - def __repr__(self): - return str(self.to_json_string()) - - def to_dict(self): - """Serializes this instance to a Python dictionary.""" - output = copy.deepcopy(self.__dict__) - return output - - def to_json_string(self): - """Serializes this instance to a JSON string.""" - return json.dumps(self.to_dict(), indent=2, sort_keys=True) + "\n" diff --git a/spaces/miesnerjacob/Multi-task-NLP/part_of_speech_tagging.py b/spaces/miesnerjacob/Multi-task-NLP/part_of_speech_tagging.py deleted file mode 100644 index 6bd649c89e8cca7b3e03ef326bbbe12cd674b963..0000000000000000000000000000000000000000 --- a/spaces/miesnerjacob/Multi-task-NLP/part_of_speech_tagging.py +++ /dev/null @@ -1,26 +0,0 @@ -import nltk -from nltk.tokenize import word_tokenize -nltk.download('punkt') -nltk.download('averaged_perceptron_tagger') - - -class POSTagging: - """Part of Speech Tagging on text data""" - - def __init__(self): - pass - - def classify(self, text): - """ - Generate Part of Speech tags. 
- - Parameters: - text (str): The user input string to generate tags for - - Returns: - predictions (list): list of tuples containing words and their respective tags - """ - - text = word_tokenize(text) - predictions = nltk.pos_tag(text) - return predictions \ No newline at end of file diff --git a/spaces/mike-ravkine/can-ai-code-compare/README.md b/spaces/mike-ravkine/can-ai-code-compare/README.md deleted file mode 100644 index b753e3a175eb5dc23483eac69aba5ced60cb30de..0000000000000000000000000000000000000000 --- a/spaces/mike-ravkine/can-ai-code-compare/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: Can Ai Code Compare -emoji: ⚖️ -colorFrom: blue -colorTo: indigo -sdk: docker -app_port: 7860 -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/mikebars/huggingface/assets/index-4c4fac98.css b/spaces/mikebars/huggingface/assets/index-4c4fac98.css deleted file mode 100644 index 79f233cc816beae61069a0feb08fb8fa0e410fd8..0000000000000000000000000000000000000000 --- a/spaces/mikebars/huggingface/assets/index-4c4fac98.css +++ /dev/null @@ -1 +0,0 @@ -*,:before,:after{box-sizing:border-box;border-width:0;border-style:solid;border-color:#e5e7eb}:before,:after{--tw-content: ""}html{line-height:1.5;-webkit-text-size-adjust:100%;-moz-tab-size:4;-o-tab-size:4;tab-size:4;font-family:ui-sans-serif,system-ui,-apple-system,BlinkMacSystemFont,Segoe UI,Roboto,Helvetica Neue,Arial,Noto Sans,sans-serif,"Apple Color Emoji","Segoe UI Emoji",Segoe UI Symbol,"Noto Color Emoji";font-feature-settings:normal}body{margin:0;line-height:inherit}hr{height:0;color:inherit;border-top-width:1px}abbr:where([title]){-webkit-text-decoration:underline dotted;text-decoration:underline dotted}h1,h2,h3,h4,h5,h6{font-size:inherit;font-weight:inherit}a{color:inherit;text-decoration:inherit}b,strong{font-weight:bolder}code,kbd,samp,pre{font-family:ui-monospace,SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,monospace;font-size:1em}small{font-size:80%}sub,sup{font-size:75%;line-height:0;position:relative;vertical-align:baseline}sub{bottom:-.25em}sup{top:-.5em}table{text-indent:0;border-color:inherit;border-collapse:collapse}button,input,optgroup,select,textarea{font-family:inherit;font-size:100%;font-weight:inherit;line-height:inherit;color:inherit;margin:0;padding:0}button,select{text-transform:none}button,[type=button],[type=reset],[type=submit]{-webkit-appearance:button;background-color:transparent;background-image:none}:-moz-focusring{outline:auto}:-moz-ui-invalid{box-shadow:none}progress{vertical-align:baseline}::-webkit-inner-spin-button,::-webkit-outer-spin-button{height:auto}[type=search]{-webkit-appearance:textfield;outline-offset:-2px}::-webkit-search-decoration{-webkit-appearance:none}::-webkit-file-upload-button{-webkit-appearance:button;font:inherit}summary{display:list-item}blockquote,dl,dd,h1,h2,h3,h4,h5,h6,hr,figure,p,pre{margin:0}fieldset{margin:0;padding:0}legend{padding:0}ol,ul,menu{list-style:none;margin:0;padding:0}textarea{resize:vertical}input::-moz-placeholder,textarea::-moz-placeholder{opacity:1;color:#9ca3af}input::placeholder,textarea::placeholder{opacity:1;color:#9ca3af}button,[role=button]{cursor:pointer}:disabled{cursor:default}img,svg,video,canvas,audio,iframe,embed,object{display:block;vertical-align:middle}img,video{max-width:100%;height:auto}[hidden]{display:none}*,:before,:after{--tw-border-spacing-x: 0;--tw-border-spacing-y: 0;--tw-translate-x: 0;--tw-translate-y: 0;--tw-rotate: 
0;--tw-skew-x: 0;--tw-skew-y: 0;--tw-scale-x: 1;--tw-scale-y: 1;--tw-pan-x: ;--tw-pan-y: ;--tw-pinch-zoom: ;--tw-scroll-snap-strictness: proximity;--tw-ordinal: ;--tw-slashed-zero: ;--tw-numeric-figure: ;--tw-numeric-spacing: ;--tw-numeric-fraction: ;--tw-ring-inset: ;--tw-ring-offset-width: 0px;--tw-ring-offset-color: #fff;--tw-ring-color: rgb(59 130 246 / .5);--tw-ring-offset-shadow: 0 0 #0000;--tw-ring-shadow: 0 0 #0000;--tw-shadow: 0 0 #0000;--tw-shadow-colored: 0 0 #0000;--tw-blur: ;--tw-brightness: ;--tw-contrast: ;--tw-grayscale: ;--tw-hue-rotate: ;--tw-invert: ;--tw-saturate: ;--tw-sepia: ;--tw-drop-shadow: ;--tw-backdrop-blur: ;--tw-backdrop-brightness: ;--tw-backdrop-contrast: ;--tw-backdrop-grayscale: ;--tw-backdrop-hue-rotate: ;--tw-backdrop-invert: ;--tw-backdrop-opacity: ;--tw-backdrop-saturate: ;--tw-backdrop-sepia: }::backdrop{--tw-border-spacing-x: 0;--tw-border-spacing-y: 0;--tw-translate-x: 0;--tw-translate-y: 0;--tw-rotate: 0;--tw-skew-x: 0;--tw-skew-y: 0;--tw-scale-x: 1;--tw-scale-y: 1;--tw-pan-x: ;--tw-pan-y: ;--tw-pinch-zoom: ;--tw-scroll-snap-strictness: proximity;--tw-ordinal: ;--tw-slashed-zero: ;--tw-numeric-figure: ;--tw-numeric-spacing: ;--tw-numeric-fraction: ;--tw-ring-inset: ;--tw-ring-offset-width: 0px;--tw-ring-offset-color: #fff;--tw-ring-color: rgb(59 130 246 / .5);--tw-ring-offset-shadow: 0 0 #0000;--tw-ring-shadow: 0 0 #0000;--tw-shadow: 0 0 #0000;--tw-shadow-colored: 0 0 #0000;--tw-blur: ;--tw-brightness: ;--tw-contrast: ;--tw-grayscale: ;--tw-hue-rotate: ;--tw-invert: ;--tw-saturate: ;--tw-sepia: ;--tw-drop-shadow: ;--tw-backdrop-blur: ;--tw-backdrop-brightness: ;--tw-backdrop-contrast: ;--tw-backdrop-grayscale: ;--tw-backdrop-hue-rotate: ;--tw-backdrop-invert: ;--tw-backdrop-opacity: ;--tw-backdrop-saturate: ;--tw-backdrop-sepia: }.container{width:100%}@media (min-width: 640px){.container{max-width:640px}}@media (min-width: 768px){.container{max-width:768px}}@media (min-width: 1024px){.container{max-width:1024px}}@media (min-width: 1280px){.container{max-width:1280px}}@media (min-width: 1536px){.container{max-width:1536px}}.block{display:block}.flex{display:flex}.table{display:table}.hidden{display:none}.h-full{height:100%}.min-h-screen{min-height:100vh}.w-2\/3{width:66.666667%}.w-full{width:100%}.cursor-not-allowed{cursor:not-allowed}.cursor-pointer{cursor:pointer}.cursor-wait{cursor:wait}.flex-col{flex-direction:column}.items-center{align-items:center}.justify-center{justify-content:center}.space-y-12>:not([hidden])~:not([hidden]){--tw-space-y-reverse: 0;margin-top:calc(3rem * calc(1 - var(--tw-space-y-reverse)));margin-bottom:calc(3rem * var(--tw-space-y-reverse))}.overflow-auto{overflow:auto}.whitespace-pre-wrap{white-space:pre-wrap}.border-4{border-width:4px}.border-yellow-200{--tw-border-opacity: 1;border-color:rgb(254 240 138 / var(--tw-border-opacity))}.bg-yellow-200{--tw-bg-opacity: 1;background-color:rgb(254 240 138 / var(--tw-bg-opacity))}.bg-yellow-500{--tw-bg-opacity: 1;background-color:rgb(234 179 8 / var(--tw-bg-opacity))}.p-6{padding:1.5rem}.py-24{padding-top:6rem;padding-bottom:6rem}.py-6{padding-top:1.5rem;padding-bottom:1.5rem}.text-center{text-align:center}.text-6xl{font-size:3.75rem;line-height:1}.text-xl{font-size:1.25rem;line-height:1.75rem}.opacity-50{opacity:.5}.filter{filter:var(--tw-blur) var(--tw-brightness) var(--tw-contrast) var(--tw-grayscale) var(--tw-hue-rotate) var(--tw-invert) var(--tw-saturate) var(--tw-sepia) 
var(--tw-drop-shadow)}*,*:before,*:after{box-sizing:inherit;-webkit-user-select:inherit;-moz-user-select:inherit;user-select:inherit}html,body,#root{box-sizing:border-box;height:100%;min-height:100vh;width:100%;min-width:100vw;margin:0;padding:0;-webkit-user-select:none;-moz-user-select:none;user-select:none}input::-webkit-file-upload-button{display:none}@media (min-width: 1024px){.lg\:w-1\/3{width:33.333333%}} diff --git a/spaces/mikeee/wizardlm-1.0-uncensored-llama2-13b-ggmlv3/run-app.sh b/spaces/mikeee/wizardlm-1.0-uncensored-llama2-13b-ggmlv3/run-app.sh deleted file mode 100644 index 626c9eaf89c208f301d460ae020c8c262f251280..0000000000000000000000000000000000000000 --- a/spaces/mikeee/wizardlm-1.0-uncensored-llama2-13b-ggmlv3/run-app.sh +++ /dev/null @@ -1,2 +0,0 @@ -export GRADIO_SERVER_NAME=0.0.0.0 -nodemon -w app.py -x python app.py diff --git a/spaces/mira-causality/counterfactuals/README.md b/spaces/mira-causality/counterfactuals/README.md deleted file mode 100644 index 87ac9605fc35ee835e74e5dd9529fa46c4e1053d..0000000000000000000000000000000000000000 --- a/spaces/mira-causality/counterfactuals/README.md +++ /dev/null @@ -1,30 +0,0 @@ ---- -title: Counterfactuals -emoji: 🌖 -colorFrom: purple -colorTo: green -sdk: gradio -sdk_version: 3.27.0 -app_file: app.py -pinned: false -license: mit -duplicated_from: fabio-deep/counterfactuals ---- - -Code for the **ICML 2023** paper: - -[**High Fidelity Image Counterfactuals with Probabilistic Causal Models**](https://arxiv.org/abs/2306.15764) - -Fabio De Sousa Ribeiro1, Tian Xia1, Miguel Monteiro1, Nick Pawlowski2, Ben Glocker1\ -1Imperial College London, 2Microsoft Research Cambridge, UK - -``` -@misc{ribeiro2023high, - title={High Fidelity Image Counterfactuals with Probabilistic Causal Models}, - author={Fabio De Sousa Ribeiro and Tian Xia and Miguel Monteiro and Nick Pawlowski and Ben Glocker}, - year={2023}, - eprint={2306.15764}, - archivePrefix={arXiv}, - primaryClass={cs.LG} -} -``` \ No newline at end of file diff --git a/spaces/mishig/phind-wizardcoder-playground/README.md b/spaces/mishig/phind-wizardcoder-playground/README.md deleted file mode 100644 index 0186292ee85fb6ca37743c45368f28ef3abb7c51..0000000000000000000000000000000000000000 --- a/spaces/mishig/phind-wizardcoder-playground/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Phind VS WizardCoder - Playground -emoji: 💻⚔️💻 -colorFrom: blue -colorTo: indigo -sdk: gradio -sdk_version: 3.28.3 -app_file: app.py -pinned: false -duplicated_from: codellama/codellama-playground ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/mshukor/UnIVAL/fairseq/examples/multilingual/data_scripts/remove_valid_test_in_train.py b/spaces/mshukor/UnIVAL/fairseq/examples/multilingual/data_scripts/remove_valid_test_in_train.py deleted file mode 100644 index ef618adef7c7d010f8de38fb5ebeb5a35d2d3cac..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/examples/multilingual/data_scripts/remove_valid_test_in_train.py +++ /dev/null @@ -1,290 +0,0 @@ -import os, sys -import glob, itertools -import pandas as pd - -WORKDIR_ROOT = os.environ.get('WORKDIR_ROOT', None) - -if WORKDIR_ROOT is None or not WORKDIR_ROOT.strip(): - print('please specify your working directory root in OS environment variable WORKDIR_ROOT. 
Exitting..."') - sys.exit(-1) - - -def load_langs(path): - with open(path) as fr: - langs = [l.strip() for l in fr] - return langs - - - -def load_sentences(raw_data, split, direction): - src, tgt = direction.split('-') - src_path = f"{raw_data}/{split}.{direction}.{src}" - tgt_path = f"{raw_data}/{split}.{direction}.{tgt}" - if os.path.exists(src_path) and os.path.exists(tgt_path): - return [(src, open(src_path).read().splitlines()), (tgt, open(tgt_path).read().splitlines())] - else: - return [] - -def swap_direction(d): - src, tgt = d.split('-') - return f'{tgt}-{src}' - -def get_all_test_data(raw_data, directions, split='test'): - test_data = [ - x - for dd in directions - for d in [dd, swap_direction(dd)] - for x in load_sentences(raw_data, split, d) - ] - # all_test_data = {s for _, d in test_data for s in d} - all_test_data = {} - for lang, d in test_data: - for s in d: - s = s.strip() - lgs = all_test_data.get(s, set()) - lgs.add(lang) - all_test_data[s] = lgs - return all_test_data, test_data - -def check_train_sentences(raw_data, direction, all_test_data, mess_up_train={}): - src, tgt = direction.split('-') - tgt_path = f"{raw_data}/train.{direction}.{tgt}" - src_path = f"{raw_data}/train.{direction}.{src}" - print(f'check training data in {raw_data}/train.{direction}') - size = 0 - if not os.path.exists(tgt_path) or not os.path.exists(src_path): - return mess_up_train, size - with open(src_path) as f, open(tgt_path) as g: - for src_line, tgt_line in zip(f, g): - s = src_line.strip() - t = tgt_line.strip() - size += 1 - if s in all_test_data: - langs = mess_up_train.get(s, set()) - langs.add(direction) - mess_up_train[s] = langs - if t in all_test_data: - langs = mess_up_train.get(t, set()) - langs.add(direction) - mess_up_train[t] = langs - return mess_up_train, size - -def check_train_all(raw_data, directions, all_test_data): - mess_up_train = {} - data_sizes = {} - for direction in directions: - _, size = check_train_sentences(raw_data, direction, all_test_data, mess_up_train) - data_sizes[direction] = size - return mess_up_train, data_sizes - -def count_train_in_other_set(mess_up_train): - train_in_others = [(direction, s) for s, directions in mess_up_train.items() for direction in directions] - counts = {} - for direction, s in train_in_others: - counts[direction] = counts.get(direction, 0) + 1 - return counts - -def train_size_if_remove_in_otherset(data_sizes, mess_up_train): - counts_in_other = count_train_in_other_set(mess_up_train) - remain_sizes = [] - for direction, count in counts_in_other.items(): - remain_sizes.append((direction, data_sizes[direction] - count, data_sizes[direction], count, 100 * count / data_sizes[direction] )) - return remain_sizes - - -def remove_messed_up_sentences(raw_data, direction, mess_up_train, mess_up_train_pairs, corrected_langs): - split = 'train' - src_lang, tgt_lang = direction.split('-') - - tgt = f"{raw_data}/{split}.{direction}.{tgt_lang}" - src = f"{raw_data}/{split}.{direction}.{src_lang}" - print(f'working on {direction}: ', src, tgt) - if not os.path.exists(tgt) or not os.path.exists(src) : - return - - corrected_tgt = f"{to_folder}/{split}.{direction}.{tgt_lang}" - corrected_src = f"{to_folder}/{split}.{direction}.{src_lang}" - line_num = 0 - keep_num = 0 - with open(src, encoding='utf8',) as fsrc, \ - open(tgt, encoding='utf8',) as ftgt, \ - open(corrected_src, 'w', encoding='utf8') as fsrc_corrected, \ - open(corrected_tgt, 'w', encoding='utf8') as ftgt_corrected: - for s, t in zip(fsrc, ftgt): - s = s.strip() - t = t.strip() 
- if t not in mess_up_train \ - and s not in mess_up_train \ - and (s, t) not in mess_up_train_pairs \ - and (t, s) not in mess_up_train_pairs: - corrected_langs.add(direction) - print(s, file=fsrc_corrected) - print(t, file=ftgt_corrected) - keep_num += 1 - line_num += 1 - if line_num % 1000 == 0: - print(f'completed {line_num} lines', end='\r') - return line_num, keep_num - -########## - - -def merge_valid_test_messup(mess_up_train_valid, mess_up_train_test): - merged_mess = [] - for s in set(list(mess_up_train_valid.keys()) + list(mess_up_train_test.keys())): - if not s: - continue - valid = mess_up_train_valid.get(s, set()) - test = mess_up_train_test.get(s, set()) - merged_mess.append((s, valid | test)) - return dict(merged_mess) - - - -######### -def check_train_pairs(raw_data, direction, all_test_data, mess_up_train={}): - src, tgt = direction.split('-') - #a hack; TODO: check the reversed directions - path1 = f"{raw_data}/train.{src}-{tgt}.{src}" - path2 = f"{raw_data}/train.{src}-{tgt}.{tgt}" - if not os.path.exists(path1) or not os.path.exists(path2) : - return - - with open(path1) as f1, open(path2) as f2: - for src_line, tgt_line in zip(f1, f2): - s = src_line.strip() - t = tgt_line.strip() - if (s, t) in all_test_data or (t, s) in all_test_data: - langs = mess_up_train.get( (s, t), set()) - langs.add(src) - langs.add(tgt) - mess_up_train[(s, t)] = langs - - -def load_pairs(raw_data, split, direction): - src, tgt = direction.split('-') - src_f = f"{raw_data}/{split}.{direction}.{src}" - tgt_f = f"{raw_data}/{split}.{direction}.{tgt}" - if tgt != 'en_XX': - src_f, tgt_f = tgt_f, src_f - if os.path.exists(src_f) and os.path.exists(tgt_f): - return list(zip(open(src_f).read().splitlines(), - open(tgt_f).read().splitlines(), - )) - else: - return [] - -# skip_langs = ['cs_CZ', 'en_XX', 'tl_XX', 'tr_TR'] -def get_messed_up_test_pairs(split, directions): - test_pairs = [ - (d, load_pairs(raw_data, split, d)) - for d in directions - ] - # all_test_data = {s for _, d in test_data for s in d} - all_test_pairs = {} - for direction, d in test_pairs: - src, tgt = direction.split('-') - for s in d: - langs = all_test_pairs.get(s, set()) - langs.add(src) - langs.add(tgt) - all_test_pairs[s] = langs - mess_up_train_pairs = {} - for direction in directions: - check_train_pairs(raw_data, direction, all_test_pairs, mess_up_train_pairs) - return all_test_pairs, mess_up_train_pairs - - - -if __name__ == "__main__": - ####### - import argparse - parser = argparse.ArgumentParser() - parser.add_argument( - '--from-folder', - required=True, - type=str) - parser.add_argument( - '--to-folder', - required=True, - type=str) - parser.add_argument( - '--directions', - default=None, - type=str) - - - args = parser.parse_args() - raw_data = args.from_folder - to_folder = args.to_folder - os.makedirs(to_folder, exist_ok=True) - - if args.directions: - directions = args.directions.split(',') - else: - raw_files = itertools.chain( - glob.glob(f'{raw_data}/train*'), - glob.glob(f'{raw_data}/valid*'), - glob.glob(f'{raw_data}/test*'), - ) - directions = [os.path.split(file_path)[-1].split('.')[1] for file_path in raw_files] - print('working on directions: ', directions) - - ########## - - - - all_test_data, test_data = get_all_test_data(raw_data, directions, 'test') - print('==loaded test data==') - all_valid_data, valid_data = get_all_test_data(raw_data, directions, 'valid') - print('==loaded valid data==') - all_valid_test_data = merge_valid_test_messup(all_test_data, all_valid_data) - mess_up_train, data_sizes 
= check_train_all(raw_data, directions, all_valid_test_data) - print('training messing up with valid, test data:', len(mess_up_train)) - data_situation = train_size_if_remove_in_otherset(data_sizes, mess_up_train) - df = pd.DataFrame(data_situation, columns=['direction', 'train_size_after_remove', 'orig_size', 'num_to_remove', 'remove_percent']) - df.sort_values('remove_percent', ascending=False) - df.to_csv(f'{raw_data}/clean_summary.tsv', sep='\t') - print(f'projected data clean summary in: {raw_data}/clean_summary.tsv') - - # correct the dataset: - all_test_pairs, mess_up_test_train_pairs = get_messed_up_test_pairs('test', directions) - all_valid_pairs, mess_up_valid_train_pairs = get_messed_up_test_pairs('valid', directions) - - all_messed_pairs = set(mess_up_test_train_pairs.keys()).union(set(mess_up_valid_train_pairs.keys())) - corrected_directions = set() - - real_data_situation = [] - for direction in directions: - org_size, new_size = remove_messed_up_sentences(raw_data, direction, mess_up_train, all_messed_pairs, corrected_directions) - if org_size == 0: - print(f"{direction} has size 0") - continue - real_data_situation.append( - (direction, new_size, org_size, org_size - new_size, (org_size - new_size) / org_size * 100) - ) - print('corrected directions: ', corrected_directions) - df = pd.DataFrame(real_data_situation, columns=['direction', 'train_size_after_remove', 'orig_size', 'num_to_remove', 'remove_percent']) - df.sort_values('remove_percent', ascending=False) - df.to_csv(f'{raw_data}/actual_clean_summary.tsv', sep='\t') - print(f'actual data clean summary (which can be different from the projected one because of duplications) in: {raw_data}/actual_clean_summary.tsv') - - import shutil - for direction in directions: - src_lang, tgt_lang = direction.split('-') - for split in ['train', 'valid', 'test']: - # copying valid, test and uncorrected train - if direction in corrected_directions and split == 'train': - continue - tgt = f"{raw_data}/{split}.{direction}.{tgt_lang}" - src = f"{raw_data}/{split}.{direction}.{src_lang}" - if not (os.path.exists(src) and os.path.exists(tgt)): - continue - corrected_tgt = f"{to_folder}/{split}.{direction}.{tgt_lang}" - corrected_src = f"{to_folder}/{split}.{direction}.{src_lang}" - print(f'copying {src} to {corrected_src}') - shutil.copyfile(src, corrected_src) - print(f'copying {tgt} to {corrected_tgt}') - shutil.copyfile(tgt, corrected_tgt) - - print('completed') \ No newline at end of file diff --git a/spaces/mshukor/UnIVAL/fairseq/examples/wmt19/README.md b/spaces/mshukor/UnIVAL/fairseq/examples/wmt19/README.md deleted file mode 100644 index 5c90d0e6c4ae8d043ca622e70c5828dca6f9c2f2..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/examples/wmt19/README.md +++ /dev/null @@ -1,85 +0,0 @@ -# WMT 19 - -This page provides pointers to the models of Facebook-FAIR's WMT'19 news translation task submission [(Ng et al., 2019)](https://arxiv.org/abs/1907.06616). 
-
-## Pre-trained models
-
-Model | Description | Download
----|---|---
-`transformer.wmt19.en-de` | En->De Ensemble | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt19.en-de.joined-dict.ensemble.tar.gz)
-`transformer.wmt19.de-en` | De->En Ensemble | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt19.de-en.joined-dict.ensemble.tar.gz)
-`transformer.wmt19.en-ru` | En->Ru Ensemble | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt19.en-ru.ensemble.tar.gz)
-`transformer.wmt19.ru-en` | Ru->En Ensemble | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt19.ru-en.ensemble.tar.gz)
-`transformer_lm.wmt19.en` | En Language Model | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/lm/wmt19.en.tar.gz)
-`transformer_lm.wmt19.de` | De Language Model | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/lm/wmt19.de.tar.gz)
-`transformer_lm.wmt19.ru` | Ru Language Model | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/lm/wmt19.ru.tar.gz)
-
-## Pre-trained single models before finetuning
-
-Model | Description | Download
----|---|---
-`transformer.wmt19.en-de` | En->De Single, no finetuning | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt19.en-de.ffn8192.tar.gz)
-`transformer.wmt19.de-en` | De->En Single, no finetuning | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt19.de-en.ffn8192.tar.gz)
-`transformer.wmt19.en-ru` | En->Ru Single, no finetuning | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt19.en-ru.ffn8192.tar.gz)
-`transformer.wmt19.ru-en` | Ru->En Single, no finetuning | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt19.ru-en.ffn8192.tar.gz)
-
-## Example usage (torch.hub)
-
-#### Requirements
-
-We require a few additional Python dependencies for preprocessing:
-```bash
-pip install fastBPE sacremoses
-```
-
-#### Translation
-
-```python
-import torch
-
-# English to German translation
-en2de = torch.hub.load('pytorch/fairseq', 'transformer.wmt19.en-de', checkpoint_file='model1.pt:model2.pt:model3.pt:model4.pt',
-  tokenizer='moses', bpe='fastbpe')
-en2de.translate("Machine learning is great!") # 'Maschinelles Lernen ist großartig!'
-
-# German to English translation
-de2en = torch.hub.load('pytorch/fairseq', 'transformer.wmt19.de-en', checkpoint_file='model1.pt:model2.pt:model3.pt:model4.pt',
-  tokenizer='moses', bpe='fastbpe')
-de2en.translate("Maschinelles Lernen ist großartig!") # 'Machine learning is great!'
-
-# English to Russian translation
-en2ru = torch.hub.load('pytorch/fairseq', 'transformer.wmt19.en-ru', checkpoint_file='model1.pt:model2.pt:model3.pt:model4.pt',
-  tokenizer='moses', bpe='fastbpe')
-en2ru.translate("Machine learning is great!") # 'Машинное обучение - это здорово!'
-
-# Russian to English translation
-ru2en = torch.hub.load('pytorch/fairseq', 'transformer.wmt19.ru-en', checkpoint_file='model1.pt:model2.pt:model3.pt:model4.pt',
-  tokenizer='moses', bpe='fastbpe')
-ru2en.translate("Машинное обучение - это здорово!") # 'Machine learning is great!'
-```
-
-#### Language Modeling
-
-```python
-# Sample from the English LM
-en_lm = torch.hub.load('pytorch/fairseq', 'transformer_lm.wmt19.en', tokenizer='moses', bpe='fastbpe')
-en_lm.sample("Machine learning is") # 'Machine learning is the future of computing, says Microsoft boss Satya Nadella ...'
-
-# Sample from the German LM
-de_lm = torch.hub.load('pytorch/fairseq', 'transformer_lm.wmt19.de', tokenizer='moses', bpe='fastbpe')
-de_lm.sample("Maschinelles lernen ist") # 'Maschinelles lernen ist das A und O (neues-deutschland.de) Die Arbeitsbedingungen für Lehrerinnen und Lehrer sind seit Jahren verbesserungswürdig ...'
-
-# Sample from the Russian LM
-ru_lm = torch.hub.load('pytorch/fairseq', 'transformer_lm.wmt19.ru', tokenizer='moses', bpe='fastbpe')
-ru_lm.sample("машинное обучение это") # 'машинное обучение это то, что мы называем "искусственным интеллектом".'
-```
-
-## Citation
-```bibtex
-@inproceedings{ng2019facebook,
-  title = {Facebook FAIR's WMT19 News Translation Task Submission},
-  author = {Ng, Nathan and Yee, Kyra and Baevski, Alexei and Ott, Myle and Auli, Michael and Edunov, Sergey},
-  booktitle = {Proc. of WMT},
-  year = 2019,
-}
-```
diff --git a/spaces/msmilauer/AutoGPT-duplicated2/autogpt/logs.py deleted file mode 100644 index 35037404a98f7be9b7d577b625cc190ca27f4566..0000000000000000000000000000000000000000 --- a/spaces/msmilauer/AutoGPT-duplicated2/autogpt/logs.py +++ /dev/null @@ -1,332 +0,0 @@
-"""Logging module for Auto-GPT."""
-import json
-import logging
-import os
-import random
-import re
-import time
-import traceback
-from logging import LogRecord
-
-from colorama import Fore, Style
-
-from autogpt.config import Config, Singleton
-from autogpt.speech import say_text
-
-CFG = Config()
-
-
-class Logger(metaclass=Singleton):
-    """
-    Logger that handle titles in different colors.
-    Outputs logs in console, activity.log, and errors.log
-    For console handler: simulates typing
-    """
-
-    def __init__(self):
-        # create log directory if it doesn't exist
-        this_files_dir_path = os.path.dirname(__file__)
-        log_dir = os.path.join(this_files_dir_path, "../logs")
-        if not os.path.exists(log_dir):
-            os.makedirs(log_dir)
-
-        log_file = "activity.log"
-        error_file = "error.log"
-
-        console_formatter = AutoGptFormatter("%(title_color)s %(message)s")
-
-        # Create a handler for console which simulate typing
-        self.typing_console_handler = TypingConsoleHandler()
-        self.typing_console_handler.setLevel(logging.INFO)
-        self.typing_console_handler.setFormatter(console_formatter)
-
-        # Create a handler for console without typing simulation
-        self.console_handler = ConsoleHandler()
-        self.console_handler.setLevel(logging.DEBUG)
-        self.console_handler.setFormatter(console_formatter)
-
-        # Info handler in activity.log
-        self.file_handler = logging.FileHandler(
-            os.path.join(log_dir, log_file), "a", "utf-8"
-        )
-        self.file_handler.setLevel(logging.DEBUG)
-        info_formatter = AutoGptFormatter(
-            "%(asctime)s %(levelname)s %(title)s %(message_no_color)s"
-        )
-        self.file_handler.setFormatter(info_formatter)
-
-        # Error handler error.log
-        error_handler = logging.FileHandler(
-            os.path.join(log_dir, error_file), "a", "utf-8"
-        )
-        error_handler.setLevel(logging.ERROR)
-        error_formatter = AutoGptFormatter(
-            "%(asctime)s %(levelname)s %(module)s:%(funcName)s:%(lineno)d %(title)s"
-            " %(message_no_color)s"
-        )
-        error_handler.setFormatter(error_formatter)
-
-        self.typing_logger = logging.getLogger("TYPER")
-        self.typing_logger.addHandler(self.typing_console_handler)
-        self.typing_logger.addHandler(self.file_handler)
-        self.typing_logger.addHandler(error_handler)
-        self.typing_logger.setLevel(logging.DEBUG)
-
-        self.logger = logging.getLogger("LOGGER")
-        self.logger.addHandler(self.console_handler)
-
self.logger.addHandler(self.file_handler) - self.logger.addHandler(error_handler) - self.logger.setLevel(logging.DEBUG) - - def typewriter_log( - self, title="", title_color="", content="", speak_text=False, level=logging.INFO - ): - if speak_text and CFG.speak_mode: - say_text(f"{title}. {content}") - - if content: - if isinstance(content, list): - content = " ".join(content) - else: - content = "" - - self.typing_logger.log( - level, content, extra={"title": title, "color": title_color} - ) - - def debug( - self, - message, - title="", - title_color="", - ): - self._log(title, title_color, message, logging.DEBUG) - - def warn( - self, - message, - title="", - title_color="", - ): - self._log(title, title_color, message, logging.WARN) - - def error(self, title, message=""): - self._log(title, Fore.RED, message, logging.ERROR) - - def _log(self, title="", title_color="", message="", level=logging.INFO): - if message: - if isinstance(message, list): - message = " ".join(message) - self.logger.log(level, message, extra={"title": title, "color": title_color}) - - def set_level(self, level): - self.logger.setLevel(level) - self.typing_logger.setLevel(level) - - def double_check(self, additionalText=None): - if not additionalText: - additionalText = ( - "Please ensure you've setup and configured everything" - " correctly. Read https://github.com/Torantulino/Auto-GPT#readme to " - "double check. You can also create a github issue or join the discord" - " and ask there!" - ) - - self.typewriter_log("DOUBLE CHECK CONFIGURATION", Fore.YELLOW, additionalText) - - -""" -Output stream to console using simulated typing -""" - - -class TypingConsoleHandler(logging.StreamHandler): - def emit(self, record): - min_typing_speed = 0.05 - max_typing_speed = 0.01 - - msg = self.format(record) - try: - words = msg.split() - for i, word in enumerate(words): - print(word, end="", flush=True) - if i < len(words) - 1: - print(" ", end="", flush=True) - typing_speed = random.uniform(min_typing_speed, max_typing_speed) - time.sleep(typing_speed) - # type faster after each word - min_typing_speed = min_typing_speed * 0.95 - max_typing_speed = max_typing_speed * 0.95 - print() - except Exception: - self.handleError(record) - - -class ConsoleHandler(logging.StreamHandler): - def emit(self, record) -> None: - msg = self.format(record) - try: - print(msg) - except Exception: - self.handleError(record) - - -class AutoGptFormatter(logging.Formatter): - """ - Allows to handle custom placeholders 'title_color' and 'message_no_color'. - To use this formatter, make sure to pass 'color', 'title' as log extras. 
- """ - - def format(self, record: LogRecord) -> str: - if hasattr(record, "color"): - record.title_color = ( - getattr(record, "color") - + getattr(record, "title") - + " " - + Style.RESET_ALL - ) - else: - record.title_color = getattr(record, "title") - if hasattr(record, "msg"): - record.message_no_color = remove_color_codes(getattr(record, "msg")) - else: - record.message_no_color = "" - return super().format(record) - - -def remove_color_codes(s: str) -> str: - ansi_escape = re.compile(r"\x1B(?:[@-Z\\-_]|\[[0-?]*[ -/]*[@-~])") - return ansi_escape.sub("", s) - - -logger = Logger() - - -def print_assistant_thoughts(ai_name, assistant_reply): - """Prints the assistant's thoughts to the console""" - from autogpt.json_utils.json_fix_llm import ( - attempt_to_fix_json_by_finding_outermost_brackets, - fix_and_parse_json, - ) - - try: - try: - # Parse and print Assistant response - assistant_reply_json = fix_and_parse_json(assistant_reply) - except json.JSONDecodeError: - logger.error("Error: Invalid JSON in assistant thoughts\n", assistant_reply) - assistant_reply_json = attempt_to_fix_json_by_finding_outermost_brackets( - assistant_reply - ) - if isinstance(assistant_reply_json, str): - assistant_reply_json = fix_and_parse_json(assistant_reply_json) - - # Check if assistant_reply_json is a string and attempt to parse - # it into a JSON object - if isinstance(assistant_reply_json, str): - try: - assistant_reply_json = json.loads(assistant_reply_json) - except json.JSONDecodeError: - logger.error("Error: Invalid JSON\n", assistant_reply) - assistant_reply_json = ( - attempt_to_fix_json_by_finding_outermost_brackets( - assistant_reply_json - ) - ) - - assistant_thoughts_reasoning = None - assistant_thoughts_plan = None - assistant_thoughts_speak = None - assistant_thoughts_criticism = None - if not isinstance(assistant_reply_json, dict): - assistant_reply_json = {} - assistant_thoughts = assistant_reply_json.get("thoughts", {}) - assistant_thoughts_text = assistant_thoughts.get("text") - - if assistant_thoughts: - assistant_thoughts_reasoning = assistant_thoughts.get("reasoning") - assistant_thoughts_plan = assistant_thoughts.get("plan") - assistant_thoughts_criticism = assistant_thoughts.get("criticism") - assistant_thoughts_speak = assistant_thoughts.get("speak") - - logger.typewriter_log( - f"{ai_name.upper()} THOUGHTS:", Fore.YELLOW, f"{assistant_thoughts_text}" - ) - logger.typewriter_log( - "REASONING:", Fore.YELLOW, f"{assistant_thoughts_reasoning}" - ) - - if assistant_thoughts_plan: - logger.typewriter_log("PLAN:", Fore.YELLOW, "") - # If it's a list, join it into a string - if isinstance(assistant_thoughts_plan, list): - assistant_thoughts_plan = "\n".join(assistant_thoughts_plan) - elif isinstance(assistant_thoughts_plan, dict): - assistant_thoughts_plan = str(assistant_thoughts_plan) - - # Split the input_string using the newline character and dashes - lines = assistant_thoughts_plan.split("\n") - for line in lines: - line = line.lstrip("- ") - logger.typewriter_log("- ", Fore.GREEN, line.strip()) - - logger.typewriter_log( - "CRITICISM:", Fore.YELLOW, f"{assistant_thoughts_criticism}" - ) - # Speak the assistant's thoughts - if CFG.speak_mode and assistant_thoughts_speak: - say_text(assistant_thoughts_speak) - else: - logger.typewriter_log("SPEAK:", Fore.YELLOW, f"{assistant_thoughts_speak}") - - return assistant_reply_json - except json.decoder.JSONDecodeError: - logger.error("Error: Invalid JSON\n", assistant_reply) - if CFG.speak_mode: - say_text( - "I have received an invalid 
JSON response from the OpenAI API." - " I cannot ignore this response." - ) - - # All other errors, return "Error: + error message" - except Exception: - call_stack = traceback.format_exc() - logger.error("Error: \n", call_stack) - - -def print_assistant_thoughts( - ai_name: object, assistant_reply_json_valid: object -) -> None: - assistant_thoughts_reasoning = None - assistant_thoughts_plan = None - assistant_thoughts_speak = None - assistant_thoughts_criticism = None - - assistant_thoughts = assistant_reply_json_valid.get("thoughts", {}) - assistant_thoughts_text = assistant_thoughts.get("text") - if assistant_thoughts: - assistant_thoughts_reasoning = assistant_thoughts.get("reasoning") - assistant_thoughts_plan = assistant_thoughts.get("plan") - assistant_thoughts_criticism = assistant_thoughts.get("criticism") - assistant_thoughts_speak = assistant_thoughts.get("speak") - logger.typewriter_log( - f"{ai_name.upper()} THOUGHTS:", Fore.YELLOW, f"{assistant_thoughts_text}" - ) - logger.typewriter_log("REASONING:", Fore.YELLOW, f"{assistant_thoughts_reasoning}") - if assistant_thoughts_plan: - logger.typewriter_log("PLAN:", Fore.YELLOW, "") - # If it's a list, join it into a string - if isinstance(assistant_thoughts_plan, list): - assistant_thoughts_plan = "\n".join(assistant_thoughts_plan) - elif isinstance(assistant_thoughts_plan, dict): - assistant_thoughts_plan = str(assistant_thoughts_plan) - - # Split the input_string using the newline character and dashes - lines = assistant_thoughts_plan.split("\n") - for line in lines: - line = line.lstrip("- ") - logger.typewriter_log("- ", Fore.GREEN, line.strip()) - logger.typewriter_log("CRITICISM:", Fore.YELLOW, f"{assistant_thoughts_criticism}") - # Speak the assistant's thoughts - if CFG.speak_mode and assistant_thoughts_speak: - say_text(assistant_thoughts_speak) diff --git a/spaces/msmilauer/AutoGPT-duplicated2/autogpt/speech/say.py b/spaces/msmilauer/AutoGPT-duplicated2/autogpt/speech/say.py deleted file mode 100644 index 727983d12bf334205550a54bcd69a7a36824eda4..0000000000000000000000000000000000000000 --- a/spaces/msmilauer/AutoGPT-duplicated2/autogpt/speech/say.py +++ /dev/null @@ -1,41 +0,0 @@ -""" Text to speech module """ -import threading -from threading import Semaphore - -from autogpt.config import Config -from autogpt.speech.brian import BrianSpeech -from autogpt.speech.eleven_labs import ElevenLabsSpeech -from autogpt.speech.gtts import GTTSVoice -from autogpt.speech.macos_tts import MacOSTTS - -CFG = Config() -DEFAULT_VOICE_ENGINE = GTTSVoice() -VOICE_ENGINE = None -if CFG.elevenlabs_api_key: - VOICE_ENGINE = ElevenLabsSpeech() -elif CFG.use_mac_os_tts == "True": - VOICE_ENGINE = MacOSTTS() -elif CFG.use_brian_tts == "True": - VOICE_ENGINE = BrianSpeech() -else: - VOICE_ENGINE = GTTSVoice() - - -QUEUE_SEMAPHORE = Semaphore( - 1 -) # The amount of sounds to queue before blocking the main thread - - -def say_text(text: str, voice_index: int = 0) -> None: - """Speak the given text using the given voice index""" - - def speak() -> None: - success = VOICE_ENGINE.say(text, voice_index) - if not success: - DEFAULT_VOICE_ENGINE.say(text) - - QUEUE_SEMAPHORE.release() - - QUEUE_SEMAPHORE.acquire(True) - thread = threading.Thread(target=speak) - thread.start() diff --git a/spaces/mueller-franzes/medfusion-app/medical_diffusion/models/utils/attention_blocks.py b/spaces/mueller-franzes/medfusion-app/medical_diffusion/models/utils/attention_blocks.py deleted file mode 100644 index 
b609017118cf875bf31cc4c5302ecd4343e47e41..0000000000000000000000000000000000000000 --- a/spaces/mueller-franzes/medfusion-app/medical_diffusion/models/utils/attention_blocks.py +++ /dev/null @@ -1,335 +0,0 @@ -import torch.nn.functional as F -import torch.nn as nn -import torch - -from monai.networks.blocks import TransformerBlock -from monai.networks.layers.utils import get_norm_layer, get_dropout_layer -from monai.networks.layers.factories import Conv -from einops import rearrange - - -class GEGLU(nn.Module): - def __init__(self, in_channels, out_channels): - super().__init__() - self.norm = nn.LayerNorm(in_channels) - self.proj = nn.Linear(in_channels, out_channels*2, bias=True) - - def forward(self, x): - # x expected to be [B, C, *] - # Workaround as layer norm can't currently be applied on arbitrary dimension: https://github.com/pytorch/pytorch/issues/71465 - b, c, *spatial = x.shape - x = x.reshape(b, c, -1).transpose(1, 2) # -> [B, C, N] -> [B, N, C] - x = self.norm(x) - x, gate = self.proj(x).chunk(2, dim=-1) - x = x * F.gelu(gate) - return x.transpose(1, 2).reshape(b, -1, *spatial) # -> [B, C, N] -> [B, C, *] - -def zero_module(module): - """ - Zero out the parameters of a module and return it. - """ - for p in module.parameters(): - p.detach().zero_() - return module - -def compute_attention(q,k,v , num_heads, scale): - q, k, v = map(lambda t: rearrange(t, 'b (h d) n -> (b h) d n', h=num_heads), (q, k, v)) # [(BxHeads), Dim_per_head, N] - - attn = (torch.einsum('b d i, b d j -> b i j', q*scale, k*scale)).softmax(dim=-1) # Matrix product = [(BxHeads), Dim_per_head, N] * [(BxHeads), Dim_per_head, N'] =[(BxHeads), N, N'] - - out = torch.einsum('b i j, b d j-> b d i', attn, v) # Matrix product: [(BxHeads), N, N'] * [(BxHeads), Dim_per_head, N'] = [(BxHeads), Dim_per_head, N] - out = rearrange(out, '(b h) d n-> b (h d) n', h=num_heads) # -> [B, (Heads x Dim_per_head), N] - - return out - - -class LinearTransformerNd(nn.Module): - """ Combines multi-head self-attention and multi-head cross-attention. - - Multi-Head Self-Attention: - Similar to multi-head self-attention (https://arxiv.org/abs/1706.03762) without Norm+MLP (compare Monai TransformerBlock) - Proposed here: https://github.com/hojonathanho/diffusion/blob/1e0dceb3b3495bbe19116a5e1b3596cd0706c543/diffusion_tf/models/unet.py#L66. 
- Similar to: https://github.com/CompVis/stable-diffusion/blob/69ae4b35e0a0f6ee1af8bb9a5d0016ccb27e36dc/ldm/modules/diffusionmodules/openaimodel.py#L278 - Similar to: https://github.com/CompVis/stable-diffusion/blob/69ae4b35e0a0f6ee1af8bb9a5d0016ccb27e36dc/ldm/modules/attention.py#L80 - Similar to: https://github.com/lucidrains/denoising-diffusion-pytorch/blob/dfbafee555bdae80b55d63a989073836bbfc257e/denoising_diffusion_pytorch/denoising_diffusion_pytorch.py#L209 - Similar to: https://github.com/CompVis/stable-diffusion/blob/21f890f9da3cfbeaba8e2ac3c425ee9e998d5229/ldm/modules/diffusionmodules/model.py#L150 - - CrossAttention: - Proposed here: https://github.com/CompVis/stable-diffusion/blob/69ae4b35e0a0f6ee1af8bb9a5d0016ccb27e36dc/ldm/modules/attention.py#L152 - - """ - def __init__( - self, - spatial_dims, - in_channels, - out_channels, # WARNING: if out_channels != in_channels, skip connection is disabled - num_heads=8, - ch_per_head=32, # rule of thumb: 32 or 64 channels per head (see stable-diffusion / diffusion models beat GANs) - norm_name=("GROUP", {'num_groups':32, "affine": True}), # Or use LayerNorm but be aware of https://github.com/pytorch/pytorch/issues/71465 (=> GroupNorm with num_groups=1) - dropout=None, - emb_dim=None, - ): - super().__init__() - hid_channels = num_heads*ch_per_head - self.num_heads = num_heads - self.scale = ch_per_head**-0.25 # Should be 1/sqrt("queries and keys of dimension"), Note: additional sqrt needed as it follows OpenAI: (q * scale) * (k * scale) instead of (q *k) * scale - - self.norm_x = get_norm_layer(norm_name, spatial_dims=spatial_dims, channels=in_channels) - emb_dim = in_channels if emb_dim is None else emb_dim - - Convolution = Conv["conv", spatial_dims] - self.to_q = Convolution(in_channels, hid_channels, 1) - self.to_k = Convolution(emb_dim, hid_channels, 1) - self.to_v = Convolution(emb_dim, hid_channels, 1) - - self.to_out = nn.Sequential( - zero_module(Convolution(hid_channels, out_channels, 1)), - nn.Identity() if dropout is None else get_dropout_layer(name=dropout, dropout_dim=spatial_dims) - ) - - def forward(self, x, embedding=None): - # x expected to be [B, C, *] and embedding is None or [B, C*] or [B, C*, *] - # if no embedding is given, cross-attention defaults to self-attention - - # Normalize - b, c, *spatial = x.shape - x_n = self.norm_x(x) - - # Attention: embedding (cross-attention) or x (self-attention) - if embedding is None: - embedding = x_n # WARNING: This assumes that emb_dim==in_channels - else: - if embedding.ndim == 2: - embedding = embedding.reshape(*embedding.shape[:2], *[1]*(x.ndim-2)) # [B, C*] -> [B, C*, *] - # Why no normalization for embedding here? 
- - # Convolution - q = self.to_q(x_n) # -> [B, (Heads x Dim_per_head), *] - k = self.to_k(embedding) # -> [B, (Heads x Dim_per_head), *] - v = self.to_v(embedding) # -> [B, (Heads x Dim_per_head), *] - - # Flatten - q = q.reshape(b, c, -1) # -> [B, (Heads x Dim_per_head), N] - k = k.reshape(*embedding.shape[:2], -1) # -> [B, (Heads x Dim_per_head), N'] - v = v.reshape(*embedding.shape[:2], -1) # -> [B, (Heads x Dim_per_head), N'] - - # Apply attention - out = compute_attention(q, k, v, self.num_heads, self.scale) - - out = out.reshape(*out.shape[:2], *spatial) # -> [B, (Heads x Dim_per_head), *] - out = self.to_out(out) # -> [B, C', *] - - - if x.shape == out.shape: - out = x + out - return out # [B, C', *] - - -class LinearTransformer(nn.Module): - """ See LinearTransformer, however this implementation is fixed to Conv1d/Linear""" - def __init__( - self, - spatial_dims, - in_channels, - out_channels, # WARNING: if out_channels != in_channels, skip connection is disabled - num_heads, - ch_per_head=32, # rule of thumb: 32 or 64 channels per head (see stable-diffusion / diffusion models beat GANs) - norm_name=("GROUP", {'num_groups':32, "affine": True}), - dropout=None, - emb_dim=None - ): - super().__init__() - hid_channels = num_heads*ch_per_head - self.num_heads = num_heads - self.scale = ch_per_head**-0.25 # Should be 1/sqrt("queries and keys of dimension"), Note: additional sqrt needed as it follows OpenAI: (q * scale) * (k * scale) instead of (q *k) * scale - - self.norm_x = get_norm_layer(norm_name, spatial_dims=spatial_dims, channels=in_channels) - emb_dim = in_channels if emb_dim is None else emb_dim - - # Note: Conv1d and Linear are interchangeable but order of input changes [B, C, N] <-> [B, N, C] - self.to_q = nn.Conv1d(in_channels, hid_channels, 1) - self.to_k = nn.Conv1d(emb_dim, hid_channels, 1) - self.to_v = nn.Conv1d(emb_dim, hid_channels, 1) - # self.to_qkv = nn.Conv1d(emb_dim, hid_channels*3, 1) - - self.to_out = nn.Sequential( - zero_module(nn.Conv1d(hid_channels, out_channels, 1)), - nn.Identity() if dropout is None else get_dropout_layer(name=dropout, dropout_dim=spatial_dims) - ) - - def forward(self, x, embedding=None): - # x expected to be [B, C, *] and embedding is None or [B, C*] or [B, C*, *] - # if no embedding is given, cross-attention defaults to self-attention - - # Normalize - b, c, *spatial = x.shape - x_n = self.norm_x(x) - - # Attention: embedding (cross-attention) or x (self-attention) - if embedding is None: - embedding = x_n # WARNING: This assumes that emb_dim==in_channels - else: - if embedding.ndim == 2: - embedding = embedding.reshape(*embedding.shape[:2], *[1]*(x.ndim-2)) # [B, C*] -> [B, C*, *] - # Why no normalization for embedding here? 
- - # Flatten - x_n = x_n.reshape(b, c, -1) # [B, C, *] -> [B, C, N] - embedding = embedding.reshape(*embedding.shape[:2], -1) # [B, C*, *] -> [B, C*, N'] - - # Convolution - q = self.to_q(x_n) # -> [B, (Heads x Dim_per_head), N] - k = self.to_k(embedding) # -> [B, (Heads x Dim_per_head), N'] - v = self.to_v(embedding) # -> [B, (Heads x Dim_per_head), N'] - # qkv = self.to_qkv(x_n) - # q,k,v = qkv.split(qkv.shape[1]//3, dim=1) - - # Apply attention - out = compute_attention(q, k, v, self.num_heads, self.scale) - - out = self.to_out(out) # -> [B, C', N] - out = out.reshape(*out.shape[:2], *spatial) # -> [B, C', *] - - if x.shape == out.shape: - out = x + out - return out # [B, C', *] - - - - -class BasicTransformerBlock(nn.Module): - def __init__( - self, - spatial_dims, - in_channels, - out_channels, # WARNING: if out_channels != in_channels, skip connection is disabled - num_heads, - ch_per_head=32, - norm_name=("GROUP", {'num_groups':32, "affine": True}), - dropout=None, - emb_dim=None - ): - super().__init__() - self.self_atn = LinearTransformer(spatial_dims, in_channels, in_channels, num_heads, ch_per_head, norm_name, dropout, None) - if emb_dim is not None: - self.cros_atn = LinearTransformer(spatial_dims, in_channels, in_channels, num_heads, ch_per_head, norm_name, dropout, emb_dim) - self.proj_out = nn.Sequential( - GEGLU(in_channels, in_channels*4), - nn.Identity() if dropout is None else get_dropout_layer(name=dropout, dropout_dim=spatial_dims), - Conv["conv", spatial_dims](in_channels*4, out_channels, 1, bias=True) - ) - - - def forward(self, x, embedding=None): - # x expected to be [B, C, *] and embedding is None or [B, C*] or [B, C*, *] - x = self.self_atn(x) - if embedding is not None: - x = self.cros_atn(x, embedding=embedding) - out = self.proj_out(x) - if out.shape[1] == x.shape[1]: - return out + x - return x - -class SpatialTransformer(nn.Module): - """ Proposed here: https://github.com/CompVis/stable-diffusion/blob/69ae4b35e0a0f6ee1af8bb9a5d0016ccb27e36dc/ldm/modules/attention.py#L218 - Unrelated to: https://arxiv.org/abs/1506.02025 - """ - def __init__( - self, - spatial_dims, - in_channels, - out_channels, # WARNING: if out_channels != in_channels, skip connection is disabled - num_heads, - ch_per_head=32, # rule of thumb: 32 or 64 channels per head (see stable-diffusion / diffusion models beat GANs) - norm_name = ("GROUP", {'num_groups':32, "affine": True}), - dropout=None, - emb_dim=None, - depth=1 - ): - super().__init__() - self.in_channels = in_channels - self.norm = get_norm_layer(norm_name, spatial_dims=spatial_dims, channels=in_channels) - conv_class = Conv["conv", spatial_dims] - hid_channels = num_heads*ch_per_head - - self.proj_in = conv_class( - in_channels, - hid_channels, - kernel_size=1, - stride=1, - padding=0, - ) - - self.transformer_blocks = nn.ModuleList([ - BasicTransformerBlock(spatial_dims, hid_channels, hid_channels, num_heads, ch_per_head, norm_name, dropout=dropout, emb_dim=emb_dim) - for _ in range(depth)] - ) - - self.proj_out = conv_class( # Note: zero_module is used in original code - hid_channels, - out_channels, - kernel_size=1, - stride=1, - padding=0, - ) - - def forward(self, x, embedding=None): - # x expected to be [B, C, *] and embedding is None or [B, C*] or [B, C*, *] - # Note: if no embedding is given, cross-attention is disabled - h = self.norm(x) - h = self.proj_in(h) - - for block in self.transformer_blocks: - h = block(h, embedding=embedding) - - h = self.proj_out(h) # -> [B, C'', *] - if h.shape == x.shape: - return h + x - 
return h - - -class Attention(nn.Module): - def __init__( - self, - spatial_dims, - in_channels, - out_channels, - num_heads=8, - ch_per_head=32, # rule of thumb: 32 or 64 channels per head (see stable-diffusion / diffusion models beat GANs) - norm_name = ("GROUP", {'num_groups':32, "affine": True}), - dropout=0, - emb_dim=None, - depth=1, - attention_type='linear' - ) -> None: - super().__init__() - if attention_type == 'spatial': - self.attention = SpatialTransformer( - spatial_dims=spatial_dims, - in_channels=in_channels, - out_channels=out_channels, - num_heads=num_heads, - ch_per_head=ch_per_head, - depth=depth, - norm_name=norm_name, - dropout=dropout, - emb_dim=emb_dim - ) - elif attention_type == 'linear': - self.attention = LinearTransformer( - spatial_dims=spatial_dims, - in_channels=in_channels, - out_channels=out_channels, - num_heads=num_heads, - ch_per_head=ch_per_head, - norm_name=norm_name, - dropout=dropout, - emb_dim=emb_dim - ) - - - def forward(self, x, emb=None): - if hasattr(self, 'attention'): - return self.attention(x, emb) - else: - return x \ No newline at end of file diff --git a/spaces/muhammadzain/Background-changer-remover-backend/Dockerfile b/spaces/muhammadzain/Background-changer-remover-backend/Dockerfile deleted file mode 100644 index bb0f02a8eafc16ca7ac5037cd39500d43755128e..0000000000000000000000000000000000000000 --- a/spaces/muhammadzain/Background-changer-remover-backend/Dockerfile +++ /dev/null @@ -1,27 +0,0 @@ -FROM python:3.10 - -RUN apt-get update -y && apt-get install -y build-essential - -WORKDIR /app - -RUN useradd -m -u 1000 user -USER user -ENV HOME=/home/user \ - PATH=/home/user/.local/bin:$PATH - -WORKDIR $HOME/app - -COPY --chown=user . $HOME/app - -COPY app.py app.py - -RUN pip install Flask -RUN pip install gunicorn -RUN pip install -U flask-cors -RUN pip install opencv-python-headless==4.5.5.64 -RUN pip install rembg -RUN pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu -RUN pip install backgroundremover - - -CMD ["gunicorn","-b","0.0.0.0:7860", "app:app","--timeout","950"] diff --git a/spaces/multimodalart/mariogpt/mario_gpt/__init__.py b/spaces/multimodalart/mariogpt/mario_gpt/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/mygyasir/genious_bgremover/carvekit/ml/arch/basnet/basnet.py b/spaces/mygyasir/genious_bgremover/carvekit/ml/arch/basnet/basnet.py deleted file mode 100644 index e2ead6a7195374e19de182a63f26449092ec935e..0000000000000000000000000000000000000000 --- a/spaces/mygyasir/genious_bgremover/carvekit/ml/arch/basnet/basnet.py +++ /dev/null @@ -1,478 +0,0 @@ -""" -Source url: https://github.com/NathanUA/BASNet -Modified by Nikita Selin (OPHoperHPO)[https://github.com/OPHoperHPO]. 
-License: MIT License -""" -import torch -import torch.nn as nn -from torchvision import models - - -def conv3x3(in_planes, out_planes, stride=1): - """3x3 convolution with padding""" - return nn.Conv2d( - in_planes, out_planes, kernel_size=3, stride=stride, padding=1, bias=False - ) - - -class BasicBlock(nn.Module): - expansion = 1 - - def __init__(self, inplanes, planes, stride=1, downsample=None): - super(BasicBlock, self).__init__() - self.conv1 = conv3x3(inplanes, planes, stride) - self.bn1 = nn.BatchNorm2d(planes) - self.relu = nn.ReLU(inplace=True) - self.conv2 = conv3x3(planes, planes) - self.bn2 = nn.BatchNorm2d(planes) - self.downsample = downsample - self.stride = stride - - def forward(self, x): - residual = x - - out = self.conv1(x) - out = self.bn1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.bn2(out) - - if self.downsample is not None: - residual = self.downsample(x) - - out += residual - out = self.relu(out) - - return out - - -class BasicBlockDe(nn.Module): - expansion = 1 - - def __init__(self, inplanes, planes, stride=1, downsample=None): - super(BasicBlockDe, self).__init__() - - self.convRes = conv3x3(inplanes, planes, stride) - self.bnRes = nn.BatchNorm2d(planes) - self.reluRes = nn.ReLU(inplace=True) - - self.conv1 = conv3x3(inplanes, planes, stride) - self.bn1 = nn.BatchNorm2d(planes) - self.relu = nn.ReLU(inplace=True) - self.conv2 = conv3x3(planes, planes) - self.bn2 = nn.BatchNorm2d(planes) - self.downsample = downsample - self.stride = stride - - def forward(self, x): - residual = self.convRes(x) - residual = self.bnRes(residual) - residual = self.reluRes(residual) - - out = self.conv1(x) - out = self.bn1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.bn2(out) - - if self.downsample is not None: - residual = self.downsample(x) - - out += residual - out = self.relu(out) - - return out - - -class Bottleneck(nn.Module): - expansion = 4 - - def __init__(self, inplanes, planes, stride=1, downsample=None): - super(Bottleneck, self).__init__() - self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False) - self.bn1 = nn.BatchNorm2d(planes) - self.conv2 = nn.Conv2d( - planes, planes, kernel_size=3, stride=stride, padding=1, bias=False - ) - self.bn2 = nn.BatchNorm2d(planes) - self.conv3 = nn.Conv2d(planes, planes * 4, kernel_size=1, bias=False) - self.bn3 = nn.BatchNorm2d(planes * 4) - self.relu = nn.ReLU(inplace=True) - self.downsample = downsample - self.stride = stride - - def forward(self, x): - residual = x - - out = self.conv1(x) - out = self.bn1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.bn2(out) - out = self.relu(out) - - out = self.conv3(out) - out = self.bn3(out) - - if self.downsample is not None: - residual = self.downsample(x) - - out += residual - out = self.relu(out) - - return out - - -class RefUnet(nn.Module): - def __init__(self, in_ch, inc_ch): - super(RefUnet, self).__init__() - - self.conv0 = nn.Conv2d(in_ch, inc_ch, 3, padding=1) - - self.conv1 = nn.Conv2d(inc_ch, 64, 3, padding=1) - self.bn1 = nn.BatchNorm2d(64) - self.relu1 = nn.ReLU(inplace=True) - - self.pool1 = nn.MaxPool2d(2, 2, ceil_mode=True) - - self.conv2 = nn.Conv2d(64, 64, 3, padding=1) - self.bn2 = nn.BatchNorm2d(64) - self.relu2 = nn.ReLU(inplace=True) - - self.pool2 = nn.MaxPool2d(2, 2, ceil_mode=True) - - self.conv3 = nn.Conv2d(64, 64, 3, padding=1) - self.bn3 = nn.BatchNorm2d(64) - self.relu3 = nn.ReLU(inplace=True) - - self.pool3 = nn.MaxPool2d(2, 2, ceil_mode=True) - - self.conv4 = nn.Conv2d(64, 64, 3, 
padding=1) - self.bn4 = nn.BatchNorm2d(64) - self.relu4 = nn.ReLU(inplace=True) - - self.pool4 = nn.MaxPool2d(2, 2, ceil_mode=True) - - self.conv5 = nn.Conv2d(64, 64, 3, padding=1) - self.bn5 = nn.BatchNorm2d(64) - self.relu5 = nn.ReLU(inplace=True) - - self.conv_d4 = nn.Conv2d(128, 64, 3, padding=1) - self.bn_d4 = nn.BatchNorm2d(64) - self.relu_d4 = nn.ReLU(inplace=True) - - self.conv_d3 = nn.Conv2d(128, 64, 3, padding=1) - self.bn_d3 = nn.BatchNorm2d(64) - self.relu_d3 = nn.ReLU(inplace=True) - - self.conv_d2 = nn.Conv2d(128, 64, 3, padding=1) - self.bn_d2 = nn.BatchNorm2d(64) - self.relu_d2 = nn.ReLU(inplace=True) - - self.conv_d1 = nn.Conv2d(128, 64, 3, padding=1) - self.bn_d1 = nn.BatchNorm2d(64) - self.relu_d1 = nn.ReLU(inplace=True) - - self.conv_d0 = nn.Conv2d(64, 1, 3, padding=1) - - self.upscore2 = nn.Upsample( - scale_factor=2, mode="bilinear", align_corners=False - ) - - def forward(self, x): - hx = x - hx = self.conv0(hx) - - hx1 = self.relu1(self.bn1(self.conv1(hx))) - hx = self.pool1(hx1) - - hx2 = self.relu2(self.bn2(self.conv2(hx))) - hx = self.pool2(hx2) - - hx3 = self.relu3(self.bn3(self.conv3(hx))) - hx = self.pool3(hx3) - - hx4 = self.relu4(self.bn4(self.conv4(hx))) - hx = self.pool4(hx4) - - hx5 = self.relu5(self.bn5(self.conv5(hx))) - - hx = self.upscore2(hx5) - - d4 = self.relu_d4(self.bn_d4(self.conv_d4(torch.cat((hx, hx4), 1)))) - hx = self.upscore2(d4) - - d3 = self.relu_d3(self.bn_d3(self.conv_d3(torch.cat((hx, hx3), 1)))) - hx = self.upscore2(d3) - - d2 = self.relu_d2(self.bn_d2(self.conv_d2(torch.cat((hx, hx2), 1)))) - hx = self.upscore2(d2) - - d1 = self.relu_d1(self.bn_d1(self.conv_d1(torch.cat((hx, hx1), 1)))) - - residual = self.conv_d0(d1) - - return x + residual - - -class BASNet(nn.Module): - def __init__(self, n_channels, n_classes): - super(BASNet, self).__init__() - - resnet = models.resnet34(pretrained=False) - - # -------------Encoder-------------- - - self.inconv = nn.Conv2d(n_channels, 64, 3, padding=1) - self.inbn = nn.BatchNorm2d(64) - self.inrelu = nn.ReLU(inplace=True) - - # stage 1 - self.encoder1 = resnet.layer1 # 224 - # stage 2 - self.encoder2 = resnet.layer2 # 112 - # stage 3 - self.encoder3 = resnet.layer3 # 56 - # stage 4 - self.encoder4 = resnet.layer4 # 28 - - self.pool4 = nn.MaxPool2d(2, 2, ceil_mode=True) - - # stage 5 - self.resb5_1 = BasicBlock(512, 512) - self.resb5_2 = BasicBlock(512, 512) - self.resb5_3 = BasicBlock(512, 512) # 14 - - self.pool5 = nn.MaxPool2d(2, 2, ceil_mode=True) - - # stage 6 - self.resb6_1 = BasicBlock(512, 512) - self.resb6_2 = BasicBlock(512, 512) - self.resb6_3 = BasicBlock(512, 512) # 7 - - # -------------Bridge-------------- - - # stage Bridge - self.convbg_1 = nn.Conv2d(512, 512, 3, dilation=2, padding=2) # 7 - self.bnbg_1 = nn.BatchNorm2d(512) - self.relubg_1 = nn.ReLU(inplace=True) - self.convbg_m = nn.Conv2d(512, 512, 3, dilation=2, padding=2) - self.bnbg_m = nn.BatchNorm2d(512) - self.relubg_m = nn.ReLU(inplace=True) - self.convbg_2 = nn.Conv2d(512, 512, 3, dilation=2, padding=2) - self.bnbg_2 = nn.BatchNorm2d(512) - self.relubg_2 = nn.ReLU(inplace=True) - - # -------------Decoder-------------- - - # stage 6d - self.conv6d_1 = nn.Conv2d(1024, 512, 3, padding=1) # 16 - self.bn6d_1 = nn.BatchNorm2d(512) - self.relu6d_1 = nn.ReLU(inplace=True) - - self.conv6d_m = nn.Conv2d(512, 512, 3, dilation=2, padding=2) - self.bn6d_m = nn.BatchNorm2d(512) - self.relu6d_m = nn.ReLU(inplace=True) - - self.conv6d_2 = nn.Conv2d(512, 512, 3, dilation=2, padding=2) - self.bn6d_2 = nn.BatchNorm2d(512) - self.relu6d_2 
= nn.ReLU(inplace=True) - - # stage 5d - self.conv5d_1 = nn.Conv2d(1024, 512, 3, padding=1) # 16 - self.bn5d_1 = nn.BatchNorm2d(512) - self.relu5d_1 = nn.ReLU(inplace=True) - - self.conv5d_m = nn.Conv2d(512, 512, 3, padding=1) - self.bn5d_m = nn.BatchNorm2d(512) - self.relu5d_m = nn.ReLU(inplace=True) - - self.conv5d_2 = nn.Conv2d(512, 512, 3, padding=1) - self.bn5d_2 = nn.BatchNorm2d(512) - self.relu5d_2 = nn.ReLU(inplace=True) - - # stage 4d - self.conv4d_1 = nn.Conv2d(1024, 512, 3, padding=1) # 32 - self.bn4d_1 = nn.BatchNorm2d(512) - self.relu4d_1 = nn.ReLU(inplace=True) - - self.conv4d_m = nn.Conv2d(512, 512, 3, padding=1) - self.bn4d_m = nn.BatchNorm2d(512) - self.relu4d_m = nn.ReLU(inplace=True) - - self.conv4d_2 = nn.Conv2d(512, 256, 3, padding=1) - self.bn4d_2 = nn.BatchNorm2d(256) - self.relu4d_2 = nn.ReLU(inplace=True) - - # stage 3d - self.conv3d_1 = nn.Conv2d(512, 256, 3, padding=1) # 64 - self.bn3d_1 = nn.BatchNorm2d(256) - self.relu3d_1 = nn.ReLU(inplace=True) - - self.conv3d_m = nn.Conv2d(256, 256, 3, padding=1) - self.bn3d_m = nn.BatchNorm2d(256) - self.relu3d_m = nn.ReLU(inplace=True) - - self.conv3d_2 = nn.Conv2d(256, 128, 3, padding=1) - self.bn3d_2 = nn.BatchNorm2d(128) - self.relu3d_2 = nn.ReLU(inplace=True) - - # stage 2d - - self.conv2d_1 = nn.Conv2d(256, 128, 3, padding=1) # 128 - self.bn2d_1 = nn.BatchNorm2d(128) - self.relu2d_1 = nn.ReLU(inplace=True) - - self.conv2d_m = nn.Conv2d(128, 128, 3, padding=1) - self.bn2d_m = nn.BatchNorm2d(128) - self.relu2d_m = nn.ReLU(inplace=True) - - self.conv2d_2 = nn.Conv2d(128, 64, 3, padding=1) - self.bn2d_2 = nn.BatchNorm2d(64) - self.relu2d_2 = nn.ReLU(inplace=True) - - # stage 1d - self.conv1d_1 = nn.Conv2d(128, 64, 3, padding=1) # 256 - self.bn1d_1 = nn.BatchNorm2d(64) - self.relu1d_1 = nn.ReLU(inplace=True) - - self.conv1d_m = nn.Conv2d(64, 64, 3, padding=1) - self.bn1d_m = nn.BatchNorm2d(64) - self.relu1d_m = nn.ReLU(inplace=True) - - self.conv1d_2 = nn.Conv2d(64, 64, 3, padding=1) - self.bn1d_2 = nn.BatchNorm2d(64) - self.relu1d_2 = nn.ReLU(inplace=True) - - # -------------Bilinear Upsampling-------------- - self.upscore6 = nn.Upsample( - scale_factor=32, mode="bilinear", align_corners=False - ) - self.upscore5 = nn.Upsample( - scale_factor=16, mode="bilinear", align_corners=False - ) - self.upscore4 = nn.Upsample( - scale_factor=8, mode="bilinear", align_corners=False - ) - self.upscore3 = nn.Upsample( - scale_factor=4, mode="bilinear", align_corners=False - ) - self.upscore2 = nn.Upsample( - scale_factor=2, mode="bilinear", align_corners=False - ) - - # -------------Side Output-------------- - self.outconvb = nn.Conv2d(512, 1, 3, padding=1) - self.outconv6 = nn.Conv2d(512, 1, 3, padding=1) - self.outconv5 = nn.Conv2d(512, 1, 3, padding=1) - self.outconv4 = nn.Conv2d(256, 1, 3, padding=1) - self.outconv3 = nn.Conv2d(128, 1, 3, padding=1) - self.outconv2 = nn.Conv2d(64, 1, 3, padding=1) - self.outconv1 = nn.Conv2d(64, 1, 3, padding=1) - - # -------------Refine Module------------- - self.refunet = RefUnet(1, 64) - - def forward(self, x): - hx = x - - # -------------Encoder------------- - hx = self.inconv(hx) - hx = self.inbn(hx) - hx = self.inrelu(hx) - - h1 = self.encoder1(hx) # 256 - h2 = self.encoder2(h1) # 128 - h3 = self.encoder3(h2) # 64 - h4 = self.encoder4(h3) # 32 - - hx = self.pool4(h4) # 16 - - hx = self.resb5_1(hx) - hx = self.resb5_2(hx) - h5 = self.resb5_3(hx) - - hx = self.pool5(h5) # 8 - - hx = self.resb6_1(hx) - hx = self.resb6_2(hx) - h6 = self.resb6_3(hx) - - # -------------Bridge------------- - hx = 
self.relubg_1(self.bnbg_1(self.convbg_1(h6))) # 8 - hx = self.relubg_m(self.bnbg_m(self.convbg_m(hx))) - hbg = self.relubg_2(self.bnbg_2(self.convbg_2(hx))) - - # -------------Decoder------------- - - hx = self.relu6d_1(self.bn6d_1(self.conv6d_1(torch.cat((hbg, h6), 1)))) - hx = self.relu6d_m(self.bn6d_m(self.conv6d_m(hx))) - hd6 = self.relu6d_2(self.bn6d_2(self.conv6d_2(hx))) - - hx = self.upscore2(hd6) # 8 -> 16 - - hx = self.relu5d_1(self.bn5d_1(self.conv5d_1(torch.cat((hx, h5), 1)))) - hx = self.relu5d_m(self.bn5d_m(self.conv5d_m(hx))) - hd5 = self.relu5d_2(self.bn5d_2(self.conv5d_2(hx))) - - hx = self.upscore2(hd5) # 16 -> 32 - - hx = self.relu4d_1(self.bn4d_1(self.conv4d_1(torch.cat((hx, h4), 1)))) - hx = self.relu4d_m(self.bn4d_m(self.conv4d_m(hx))) - hd4 = self.relu4d_2(self.bn4d_2(self.conv4d_2(hx))) - - hx = self.upscore2(hd4) # 32 -> 64 - - hx = self.relu3d_1(self.bn3d_1(self.conv3d_1(torch.cat((hx, h3), 1)))) - hx = self.relu3d_m(self.bn3d_m(self.conv3d_m(hx))) - hd3 = self.relu3d_2(self.bn3d_2(self.conv3d_2(hx))) - - hx = self.upscore2(hd3) # 64 -> 128 - - hx = self.relu2d_1(self.bn2d_1(self.conv2d_1(torch.cat((hx, h2), 1)))) - hx = self.relu2d_m(self.bn2d_m(self.conv2d_m(hx))) - hd2 = self.relu2d_2(self.bn2d_2(self.conv2d_2(hx))) - - hx = self.upscore2(hd2) # 128 -> 256 - - hx = self.relu1d_1(self.bn1d_1(self.conv1d_1(torch.cat((hx, h1), 1)))) - hx = self.relu1d_m(self.bn1d_m(self.conv1d_m(hx))) - hd1 = self.relu1d_2(self.bn1d_2(self.conv1d_2(hx))) - - # -------------Side Output------------- - db = self.outconvb(hbg) - db = self.upscore6(db) # 8->256 - - d6 = self.outconv6(hd6) - d6 = self.upscore6(d6) # 8->256 - - d5 = self.outconv5(hd5) - d5 = self.upscore5(d5) # 16->256 - - d4 = self.outconv4(hd4) - d4 = self.upscore4(d4) # 32->256 - - d3 = self.outconv3(hd3) - d3 = self.upscore3(d3) # 64->256 - - d2 = self.outconv2(hd2) - d2 = self.upscore2(d2) # 128->256 - - d1 = self.outconv1(hd1) # 256 - - # -------------Refine Module------------- - dout = self.refunet(d1) # 256 - - return ( - torch.sigmoid(dout), - torch.sigmoid(d1), - torch.sigmoid(d2), - torch.sigmoid(d3), - torch.sigmoid(d4), - torch.sigmoid(d5), - torch.sigmoid(d6), - torch.sigmoid(db), - ) diff --git a/spaces/nateraw/lavila/CODE_OF_CONDUCT.md b/spaces/nateraw/lavila/CODE_OF_CONDUCT.md deleted file mode 100644 index 0d31b1fff37f8283410022a13ba98204fc4acc53..0000000000000000000000000000000000000000 --- a/spaces/nateraw/lavila/CODE_OF_CONDUCT.md +++ /dev/null @@ -1,5 +0,0 @@ -# Code of Conduct - -Facebook has adopted a Code of Conduct that we expect project participants to adhere to. -Please read the [full text](https://code.fb.com/codeofconduct/) -so that you can understand what actions will and will not be tolerated. \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Cabaret In Hindi Torrent Download 720p PORTABLE.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Cabaret In Hindi Torrent Download 720p PORTABLE.md deleted file mode 100644 index 4497a31159ec16c9281b15c0bf4ca0e389e58414..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Cabaret In Hindi Torrent Download 720p PORTABLE.md +++ /dev/null @@ -1,18 +0,0 @@ -
        -Here is a possible title and article with html formatting for the keyword "Cabaret In Hindi Torrent Download 720p": - -

        Cabaret In Hindi Torrent Download 720p: A Musical Drama Set in Nazi Germany

        -

        If you are looking for a musical drama that explores the dark and decadent side of Berlin during the rise of Nazi Germany, you might want to check out Cabaret In Hindi Torrent Download 720p. This is a dubbed version of the 1972 American film Cabaret, directed by Bob Fosse and starring Liza Minnelli, Michael York, Helmut Griem, and Joel Grey.

        -

        Cabaret In Hindi Torrent Download 720p


        DOWNLOAD –––––>>> https://urlcod.com/2uIbmk



        -

        Cabaret In Hindi Torrent Download 720p follows the story of Sally Bowles, an American cabaret singer who performs at the Kit Kat Klub, a seedy nightclub where the Master of Ceremonies (Grey) entertains the audience with provocative and satirical songs. Sally meets and falls in love with Brian Roberts (York), a British academic who is studying in Berlin. Their relationship is complicated by the arrival of Maximilian von Heune (Griem), a wealthy and bisexual playboy who seduces them both. Meanwhile, the Nazi Party is gaining more power and influence in the city, threatening the lives and freedoms of everyone around them.

        -

        Cabaret In Hindi Torrent Download 720p is based on the 1966 Broadway musical Cabaret by Kander and Ebb, which was inspired by Christopher Isherwood's semi-autobiographical novel The Berlin Stories. The film adaptation differs from the stage version in several ways, such as focusing more on the historical and political context of the era, eliminating some songs and adding others, and making the musical numbers entirely diegetic (meaning they only occur within the club setting). The film was a critical and commercial success, winning eight Academy Awards out of ten nominations, including Best Director for Fosse and Best Actress for Minnelli.

        -

        If you want to watch Cabaret In Hindi Torrent Download 720p, you can find it online on various torrent sites. However, be aware that downloading torrents is risky for you: your IP and leaked private data being actively tracked by your ISP and Government Agencies. Protect yourself from expensive lawsuits and fines NOW! You must use a VPN like Expert. It is the only way to download torrents fully anonymous by encrypting all traffic with zero logs.

        -

        Cabaret In Hindi Torrent Download 720p is a captivating and powerful film that will make you laugh, cry, and think about the horrors of fascism and the beauty of art. Don't miss this opportunity to watch this classic musical drama in Hindi.

        -

        Here is a possible continuation of the article: - -

        One of the most memorable aspects of Cabaret In Hindi Torrent Download 720p is the music. The film features some of the most iconic songs from the musical genre, such as "Willkommen", "Mein Herr", "Maybe This Time", "Money", and of course, "Cabaret". The songs are performed with passion and flair by the talented cast, especially Minnelli, who delivers a stunning performance as Sally Bowles. The songs also serve as a commentary on the social and political situation of the time, contrasting the hedonism and escapism of the club with the harsh reality and violence of the outside world.

        -

        Another remarkable feature of Cabaret In Hindi Torrent Download 720p is the cinematography. The film uses a dark and muted color palette to create a sense of gloom and decay in Berlin. The camera also employs various techniques to enhance the mood and atmosphere of the scenes, such as zooms, pans, tilts, and cuts. The film also makes use of symbolism and imagery to convey the themes and messages of the story, such as the use of mirrors, shadows, flags, and costumes.

        -

        Cabaret In Hindi Torrent Download 720p is not only a musical drama, but also a historical drama. The film depicts the rise of Nazi Germany in the early 1930s, showing how it affected the lives and choices of ordinary people. The film does not shy away from showing the brutality and oppression of the Nazi regime, such as the persecution of Jews, homosexuals, communists, and other minorities. The film also shows how some people resisted or ignored the Nazi threat, while others embraced or collaborated with it. The film raises questions about morality, responsibility, and courage in times of crisis.

        -

        Cabaret In Hindi Torrent Download 720p is a masterpiece of cinema that deserves to be seen by everyone. It is a film that will make you feel a range of emotions, from joy to sorrow, from anger to hope. It is a film that will make you reflect on the past and the present, on human nature and society. It is a film that will make you appreciate the power and beauty of art.

        7196e7f11a
        -
        -
        \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Full BEST D16.Group.Decimort.VST.v1.0.Incl.Keygen-AiR.rar.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Full BEST D16.Group.Decimort.VST.v1.0.Incl.Keygen-AiR.rar.md deleted file mode 100644 index f633414f0af8a8ae3a0732fbe241099fd2c9cba5..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Full BEST D16.Group.Decimort.VST.v1.0.Incl.Keygen-AiR.rar.md +++ /dev/null @@ -1,22 +0,0 @@ - -

        How to Download and Install D16 Group Decimort VST v1.0 with Keygen

        -

        D16 Group Decimort VST is a high-quality bit crusher plugin that simulates the sound of vintage samplers and adds a unique character to your music production. It offers various features such as anti-alias filter, image filter, jitter, dithering, and two quantization algorithms[^2^]. If you want to download and install this plugin for free, follow these steps:

        -
          -
        1. Download the file FULL D16.Group.Decimort.VST.v1.0.Incl.Keygen-AiR.rar from the link provided in the reference[^1^]. This is a compressed file that contains the plugin installer and the keygen.
        2. -
        3. Extract the file using a program like WinRAR or 7-Zip. You will get two files: D16.Group.Decimort.VST.v1.0.Incl.Keygen-AiR.exe and air.nfo.
        4. -
        5. Run the installer file and follow the instructions to install the plugin on your computer. You can choose the destination folder and the VST host that you use.
        6. -
        7. After the installation is complete, do not run the plugin yet. Open the file air.nfo using a text editor like Notepad. You will see some information and a serial number for the plugin.
        8. -
        9. Copy the serial number and run the plugin in your VST host. It will ask you to enter the serial number. Paste it and click OK.
        10. -
        11. You have successfully activated the plugin. Enjoy!
        12. -
        -

        Note: This is an illegal way of obtaining the plugin and it may contain viruses or malware. Use it at your own risk. The best way to support the developers is to buy the plugin from their official website[^2^].

        -

        FULL D16.Group.Decimort.VST.v1.0.Incl.Keygen-AiR.rar


        Download File >>> https://urlcod.com/2uIaPB



        Now that you have installed and activated the plugin, you can start using it in your music production. Here are some tips on how to use D16 Group Decimort VST effectively:

        -
          -
        • The plugin has two main sections: the prefilter and the resampler. The prefilter allows you to shape the input signal before it goes into the resampler. You can adjust the cutoff frequency, resonance, and slope of the filter. You can also choose between low-pass, high-pass, band-pass, and band-reject modes.
        • -
        • The resampler is where the magic happens. It reduces the bit depth and sample rate of the input signal, creating the characteristic sound of vintage samplers. You can adjust the bit depth from 1 to 24 bits and the sample rate from 44.1 kHz to 10 Hz. You can also choose between two quantization algorithms: linear and mu-law.
        • -
        • The plugin also has some additional features that enhance the sound quality and add more flexibility. The anti-alias filter and image filter help to reduce unwanted artifacts and noise that may occur during the resampling process. The jitter and dithering parameters add some randomness and smoothness to the output signal. You can also use the dry/wet knob to blend the original and processed signals.
        • -
        • The plugin has a preset manager that allows you to save and load your own settings. You can also browse through the factory presets that cover various genres and styles of music. You can use them as they are or tweak them to suit your needs.
        • -
        -

        D16 Group Decimort VST is a powerful and versatile plugin that can add a lot of character and warmth to your music. Whether you want to recreate the sound of classic samplers or experiment with new sonic possibilities, this plugin can help you achieve your goals.

        81aa517590
        -
        -
        \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/My Name Is Khan Full Movie Online Hd 720p.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/My Name Is Khan Full Movie Online Hd 720p.md deleted file mode 100644 index ccfb0b69442b52f702d9fe70a6401d0094a6d5a9..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/My Name Is Khan Full Movie Online Hd 720p.md +++ /dev/null @@ -1,24 +0,0 @@ - -```html -

        How to Watch My Name Is Khan Full Movie Online HD 720p

        -

        My Name Is Khan is a 2010 Bollywood drama film starring Shah Rukh Khan and Kajol. It tells the story of Rizwan Khan, a Muslim man with Asperger's syndrome, who moves to San Francisco and falls in love with Mandira, a Hindu single mother. After the 9/11 attacks, Rizwan faces discrimination and prejudice because of his name and religion, and embarks on a journey across America to prove his innocence and loyalty.

        -

        My Name Is Khan full movie online hd 720p


        Downloadhttps://urlcod.com/2uIaFg



        -

        The film was directed by Karan Johar and produced by Fox Searchlight Pictures, Red Chillies Entertainment, Star Studios, and Dharma Productions. It received positive reviews from critics and audiences, and was one of the highest-grossing Indian films of all time. It also won several awards, including three Filmfare Awards and two National Film Awards.

        -

        If you are looking for a way to watch My Name Is Khan full movie online HD 720p, you have several options to choose from. Here are some of the best streaming platforms where you can rent or buy the film:

        -
          -
        • Prime Video: Prime Video is Amazon's video-on-demand service that offers thousands of movies and TV shows to stream or download. You can rent My Name Is Khan HD for €3.99 or buy it for €9.99. You can also watch it for free if you have a Prime membership.[^1^]
        • -
        • Moviefone: Moviefone is a website that helps you find movies and TV shows to watch online or in theaters. You can use it to search for streaming services that offer My Name Is Khan. Some of the options are DIRECTV, Microsoft Store, Google Play Movies, Amazon Video, AMC on Demand, Vudu, YouTube, and Apple iTunes.[^2^]
        • -
        • Internet Archive: Internet Archive is a non-profit digital library that preserves and provides access to millions of free books, movies, music, and more. You can watch My Name Is Khan full movie online HD 720p for free on this website.[^3^]
        • -
        • YouTube: YouTube is the world's largest video-sharing platform that hosts billions of videos from various genres and categories. You can watch My Name Is Khan full movie online HD 720p on YouTube for free or rent it for $3.99.[^4^]
        • -
        -

        My Name Is Khan is a powerful and emotional film that explores the themes of love, identity, faith, and humanity. It is a must-watch for fans of Shah Rukh Khan and Kajol, as well as anyone who enjoys a good drama with a social message. If you want to watch My Name Is Khan full movie online HD 720p, you can use any of the streaming platforms mentioned above.

        -``` - -```html -

        My Name Is Khan is not only a film, but also a social movement that inspired many people around the world. The film's tagline, "My name is Khan and I am not a terrorist", became a slogan for many Muslims who faced discrimination and stereotyping after 9/11. The film also raised awareness about Asperger's syndrome, a form of autism that affects social communication and behavior. Shah Rukh Khan's portrayal of Rizwan Khan was praised for its authenticity and sensitivity.

        -

        -

        The film also had a significant impact on the relations between India and Pakistan, two neighboring countries that have a history of conflict and tension. The film was initially banned in Pakistan due to its controversial subject matter, but later released after public demand and intervention from the Pakistani government. The film was well-received by the Pakistani audiences and critics, who appreciated its positive message of peace and harmony. The film also sparked a dialogue between the two countries on various issues, such as terrorism, human rights, and cultural exchange.

        -

        My Name Is Khan is a film that transcends boundaries and genres. It is a film that celebrates the diversity and unity of humanity. It is a film that challenges us to question our prejudices and assumptions. It is a film that reminds us of the power of love and faith. To watch it in HD 720p, use any of the streaming platforms listed above.

        -```

        81aa517590
        -
        -
        \ No newline at end of file diff --git a/spaces/nielsr/text-based-inpainting/README.md b/spaces/nielsr/text-based-inpainting/README.md deleted file mode 100644 index 86d62495f5eed858c2b7b29e27f965bf956fa2d4..0000000000000000000000000000000000000000 --- a/spaces/nielsr/text-based-inpainting/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Text Based Inpainting -emoji: 🚀 -colorFrom: blue -colorTo: blue -sdk: gradio -sdk_version: 3.9.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/nihalbaig/layoutlmv3_official_document/README.md b/spaces/nihalbaig/layoutlmv3_official_document/README.md deleted file mode 100644 index 43fdb8ff2367678732e5da7f433474cd3d548bc1..0000000000000000000000000000000000000000 --- a/spaces/nihalbaig/layoutlmv3_official_document/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Layoutlmv3 Official Document -emoji: 🐢 -colorFrom: gray -colorTo: indigo -sdk: gradio -sdk_version: 3.27.0 -app_file: app.py -pinned: false -license: openrail ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/nlp-en-es/bertin-sqac/app.py b/spaces/nlp-en-es/bertin-sqac/app.py deleted file mode 100644 index 850a36f715c855ef2de328823afae2bb5eaf2c06..0000000000000000000000000000000000000000 --- a/spaces/nlp-en-es/bertin-sqac/app.py +++ /dev/null @@ -1,28 +0,0 @@ -import gradio as gr - -title = "BERTIN, tengo una pregunta" -description = "BERTIN large fine-tuned con el corpus SQAC (Spanish Question-Answering Corpus)" -examples = [ - ["BERTIN es un conjunto de modelos de NLP tipo RoBERTa entrenados durante el evento JAX/Flax organizado por Hugging Face.", "¿Qué es BERTIN?"], - ["El corpus SQAC fue creado por un equipo del Barcelona Supercomputing Center y la sigla proviene de Spanish Question-Answering Corpus.", "¿Qué significa SQAC?"] -] -article = """ -

        - NLP en ES 🤗 | nlp-en-es.org -

        -""" - -gr.Interface.load( - name="huggingface/nlp-en-es/bertin-large-finetuned-sqac", - inputs=[gr.inputs.Textbox(label="Contexto"), gr.inputs.Textbox(label="Pregunta")], - outputs=gr.outputs.Textbox(label="Respuesta"), - title=title, - description=description, - article=article, - examples=examples, - theme="huggingface", - allow_screenshot=True, - allow_flagging=True, - flagging_dir="flagged", - enable_queue=True -).launch() diff --git a/spaces/nmitchko/AI-in-Healthcare/Developer Meetup in Boston Generative AI Use Cases in Healthcare _files/css__MRgHwIW31RZOLKE55SBpt0eoWED02wq2IXA5fbDWn20___EFRur0IfJ.css b/spaces/nmitchko/AI-in-Healthcare/Developer Meetup in Boston Generative AI Use Cases in Healthcare _files/css__MRgHwIW31RZOLKE55SBpt0eoWED02wq2IXA5fbDWn20___EFRur0IfJ.css deleted file mode 100644 index 343e406318d9179c21c67539591121a38ce5265c..0000000000000000000000000000000000000000 --- a/spaces/nmitchko/AI-in-Healthcare/Developer Meetup in Boston Generative AI Use Cases in Healthcare _files/css__MRgHwIW31RZOLKE55SBpt0eoWED02wq2IXA5fbDWn20___EFRur0IfJ.css +++ /dev/null @@ -1,8 +0,0 @@ -.field .field-label{font-weight:bold;}.field-label-inline .field-label,.field-label-inline .field-items{float:left;}form .field-multiple-table{margin:0;}form .field-multiple-table th.field-label{padding-left:0;}form .field-multiple-table td.field-multiple-drag{width:30px;padding-right:0;}form .field-multiple-table td.field-multiple-drag a.tabledrag-handle{padding-right:.5em;}form .field-add-more-submit{margin:.5em 0 0;} -/*})'"*/ -.node-unpublished{background-color:#fff4f4;}.preview .node{background-color:#ffffea;}td.revision-current{background:#ffc;} -/*})'"*/ -.views-exposed-form .views-exposed-widget{float:left;padding:.5em 1em 0 0;}.views-exposed-form .views-exposed-widget .form-submit{margin-top:1.6em;}.views-exposed-form .form-item,.views-exposed-form .form-submit{margin-top:0;margin-bottom:0;}.views-exposed-form label{font-weight:bold;}.views-exposed-widgets{margin-bottom:.5em;}.views-align-left{text-align:left;}.views-align-right{text-align:right;}.views-align-center{text-align:center;}.views-view-grid tbody{border-top:none;}.view .progress-disabled{float:none;} -/*})'"*/ -.rteindent1{margin-left:40px;}.rteindent2{margin-left:80px;}.rteindent3{margin-left:120px;}.rteindent4{margin-left:160px;}.rteleft{text-align:left;}.rteright{text-align:right;}.rtecenter{text-align:center;}.rtejustify{text-align:justify;}.ibimage_left{float:left;}.ibimage_right{float:right;} -/*})'"*/ diff --git a/spaces/oliver2023/chatgpt-on-wechat/plugins/role/README.md b/spaces/oliver2023/chatgpt-on-wechat/plugins/role/README.md deleted file mode 100644 index f53e9575f2e563fae758990ea1c45513e0e8ba49..0000000000000000000000000000000000000000 --- a/spaces/oliver2023/chatgpt-on-wechat/plugins/role/README.md +++ /dev/null @@ -1,26 +0,0 @@ -用于让Bot扮演指定角色的聊天插件,触发方法如下: - -- `$角色/$role help/帮助` - 打印目前支持的角色列表。 -- `$角色/$role <角色名>` - 让AI扮演该角色,角色名支持模糊匹配。 -- `$停止扮演` - 停止角色扮演。 - -添加自定义角色请在`roles/roles.json`中添加。 - -(大部分prompt来自https://github.com/rockbenben/ChatGPT-Shortcut/blob/main/src/data/users.tsx) - -以下为例子: -```json - { - "title": "写作助理", - "description": "As a writing improvement assistant, your task is to improve the spelling, grammar, clarity, concision, and overall readability of the text I provided, while breaking down long sentences, reducing repetition, and providing suggestions for improvement. Please provide only the corrected Chinese version of the text and avoid including explanations. 
Please treat every message I send later as text content.", - "descn": "作为一名中文写作改进助理,你的任务是改进所提供文本的拼写、语法、清晰、简洁和整体可读性,同时分解长句,减少重复,并提供改进建议。请只提供文本的更正版本,避免包括解释。请把我之后的每一条消息都当作文本内容。", - "wrapper": "内容是:\n\"%s\"", - "remark": "最常使用的角色,用于优化文本的语法、清晰度和简洁度,提高可读性。" - } -``` - -- `title`: 角色名。 -- `description`: 使用`$role`触发时,使用英语prompt。 -- `descn`: 使用`$角色`触发时,使用中文prompt。 -- `wrapper`: 用于包装用户消息,可起到强调作用,避免回复离题。 -- `remark`: 简短描述该角色,在打印帮助文档时显示。 diff --git a/spaces/omdena-lc/omdena-ng-lagos-chatbot-model/README.md b/spaces/omdena-lc/omdena-ng-lagos-chatbot-model/README.md deleted file mode 100644 index 83ebe2f60c362d04f5178e3631be37c0586c5b3e..0000000000000000000000000000000000000000 --- a/spaces/omdena-lc/omdena-ng-lagos-chatbot-model/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Omdena NG Lagos Comunity Chat Model -emoji: 📉 -colorFrom: indigo -colorTo: indigo -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ongxuanhong/listing-content-with-ai/Makefile b/spaces/ongxuanhong/listing-content-with-ai/Makefile deleted file mode 100644 index 6eae2b4e9716aa57d679331f0141e6a1c72da776..0000000000000000000000000000000000000000 --- a/spaces/ongxuanhong/listing-content-with-ai/Makefile +++ /dev/null @@ -1,8 +0,0 @@ -format: - black *.py - -lint: - flake8 *.py - -run: - streamlit run app.py \ No newline at end of file diff --git a/spaces/openbio/calculator/utils/indicators.py b/spaces/openbio/calculator/utils/indicators.py deleted file mode 100644 index cbab6438197a2ebdac9a7388743bb8ec7228b6e2..0000000000000000000000000000000000000000 --- a/spaces/openbio/calculator/utils/indicators.py +++ /dev/null @@ -1,347 +0,0 @@ -import datetime -import json -import os -from itertools import repeat - -import ee -import numpy as np -import pandas as pd -import plotly.graph_objects as go -import yaml - -from utils import duckdb_queries as dq - -from . import logging - -GEE_SERVICE_ACCOUNT = ( - "climatebase-july-2023@ee-geospatialml-aquarry.iam.gserviceaccount.com" -) - -class IndexGenerator: - """ - A class to generate indices and compute zonal means. - - Args: - indices (string[], required): Array of index names to include in aggregate index generation. - """ - - def __init__(self): - # Authenticate to GEE & DuckDB - self._authenticate_ee(GEE_SERVICE_ACCOUNT) - - self.roi = None - self.project_name = None - self.project_geometry = None - self.project_centroid = None - self.indices = None - self.metric_name = None - - def set_metric(self, metric_name): - # Use defined subset of indices - indices_file = f'metrics/{metric_name.replace(" ", "_")}.yaml' - self.indices = self._load_indices(indices_file) - self.metric_name = metric_name - - def set_project(self, project_name): - self.project_name = project_name - self.project_geometry = dq.get_project_geometry(self.project_name) - self.project_centroid = dq.get_project_centroid(self.project_name) - - # to-do: refactor to involve fewer transformations - _polygon = json.dumps( - json.loads(self.project_geometry[0][0])["features"][0]["geometry"] - ) - # to-do: don't use self.roi and instead pass patameter strategically - self.roi = ee.Geometry.Polygon(json.loads(_polygon)["coordinates"]) - - def _cloudfree(self, gee_path, daterange): - """ - Internal method to generate a cloud-free composite. - - Args: - gee_path (str): The path to the Google Earth Engine (GEE) image or image collection. - - Returns: - ee.Image: The cloud-free composite clipped to the region of interest. 
- """ - # Load a raw Landsat ImageCollection for a single year. - collection = ( - ee.ImageCollection(gee_path).filterDate(*daterange).filterBounds(self.roi) - ) - - # Create a cloud-free composite with custom parameters for cloud score threshold and percentile. - composite_cloudfree = ee.Algorithms.Landsat.simpleComposite( - **{"collection": collection, "percentile": 75, "cloudScoreRange": 5} - ) - return composite_cloudfree.clip(self.roi) - - @staticmethod - def _load_indices(indices_file): - # Read index configurations - with open(indices_file, "r") as stream: - try: - return yaml.safe_load(stream) - except yaml.YAMLError as e: - logging.error(e) - return None - - def generate_index(self, index_config, year): - """ - Generates an index based on the provided index configuration. - - Args: - index_config (dict): Configuration for generating the index. - - Returns: - ee.Image: The generated index clipped to the region of interest. - """ - - # Calculate date range, assume 1 year - start_date = str(datetime.date(year, 1, 1)) - end_date = str(datetime.date(year, 12, 31)) - daterange = [start_date, end_date] - - # Calculate index based on type - logging.info( - f"Generating index: {index_config['name']} of type {index_config['gee_type']}" - ) - match index_config["gee_type"]: - case "image": - dataset = ee.Image(index_config["gee_path"]).clip(self.roi) - if index_config.get("select"): - dataset = dataset.select(index_config["select"]) - case "image_collection": - dataset = ( - ee.ImageCollection(index_config["gee_path"]) - .filterBounds(self.roi) - .map(lambda image: image.clip(self.roi)) - .mean() - ) - if index_config.get("select"): - dataset = dataset.select(index_config["select"]) - case "feature_collection": - dataset = ( - ee.Image() - .float() - .paint( - ee.FeatureCollection(index_config["gee_path"]), - index_config["select"], - ) - .clip(self.roi) - ) - case "algebraic": - image = self._cloudfree(index_config["gee_path"], daterange) - # to-do: params should come from index_config - dataset = image.normalizedDifference(["B4", "B3"]) - case _: - dataset = None - - if not dataset: - raise Exception("Failed to generate dataset.") - - # Normalize to a range of [0, 1] - min_val = 0 - max_val = 1 - if type(index_config['min'])==int or type(index_config['min']==float): - min_val = index_config['min'] - if str(index_config['max'])=='roi_area': - max_val = self.roi.area().getInfo() # in m^2 - elif type(index_config['max'])==int or type(index_config['max']==float): - max_val = index_config['max'] - dataset.subtract(min_val)\ - .divide(max_val - min_val) - - logging.info(f"Generated index: {index_config['name']}") - return dataset - - def zonal_mean_index(self, index_key, year): - index_config = self.indices[index_key] - dataset = self.generate_index(index_config, year) - - logging.info(f"Calculating zonal mean for {index_key}...") - out = dataset.reduceRegion( - **{ - "reducer": ee.Reducer.mean(), - "geometry": self.roi, - "scale": 2000, # map scale - "bestEffort": True, - "maxPixels": 1e3, - } - ).getInfo() - - if index_config.get("bandname"): - return out[index_config.get("bandname")] - - logging.info(f"Calculated zonal mean for {index_key}.") - return out - - def generate_composite_index_df(self, year): - data = { - "metric": self.metric_name, - "year": year, - "centroid": "", - "project_name": "", - "value": list(map(self.zonal_mean_index, self.indices, repeat(year))), - # to-do: calculate with duckdb; also, should be part of project table instead - "area": self.roi.area().getInfo(), # m^2 - 
"geojson": "", - "coefficient": list(map(lambda x: self.indices[x]['coefficient'], self.indices)) - } - - logging.info("data", data) - df = pd.DataFrame(data) - return df - - @staticmethod - def _authenticate_ee(ee_service_account): - """ - Huggingface Spaces does not support secret files, therefore authenticate with an environment variable containing the JSON. - """ - logging.info("Authenticating to Google Earth Engine...") - credentials = ee.ServiceAccountCredentials( - ee_service_account, key_data=os.environ["ee_service_account"] - ) - ee.Initialize(credentials) - logging.info("Authenticated to Google Earth Engine.") - - def _calculate_yearly_index(self, years): - dfs = [] - logging.info(years) - - # to-do: pararelize? - for year in years: - logging.info(year) - df = self.generate_composite_index_df(year) - dfs.append(df) - - # Concatenate all dataframes - df_concat = pd.concat(dfs) - df_concat["centroid"] = str(self.project_centroid) - df_concat["project_name"] = self.project_name - df_concat["geojson"] = str(self.project_geometry) - return df_concat.round(2) - - # h/t: https://community.plotly.com/t/dynamic-zoom-for-mapbox/32658/12\ - @staticmethod - def _latlon_to_config(longitudes=None, latitudes=None): - """Function documentation:\n - Basic framework adopted from Krichardson under the following thread: - https://community.plotly.com/t/dynamic-zoom-for-mapbox/32658/7 - - # NOTE: - # THIS IS A TEMPORARY SOLUTION UNTIL THE DASH TEAM IMPLEMENTS DYNAMIC ZOOM - # in their plotly-functions associated with mapbox, such as go.Densitymapbox() etc. - - Returns the appropriate zoom-level for these plotly-mapbox-graphics along with - the center coordinate tuple of all provided coordinate tuples. - """ - - # Check whether both latitudes and longitudes have been passed, - # or if the list lenghts don't match - if (latitudes is None or longitudes is None) or ( - len(latitudes) != len(longitudes) - ): - # Otherwise, return the default values of 0 zoom and the coordinate origin as center point - return 0, (0, 0) - - # Get the boundary-box - b_box = {} - b_box["height"] = latitudes.max() - latitudes.min() - b_box["width"] = longitudes.max() - longitudes.min() - b_box["center"] = (np.mean(longitudes), np.mean(latitudes)) - - # get the area of the bounding box in order to calculate a zoom-level - area = b_box["height"] * b_box["width"] - - # * 1D-linear interpolation with numpy: - # - Pass the area as the only x-value and not as a list, in order to return a scalar as well - # - The x-points "xp" should be in parts in comparable order of magnitude of the given area - # - The zpom-levels are adapted to the areas, i.e. 
start with the smallest area possible of 0 - # which leads to the highest possible zoom value 20, and so forth decreasing with increasing areas - # as these variables are antiproportional - zoom = np.interp( - x=area, - xp=[0, 5**-10, 4**-10, 3**-10, 2**-10, 1**-10, 1**-5], - fp=[20, 15, 14, 13, 12, 7, 5], - ) - - # Finally, return the zoom level and the associated boundary-box center coordinates - return zoom, b_box["center"] - - def show_project_map(self): - features = json.loads(self.project_geometry[0][0].replace("'", '"'))["features"] - geometry = features[0]["geometry"] - longitudes = np.array(geometry["coordinates"])[0, :, 0] - latitudes = np.array(geometry["coordinates"])[0, :, 1] - zoom, bbox_center = self._latlon_to_config(longitudes, latitudes) - fig = go.Figure( - go.Scattermapbox( - mode="markers", - lon=[bbox_center[0]], - lat=[bbox_center[1]], - marker={"size": 20, "color": ["cyan"]}, - ) - ) - - fig.update_layout( - mapbox={ - "style": "satellite", - "accesstoken":os.environ['MAPBOX_ACCESS_TOKEN'], - "center": {"lon": bbox_center[0], "lat": bbox_center[1]}, - "zoom": zoom, - "layers": [ - { - "source": { - "type": "FeatureCollection", - "features": [{"type": "Feature", "geometry": geometry}], - }, - "type": "fill", - "below": "traces", - "color": "royalblue", - "opacity": 0.5, - } - ], - }, - margin={"l": 0, "r": 0, "b": 0, "t": 0}, - ) - - return fig - - def calculate_score(self, start_year, end_year): - years = [] - # Create `bioindicator` table IF NOT EXISTS. - dq.get_or_create_bioindicator_table() - for year in range(start_year, end_year+1): - row_exists = dq.check_if_project_exists_for_year(self.project_name, year) - if not row_exists: - years.append(year) - - if len(years) > 0: - df = self._calculate_yearly_index(years) - - # Write score table to `_temptable` - dq.write_score_to_temptable(df) - - # UPSERT project record - dq.upsert_project_record() - logging.info("upserted records into motherduck") - scores = dq.get_project_scores(self.project_name, start_year, end_year) - scores.columns = scores.columns.str.replace('_', ' ').str.title() - if 'Area' in scores.columns: - scores['Area'] /= 1000**2 - scores.rename(columns={'Area':'Area (km^2)'}, inplace=True) - if 'Score' in scores.columns: - scores['Score'] /= 1000**2 - scores.rename(columns={'Score': 'Score (Area * Value)'}, inplace=True) - # Round scores to 4 significant figures - scores = scores.apply( - lambda x: ['%.4g'%x_i for x_i in x] - if pd.api.types.is_numeric_dtype(x) - else x) - return scores - - def get_metric_file(self): - # Use defined subset of indices - indices_file = f'metrics/{self.metric_name.replace(" ", "_")}.yaml' - with open(indices_file, "r") as stream: - return stream.read() \ No newline at end of file diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/en/api/pipelines/alt_diffusion.md b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/en/api/pipelines/alt_diffusion.md deleted file mode 100644 index ed8db52f9a51198260c4f0d1927b29f7e3913f8a..0000000000000000000000000000000000000000 --- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/en/api/pipelines/alt_diffusion.md +++ /dev/null @@ -1,47 +0,0 @@ - - -# AltDiffusion - -AltDiffusion was proposed in [AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities](https://huggingface.co/papers/2211.06679) by Zhongzhi Chen, Guang Liu, Bo-Wen Zhang, Fulong Ye, Qinghong Yang, Ledell Wu. 
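        Because the pipeline mirrors the Stable Diffusion API (see the Tips below), a minimal text-to-image sketch, assuming the `BAAI/AltDiffusion-m9` checkpoint and an example prompt, might look like this:

        ```python
        import torch
        from diffusers import AltDiffusionPipeline

        # Load a multilingual AltDiffusion checkpoint (assumed here: BAAI/AltDiffusion-m9).
        pipe = AltDiffusionPipeline.from_pretrained(
            "BAAI/AltDiffusion-m9", torch_dtype=torch.float16
        ).to("cuda")

        # Prompts may be written in any of the checkpoint's supported languages.
        image = pipe("a photograph of an astronaut riding a horse").images[0]
        image.save("altdiffusion_example.png")
        ```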
- -The abstract from the paper is: - -*In this work, we present a conceptually simple and effective method to train a strong bilingual multimodal representation model. Starting from the pretrained multimodal representation model CLIP released by OpenAI, we switched its text encoder with a pretrained multilingual text encoder XLM-R, and aligned both languages and image representations by a two-stage training schema consisting of teacher learning and contrastive learning. We validate our method through evaluations of a wide range of tasks. We set new state-of-the-art performances on a bunch of tasks including ImageNet-CN, Flicker30k- CN, and COCO-CN. Further, we obtain very close performances with CLIP on almost all tasks, suggesting that one can simply alter the text encoder in CLIP for extended capabilities such as multilingual understanding.* - -## Tips - -`AltDiffusion` is conceptually the same as [Stable Diffusion](./stable_diffusion/overview). - - - -Make sure to check out the Schedulers [guide](/using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](/using-diffusers/loading#reuse-components-across-pipelines) section to learn how to efficiently load the same components into multiple pipelines. - - - -## AltDiffusionPipeline - -[[autodoc]] AltDiffusionPipeline - - all - - __call__ - -## AltDiffusionImg2ImgPipeline - -[[autodoc]] AltDiffusionImg2ImgPipeline - - all - - __call__ - -## AltDiffusionPipelineOutput - -[[autodoc]] pipelines.alt_diffusion.AltDiffusionPipelineOutput - - all - - __call__ \ No newline at end of file diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/examples/custom_diffusion/README.md b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/examples/custom_diffusion/README.md deleted file mode 100644 index 9e3c387e3d342c270fa72b22643ba7bd7548095e..0000000000000000000000000000000000000000 --- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/examples/custom_diffusion/README.md +++ /dev/null @@ -1,280 +0,0 @@ -# Custom Diffusion training example - -[Custom Diffusion](https://arxiv.org/abs/2212.04488) is a method to customize text-to-image models like Stable Diffusion given just a few (4~5) images of a subject. -The `train_custom_diffusion.py` script shows how to implement the training procedure and adapt it for stable diffusion. - -## Running locally with PyTorch - -### Installing the dependencies - -Before running the scripts, make sure to install the library's training dependencies: - -**Important** - -To make sure you can successfully run the latest versions of the example scripts, we highly recommend **installing from source** and keeping the install up to date as we update the example scripts frequently and install some example-specific requirements. To do this, execute the following steps in a new virtual environment: - -```bash -git clone https://github.com/huggingface/diffusers -cd diffusers -pip install -e . -``` - -Then cd in the example folder and run - -```bash -pip install -r requirements.txt -pip install clip-retrieval -``` - -And initialize an [🤗Accelerate](https://github.com/huggingface/accelerate/) environment with: - -```bash -accelerate config -``` - -Or for a default accelerate configuration without answering questions about your environment - -```bash -accelerate config default -``` - -Or if your environment doesn't support an interactive shell e.g. 
a notebook - -```python -from accelerate.utils import write_basic_config -write_basic_config() -``` -### Cat example 😺 - -Now let's get our dataset. Download dataset from [here](https://www.cs.cmu.edu/~custom-diffusion/assets/data.zip) and unzip it. - -We also collect 200 real images using `clip-retrieval` which are combined with the target images in the training dataset as a regularization. This prevents overfitting to the the given target image. The following flags enable the regularization `with_prior_preservation`, `real_prior` with `prior_loss_weight=1.`. -The `class_prompt` should be the category name same as target image. The collected real images are with text captions similar to the `class_prompt`. The retrieved image are saved in `class_data_dir`. You can disable `real_prior` to use generated images as regularization. To collect the real images use this command first before training. - -```bash -pip install clip-retrieval -python retrieve.py --class_prompt cat --class_data_dir real_reg/samples_cat --num_class_images 200 -``` - -**___Note: Change the `resolution` to 768 if you are using the [stable-diffusion-2](https://huggingface.co/stabilityai/stable-diffusion-2) 768x768 model.___** - -```bash -export MODEL_NAME="CompVis/stable-diffusion-v1-4" -export OUTPUT_DIR="path-to-save-model" -export INSTANCE_DIR="./data/cat" - -accelerate launch train_custom_diffusion.py \ - --pretrained_model_name_or_path=$MODEL_NAME \ - --instance_data_dir=$INSTANCE_DIR \ - --output_dir=$OUTPUT_DIR \ - --class_data_dir=./real_reg/samples_cat/ \ - --with_prior_preservation --real_prior --prior_loss_weight=1.0 \ - --class_prompt="cat" --num_class_images=200 \ - --instance_prompt="photo of a cat" \ - --resolution=512 \ - --train_batch_size=2 \ - --learning_rate=1e-5 \ - --lr_warmup_steps=0 \ - --max_train_steps=250 \ - --scale_lr --hflip \ - --modifier_token "" -``` - -**Use `--enable_xformers_memory_efficient_attention` for faster training with lower VRAM requirement (16GB per GPU). Follow [this guide](https://github.com/facebookresearch/xformers) for installation instructions.** - -To track your experiments using Weights and Biases (`wandb`) and to save intermediate results (whcih we HIGHLY recommend), follow these steps: - -* Install `wandb`: `pip install wandb`. -* Authorize: `wandb login`. -* Then specify a `validation_prompt` and set `report_to` to `wandb` while launching training. You can also configure the following related arguments: - * `num_validation_images` - * `validation_steps` - -Here is an example command: - -```bash -accelerate launch train_custom_diffusion.py \ - --pretrained_model_name_or_path=$MODEL_NAME \ - --instance_data_dir=$INSTANCE_DIR \ - --output_dir=$OUTPUT_DIR \ - --class_data_dir=./real_reg/samples_cat/ \ - --with_prior_preservation --real_prior --prior_loss_weight=1.0 \ - --class_prompt="cat" --num_class_images=200 \ - --instance_prompt="photo of a cat" \ - --resolution=512 \ - --train_batch_size=2 \ - --learning_rate=1e-5 \ - --lr_warmup_steps=0 \ - --max_train_steps=250 \ - --scale_lr --hflip \ - --modifier_token "" \ - --validation_prompt=" cat sitting in a bucket" \ - --report_to="wandb" -``` - -Here is an example [Weights and Biases page](https://wandb.ai/sayakpaul/custom-diffusion/runs/26ghrcau) where you can check out the intermediate results along with other training details. - -If you specify `--push_to_hub`, the learned parameters will be pushed to a repository on the Hugging Face Hub. 
Here is an [example repository](https://huggingface.co/sayakpaul/custom-diffusion-cat). - -### Training on multiple concepts 🐱🪵 - -Provide a [json](https://github.com/adobe-research/custom-diffusion/blob/main/assets/concept_list.json) file with the info about each concept, similar to [this](https://github.com/ShivamShrirao/diffusers/blob/main/examples/dreambooth/train_dreambooth.py). - -To collect the real images run this command for each concept in the json file. - -```bash -pip install clip-retrieval -python retrieve.py --class_prompt {} --class_data_dir {} --num_class_images 200 -``` - -And then we're ready to start training! - -```bash -export MODEL_NAME="CompVis/stable-diffusion-v1-4" -export OUTPUT_DIR="path-to-save-model" - -accelerate launch train_custom_diffusion.py \ - --pretrained_model_name_or_path=$MODEL_NAME \ - --output_dir=$OUTPUT_DIR \ - --concepts_list=./concept_list.json \ - --with_prior_preservation --real_prior --prior_loss_weight=1.0 \ - --resolution=512 \ - --train_batch_size=2 \ - --learning_rate=1e-5 \ - --lr_warmup_steps=0 \ - --max_train_steps=500 \ - --num_class_images=200 \ - --scale_lr --hflip \ - --modifier_token "+" -``` - -Here is an example [Weights and Biases page](https://wandb.ai/sayakpaul/custom-diffusion/runs/3990tzkg) where you can check out the intermediate results along with other training details. - -### Training on human faces - -For fine-tuning on human faces we found the following configuration to work better: `learning_rate=5e-6`, `max_train_steps=1000 to 2000`, and `freeze_model=crossattn` with at least 15-20 images. - -To collect the real images use this command first before training. - -```bash -pip install clip-retrieval -python retrieve.py --class_prompt person --class_data_dir real_reg/samples_person --num_class_images 200 -``` - -Then start training! - -```bash -export MODEL_NAME="CompVis/stable-diffusion-v1-4" -export OUTPUT_DIR="path-to-save-model" -export INSTANCE_DIR="path-to-images" - -accelerate launch train_custom_diffusion.py \ - --pretrained_model_name_or_path=$MODEL_NAME \ - --instance_data_dir=$INSTANCE_DIR \ - --output_dir=$OUTPUT_DIR \ - --class_data_dir=./real_reg/samples_person/ \ - --with_prior_preservation --real_prior --prior_loss_weight=1.0 \ - --class_prompt="person" --num_class_images=200 \ - --instance_prompt="photo of a person" \ - --resolution=512 \ - --train_batch_size=2 \ - --learning_rate=5e-6 \ - --lr_warmup_steps=0 \ - --max_train_steps=1000 \ - --scale_lr --hflip --noaug \ - --freeze_model crossattn \ - --modifier_token "" \ - --enable_xformers_memory_efficient_attention -``` - -## Inference - -Once you have trained a model using the above command, you can run inference using the below command. Make sure to include the `modifier token` (e.g. \ in above example) in your prompt. 
- -```python -import torch -from diffusers import DiffusionPipeline - -pipe = DiffusionPipeline.from_pretrained( - "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16 -).to("cuda") -pipe.unet.load_attn_procs( - "path-to-save-model", weight_name="pytorch_custom_diffusion_weights.bin" -) -pipe.load_textual_inversion("path-to-save-model", weight_name=".bin") - -image = pipe( - " cat sitting in a bucket", - num_inference_steps=100, - guidance_scale=6.0, - eta=1.0, -).images[0] -image.save("cat.png") -``` - -It's possible to directly load these parameters from a Hub repository: - -```python -import torch -from huggingface_hub.repocard import RepoCard -from diffusers import DiffusionPipeline - -model_id = "sayakpaul/custom-diffusion-cat" -card = RepoCard.load(model_id) -base_model_id = card.data.to_dict()["base_model"] - -pipe = DiffusionPipeline.from_pretrained(base_model_id, torch_dtype=torch.float16).to( -"cuda") -pipe.unet.load_attn_procs(model_id, weight_name="pytorch_custom_diffusion_weights.bin") -pipe.load_textual_inversion(model_id, weight_name=".bin") - -image = pipe( - " cat sitting in a bucket", - num_inference_steps=100, - guidance_scale=6.0, - eta=1.0, -).images[0] -image.save("cat.png") -``` - -Here is an example of performing inference with multiple concepts: - -```python -import torch -from huggingface_hub.repocard import RepoCard -from diffusers import DiffusionPipeline - -model_id = "sayakpaul/custom-diffusion-cat-wooden-pot" -card = RepoCard.load(model_id) -base_model_id = card.data.to_dict()["base_model"] - -pipe = DiffusionPipeline.from_pretrained(base_model_id, torch_dtype=torch.float16).to( -"cuda") -pipe.unet.load_attn_procs(model_id, weight_name="pytorch_custom_diffusion_weights.bin") -pipe.load_textual_inversion(model_id, weight_name=".bin") -pipe.load_textual_inversion(model_id, weight_name=".bin") - -image = pipe( - "the cat sculpture in the style of a wooden pot", - num_inference_steps=100, - guidance_scale=6.0, - eta=1.0, -).images[0] -image.save("multi-subject.png") -``` - -Here, `cat` and `wooden pot` refer to the multiple concepts. - -### Inference from a training checkpoint - -You can also perform inference from one of the complete checkpoint saved during the training process, if you used the `--checkpointing_steps` argument. - -TODO. - -## Set grads to none -To save even more memory, pass the `--set_grads_to_none` argument to the script. This will set grads to None instead of zero. However, be aware that it changes certain behaviors, so if you start experiencing any problems, remove this argument. - -More info: https://pytorch.org/docs/stable/generated/torch.optim.Optimizer.zero_grad.html - -## Experimental results -You can refer to [our webpage](https://www.cs.cmu.edu/~custom-diffusion/) that discusses our experiments in detail. We also released a more extensive dataset of 101 concepts for evaluating model customization methods. For more details please refer to our [dataset webpage](https://www.cs.cmu.edu/~custom-diffusion/dataset.html). 
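        As a footnote to the `--set_grads_to_none` flag described above: the memory saving comes from PyTorch's standard optimizer option, not from anything Custom Diffusion specific. A minimal sketch of the underlying call, assuming a toy model and optimizer rather than the training script's exact code path:

        ```python
        import torch

        model = torch.nn.Linear(8, 8)
        optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

        loss = model(torch.randn(4, 8)).sum()
        loss.backward()
        optimizer.step()

        # `--set_grads_to_none` corresponds to clearing gradients like this:
        # grads become None instead of zero-filled tensors, which saves memory.
        optimizer.zero_grad(set_to_none=True)
        ```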
\ No newline at end of file diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/examples/research_projects/lora/train_text_to_image_lora.py b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/examples/research_projects/lora/train_text_to_image_lora.py deleted file mode 100644 index d69284042af4dd1552378d7293221afe2ec05788..0000000000000000000000000000000000000000 --- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/examples/research_projects/lora/train_text_to_image_lora.py +++ /dev/null @@ -1,1014 +0,0 @@ -# coding=utf-8 -# Copyright 2023 The HuggingFace Inc. team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -"""Fine-tuning script for Stable Diffusion for text2image with support for LoRA.""" - -import argparse -import itertools -import json -import logging -import math -import os -import random -from pathlib import Path - -import datasets -import numpy as np -import torch -import torch.nn.functional as F -import torch.utils.checkpoint -import transformers -from accelerate import Accelerator -from accelerate.logging import get_logger -from accelerate.utils import ProjectConfiguration, set_seed -from datasets import load_dataset -from huggingface_hub import create_repo, upload_folder -from packaging import version -from torchvision import transforms -from tqdm.auto import tqdm -from transformers import CLIPTextModel, CLIPTokenizer - -import diffusers -from diffusers import AutoencoderKL, DDPMScheduler, DiffusionPipeline, UNet2DConditionModel -from diffusers.loaders import AttnProcsLayers -from diffusers.models.attention_processor import LoRAAttnProcessor -from diffusers.optimization import get_scheduler -from diffusers.utils import check_min_version, is_wandb_available -from diffusers.utils.import_utils import is_xformers_available - - -# Will error if the minimal version of diffusers is not installed. Remove at your own risks. -check_min_version("0.14.0.dev0") - -logger = get_logger(__name__, log_level="INFO") - - -def save_model_card(repo_id: str, images=None, base_model=str, dataset_name=str, repo_folder=None): - img_str = "" - for i, image in enumerate(images): - image.save(os.path.join(repo_folder, f"image_{i}.png")) - img_str += f"![img_{i}](./image_{i}.png)\n" - - yaml = f""" ---- -license: creativeml-openrail-m -base_model: {base_model} -tags: -- stable-diffusion -- stable-diffusion-diffusers -- text-to-image -- diffusers -- lora -inference: true ---- - """ - model_card = f""" -# LoRA text2image fine-tuning - {repo_id} -These are LoRA adaption weights for {base_model}. The weights were fine-tuned on the {dataset_name} dataset. You can find some example images in the following. 
\n -{img_str} -""" - with open(os.path.join(repo_folder, "README.md"), "w") as f: - f.write(yaml + model_card) - - -def parse_args(): - parser = argparse.ArgumentParser(description="Simple example of a training script.") - parser.add_argument( - "--pretrained_model_name_or_path", - type=str, - default=None, - required=True, - help="Path to pretrained model or model identifier from huggingface.co/models.", - ) - parser.add_argument( - "--revision", - type=str, - default=None, - required=False, - help="Revision of pretrained model identifier from huggingface.co/models.", - ) - parser.add_argument( - "--dataset_name", - type=str, - default=None, - help=( - "The name of the Dataset (from the HuggingFace hub) to train on (could be your own, possibly private," - " dataset). It can also be a path pointing to a local copy of a dataset in your filesystem," - " or to a folder containing files that 🤗 Datasets can understand." - ), - ) - parser.add_argument( - "--dataset_config_name", - type=str, - default=None, - help="The config of the Dataset, leave as None if there's only one config.", - ) - parser.add_argument( - "--train_data_dir", - type=str, - default=None, - help=( - "A folder containing the training data. Folder contents must follow the structure described in" - " https://huggingface.co/docs/datasets/image_dataset#imagefolder. In particular, a `metadata.jsonl` file" - " must exist to provide the captions for the images. Ignored if `dataset_name` is specified." - ), - ) - parser.add_argument( - "--image_column", type=str, default="image", help="The column of the dataset containing an image." - ) - parser.add_argument( - "--caption_column", - type=str, - default="text", - help="The column of the dataset containing a caption or a list of captions.", - ) - parser.add_argument( - "--validation_prompt", type=str, default=None, help="A prompt that is sampled during training for inference." - ) - parser.add_argument( - "--num_validation_images", - type=int, - default=4, - help="Number of images that should be generated during validation with `validation_prompt`.", - ) - parser.add_argument( - "--validation_epochs", - type=int, - default=1, - help=( - "Run fine-tuning validation every X epochs. The validation process consists of running the prompt" - " `args.validation_prompt` multiple times: `args.num_validation_images`." - ), - ) - parser.add_argument( - "--max_train_samples", - type=int, - default=None, - help=( - "For debugging purposes or quicker training, truncate the number of training examples to this " - "value if set." - ), - ) - parser.add_argument( - "--output_dir", - type=str, - default="sd-model-finetuned-lora", - help="The output directory where the model predictions and checkpoints will be written.", - ) - parser.add_argument( - "--cache_dir", - type=str, - default=None, - help="The directory where the downloaded models and datasets will be stored.", - ) - parser.add_argument("--seed", type=int, default=None, help="A seed for reproducible training.") - parser.add_argument( - "--resolution", - type=int, - default=512, - help=( - "The resolution for input images, all the images in the train/validation dataset will be resized to this" - " resolution" - ), - ) - parser.add_argument( - "--center_crop", - default=False, - action="store_true", - help=( - "Whether to center crop the input images to the resolution. If not set, the images will be randomly" - " cropped. The images will be resized to the resolution first before cropping." 
- ), - ) - parser.add_argument( - "--random_flip", - action="store_true", - help="whether to randomly flip images horizontally", - ) - parser.add_argument("--train_text_encoder", action="store_true", help="Whether to train the text encoder") - - # lora args - parser.add_argument("--use_peft", action="store_true", help="Whether to use peft to support lora") - parser.add_argument("--lora_r", type=int, default=4, help="Lora rank, only used if use_lora is True") - parser.add_argument("--lora_alpha", type=int, default=32, help="Lora alpha, only used if lora is True") - parser.add_argument("--lora_dropout", type=float, default=0.0, help="Lora dropout, only used if use_lora is True") - parser.add_argument( - "--lora_bias", - type=str, - default="none", - help="Bias type for Lora. Can be 'none', 'all' or 'lora_only', only used if use_lora is True", - ) - parser.add_argument( - "--lora_text_encoder_r", - type=int, - default=4, - help="Lora rank for text encoder, only used if `use_lora` and `train_text_encoder` are True", - ) - parser.add_argument( - "--lora_text_encoder_alpha", - type=int, - default=32, - help="Lora alpha for text encoder, only used if `use_lora` and `train_text_encoder` are True", - ) - parser.add_argument( - "--lora_text_encoder_dropout", - type=float, - default=0.0, - help="Lora dropout for text encoder, only used if `use_lora` and `train_text_encoder` are True", - ) - parser.add_argument( - "--lora_text_encoder_bias", - type=str, - default="none", - help="Bias type for Lora. Can be 'none', 'all' or 'lora_only', only used if use_lora and `train_text_encoder` are True", - ) - - parser.add_argument( - "--train_batch_size", type=int, default=16, help="Batch size (per device) for the training dataloader." - ) - parser.add_argument("--num_train_epochs", type=int, default=100) - parser.add_argument( - "--max_train_steps", - type=int, - default=None, - help="Total number of training steps to perform. If provided, overrides num_train_epochs.", - ) - parser.add_argument( - "--gradient_accumulation_steps", - type=int, - default=1, - help="Number of updates steps to accumulate before performing a backward/update pass.", - ) - parser.add_argument( - "--gradient_checkpointing", - action="store_true", - help="Whether or not to use gradient checkpointing to save memory at the expense of slower backward pass.", - ) - parser.add_argument( - "--learning_rate", - type=float, - default=1e-4, - help="Initial learning rate (after the potential warmup period) to use.", - ) - parser.add_argument( - "--scale_lr", - action="store_true", - default=False, - help="Scale the learning rate by the number of GPUs, gradient accumulation steps, and batch size.", - ) - parser.add_argument( - "--lr_scheduler", - type=str, - default="constant", - help=( - 'The scheduler type to use. Choose between ["linear", "cosine", "cosine_with_restarts", "polynomial",' - ' "constant", "constant_with_warmup"]' - ), - ) - parser.add_argument( - "--lr_warmup_steps", type=int, default=500, help="Number of steps for the warmup in the lr scheduler." - ) - parser.add_argument( - "--use_8bit_adam", action="store_true", help="Whether or not to use 8-bit Adam from bitsandbytes." - ) - parser.add_argument( - "--allow_tf32", - action="store_true", - help=( - "Whether or not to allow TF32 on Ampere GPUs. Can be used to speed up training. 
For more information, see" - " https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices" - ), - ) - parser.add_argument( - "--dataloader_num_workers", - type=int, - default=0, - help=( - "Number of subprocesses to use for data loading. 0 means that the data will be loaded in the main process." - ), - ) - parser.add_argument("--adam_beta1", type=float, default=0.9, help="The beta1 parameter for the Adam optimizer.") - parser.add_argument("--adam_beta2", type=float, default=0.999, help="The beta2 parameter for the Adam optimizer.") - parser.add_argument("--adam_weight_decay", type=float, default=1e-2, help="Weight decay to use.") - parser.add_argument("--adam_epsilon", type=float, default=1e-08, help="Epsilon value for the Adam optimizer") - parser.add_argument("--max_grad_norm", default=1.0, type=float, help="Max gradient norm.") - parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.") - parser.add_argument("--hub_token", type=str, default=None, help="The token to use to push to the Model Hub.") - parser.add_argument( - "--hub_model_id", - type=str, - default=None, - help="The name of the repository to keep in sync with the local `output_dir`.", - ) - parser.add_argument( - "--logging_dir", - type=str, - default="logs", - help=( - "[TensorBoard](https://www.tensorflow.org/tensorboard) log directory. Will default to" - " *output_dir/runs/**CURRENT_DATETIME_HOSTNAME***." - ), - ) - parser.add_argument( - "--mixed_precision", - type=str, - default=None, - choices=["no", "fp16", "bf16"], - help=( - "Whether to use mixed precision. Choose between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >=" - " 1.10.and an Nvidia Ampere GPU. Default to the value of accelerate config of the current system or the" - " flag passed with the `accelerate.launch` command. Use this argument to override the accelerate config." - ), - ) - parser.add_argument( - "--report_to", - type=str, - default="tensorboard", - help=( - 'The integration to report the results and logs to. Supported platforms are `"tensorboard"`' - ' (default), `"wandb"` and `"comet_ml"`. Use `"all"` to report to all integrations.' - ), - ) - parser.add_argument("--local_rank", type=int, default=-1, help="For distributed training: local_rank") - parser.add_argument( - "--checkpointing_steps", - type=int, - default=500, - help=( - "Save a checkpoint of the training state every X updates. These checkpoints are only suitable for resuming" - " training using `--resume_from_checkpoint`." - ), - ) - parser.add_argument( - "--checkpoints_total_limit", - type=int, - default=None, - help=( - "Max number of checkpoints to store. Passed as `total_limit` to the `Accelerator` `ProjectConfiguration`." - " See Accelerator::save_state https://huggingface.co/docs/accelerate/package_reference/accelerator#accelerate.Accelerator.save_state" - " for more docs" - ), - ) - parser.add_argument( - "--resume_from_checkpoint", - type=str, - default=None, - help=( - "Whether training should be resumed from a previous checkpoint. Use a path saved by" - ' `--checkpointing_steps`, or `"latest"` to automatically select the last available checkpoint.' - ), - ) - parser.add_argument( - "--enable_xformers_memory_efficient_attention", action="store_true", help="Whether or not to use xformers." 
- ) - - args = parser.parse_args() - env_local_rank = int(os.environ.get("LOCAL_RANK", -1)) - if env_local_rank != -1 and env_local_rank != args.local_rank: - args.local_rank = env_local_rank - - # Sanity checks - if args.dataset_name is None and args.train_data_dir is None: - raise ValueError("Need either a dataset name or a training folder.") - - return args - - -DATASET_NAME_MAPPING = { - "lambdalabs/pokemon-blip-captions": ("image", "text"), -} - - -def main(): - args = parse_args() - logging_dir = os.path.join(args.output_dir, args.logging_dir) - - accelerator_project_config = ProjectConfiguration( - total_limit=args.checkpoints_total_limit, project_dir=args.output_dir, logging_dir=logging_dir - ) - - accelerator = Accelerator( - gradient_accumulation_steps=args.gradient_accumulation_steps, - mixed_precision=args.mixed_precision, - log_with=args.report_to, - project_config=accelerator_project_config, - ) - if args.report_to == "wandb": - if not is_wandb_available(): - raise ImportError("Make sure to install wandb if you want to use it for logging during training.") - import wandb - - # Make one log on every process with the configuration for debugging. - logging.basicConfig( - format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", - datefmt="%m/%d/%Y %H:%M:%S", - level=logging.INFO, - ) - logger.info(accelerator.state, main_process_only=False) - if accelerator.is_local_main_process: - datasets.utils.logging.set_verbosity_warning() - transformers.utils.logging.set_verbosity_warning() - diffusers.utils.logging.set_verbosity_info() - else: - datasets.utils.logging.set_verbosity_error() - transformers.utils.logging.set_verbosity_error() - diffusers.utils.logging.set_verbosity_error() - - # If passed along, set the training seed now. - if args.seed is not None: - set_seed(args.seed) - - # Handle the repository creation - if accelerator.is_main_process: - if args.output_dir is not None: - os.makedirs(args.output_dir, exist_ok=True) - - if args.push_to_hub: - repo_id = create_repo( - repo_id=args.hub_model_id or Path(args.output_dir).name, exist_ok=True, token=args.hub_token - ).repo_id - - # Load scheduler, tokenizer and models. - noise_scheduler = DDPMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler") - tokenizer = CLIPTokenizer.from_pretrained( - args.pretrained_model_name_or_path, subfolder="tokenizer", revision=args.revision - ) - text_encoder = CLIPTextModel.from_pretrained( - args.pretrained_model_name_or_path, subfolder="text_encoder", revision=args.revision - ) - vae = AutoencoderKL.from_pretrained(args.pretrained_model_name_or_path, subfolder="vae", revision=args.revision) - unet = UNet2DConditionModel.from_pretrained( - args.pretrained_model_name_or_path, subfolder="unet", revision=args.revision - ) - - # For mixed precision training we cast the text_encoder and vae weights to half-precision - # as these models are only used for inference, keeping weights in full precision is not required. 
- weight_dtype = torch.float32 - if accelerator.mixed_precision == "fp16": - weight_dtype = torch.float16 - elif accelerator.mixed_precision == "bf16": - weight_dtype = torch.bfloat16 - - if args.use_peft: - from peft import LoraConfig, LoraModel, get_peft_model_state_dict, set_peft_model_state_dict - - UNET_TARGET_MODULES = ["to_q", "to_v", "query", "value"] - TEXT_ENCODER_TARGET_MODULES = ["q_proj", "v_proj"] - - config = LoraConfig( - r=args.lora_r, - lora_alpha=args.lora_alpha, - target_modules=UNET_TARGET_MODULES, - lora_dropout=args.lora_dropout, - bias=args.lora_bias, - ) - unet = LoraModel(config, unet) - - vae.requires_grad_(False) - if args.train_text_encoder: - config = LoraConfig( - r=args.lora_text_encoder_r, - lora_alpha=args.lora_text_encoder_alpha, - target_modules=TEXT_ENCODER_TARGET_MODULES, - lora_dropout=args.lora_text_encoder_dropout, - bias=args.lora_text_encoder_bias, - ) - text_encoder = LoraModel(config, text_encoder) - else: - # freeze parameters of models to save more memory - unet.requires_grad_(False) - vae.requires_grad_(False) - - text_encoder.requires_grad_(False) - - # now we will add new LoRA weights to the attention layers - # It's important to realize here how many attention weights will be added and of which sizes - # The sizes of the attention layers consist only of two different variables: - # 1) - the "hidden_size", which is increased according to `unet.config.block_out_channels`. - # 2) - the "cross attention size", which is set to `unet.config.cross_attention_dim`. - - # Let's first see how many attention processors we will have to set. - # For Stable Diffusion, it should be equal to: - # - down blocks (2x attention layers) * (2x transformer layers) * (3x down blocks) = 12 - # - mid blocks (2x attention layers) * (1x transformer layers) * (1x mid blocks) = 2 - # - up blocks (2x attention layers) * (3x transformer layers) * (3x down blocks) = 18 - # => 32 layers - - # Set correct lora layers - lora_attn_procs = {} - for name in unet.attn_processors.keys(): - cross_attention_dim = None if name.endswith("attn1.processor") else unet.config.cross_attention_dim - if name.startswith("mid_block"): - hidden_size = unet.config.block_out_channels[-1] - elif name.startswith("up_blocks"): - block_id = int(name[len("up_blocks.")]) - hidden_size = list(reversed(unet.config.block_out_channels))[block_id] - elif name.startswith("down_blocks"): - block_id = int(name[len("down_blocks.")]) - hidden_size = unet.config.block_out_channels[block_id] - - lora_attn_procs[name] = LoRAAttnProcessor(hidden_size=hidden_size, cross_attention_dim=cross_attention_dim) - - unet.set_attn_processor(lora_attn_procs) - lora_layers = AttnProcsLayers(unet.attn_processors) - - # Move unet, vae and text_encoder to device and cast to weight_dtype - vae.to(accelerator.device, dtype=weight_dtype) - if not args.train_text_encoder: - text_encoder.to(accelerator.device, dtype=weight_dtype) - - if args.enable_xformers_memory_efficient_attention: - if is_xformers_available(): - import xformers - - xformers_version = version.parse(xformers.__version__) - if xformers_version == version.parse("0.0.16"): - logger.warn( - "xFormers 0.0.16 cannot be used for training in some GPUs. If you observe problems during training, please update xFormers to at least 0.0.17. See https://huggingface.co/docs/diffusers/main/en/optimization/xformers for more details." - ) - unet.enable_xformers_memory_efficient_attention() - else: - raise ValueError("xformers is not available. 
Make sure it is installed correctly") - - # Enable TF32 for faster training on Ampere GPUs, - # cf https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices - if args.allow_tf32: - torch.backends.cuda.matmul.allow_tf32 = True - - if args.scale_lr: - args.learning_rate = ( - args.learning_rate * args.gradient_accumulation_steps * args.train_batch_size * accelerator.num_processes - ) - - # Initialize the optimizer - if args.use_8bit_adam: - try: - import bitsandbytes as bnb - except ImportError: - raise ImportError( - "Please install bitsandbytes to use 8-bit Adam. You can do so by running `pip install bitsandbytes`" - ) - - optimizer_cls = bnb.optim.AdamW8bit - else: - optimizer_cls = torch.optim.AdamW - - if args.use_peft: - # Optimizer creation - params_to_optimize = ( - itertools.chain(unet.parameters(), text_encoder.parameters()) - if args.train_text_encoder - else unet.parameters() - ) - optimizer = optimizer_cls( - params_to_optimize, - lr=args.learning_rate, - betas=(args.adam_beta1, args.adam_beta2), - weight_decay=args.adam_weight_decay, - eps=args.adam_epsilon, - ) - else: - optimizer = optimizer_cls( - lora_layers.parameters(), - lr=args.learning_rate, - betas=(args.adam_beta1, args.adam_beta2), - weight_decay=args.adam_weight_decay, - eps=args.adam_epsilon, - ) - - # Get the datasets: you can either provide your own training and evaluation files (see below) - # or specify a Dataset from the hub (the dataset will be downloaded automatically from the datasets Hub). - - # In distributed training, the load_dataset function guarantees that only one local process can concurrently - # download the dataset. - if args.dataset_name is not None: - # Downloading and loading a dataset from the hub. - dataset = load_dataset( - args.dataset_name, - args.dataset_config_name, - cache_dir=args.cache_dir, - ) - else: - data_files = {} - if args.train_data_dir is not None: - data_files["train"] = os.path.join(args.train_data_dir, "**") - dataset = load_dataset( - "imagefolder", - data_files=data_files, - cache_dir=args.cache_dir, - ) - # See more about loading custom images at - # https://huggingface.co/docs/datasets/v2.4.0/en/image_load#imagefolder - - # Preprocessing the datasets. - # We need to tokenize inputs and targets. - column_names = dataset["train"].column_names - - # 6. Get the column names for input/target. - dataset_columns = DATASET_NAME_MAPPING.get(args.dataset_name, None) - if args.image_column is None: - image_column = dataset_columns[0] if dataset_columns is not None else column_names[0] - else: - image_column = args.image_column - if image_column not in column_names: - raise ValueError( - f"--image_column' value '{args.image_column}' needs to be one of: {', '.join(column_names)}" - ) - if args.caption_column is None: - caption_column = dataset_columns[1] if dataset_columns is not None else column_names[1] - else: - caption_column = args.caption_column - if caption_column not in column_names: - raise ValueError( - f"--caption_column' value '{args.caption_column}' needs to be one of: {', '.join(column_names)}" - ) - - # Preprocessing the datasets. - # We need to tokenize input captions and transform the images. 
- def tokenize_captions(examples, is_train=True): - captions = [] - for caption in examples[caption_column]: - if isinstance(caption, str): - captions.append(caption) - elif isinstance(caption, (list, np.ndarray)): - # take a random caption if there are multiple - captions.append(random.choice(caption) if is_train else caption[0]) - else: - raise ValueError( - f"Caption column `{caption_column}` should contain either strings or lists of strings." - ) - inputs = tokenizer( - captions, max_length=tokenizer.model_max_length, padding="max_length", truncation=True, return_tensors="pt" - ) - return inputs.input_ids - - # Preprocessing the datasets. - train_transforms = transforms.Compose( - [ - transforms.Resize(args.resolution, interpolation=transforms.InterpolationMode.BILINEAR), - transforms.CenterCrop(args.resolution) if args.center_crop else transforms.RandomCrop(args.resolution), - transforms.RandomHorizontalFlip() if args.random_flip else transforms.Lambda(lambda x: x), - transforms.ToTensor(), - transforms.Normalize([0.5], [0.5]), - ] - ) - - def preprocess_train(examples): - images = [image.convert("RGB") for image in examples[image_column]] - examples["pixel_values"] = [train_transforms(image) for image in images] - examples["input_ids"] = tokenize_captions(examples) - return examples - - with accelerator.main_process_first(): - if args.max_train_samples is not None: - dataset["train"] = dataset["train"].shuffle(seed=args.seed).select(range(args.max_train_samples)) - # Set the training transforms - train_dataset = dataset["train"].with_transform(preprocess_train) - - def collate_fn(examples): - pixel_values = torch.stack([example["pixel_values"] for example in examples]) - pixel_values = pixel_values.to(memory_format=torch.contiguous_format).float() - input_ids = torch.stack([example["input_ids"] for example in examples]) - return {"pixel_values": pixel_values, "input_ids": input_ids} - - # DataLoaders creation: - train_dataloader = torch.utils.data.DataLoader( - train_dataset, - shuffle=True, - collate_fn=collate_fn, - batch_size=args.train_batch_size, - num_workers=args.dataloader_num_workers, - ) - - # Scheduler and math around the number of training steps. - overrode_max_train_steps = False - num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) - if args.max_train_steps is None: - args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch - overrode_max_train_steps = True - - lr_scheduler = get_scheduler( - args.lr_scheduler, - optimizer=optimizer, - num_warmup_steps=args.lr_warmup_steps * accelerator.num_processes, - num_training_steps=args.max_train_steps * accelerator.num_processes, - ) - - # Prepare everything with our `accelerator`. - if args.use_peft: - if args.train_text_encoder: - unet, text_encoder, optimizer, train_dataloader, lr_scheduler = accelerator.prepare( - unet, text_encoder, optimizer, train_dataloader, lr_scheduler - ) - else: - unet, optimizer, train_dataloader, lr_scheduler = accelerator.prepare( - unet, optimizer, train_dataloader, lr_scheduler - ) - else: - lora_layers, optimizer, train_dataloader, lr_scheduler = accelerator.prepare( - lora_layers, optimizer, train_dataloader, lr_scheduler - ) - - # We need to recalculate our total training steps as the size of the training dataloader may have changed. 
- num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) - if overrode_max_train_steps: - args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch - # Afterwards we recalculate our number of training epochs - args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch) - - # We need to initialize the trackers we use, and also store our configuration. - # The trackers initializes automatically on the main process. - if accelerator.is_main_process: - accelerator.init_trackers("text2image-fine-tune", config=vars(args)) - - # Train! - total_batch_size = args.train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps - - logger.info("***** Running training *****") - logger.info(f" Num examples = {len(train_dataset)}") - logger.info(f" Num Epochs = {args.num_train_epochs}") - logger.info(f" Instantaneous batch size per device = {args.train_batch_size}") - logger.info(f" Total train batch size (w. parallel, distributed & accumulation) = {total_batch_size}") - logger.info(f" Gradient Accumulation steps = {args.gradient_accumulation_steps}") - logger.info(f" Total optimization steps = {args.max_train_steps}") - global_step = 0 - first_epoch = 0 - - # Potentially load in the weights and states from a previous save - if args.resume_from_checkpoint: - if args.resume_from_checkpoint != "latest": - path = os.path.basename(args.resume_from_checkpoint) - else: - # Get the most recent checkpoint - dirs = os.listdir(args.output_dir) - dirs = [d for d in dirs if d.startswith("checkpoint")] - dirs = sorted(dirs, key=lambda x: int(x.split("-")[1])) - path = dirs[-1] if len(dirs) > 0 else None - - if path is None: - accelerator.print( - f"Checkpoint '{args.resume_from_checkpoint}' does not exist. Starting a new training run." - ) - args.resume_from_checkpoint = None - else: - accelerator.print(f"Resuming from checkpoint {path}") - accelerator.load_state(os.path.join(args.output_dir, path)) - global_step = int(path.split("-")[1]) - - resume_global_step = global_step * args.gradient_accumulation_steps - first_epoch = global_step // num_update_steps_per_epoch - resume_step = resume_global_step % (num_update_steps_per_epoch * args.gradient_accumulation_steps) - - # Only show the progress bar once on each machine. 
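The checkpoint-resume bookkeeping above can be hard to parse inline; here is a small worked example, with a hypothetical checkpoint name and the step counts from the previous sketch, of how the directory name is turned back into a global step, a starting epoch, and an in-epoch resume offset.

```py
# Hypothetical resume state: a directory named "checkpoint-1500" encodes
# the global optimizer step at which state was saved.
path = "checkpoint-1500"
grad_accum_steps = 4
num_update_steps_per_epoch = 471   # from the earlier worked example

global_step = int(path.split("-")[1])                    # 1500 optimizer steps
resume_global_step = global_step * grad_accum_steps      # 6000 dataloader steps
first_epoch = global_step // num_update_steps_per_epoch  # epoch 3
resume_step = resume_global_step % (num_update_steps_per_epoch * grad_accum_steps)
print(global_step, first_epoch, resume_step)  # 1500 3 348
```

In the training loop below, `resume_step` is what lets the script skip already-seen batches inside the first resumed epoch while still advancing the progress bar.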
- progress_bar = tqdm(range(global_step, args.max_train_steps), disable=not accelerator.is_local_main_process) - progress_bar.set_description("Steps") - - for epoch in range(first_epoch, args.num_train_epochs): - unet.train() - if args.train_text_encoder: - text_encoder.train() - train_loss = 0.0 - for step, batch in enumerate(train_dataloader): - # Skip steps until we reach the resumed step - if args.resume_from_checkpoint and epoch == first_epoch and step < resume_step: - if step % args.gradient_accumulation_steps == 0: - progress_bar.update(1) - continue - - with accelerator.accumulate(unet): - # Convert images to latent space - latents = vae.encode(batch["pixel_values"].to(dtype=weight_dtype)).latent_dist.sample() - latents = latents * vae.config.scaling_factor - - # Sample noise that we'll add to the latents - noise = torch.randn_like(latents) - bsz = latents.shape[0] - # Sample a random timestep for each image - timesteps = torch.randint(0, noise_scheduler.config.num_train_timesteps, (bsz,), device=latents.device) - timesteps = timesteps.long() - - # Add noise to the latents according to the noise magnitude at each timestep - # (this is the forward diffusion process) - noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps) - - # Get the text embedding for conditioning - encoder_hidden_states = text_encoder(batch["input_ids"])[0] - - # Get the target for loss depending on the prediction type - if noise_scheduler.config.prediction_type == "epsilon": - target = noise - elif noise_scheduler.config.prediction_type == "v_prediction": - target = noise_scheduler.get_velocity(latents, noise, timesteps) - else: - raise ValueError(f"Unknown prediction type {noise_scheduler.config.prediction_type}") - - # Predict the noise residual and compute loss - model_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample - loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean") - - # Gather the losses across all processes for logging (if we use distributed training). - avg_loss = accelerator.gather(loss.repeat(args.train_batch_size)).mean() - train_loss += avg_loss.item() / args.gradient_accumulation_steps - - # Backpropagate - accelerator.backward(loss) - if accelerator.sync_gradients: - if args.use_peft: - params_to_clip = ( - itertools.chain(unet.parameters(), text_encoder.parameters()) - if args.train_text_encoder - else unet.parameters() - ) - else: - params_to_clip = lora_layers.parameters() - accelerator.clip_grad_norm_(params_to_clip, args.max_grad_norm) - optimizer.step() - lr_scheduler.step() - optimizer.zero_grad() - - # Checks if the accelerator has performed an optimization step behind the scenes - if accelerator.sync_gradients: - progress_bar.update(1) - global_step += 1 - accelerator.log({"train_loss": train_loss}, step=global_step) - train_loss = 0.0 - - if global_step % args.checkpointing_steps == 0: - if accelerator.is_main_process: - save_path = os.path.join(args.output_dir, f"checkpoint-{global_step}") - accelerator.save_state(save_path) - logger.info(f"Saved state to {save_path}") - - logs = {"step_loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0]} - progress_bar.set_postfix(**logs) - - if global_step >= args.max_train_steps: - break - - if accelerator.is_main_process: - if args.validation_prompt is not None and epoch % args.validation_epochs == 0: - logger.info( - f"Running validation... \n Generating {args.num_validation_images} images with prompt:" - f" {args.validation_prompt}." 
- ) - # create pipeline - pipeline = DiffusionPipeline.from_pretrained( - args.pretrained_model_name_or_path, - unet=accelerator.unwrap_model(unet), - text_encoder=accelerator.unwrap_model(text_encoder), - revision=args.revision, - torch_dtype=weight_dtype, - ) - pipeline = pipeline.to(accelerator.device) - pipeline.set_progress_bar_config(disable=True) - - # run inference - generator = torch.Generator(device=accelerator.device).manual_seed(args.seed) - images = [] - for _ in range(args.num_validation_images): - images.append( - pipeline(args.validation_prompt, num_inference_steps=30, generator=generator).images[0] - ) - - if accelerator.is_main_process: - for tracker in accelerator.trackers: - if tracker.name == "tensorboard": - np_images = np.stack([np.asarray(img) for img in images]) - tracker.writer.add_images("validation", np_images, epoch, dataformats="NHWC") - if tracker.name == "wandb": - tracker.log( - { - "validation": [ - wandb.Image(image, caption=f"{i}: {args.validation_prompt}") - for i, image in enumerate(images) - ] - } - ) - - del pipeline - torch.cuda.empty_cache() - - # Save the lora layers - accelerator.wait_for_everyone() - if accelerator.is_main_process: - if args.use_peft: - lora_config = {} - unwarpped_unet = accelerator.unwrap_model(unet) - state_dict = get_peft_model_state_dict(unwarpped_unet, state_dict=accelerator.get_state_dict(unet)) - lora_config["peft_config"] = unwarpped_unet.get_peft_config_as_dict(inference=True) - if args.train_text_encoder: - unwarpped_text_encoder = accelerator.unwrap_model(text_encoder) - text_encoder_state_dict = get_peft_model_state_dict( - unwarpped_text_encoder, state_dict=accelerator.get_state_dict(text_encoder) - ) - text_encoder_state_dict = {f"text_encoder_{k}": v for k, v in text_encoder_state_dict.items()} - state_dict.update(text_encoder_state_dict) - lora_config["text_encoder_peft_config"] = unwarpped_text_encoder.get_peft_config_as_dict( - inference=True - ) - - accelerator.save(state_dict, os.path.join(args.output_dir, f"{global_step}_lora.pt")) - with open(os.path.join(args.output_dir, f"{global_step}_lora_config.json"), "w") as f: - json.dump(lora_config, f) - else: - unet = unet.to(torch.float32) - unet.save_attn_procs(args.output_dir) - - if args.push_to_hub: - save_model_card( - repo_id, - images=images, - base_model=args.pretrained_model_name_or_path, - dataset_name=args.dataset_name, - repo_folder=args.output_dir, - ) - upload_folder( - repo_id=repo_id, - folder_path=args.output_dir, - commit_message="End of training", - ignore_patterns=["step_*", "epoch_*"], - ) - - # Final inference - # Load previous pipeline - pipeline = DiffusionPipeline.from_pretrained( - args.pretrained_model_name_or_path, revision=args.revision, torch_dtype=weight_dtype - ) - - if args.use_peft: - - def load_and_set_lora_ckpt(pipe, ckpt_dir, global_step, device, dtype): - with open(os.path.join(args.output_dir, f"{global_step}_lora_config.json"), "r") as f: - lora_config = json.load(f) - print(lora_config) - - checkpoint = os.path.join(args.output_dir, f"{global_step}_lora.pt") - lora_checkpoint_sd = torch.load(checkpoint) - unet_lora_ds = {k: v for k, v in lora_checkpoint_sd.items() if "text_encoder_" not in k} - text_encoder_lora_ds = { - k.replace("text_encoder_", ""): v for k, v in lora_checkpoint_sd.items() if "text_encoder_" in k - } - - unet_config = LoraConfig(**lora_config["peft_config"]) - pipe.unet = LoraModel(unet_config, pipe.unet) - set_peft_model_state_dict(pipe.unet, unet_lora_ds) - - if "text_encoder_peft_config" in 
lora_config: - text_encoder_config = LoraConfig(**lora_config["text_encoder_peft_config"]) - pipe.text_encoder = LoraModel(text_encoder_config, pipe.text_encoder) - set_peft_model_state_dict(pipe.text_encoder, text_encoder_lora_ds) - - if dtype in (torch.float16, torch.bfloat16): - pipe.unet.half() - pipe.text_encoder.half() - - pipe.to(device) - return pipe - - pipeline = load_and_set_lora_ckpt(pipeline, args.output_dir, global_step, accelerator.device, weight_dtype) - - else: - pipeline = pipeline.to(accelerator.device) - # load attention processors - pipeline.unet.load_attn_procs(args.output_dir) - - # run inference - if args.seed is not None: - generator = torch.Generator(device=accelerator.device).manual_seed(args.seed) - else: - generator = None - images = [] - for _ in range(args.num_validation_images): - images.append(pipeline(args.validation_prompt, num_inference_steps=30, generator=generator).images[0]) - - if accelerator.is_main_process: - for tracker in accelerator.trackers: - if tracker.name == "tensorboard": - np_images = np.stack([np.asarray(img) for img in images]) - tracker.writer.add_images("test", np_images, epoch, dataformats="NHWC") - if tracker.name == "wandb": - tracker.log( - { - "test": [ - wandb.Image(image, caption=f"{i}: {args.validation_prompt}") - for i, image in enumerate(images) - ] - } - ) - - accelerator.end_training() - - -if __name__ == "__main__": - main() diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/scripts/convert_asymmetric_vqgan_to_diffusers.py b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/scripts/convert_asymmetric_vqgan_to_diffusers.py deleted file mode 100644 index ffb735e18224a7ef48503367112f5ce8142bdf9c..0000000000000000000000000000000000000000 --- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/scripts/convert_asymmetric_vqgan_to_diffusers.py +++ /dev/null @@ -1,184 +0,0 @@ -import argparse -import time -from pathlib import Path -from typing import Any, Dict, Literal - -import torch - -from diffusers import AsymmetricAutoencoderKL - - -ASYMMETRIC_AUTOENCODER_KL_x_1_5_CONFIG = { - "in_channels": 3, - "out_channels": 3, - "down_block_types": [ - "DownEncoderBlock2D", - "DownEncoderBlock2D", - "DownEncoderBlock2D", - "DownEncoderBlock2D", - ], - "down_block_out_channels": [128, 256, 512, 512], - "layers_per_down_block": 2, - "up_block_types": [ - "UpDecoderBlock2D", - "UpDecoderBlock2D", - "UpDecoderBlock2D", - "UpDecoderBlock2D", - ], - "up_block_out_channels": [192, 384, 768, 768], - "layers_per_up_block": 3, - "act_fn": "silu", - "latent_channels": 4, - "norm_num_groups": 32, - "sample_size": 256, - "scaling_factor": 0.18215, -} - -ASYMMETRIC_AUTOENCODER_KL_x_2_CONFIG = { - "in_channels": 3, - "out_channels": 3, - "down_block_types": [ - "DownEncoderBlock2D", - "DownEncoderBlock2D", - "DownEncoderBlock2D", - "DownEncoderBlock2D", - ], - "down_block_out_channels": [128, 256, 512, 512], - "layers_per_down_block": 2, - "up_block_types": [ - "UpDecoderBlock2D", - "UpDecoderBlock2D", - "UpDecoderBlock2D", - "UpDecoderBlock2D", - ], - "up_block_out_channels": [256, 512, 1024, 1024], - "layers_per_up_block": 5, - "act_fn": "silu", - "latent_channels": 4, - "norm_num_groups": 32, - "sample_size": 256, - "scaling_factor": 0.18215, -} - - -def convert_asymmetric_autoencoder_kl_state_dict(original_state_dict: Dict[str, Any]) -> Dict[str, Any]: - converted_state_dict = {} - for k, v in original_state_dict.items(): - if k.startswith("encoder."): - converted_state_dict[ - k.replace("encoder.down.", "encoder.down_blocks.") - 
.replace("encoder.mid.", "encoder.mid_block.") - .replace("encoder.norm_out.", "encoder.conv_norm_out.") - .replace(".downsample.", ".downsamplers.0.") - .replace(".nin_shortcut.", ".conv_shortcut.") - .replace(".block.", ".resnets.") - .replace(".block_1.", ".resnets.0.") - .replace(".block_2.", ".resnets.1.") - .replace(".attn_1.k.", ".attentions.0.to_k.") - .replace(".attn_1.q.", ".attentions.0.to_q.") - .replace(".attn_1.v.", ".attentions.0.to_v.") - .replace(".attn_1.proj_out.", ".attentions.0.to_out.0.") - .replace(".attn_1.norm.", ".attentions.0.group_norm.") - ] = v - elif k.startswith("decoder.") and "up_layers" not in k: - converted_state_dict[ - k.replace("decoder.encoder.", "decoder.condition_encoder.") - .replace(".norm_out.", ".conv_norm_out.") - .replace(".up.0.", ".up_blocks.3.") - .replace(".up.1.", ".up_blocks.2.") - .replace(".up.2.", ".up_blocks.1.") - .replace(".up.3.", ".up_blocks.0.") - .replace(".block.", ".resnets.") - .replace("mid", "mid_block") - .replace(".0.upsample.", ".0.upsamplers.0.") - .replace(".1.upsample.", ".1.upsamplers.0.") - .replace(".2.upsample.", ".2.upsamplers.0.") - .replace(".nin_shortcut.", ".conv_shortcut.") - .replace(".block_1.", ".resnets.0.") - .replace(".block_2.", ".resnets.1.") - .replace(".attn_1.k.", ".attentions.0.to_k.") - .replace(".attn_1.q.", ".attentions.0.to_q.") - .replace(".attn_1.v.", ".attentions.0.to_v.") - .replace(".attn_1.proj_out.", ".attentions.0.to_out.0.") - .replace(".attn_1.norm.", ".attentions.0.group_norm.") - ] = v - elif k.startswith("quant_conv."): - converted_state_dict[k] = v - elif k.startswith("post_quant_conv."): - converted_state_dict[k] = v - else: - print(f" skipping key `{k}`") - # fix weights shape - for k, v in converted_state_dict.items(): - if ( - (k.startswith("encoder.mid_block.attentions.0") or k.startswith("decoder.mid_block.attentions.0")) - and k.endswith("weight") - and ("to_q" in k or "to_k" in k or "to_v" in k or "to_out" in k) - ): - converted_state_dict[k] = converted_state_dict[k][:, :, 0, 0] - - return converted_state_dict - - -def get_asymmetric_autoencoder_kl_from_original_checkpoint( - scale: Literal["1.5", "2"], original_checkpoint_path: str, map_location: torch.device -) -> AsymmetricAutoencoderKL: - print("Loading original state_dict") - original_state_dict = torch.load(original_checkpoint_path, map_location=map_location) - original_state_dict = original_state_dict["state_dict"] - print("Converting state_dict") - converted_state_dict = convert_asymmetric_autoencoder_kl_state_dict(original_state_dict) - kwargs = ASYMMETRIC_AUTOENCODER_KL_x_1_5_CONFIG if scale == "1.5" else ASYMMETRIC_AUTOENCODER_KL_x_2_CONFIG - print("Initializing AsymmetricAutoencoderKL model") - asymmetric_autoencoder_kl = AsymmetricAutoencoderKL(**kwargs) - print("Loading weight from converted state_dict") - asymmetric_autoencoder_kl.load_state_dict(converted_state_dict) - asymmetric_autoencoder_kl.eval() - print("AsymmetricAutoencoderKL successfully initialized") - return asymmetric_autoencoder_kl - - -if __name__ == "__main__": - start = time.time() - parser = argparse.ArgumentParser() - parser.add_argument( - "--scale", - default=None, - type=str, - required=True, - help="Asymmetric VQGAN scale: `1.5` or `2`", - ) - parser.add_argument( - "--original_checkpoint_path", - default=None, - type=str, - required=True, - help="Path to the original Asymmetric VQGAN checkpoint", - ) - parser.add_argument( - "--output_path", - default=None, - type=str, - required=True, - help="Path to save pretrained 
AsymmetricAutoencoderKL model", - ) - parser.add_argument( - "--map_location", - default="cpu", - type=str, - required=False, - help="The device passed to `map_location` when loading the checkpoint", - ) - args = parser.parse_args() - - assert args.scale in ["1.5", "2"], f"{args.scale} should be `1.5` of `2`" - assert Path(args.original_checkpoint_path).is_file() - - asymmetric_autoencoder_kl = get_asymmetric_autoencoder_kl_from_original_checkpoint( - scale=args.scale, - original_checkpoint_path=args.original_checkpoint_path, - map_location=torch.device(args.map_location), - ) - print("Saving pretrained AsymmetricAutoencoderKL") - asymmetric_autoencoder_kl.save_pretrained(args.output_path) - print(f"Done in {time.time() - start:.2f} seconds") diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/scripts/convert_models_diffuser_to_diffusers.py b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/scripts/convert_models_diffuser_to_diffusers.py deleted file mode 100644 index cc5321e33fe088c652f6014c6dab813bb8d5f246..0000000000000000000000000000000000000000 --- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/scripts/convert_models_diffuser_to_diffusers.py +++ /dev/null @@ -1,100 +0,0 @@ -import json -import os - -import torch - -from diffusers import UNet1DModel - - -os.makedirs("hub/hopper-medium-v2/unet/hor32", exist_ok=True) -os.makedirs("hub/hopper-medium-v2/unet/hor128", exist_ok=True) - -os.makedirs("hub/hopper-medium-v2/value_function", exist_ok=True) - - -def unet(hor): - if hor == 128: - down_block_types = ("DownResnetBlock1D", "DownResnetBlock1D", "DownResnetBlock1D") - block_out_channels = (32, 128, 256) - up_block_types = ("UpResnetBlock1D", "UpResnetBlock1D") - - elif hor == 32: - down_block_types = ("DownResnetBlock1D", "DownResnetBlock1D", "DownResnetBlock1D", "DownResnetBlock1D") - block_out_channels = (32, 64, 128, 256) - up_block_types = ("UpResnetBlock1D", "UpResnetBlock1D", "UpResnetBlock1D") - model = torch.load(f"/Users/bglickenhaus/Documents/diffuser/temporal_unet-hopper-mediumv2-hor{hor}.torch") - state_dict = model.state_dict() - config = { - "down_block_types": down_block_types, - "block_out_channels": block_out_channels, - "up_block_types": up_block_types, - "layers_per_block": 1, - "use_timestep_embedding": True, - "out_block_type": "OutConv1DBlock", - "norm_num_groups": 8, - "downsample_each_block": False, - "in_channels": 14, - "out_channels": 14, - "extra_in_channels": 0, - "time_embedding_type": "positional", - "flip_sin_to_cos": False, - "freq_shift": 1, - "sample_size": 65536, - "mid_block_type": "MidResTemporalBlock1D", - "act_fn": "mish", - } - hf_value_function = UNet1DModel(**config) - print(f"length of state dict: {len(state_dict.keys())}") - print(f"length of value function dict: {len(hf_value_function.state_dict().keys())}") - mapping = dict(zip(model.state_dict().keys(), hf_value_function.state_dict().keys())) - for k, v in mapping.items(): - state_dict[v] = state_dict.pop(k) - hf_value_function.load_state_dict(state_dict) - - torch.save(hf_value_function.state_dict(), f"hub/hopper-medium-v2/unet/hor{hor}/diffusion_pytorch_model.bin") - with open(f"hub/hopper-medium-v2/unet/hor{hor}/config.json", "w") as f: - json.dump(config, f) - - -def value_function(): - config = { - "in_channels": 14, - "down_block_types": ("DownResnetBlock1D", "DownResnetBlock1D", "DownResnetBlock1D", "DownResnetBlock1D"), - "up_block_types": (), - "out_block_type": "ValueFunction", - "mid_block_type": "ValueFunctionMidBlock1D", - "block_out_channels": (32, 64, 
128, 256), - "layers_per_block": 1, - "downsample_each_block": True, - "sample_size": 65536, - "out_channels": 14, - "extra_in_channels": 0, - "time_embedding_type": "positional", - "use_timestep_embedding": True, - "flip_sin_to_cos": False, - "freq_shift": 1, - "norm_num_groups": 8, - "act_fn": "mish", - } - - model = torch.load("/Users/bglickenhaus/Documents/diffuser/value_function-hopper-mediumv2-hor32.torch") - state_dict = model - hf_value_function = UNet1DModel(**config) - print(f"length of state dict: {len(state_dict.keys())}") - print(f"length of value function dict: {len(hf_value_function.state_dict().keys())}") - - mapping = dict(zip(state_dict.keys(), hf_value_function.state_dict().keys())) - for k, v in mapping.items(): - state_dict[v] = state_dict.pop(k) - - hf_value_function.load_state_dict(state_dict) - - torch.save(hf_value_function.state_dict(), "hub/hopper-medium-v2/value_function/diffusion_pytorch_model.bin") - with open("hub/hopper-medium-v2/value_function/config.json", "w") as f: - json.dump(config, f) - - -if __name__ == "__main__": - unet(32) - # unet(128) - value_function() diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/scripts/generate_logits.py b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/scripts/generate_logits.py deleted file mode 100644 index 89dce0e78d4ef50e060ac554ac3f7e760f55983f..0000000000000000000000000000000000000000 --- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/scripts/generate_logits.py +++ /dev/null @@ -1,127 +0,0 @@ -import random - -import torch -from huggingface_hub import HfApi - -from diffusers import UNet2DModel - - -api = HfApi() - -results = {} -# fmt: off -results["google_ddpm_cifar10_32"] = torch.tensor([ - -0.7515, -1.6883, 0.2420, 0.0300, 0.6347, 1.3433, -1.1743, -3.7467, - 1.2342, -2.2485, 0.4636, 0.8076, -0.7991, 0.3969, 0.8498, 0.9189, - -1.8887, -3.3522, 0.7639, 0.2040, 0.6271, -2.7148, -1.6316, 3.0839, - 0.3186, 0.2721, -0.9759, -1.2461, 2.6257, 1.3557 -]) -results["google_ddpm_ema_bedroom_256"] = torch.tensor([ - -2.3639, -2.5344, 0.0054, -0.6674, 1.5990, 1.0158, 0.3124, -2.1436, - 1.8795, -2.5429, -0.1566, -0.3973, 1.2490, 2.6447, 1.2283, -0.5208, - -2.8154, -3.5119, 2.3838, 1.2033, 1.7201, -2.1256, -1.4576, 2.7948, - 2.4204, -0.9752, -1.2546, 0.8027, 3.2758, 3.1365 -]) -results["CompVis_ldm_celebahq_256"] = torch.tensor([ - -0.6531, -0.6891, -0.3172, -0.5375, -0.9140, -0.5367, -0.1175, -0.7869, - -0.3808, -0.4513, -0.2098, -0.0083, 0.3183, 0.5140, 0.2247, -0.1304, - -0.1302, -0.2802, -0.2084, -0.2025, -0.4967, -0.4873, -0.0861, 0.6925, - 0.0250, 0.1290, -0.1543, 0.6316, 1.0460, 1.4943 -]) -results["google_ncsnpp_ffhq_1024"] = torch.tensor([ - 0.0911, 0.1107, 0.0182, 0.0435, -0.0805, -0.0608, 0.0381, 0.2172, - -0.0280, 0.1327, -0.0299, -0.0255, -0.0050, -0.1170, -0.1046, 0.0309, - 0.1367, 0.1728, -0.0533, -0.0748, -0.0534, 0.1624, 0.0384, -0.1805, - -0.0707, 0.0642, 0.0220, -0.0134, -0.1333, -0.1505 -]) -results["google_ncsnpp_bedroom_256"] = torch.tensor([ - 0.1321, 0.1337, 0.0440, 0.0622, -0.0591, -0.0370, 0.0503, 0.2133, - -0.0177, 0.1415, -0.0116, -0.0112, 0.0044, -0.0980, -0.0789, 0.0395, - 0.1502, 0.1785, -0.0488, -0.0514, -0.0404, 0.1539, 0.0454, -0.1559, - -0.0665, 0.0659, 0.0383, -0.0005, -0.1266, -0.1386 -]) -results["google_ncsnpp_celebahq_256"] = torch.tensor([ - 0.1154, 0.1218, 0.0307, 0.0526, -0.0711, -0.0541, 0.0366, 0.2078, - -0.0267, 0.1317, -0.0226, -0.0193, -0.0014, -0.1055, -0.0902, 0.0330, - 0.1391, 0.1709, -0.0562, -0.0693, -0.0560, 0.1482, 0.0381, -0.1683, - -0.0681, 0.0661, 
0.0331, -0.0046, -0.1268, -0.1431 -]) -results["google_ncsnpp_church_256"] = torch.tensor([ - 0.1192, 0.1240, 0.0414, 0.0606, -0.0557, -0.0412, 0.0430, 0.2042, - -0.0200, 0.1385, -0.0115, -0.0132, 0.0017, -0.0965, -0.0802, 0.0398, - 0.1433, 0.1747, -0.0458, -0.0533, -0.0407, 0.1545, 0.0419, -0.1574, - -0.0645, 0.0626, 0.0341, -0.0010, -0.1199, -0.1390 -]) -results["google_ncsnpp_ffhq_256"] = torch.tensor([ - 0.1075, 0.1074, 0.0205, 0.0431, -0.0774, -0.0607, 0.0298, 0.2042, - -0.0320, 0.1267, -0.0281, -0.0250, -0.0064, -0.1091, -0.0946, 0.0290, - 0.1328, 0.1650, -0.0580, -0.0738, -0.0586, 0.1440, 0.0337, -0.1746, - -0.0712, 0.0605, 0.0250, -0.0099, -0.1316, -0.1473 -]) -results["google_ddpm_cat_256"] = torch.tensor([ - -1.4572, -2.0481, -0.0414, -0.6005, 1.4136, 0.5848, 0.4028, -2.7330, - 1.2212, -2.1228, 0.2155, 0.4039, 0.7662, 2.0535, 0.7477, -0.3243, - -2.1758, -2.7648, 1.6947, 0.7026, 1.2338, -1.6078, -0.8682, 2.2810, - 1.8574, -0.5718, -0.5586, -0.0186, 2.3415, 2.1251]) -results["google_ddpm_celebahq_256"] = torch.tensor([ - -1.3690, -1.9720, -0.4090, -0.6966, 1.4660, 0.9938, -0.1385, -2.7324, - 0.7736, -1.8917, 0.2923, 0.4293, 0.1693, 1.4112, 1.1887, -0.3181, - -2.2160, -2.6381, 1.3170, 0.8163, 0.9240, -1.6544, -0.6099, 2.5259, - 1.6430, -0.9090, -0.9392, -0.0126, 2.4268, 2.3266 -]) -results["google_ddpm_ema_celebahq_256"] = torch.tensor([ - -1.3525, -1.9628, -0.3956, -0.6860, 1.4664, 1.0014, -0.1259, -2.7212, - 0.7772, -1.8811, 0.2996, 0.4388, 0.1704, 1.4029, 1.1701, -0.3027, - -2.2053, -2.6287, 1.3350, 0.8131, 0.9274, -1.6292, -0.6098, 2.5131, - 1.6505, -0.8958, -0.9298, -0.0151, 2.4257, 2.3355 -]) -results["google_ddpm_church_256"] = torch.tensor([ - -2.0585, -2.7897, -0.2850, -0.8940, 1.9052, 0.5702, 0.6345, -3.8959, - 1.5932, -3.2319, 0.1974, 0.0287, 1.7566, 2.6543, 0.8387, -0.5351, - -3.2736, -4.3375, 2.9029, 1.6390, 1.4640, -2.1701, -1.9013, 2.9341, - 3.4981, -0.6255, -1.1644, -0.1591, 3.7097, 3.2066 -]) -results["google_ddpm_bedroom_256"] = torch.tensor([ - -2.3139, -2.5594, -0.0197, -0.6785, 1.7001, 1.1606, 0.3075, -2.1740, - 1.8071, -2.5630, -0.0926, -0.3811, 1.2116, 2.6246, 1.2731, -0.5398, - -2.8153, -3.6140, 2.3893, 1.3262, 1.6258, -2.1856, -1.3267, 2.8395, - 2.3779, -1.0623, -1.2468, 0.8959, 3.3367, 3.2243 -]) -results["google_ddpm_ema_church_256"] = torch.tensor([ - -2.0628, -2.7667, -0.2089, -0.8263, 2.0539, 0.5992, 0.6495, -3.8336, - 1.6025, -3.2817, 0.1721, -0.0633, 1.7516, 2.7039, 0.8100, -0.5908, - -3.2113, -4.4343, 2.9257, 1.3632, 1.5562, -2.1489, -1.9894, 3.0560, - 3.3396, -0.7328, -1.0417, 0.0383, 3.7093, 3.2343 -]) -results["google_ddpm_ema_cat_256"] = torch.tensor([ - -1.4574, -2.0569, -0.0473, -0.6117, 1.4018, 0.5769, 0.4129, -2.7344, - 1.2241, -2.1397, 0.2000, 0.3937, 0.7616, 2.0453, 0.7324, -0.3391, - -2.1746, -2.7744, 1.6963, 0.6921, 1.2187, -1.6172, -0.8877, 2.2439, - 1.8471, -0.5839, -0.5605, -0.0464, 2.3250, 2.1219 -]) -# fmt: on - -models = api.list_models(filter="diffusers") -for mod in models: - if "google" in mod.author or mod.modelId == "CompVis/ldm-celebahq-256": - local_checkpoint = "/home/patrick/google_checkpoints/" + mod.modelId.split("/")[-1] - - print(f"Started running {mod.modelId}!!!") - - if mod.modelId.startswith("CompVis"): - model = UNet2DModel.from_pretrained(local_checkpoint, subfolder="unet") - else: - model = UNet2DModel.from_pretrained(local_checkpoint) - - torch.manual_seed(0) - random.seed(0) - - noise = torch.randn(1, model.config.in_channels, model.config.sample_size, model.config.sample_size) - time_step = torch.tensor([10] * 
noise.shape[0]) - with torch.no_grad(): - logits = model(noise, time_step).sample - - assert torch.allclose( - logits[0, 0, 0, :30], results["_".join("_".join(mod.modelId.split("/")).split("-"))], atol=1e-3 - ) - print(f"{mod.modelId} has passed successfully!!!") diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/pipelines/deepfloyd_if/pipeline_if_inpainting.py b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/pipelines/deepfloyd_if/pipeline_if_inpainting.py deleted file mode 100644 index 466d386a959c29d5e398e95fb4eb3ed5d4e1884e..0000000000000000000000000000000000000000 --- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/pipelines/deepfloyd_if/pipeline_if_inpainting.py +++ /dev/null @@ -1,1016 +0,0 @@ -import html -import inspect -import re -import urllib.parse as ul -from typing import Any, Callable, Dict, List, Optional, Union - -import numpy as np -import PIL -import torch -from transformers import CLIPImageProcessor, T5EncoderModel, T5Tokenizer - -from ...loaders import LoraLoaderMixin -from ...models import UNet2DConditionModel -from ...schedulers import DDPMScheduler -from ...utils import ( - BACKENDS_MAPPING, - PIL_INTERPOLATION, - is_accelerate_available, - is_bs4_available, - is_ftfy_available, - logging, - replace_example_docstring, -) -from ...utils.torch_utils import randn_tensor -from ..pipeline_utils import DiffusionPipeline -from . import IFPipelineOutput -from .safety_checker import IFSafetyChecker -from .watermark import IFWatermarker - - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - -if is_bs4_available(): - from bs4 import BeautifulSoup - -if is_ftfy_available(): - import ftfy - - -# Copied from diffusers.pipelines.deepfloyd_if.pipeline_if_img2img.resize -def resize(images: PIL.Image.Image, img_size: int) -> PIL.Image.Image: - w, h = images.size - - coef = w / h - - w, h = img_size, img_size - - if coef >= 1: - w = int(round(img_size / 8 * coef) * 8) - else: - h = int(round(img_size / 8 / coef) * 8) - - images = images.resize((w, h), resample=PIL_INTERPOLATION["bicubic"], reducing_gap=None) - - return images - - -EXAMPLE_DOC_STRING = """ - Examples: - ```py - >>> from diffusers import IFInpaintingPipeline, IFInpaintingSuperResolutionPipeline, DiffusionPipeline - >>> from diffusers.utils import pt_to_pil - >>> import torch - >>> from PIL import Image - >>> import requests - >>> from io import BytesIO - - >>> url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/person.png" - >>> response = requests.get(url) - >>> original_image = Image.open(BytesIO(response.content)).convert("RGB") - >>> original_image = original_image - - >>> url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/glasses_mask.png" - >>> response = requests.get(url) - >>> mask_image = Image.open(BytesIO(response.content)) - >>> mask_image = mask_image - - >>> pipe = IFInpaintingPipeline.from_pretrained( - ... "DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16 - ... ) - >>> pipe.enable_model_cpu_offload() - - >>> prompt = "blue sunglasses" - >>> prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) - - >>> image = pipe( - ... image=original_image, - ... mask_image=mask_image, - ... prompt_embeds=prompt_embeds, - ... negative_prompt_embeds=negative_embeds, - ... output_type="pt", - ... 
).images - - >>> # save intermediate image - >>> pil_image = pt_to_pil(image) - >>> pil_image[0].save("./if_stage_I.png") - - >>> super_res_1_pipe = IFInpaintingSuperResolutionPipeline.from_pretrained( - ... "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16 - ... ) - >>> super_res_1_pipe.enable_model_cpu_offload() - - >>> image = super_res_1_pipe( - ... image=image, - ... mask_image=mask_image, - ... original_image=original_image, - ... prompt_embeds=prompt_embeds, - ... negative_prompt_embeds=negative_embeds, - ... ).images - >>> image[0].save("./if_stage_II.png") - ``` -""" - - -class IFInpaintingPipeline(DiffusionPipeline, LoraLoaderMixin): - tokenizer: T5Tokenizer - text_encoder: T5EncoderModel - - unet: UNet2DConditionModel - scheduler: DDPMScheduler - - feature_extractor: Optional[CLIPImageProcessor] - safety_checker: Optional[IFSafetyChecker] - - watermarker: Optional[IFWatermarker] - - bad_punct_regex = re.compile( - r"[" + "#®•©™&@·º½¾¿¡§~" + "\)" + "\(" + "\]" + "\[" + "\}" + "\{" + "\|" + "\\" + "\/" + "\*" + r"]{1,}" - ) # noqa - - _optional_components = ["tokenizer", "text_encoder", "safety_checker", "feature_extractor", "watermarker"] - model_cpu_offload_seq = "text_encoder->unet" - - def __init__( - self, - tokenizer: T5Tokenizer, - text_encoder: T5EncoderModel, - unet: UNet2DConditionModel, - scheduler: DDPMScheduler, - safety_checker: Optional[IFSafetyChecker], - feature_extractor: Optional[CLIPImageProcessor], - watermarker: Optional[IFWatermarker], - requires_safety_checker: bool = True, - ): - super().__init__() - - if safety_checker is None and requires_safety_checker: - logger.warning( - f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure" - " that you abide to the conditions of the IF license and do not expose unfiltered" - " results in services or applications open to the public. Both the diffusers team and Hugging Face" - " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling" - " it only for use-cases that involve analyzing network behavior or auditing its results. For more" - " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ." - ) - - if safety_checker is not None and feature_extractor is None: - raise ValueError( - "Make sure to define a feature extractor when loading {self.__class__} if you want to use the safety" - " checker. If you do not want to use the safety checker, you can pass `'safety_checker=None'` instead." 
- ) - - self.register_modules( - tokenizer=tokenizer, - text_encoder=text_encoder, - unet=unet, - scheduler=scheduler, - safety_checker=safety_checker, - feature_extractor=feature_extractor, - watermarker=watermarker, - ) - self.register_to_config(requires_safety_checker=requires_safety_checker) - - # Copied from diffusers.pipelines.deepfloyd_if.pipeline_if.IFPipeline.remove_all_hooks - def remove_all_hooks(self): - if is_accelerate_available(): - from accelerate.hooks import remove_hook_from_module - else: - raise ImportError("Please install accelerate via `pip install accelerate`") - - for model in [self.text_encoder, self.unet, self.safety_checker]: - if model is not None: - remove_hook_from_module(model, recurse=True) - - self.unet_offload_hook = None - self.text_encoder_offload_hook = None - self.final_offload_hook = None - - @torch.no_grad() - # Copied from diffusers.pipelines.deepfloyd_if.pipeline_if.IFPipeline.encode_prompt - def encode_prompt( - self, - prompt, - do_classifier_free_guidance=True, - num_images_per_prompt=1, - device=None, - negative_prompt=None, - prompt_embeds: Optional[torch.FloatTensor] = None, - negative_prompt_embeds: Optional[torch.FloatTensor] = None, - clean_caption: bool = False, - ): - r""" - Encodes the prompt into text encoder hidden states. - - Args: - prompt (`str` or `List[str]`, *optional*): - prompt to be encoded - device: (`torch.device`, *optional*): - torch device to place the resulting embeddings on - num_images_per_prompt (`int`, *optional*, defaults to 1): - number of images that should be generated per prompt - do_classifier_free_guidance (`bool`, *optional*, defaults to `True`): - whether to use classifier free guidance or not - negative_prompt (`str` or `List[str]`, *optional*): - The prompt or prompts not to guide the image generation. If not defined, one has to pass - `negative_prompt_embeds`. instead. If not defined, one has to pass `negative_prompt_embeds`. instead. - Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`). - prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not - provided, text embeddings will be generated from `prompt` input argument. - negative_prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt - weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input - argument. - """ - if prompt is not None and negative_prompt is not None: - if type(prompt) is not type(negative_prompt): - raise TypeError( - f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !=" - f" {type(prompt)}." 
- ) - - if device is None: - device = self._execution_device - - if prompt is not None and isinstance(prompt, str): - batch_size = 1 - elif prompt is not None and isinstance(prompt, list): - batch_size = len(prompt) - else: - batch_size = prompt_embeds.shape[0] - - # while T5 can handle much longer input sequences than 77, the text encoder was trained with a max length of 77 for IF - max_length = 77 - - if prompt_embeds is None: - prompt = self._text_preprocessing(prompt, clean_caption=clean_caption) - text_inputs = self.tokenizer( - prompt, - padding="max_length", - max_length=max_length, - truncation=True, - add_special_tokens=True, - return_tensors="pt", - ) - text_input_ids = text_inputs.input_ids - untruncated_ids = self.tokenizer(prompt, padding="longest", return_tensors="pt").input_ids - - if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal( - text_input_ids, untruncated_ids - ): - removed_text = self.tokenizer.batch_decode(untruncated_ids[:, max_length - 1 : -1]) - logger.warning( - "The following part of your input was truncated because CLIP can only handle sequences up to" - f" {max_length} tokens: {removed_text}" - ) - - attention_mask = text_inputs.attention_mask.to(device) - - prompt_embeds = self.text_encoder( - text_input_ids.to(device), - attention_mask=attention_mask, - ) - prompt_embeds = prompt_embeds[0] - - if self.text_encoder is not None: - dtype = self.text_encoder.dtype - elif self.unet is not None: - dtype = self.unet.dtype - else: - dtype = None - - prompt_embeds = prompt_embeds.to(dtype=dtype, device=device) - - bs_embed, seq_len, _ = prompt_embeds.shape - # duplicate text embeddings for each generation per prompt, using mps friendly method - prompt_embeds = prompt_embeds.repeat(1, num_images_per_prompt, 1) - prompt_embeds = prompt_embeds.view(bs_embed * num_images_per_prompt, seq_len, -1) - - # get unconditional embeddings for classifier free guidance - if do_classifier_free_guidance and negative_prompt_embeds is None: - uncond_tokens: List[str] - if negative_prompt is None: - uncond_tokens = [""] * batch_size - elif isinstance(negative_prompt, str): - uncond_tokens = [negative_prompt] - elif batch_size != len(negative_prompt): - raise ValueError( - f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:" - f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches" - " the batch size of `prompt`." 
- ) - else: - uncond_tokens = negative_prompt - - uncond_tokens = self._text_preprocessing(uncond_tokens, clean_caption=clean_caption) - max_length = prompt_embeds.shape[1] - uncond_input = self.tokenizer( - uncond_tokens, - padding="max_length", - max_length=max_length, - truncation=True, - return_attention_mask=True, - add_special_tokens=True, - return_tensors="pt", - ) - attention_mask = uncond_input.attention_mask.to(device) - - negative_prompt_embeds = self.text_encoder( - uncond_input.input_ids.to(device), - attention_mask=attention_mask, - ) - negative_prompt_embeds = negative_prompt_embeds[0] - - if do_classifier_free_guidance: - # duplicate unconditional embeddings for each generation per prompt, using mps friendly method - seq_len = negative_prompt_embeds.shape[1] - - negative_prompt_embeds = negative_prompt_embeds.to(dtype=dtype, device=device) - - negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt, 1) - negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len, -1) - - # For classifier free guidance, we need to do two forward passes. - # Here we concatenate the unconditional and text embeddings into a single batch - # to avoid doing two forward passes - else: - negative_prompt_embeds = None - - return prompt_embeds, negative_prompt_embeds - - # Copied from diffusers.pipelines.deepfloyd_if.pipeline_if.IFPipeline.run_safety_checker - def run_safety_checker(self, image, device, dtype): - if self.safety_checker is not None: - safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to(device) - image, nsfw_detected, watermark_detected = self.safety_checker( - images=image, - clip_input=safety_checker_input.pixel_values.to(dtype=dtype), - ) - else: - nsfw_detected = None - watermark_detected = None - - if hasattr(self, "unet_offload_hook") and self.unet_offload_hook is not None: - self.unet_offload_hook.offload() - - return image, nsfw_detected, watermark_detected - - # Copied from diffusers.pipelines.deepfloyd_if.pipeline_if.IFPipeline.prepare_extra_step_kwargs - def prepare_extra_step_kwargs(self, generator, eta): - # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature - # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers. - # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502 - # and should be between [0, 1] - - accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys()) - extra_step_kwargs = {} - if accepts_eta: - extra_step_kwargs["eta"] = eta - - # check if the scheduler accepts generator - accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys()) - if accepts_generator: - extra_step_kwargs["generator"] = generator - return extra_step_kwargs - - def check_inputs( - self, - prompt, - image, - mask_image, - batch_size, - callback_steps, - negative_prompt=None, - prompt_embeds=None, - negative_prompt_embeds=None, - ): - if (callback_steps is None) or ( - callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0) - ): - raise ValueError( - f"`callback_steps` has to be a positive integer but is {callback_steps} of type" - f" {type(callback_steps)}." - ) - - if prompt is not None and prompt_embeds is not None: - raise ValueError( - f"Cannot forward both `prompt`: {prompt} and `prompt_embeds`: {prompt_embeds}. Please make sure to" - " only forward one of the two." 
- ) - elif prompt is None and prompt_embeds is None: - raise ValueError( - "Provide either `prompt` or `prompt_embeds`. Cannot leave both `prompt` and `prompt_embeds` undefined." - ) - elif prompt is not None and (not isinstance(prompt, str) and not isinstance(prompt, list)): - raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}") - - if negative_prompt is not None and negative_prompt_embeds is not None: - raise ValueError( - f"Cannot forward both `negative_prompt`: {negative_prompt} and `negative_prompt_embeds`:" - f" {negative_prompt_embeds}. Please make sure to only forward one of the two." - ) - - if prompt_embeds is not None and negative_prompt_embeds is not None: - if prompt_embeds.shape != negative_prompt_embeds.shape: - raise ValueError( - "`prompt_embeds` and `negative_prompt_embeds` must have the same shape when passed directly, but" - f" got: `prompt_embeds` {prompt_embeds.shape} != `negative_prompt_embeds`" - f" {negative_prompt_embeds.shape}." - ) - - # image - - if isinstance(image, list): - check_image_type = image[0] - else: - check_image_type = image - - if ( - not isinstance(check_image_type, torch.Tensor) - and not isinstance(check_image_type, PIL.Image.Image) - and not isinstance(check_image_type, np.ndarray) - ): - raise ValueError( - "`image` has to be of type `torch.FloatTensor`, `PIL.Image.Image`, `np.ndarray`, or List[...] but is" - f" {type(check_image_type)}" - ) - - if isinstance(image, list): - image_batch_size = len(image) - elif isinstance(image, torch.Tensor): - image_batch_size = image.shape[0] - elif isinstance(image, PIL.Image.Image): - image_batch_size = 1 - elif isinstance(image, np.ndarray): - image_batch_size = image.shape[0] - else: - assert False - - if batch_size != image_batch_size: - raise ValueError(f"image batch size: {image_batch_size} must be same as prompt batch size {batch_size}") - - # mask_image - - if isinstance(mask_image, list): - check_image_type = mask_image[0] - else: - check_image_type = mask_image - - if ( - not isinstance(check_image_type, torch.Tensor) - and not isinstance(check_image_type, PIL.Image.Image) - and not isinstance(check_image_type, np.ndarray) - ): - raise ValueError( - "`mask_image` has to be of type `torch.FloatTensor`, `PIL.Image.Image`, `np.ndarray`, or List[...] 
but is" - f" {type(check_image_type)}" - ) - - if isinstance(mask_image, list): - image_batch_size = len(mask_image) - elif isinstance(mask_image, torch.Tensor): - image_batch_size = mask_image.shape[0] - elif isinstance(mask_image, PIL.Image.Image): - image_batch_size = 1 - elif isinstance(mask_image, np.ndarray): - image_batch_size = mask_image.shape[0] - else: - assert False - - if image_batch_size != 1 and batch_size != image_batch_size: - raise ValueError( - f"mask_image batch size: {image_batch_size} must be `1` or the same as prompt batch size {batch_size}" - ) - - # Copied from diffusers.pipelines.deepfloyd_if.pipeline_if.IFPipeline._text_preprocessing - def _text_preprocessing(self, text, clean_caption=False): - if clean_caption and not is_bs4_available(): - logger.warn(BACKENDS_MAPPING["bs4"][-1].format("Setting `clean_caption=True`")) - logger.warn("Setting `clean_caption` to False...") - clean_caption = False - - if clean_caption and not is_ftfy_available(): - logger.warn(BACKENDS_MAPPING["ftfy"][-1].format("Setting `clean_caption=True`")) - logger.warn("Setting `clean_caption` to False...") - clean_caption = False - - if not isinstance(text, (tuple, list)): - text = [text] - - def process(text: str): - if clean_caption: - text = self._clean_caption(text) - text = self._clean_caption(text) - else: - text = text.lower().strip() - return text - - return [process(t) for t in text] - - # Copied from diffusers.pipelines.deepfloyd_if.pipeline_if.IFPipeline._clean_caption - def _clean_caption(self, caption): - caption = str(caption) - caption = ul.unquote_plus(caption) - caption = caption.strip().lower() - caption = re.sub("", "person", caption) - # urls: - caption = re.sub( - r"\b((?:https?:(?:\/{1,3}|[a-zA-Z0-9%])|[a-zA-Z0-9.\-]+[.](?:com|co|ru|net|org|edu|gov|it)[\w/-]*\b\/?(?!@)))", # noqa - "", - caption, - ) # regex for urls - caption = re.sub( - r"\b((?:www:(?:\/{1,3}|[a-zA-Z0-9%])|[a-zA-Z0-9.\-]+[.](?:com|co|ru|net|org|edu|gov|it)[\w/-]*\b\/?(?!@)))", # noqa - "", - caption, - ) # regex for urls - # html: - caption = BeautifulSoup(caption, features="html.parser").text - - # @ - caption = re.sub(r"@[\w\d]+\b", "", caption) - - # 31C0—31EF CJK Strokes - # 31F0—31FF Katakana Phonetic Extensions - # 3200—32FF Enclosed CJK Letters and Months - # 3300—33FF CJK Compatibility - # 3400—4DBF CJK Unified Ideographs Extension A - # 4DC0—4DFF Yijing Hexagram Symbols - # 4E00—9FFF CJK Unified Ideographs - caption = re.sub(r"[\u31c0-\u31ef]+", "", caption) - caption = re.sub(r"[\u31f0-\u31ff]+", "", caption) - caption = re.sub(r"[\u3200-\u32ff]+", "", caption) - caption = re.sub(r"[\u3300-\u33ff]+", "", caption) - caption = re.sub(r"[\u3400-\u4dbf]+", "", caption) - caption = re.sub(r"[\u4dc0-\u4dff]+", "", caption) - caption = re.sub(r"[\u4e00-\u9fff]+", "", caption) - ####################################################### - - # все виды тире / all types of dash --> "-" - caption = re.sub( - r"[\u002D\u058A\u05BE\u1400\u1806\u2010-\u2015\u2E17\u2E1A\u2E3A\u2E3B\u2E40\u301C\u3030\u30A0\uFE31\uFE32\uFE58\uFE63\uFF0D]+", # noqa - "-", - caption, - ) - - # кавычки к одному стандарту - caption = re.sub(r"[`´«»“”¨]", '"', caption) - caption = re.sub(r"[‘’]", "'", caption) - - # " - caption = re.sub(r""?", "", caption) - # & - caption = re.sub(r"&", "", caption) - - # ip adresses: - caption = re.sub(r"\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}", " ", caption) - - # article ids: - caption = re.sub(r"\d:\d\d\s+$", "", caption) - - # \n - caption = re.sub(r"\\n", " ", caption) - - # "#123" - caption = 
re.sub(r"#\d{1,3}\b", "", caption) - # "#12345.." - caption = re.sub(r"#\d{5,}\b", "", caption) - # "123456.." - caption = re.sub(r"\b\d{6,}\b", "", caption) - # filenames: - caption = re.sub(r"[\S]+\.(?:png|jpg|jpeg|bmp|webp|eps|pdf|apk|mp4)", "", caption) - - # - caption = re.sub(r"[\"\']{2,}", r'"', caption) # """AUSVERKAUFT""" - caption = re.sub(r"[\.]{2,}", r" ", caption) # """AUSVERKAUFT""" - - caption = re.sub(self.bad_punct_regex, r" ", caption) # ***AUSVERKAUFT***, #AUSVERKAUFT - caption = re.sub(r"\s+\.\s+", r" ", caption) # " . " - - # this-is-my-cute-cat / this_is_my_cute_cat - regex2 = re.compile(r"(?:\-|\_)") - if len(re.findall(regex2, caption)) > 3: - caption = re.sub(regex2, " ", caption) - - caption = ftfy.fix_text(caption) - caption = html.unescape(html.unescape(caption)) - - caption = re.sub(r"\b[a-zA-Z]{1,3}\d{3,15}\b", "", caption) # jc6640 - caption = re.sub(r"\b[a-zA-Z]+\d+[a-zA-Z]+\b", "", caption) # jc6640vc - caption = re.sub(r"\b\d+[a-zA-Z]+\d+\b", "", caption) # 6640vc231 - - caption = re.sub(r"(worldwide\s+)?(free\s+)?shipping", "", caption) - caption = re.sub(r"(free\s)?download(\sfree)?", "", caption) - caption = re.sub(r"\bclick\b\s(?:for|on)\s\w+", "", caption) - caption = re.sub(r"\b(?:png|jpg|jpeg|bmp|webp|eps|pdf|apk|mp4)(\simage[s]?)?", "", caption) - caption = re.sub(r"\bpage\s+\d+\b", "", caption) - - caption = re.sub(r"\b\d*[a-zA-Z]+\d+[a-zA-Z]+\d+[a-zA-Z\d]*\b", r" ", caption) # j2d1a2a... - - caption = re.sub(r"\b\d+\.?\d*[xх×]\d+\.?\d*\b", "", caption) - - caption = re.sub(r"\b\s+\:\s+", r": ", caption) - caption = re.sub(r"(\D[,\./])\b", r"\1 ", caption) - caption = re.sub(r"\s+", " ", caption) - - caption.strip() - - caption = re.sub(r"^[\"\']([\w\W]+)[\"\']$", r"\1", caption) - caption = re.sub(r"^[\'\_,\-\:;]", r"", caption) - caption = re.sub(r"[\'\_,\-\:\-\+]$", r"", caption) - caption = re.sub(r"^\.\S+$", "", caption) - - return caption.strip() - - # Copied from diffusers.pipelines.deepfloyd_if.pipeline_if_img2img.IFImg2ImgPipeline.preprocess_image - def preprocess_image(self, image: PIL.Image.Image) -> torch.Tensor: - if not isinstance(image, list): - image = [image] - - def numpy_to_pt(images): - if images.ndim == 3: - images = images[..., None] - - images = torch.from_numpy(images.transpose(0, 3, 1, 2)) - return images - - if isinstance(image[0], PIL.Image.Image): - new_image = [] - - for image_ in image: - image_ = image_.convert("RGB") - image_ = resize(image_, self.unet.sample_size) - image_ = np.array(image_) - image_ = image_.astype(np.float32) - image_ = image_ / 127.5 - 1 - new_image.append(image_) - - image = new_image - - image = np.stack(image, axis=0) # to np - image = numpy_to_pt(image) # to pt - - elif isinstance(image[0], np.ndarray): - image = np.concatenate(image, axis=0) if image[0].ndim == 4 else np.stack(image, axis=0) - image = numpy_to_pt(image) - - elif isinstance(image[0], torch.Tensor): - image = torch.cat(image, axis=0) if image[0].ndim == 4 else torch.stack(image, axis=0) - - return image - - def preprocess_mask_image(self, mask_image) -> torch.Tensor: - if not isinstance(mask_image, list): - mask_image = [mask_image] - - if isinstance(mask_image[0], torch.Tensor): - mask_image = torch.cat(mask_image, axis=0) if mask_image[0].ndim == 4 else torch.stack(mask_image, axis=0) - - if mask_image.ndim == 2: - # Batch and add channel dim for single mask - mask_image = mask_image.unsqueeze(0).unsqueeze(0) - elif mask_image.ndim == 3 and mask_image.shape[0] == 1: - # Single mask, the 0'th dimension is considered to be - 
# the existing batch size of 1 - mask_image = mask_image.unsqueeze(0) - elif mask_image.ndim == 3 and mask_image.shape[0] != 1: - # Batch of mask, the 0'th dimension is considered to be - # the batching dimension - mask_image = mask_image.unsqueeze(1) - - mask_image[mask_image < 0.5] = 0 - mask_image[mask_image >= 0.5] = 1 - - elif isinstance(mask_image[0], PIL.Image.Image): - new_mask_image = [] - - for mask_image_ in mask_image: - mask_image_ = mask_image_.convert("L") - mask_image_ = resize(mask_image_, self.unet.sample_size) - mask_image_ = np.array(mask_image_) - mask_image_ = mask_image_[None, None, :] - new_mask_image.append(mask_image_) - - mask_image = new_mask_image - - mask_image = np.concatenate(mask_image, axis=0) - mask_image = mask_image.astype(np.float32) / 255.0 - mask_image[mask_image < 0.5] = 0 - mask_image[mask_image >= 0.5] = 1 - mask_image = torch.from_numpy(mask_image) - - elif isinstance(mask_image[0], np.ndarray): - mask_image = np.concatenate([m[None, None, :] for m in mask_image], axis=0) - - mask_image[mask_image < 0.5] = 0 - mask_image[mask_image >= 0.5] = 1 - mask_image = torch.from_numpy(mask_image) - - return mask_image - - # Copied from diffusers.pipelines.deepfloyd_if.pipeline_if_img2img.IFImg2ImgPipeline.get_timesteps - def get_timesteps(self, num_inference_steps, strength): - # get the original timestep using init_timestep - init_timestep = min(int(num_inference_steps * strength), num_inference_steps) - - t_start = max(num_inference_steps - init_timestep, 0) - timesteps = self.scheduler.timesteps[t_start:] - - return timesteps, num_inference_steps - t_start - - def prepare_intermediate_images( - self, image, timestep, batch_size, num_images_per_prompt, dtype, device, mask_image, generator=None - ): - image_batch_size, channels, height, width = image.shape - - batch_size = batch_size * num_images_per_prompt - - shape = (batch_size, channels, height, width) - - if isinstance(generator, list) and len(generator) != batch_size: - raise ValueError( - f"You have passed a list of generators of length {len(generator)}, but requested an effective batch" - f" size of {batch_size}. Make sure the batch size matches the length of the generators." 
- ) - - noise = randn_tensor(shape, generator=generator, device=device, dtype=dtype) - - image = image.repeat_interleave(num_images_per_prompt, dim=0) - noised_image = self.scheduler.add_noise(image, noise, timestep) - - image = (1 - mask_image) * image + mask_image * noised_image - - return image - - @torch.no_grad() - @replace_example_docstring(EXAMPLE_DOC_STRING) - def __call__( - self, - prompt: Union[str, List[str]] = None, - image: Union[ - PIL.Image.Image, torch.Tensor, np.ndarray, List[PIL.Image.Image], List[torch.Tensor], List[np.ndarray] - ] = None, - mask_image: Union[ - PIL.Image.Image, torch.Tensor, np.ndarray, List[PIL.Image.Image], List[torch.Tensor], List[np.ndarray] - ] = None, - strength: float = 1.0, - num_inference_steps: int = 50, - timesteps: List[int] = None, - guidance_scale: float = 7.0, - negative_prompt: Optional[Union[str, List[str]]] = None, - num_images_per_prompt: Optional[int] = 1, - eta: float = 0.0, - generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None, - prompt_embeds: Optional[torch.FloatTensor] = None, - negative_prompt_embeds: Optional[torch.FloatTensor] = None, - output_type: Optional[str] = "pil", - return_dict: bool = True, - callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None, - callback_steps: int = 1, - clean_caption: bool = True, - cross_attention_kwargs: Optional[Dict[str, Any]] = None, - ): - """ - Function invoked when calling the pipeline for generation. - - Args: - prompt (`str` or `List[str]`, *optional*): - The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`. - instead. - image (`torch.FloatTensor` or `PIL.Image.Image`): - `Image`, or tensor representing an image batch, that will be used as the starting point for the - process. - mask_image (`PIL.Image.Image`): - `Image`, or tensor representing an image batch, to mask `image`. White pixels in the mask will be - repainted, while black pixels will be preserved. If `mask_image` is a PIL image, it will be converted - to a single channel (luminance) before use. If it's a tensor, it should contain one color channel (L) - instead of 3, so the expected shape would be `(B, H, W, 1)`. - strength (`float`, *optional*, defaults to 0.8): - Conceptually, indicates how much to transform the reference `image`. Must be between 0 and 1. `image` - will be used as a starting point, adding more noise to it the larger the `strength`. The number of - denoising steps depends on the amount of noise initially added. When `strength` is 1, added noise will - be maximum and the denoising process will run for the full number of iterations specified in - `num_inference_steps`. A value of 1, therefore, essentially ignores `image`. - num_inference_steps (`int`, *optional*, defaults to 50): - The number of denoising steps. More denoising steps usually lead to a higher quality image at the - expense of slower inference. - timesteps (`List[int]`, *optional*): - Custom timesteps to use for the denoising process. If not defined, equal spaced `num_inference_steps` - timesteps are used. Must be in descending order. - guidance_scale (`float`, *optional*, defaults to 7.5): - Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598). - `guidance_scale` is defined as `w` of equation 2. of [Imagen - Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale > - 1`. 
Higher guidance scale encourages to generate images that are closely linked to the text `prompt`, - usually at the expense of lower image quality. - negative_prompt (`str` or `List[str]`, *optional*): - The prompt or prompts not to guide the image generation. If not defined, one has to pass - `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is - less than `1`). - num_images_per_prompt (`int`, *optional*, defaults to 1): - The number of images to generate per prompt. - eta (`float`, *optional*, defaults to 0.0): - Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to - [`schedulers.DDIMScheduler`], will be ignored for others. - generator (`torch.Generator` or `List[torch.Generator]`, *optional*): - One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html) - to make generation deterministic. - prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not - provided, text embeddings will be generated from `prompt` input argument. - negative_prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt - weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input - argument. - output_type (`str`, *optional*, defaults to `"pil"`): - The output format of the generate image. Choose between - [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~pipelines.stable_diffusion.IFPipelineOutput`] instead of a plain tuple. - callback (`Callable`, *optional*): - A function that will be called every `callback_steps` steps during inference. The function will be - called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`. - callback_steps (`int`, *optional*, defaults to 1): - The frequency at which the `callback` function will be called. If not specified, the callback will be - called at every step. - clean_caption (`bool`, *optional*, defaults to `True`): - Whether or not to clean the caption before creating embeddings. Requires `beautifulsoup4` and `ftfy` to - be installed. If the dependencies are not installed, the embeddings will be created from the raw - prompt. - cross_attention_kwargs (`dict`, *optional*): - A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under - `self.processor` in - [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py). - - Examples: - - Returns: - [`~pipelines.stable_diffusion.IFPipelineOutput`] or `tuple`: - [`~pipelines.stable_diffusion.IFPipelineOutput`] if `return_dict` is True, otherwise a `tuple. When - returning a tuple, the first element is a list with the generated images, and the second element is a list - of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) - or watermarked content, according to the `safety_checker`. - """ - # 1. Check inputs. 
Raise error if not correct - if prompt is not None and isinstance(prompt, str): - batch_size = 1 - elif prompt is not None and isinstance(prompt, list): - batch_size = len(prompt) - else: - batch_size = prompt_embeds.shape[0] - - self.check_inputs( - prompt, - image, - mask_image, - batch_size, - callback_steps, - negative_prompt, - prompt_embeds, - negative_prompt_embeds, - ) - - # 2. Define call parameters - device = self._execution_device - - # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2) - # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1` - # corresponds to doing no classifier free guidance. - do_classifier_free_guidance = guidance_scale > 1.0 - - # 3. Encode input prompt - prompt_embeds, negative_prompt_embeds = self.encode_prompt( - prompt, - do_classifier_free_guidance, - num_images_per_prompt=num_images_per_prompt, - device=device, - negative_prompt=negative_prompt, - prompt_embeds=prompt_embeds, - negative_prompt_embeds=negative_prompt_embeds, - clean_caption=clean_caption, - ) - - if do_classifier_free_guidance: - prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds]) - - dtype = prompt_embeds.dtype - - # 4. Prepare timesteps - if timesteps is not None: - self.scheduler.set_timesteps(timesteps=timesteps, device=device) - timesteps = self.scheduler.timesteps - num_inference_steps = len(timesteps) - else: - self.scheduler.set_timesteps(num_inference_steps, device=device) - timesteps = self.scheduler.timesteps - - timesteps, num_inference_steps = self.get_timesteps(num_inference_steps, strength) - - # 5. Prepare intermediate images - image = self.preprocess_image(image) - image = image.to(device=device, dtype=dtype) - - mask_image = self.preprocess_mask_image(mask_image) - mask_image = mask_image.to(device=device, dtype=dtype) - - if mask_image.shape[0] == 1: - mask_image = mask_image.repeat_interleave(batch_size * num_images_per_prompt, dim=0) - else: - mask_image = mask_image.repeat_interleave(num_images_per_prompt, dim=0) - - noise_timestep = timesteps[0:1] - noise_timestep = noise_timestep.repeat(batch_size * num_images_per_prompt) - - intermediate_images = self.prepare_intermediate_images( - image, noise_timestep, batch_size, num_images_per_prompt, dtype, device, mask_image, generator - ) - - # 6. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline - extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta) - - # HACK: see comment in `enable_model_cpu_offload` - if hasattr(self, "text_encoder_offload_hook") and self.text_encoder_offload_hook is not None: - self.text_encoder_offload_hook.offload() - - # 7. 
Denoising loop - num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order - with self.progress_bar(total=num_inference_steps) as progress_bar: - for i, t in enumerate(timesteps): - model_input = ( - torch.cat([intermediate_images] * 2) if do_classifier_free_guidance else intermediate_images - ) - model_input = self.scheduler.scale_model_input(model_input, t) - - # predict the noise residual - noise_pred = self.unet( - model_input, - t, - encoder_hidden_states=prompt_embeds, - cross_attention_kwargs=cross_attention_kwargs, - return_dict=False, - )[0] - - # perform guidance - if do_classifier_free_guidance: - noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) - noise_pred_uncond, _ = noise_pred_uncond.split(model_input.shape[1], dim=1) - noise_pred_text, predicted_variance = noise_pred_text.split(model_input.shape[1], dim=1) - noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) - noise_pred = torch.cat([noise_pred, predicted_variance], dim=1) - - if self.scheduler.config.variance_type not in ["learned", "learned_range"]: - noise_pred, _ = noise_pred.split(model_input.shape[1], dim=1) - - # compute the previous noisy sample x_t -> x_t-1 - prev_intermediate_images = intermediate_images - - intermediate_images = self.scheduler.step( - noise_pred, t, intermediate_images, **extra_step_kwargs, return_dict=False - )[0] - - intermediate_images = (1 - mask_image) * prev_intermediate_images + mask_image * intermediate_images - - # call the callback, if provided - if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0): - progress_bar.update() - if callback is not None and i % callback_steps == 0: - callback(i, t, intermediate_images) - - image = intermediate_images - - if output_type == "pil": - # 8. Post-processing - image = (image / 2 + 0.5).clamp(0, 1) - image = image.cpu().permute(0, 2, 3, 1).float().numpy() - - # 9. Run safety checker - image, nsfw_detected, watermark_detected = self.run_safety_checker(image, device, prompt_embeds.dtype) - - # 10. Convert to PIL - image = self.numpy_to_pil(image) - - # 11. Apply watermark - if self.watermarker is not None: - self.watermarker.apply_watermark(image, self.unet.config.sample_size) - elif output_type == "pt": - nsfw_detected = None - watermark_detected = None - - if hasattr(self, "unet_offload_hook") and self.unet_offload_hook is not None: - self.unet_offload_hook.offload() - else: - # 8. Post-processing - image = (image / 2 + 0.5).clamp(0, 1) - image = image.cpu().permute(0, 2, 3, 1).float().numpy() - - # 9. 
Run safety checker - image, nsfw_detected, watermark_detected = self.run_safety_checker(image, device, prompt_embeds.dtype) - - # Offload all models - self.maybe_free_model_hooks() - - if not return_dict: - return (image, nsfw_detected, watermark_detected) - - return IFPipelineOutput(images=image, nsfw_detected=nsfw_detected, watermark_detected=watermark_detected) diff --git a/spaces/parkyzh/bingo/src/components/ui/codeblock.tsx b/spaces/parkyzh/bingo/src/components/ui/codeblock.tsx deleted file mode 100644 index aabda4e3b59f4e36b6ab79feb19d8d18b70e881b..0000000000000000000000000000000000000000 --- a/spaces/parkyzh/bingo/src/components/ui/codeblock.tsx +++ /dev/null @@ -1,142 +0,0 @@ -'use client' - -import { FC, memo } from 'react' -import { Prism as SyntaxHighlighter } from 'react-syntax-highlighter' -import { coldarkDark } from 'react-syntax-highlighter/dist/cjs/styles/prism' - -import { useCopyToClipboard } from '@/lib/hooks/use-copy-to-clipboard' -import { IconCheck, IconCopy, IconDownload } from '@/components/ui/icons' -import { Button } from '@/components/ui/button' - -interface Props { - language: string - value: string -} - -interface languageMap { - [key: string]: string | undefined -} - -export const programmingLanguages: languageMap = { - javascript: '.js', - python: '.py', - java: '.java', - c: '.c', - cpp: '.cpp', - 'c++': '.cpp', - 'c#': '.cs', - ruby: '.rb', - php: '.php', - swift: '.swift', - 'objective-c': '.m', - kotlin: '.kt', - typescript: '.ts', - go: '.go', - perl: '.pl', - rust: '.rs', - scala: '.scala', - haskell: '.hs', - lua: '.lua', - shell: '.sh', - sql: '.sql', - html: '.html', - css: '.css' - // add more file extensions here, make sure the key is same as language prop in CodeBlock.tsx component -} - -export const generateRandomString = (length: number, lowercase = false) => { - const chars = 'ABCDEFGHJKLMNPQRSTUVWXY3456789' // excluding similar looking characters like Z, 2, I, 1, O, 0 - let result = '' - for (let i = 0; i < length; i++) { - result += chars.charAt(Math.floor(Math.random() * chars.length)) - } - return lowercase ? result.toLowerCase() : result -} - -const CodeBlock: FC = memo(({ language, value }) => { - const { isCopied, copyToClipboard } = useCopyToClipboard({ timeout: 2000 }) - - const downloadAsFile = () => { - if (typeof window === 'undefined') { - return - } - const fileExtension = programmingLanguages[language] || '.file' - const suggestedFileName = `file-${generateRandomString( - 3, - true - )}${fileExtension}` - const fileName = window.prompt('Enter file name' || '', suggestedFileName) - - if (!fileName) { - // User pressed cancel on prompt. - return - } - - const blob = new Blob([value], { type: 'text/plain' }) - const url = URL.createObjectURL(blob) - const link = document.createElement('a') - link.download = fileName - link.href = url - link.style.display = 'none' - document.body.appendChild(link) - link.click() - document.body.removeChild(link) - URL.revokeObjectURL(url) - } - - const onCopy = () => { - if (isCopied) return - copyToClipboard(value) - } - - return ( -
- <div className="relative w-full font-sans codeblock">
-   <div className="flex w-full items-center justify-between px-6 py-2">
-     <span className="text-xs lowercase">{language}</span>
-     <Button variant="ghost" size="icon" onClick={downloadAsFile}><IconDownload /><span className="sr-only">Download</span></Button>
-     <Button variant="ghost" size="icon" onClick={onCopy}>{isCopied ? <IconCheck /> : <IconCopy />}<span className="sr-only">Copy code</span></Button>
-   </div>
-   <SyntaxHighlighter language={language} style={coldarkDark} PreTag="div" showLineNumbers>
-     {value}
-   </SyntaxHighlighter>
- </div>
        - ) -}) -CodeBlock.displayName = 'CodeBlock' - -export { CodeBlock } diff --git a/spaces/parthb3/YouTube_Podcast_Summary/app.py b/spaces/parthb3/YouTube_Podcast_Summary/app.py deleted file mode 100644 index 14cec40774ed33d942d1df16393a85b642eefcad..0000000000000000000000000000000000000000 --- a/spaces/parthb3/YouTube_Podcast_Summary/app.py +++ /dev/null @@ -1,210 +0,0 @@ -# -*- coding: utf-8 -*- -"""pod_to_sum_v3.ipynb - -Automatically generated by Colaboratory. - -Original file is located at - https://colab.research.google.com/drive/1rbZ98r1Z_IM0Z3VDuNQObxpuZf5KUgmL - -### Initialization -""" - -import os -save_dir= os.path.join('./','docs') -if not os.path.exists(save_dir): - os.mkdir(save_dir) - -transcription_model = "openai/whisper-base" -llm_model = "gmurro/bart-large-finetuned-filtered-spotify-podcast-summ" - -import pandas as pd -import numpy as np -import pytube -from pytube import YouTube -import transformers -from transformers import pipeline -import torch - -device = "cuda" if torch.cuda.is_available() else "cpu" - -"""### Define how to get transcript of the YT video""" - -def get_transcript(url): - yt_video = YouTube(str(url)) - yt_audio = yt_video.streams.filter(only_audio=True, file_extension='mp4').first() # get 1st available audio stream - out_file = yt_audio.download(filename="audio.mp4", output_path = save_dir) - - asr = pipeline("automatic-speech-recognition", model=transcription_model, device=device) - - import librosa - speech_array, sampling_rate = librosa.load(out_file, sr=16000) # getting audio file array - - audio_text = asr( - speech_array, - max_new_tokens=256, - generate_kwargs={"task": "transcribe"}, - chunk_length_s=30, - batch_size=8) # calling whisper model - - del(asr) - torch.cuda.empty_cache() #deleting cache - - return audio_text['text'] - -"""### Define functions to generate summary""" - -def clean_sent(sent_list): - new_sent_list = [sent_list[0]] - for i in range(len(sent_list)): - if sent_list[i] != new_sent_list[-1]: new_sent_list.append(sent_list[i]) - return new_sent_list - -import nltk -nltk.download('punkt') - -def get_chunks (audio_text, sent_overlap, max_token, tokenizer): - # pre-processing text - sentences = nltk.tokenize.sent_tokenize(audio_text) - sentences = clean_sent(sentences) - - first_sentence = 0 - last_sentence = 0 - chunks=[] - while last_sentence <= len(sentences) - 1: - last_sentence = first_sentence - chunk_parts = [] - chunk_size = 0 - for sentence in sentences[first_sentence:]: - sentence_sz = len(tokenizer.tokenize(sentence)) - if chunk_size + sentence_sz > max_token: - break - - chunk_parts.append(sentence) - chunk_size += sentence_sz - last_sentence += 1 - - chunks.append(" ".join(chunk_parts)) - first_sentence = last_sentence - sent_overlap - return chunks - -"""### Define how to get summary of the transcript""" - -def get_summary(audio_text): - import re - audio_text = re.sub(r'\b(\w+) \1\b', r'\1', audio_text, flags=re.IGNORECASE) # cleaning text - - from transformers import AutoTokenizer - tokenizer = AutoTokenizer.from_pretrained(llm_model) # set tockenizer - - from transformers import pipeline - summarizer = pipeline("summarization", model=llm_model) # set summarizer - - model_max_tokens = tokenizer.model_max_length # get max tockens model can process - text_tokens = len(tokenizer.tokenize(audio_text)) # get number of tockens in audio text - - def get_map_summary(chunk_text, summarizer): - max_token = model_max_tokens - 2 #protect for "" before and after the text - sent_overlap = 3 #overlapping sentences 
between 2 chunks - sent_chunks = get_chunks(audio_text = chunk_text,sent_overlap = sent_overlap,max_token = max_token, tokenizer = tokenizer) # get chunks - chunk_summary_list = summarizer(sent_chunks,min_length=50, max_length=200, batch_size=8) # get summary per chunk - - grouped_summary = "" - for c in chunk_summary_list: grouped_summary += c['summary_text'] + " " - - - return grouped_summary - - # check text requires map-reduce stategy - map_text = audio_text - long_summary = "" - - while text_tokens > model_max_tokens: - map_summary = get_map_summary(chunk_text=map_text, summarizer=summarizer) - text_tokens = len(tokenizer.tokenize(map_summary)) - long_summary = map_summary - map_text = map_summary - - # else deploy reduce method - else: - max_token = round(text_tokens*0.3) # 1/3rd reduction - final_summary = summarizer(map_text,min_length=35, max_length=max_token) - final_summary = final_summary[0]["summary_text"] - - if long_summary == "": long_summary = "The video is too short to produce a descriptive summary" - - del(tokenizer, summarizer) - torch.cuda.empty_cache() #deleting cache - - return final_summary, long_summary - - -"""### Defining Gradio App""" - -import gradio as gr - -import pytube -from pytube import YouTube - -def get_youtube_title(url): - yt = YouTube(str(url)) - return yt.title - -def get_video(url): - vid_id = pytube.extract.video_id(url) - embed_html = ''.format(vid_id) - return embed_html - -def summarize_youtube_video(url): - print("URL:",url) - text = get_transcript(url) - print("Transcript:",text[:500]) - short_summary, long_summary = get_summary(text) - print("Short Summary:",short_summary) - print("Long Summary:",long_summary) - return text, short_summary, long_summary - -html = '' - -# Defining the structure of the UI -with gr.Blocks() as demo: - with gr.Row(): - gr.Markdown("# Summarize a Long YouTube Video") - - with gr.Row(): - with gr.Column(scale=4): - url = gr.Textbox(label="Enter YouTube video link here:",placeholder="Place for youtube link..") - with gr.Column(scale=1): - sum_btn = gr.Button("Summarize!") - - gr.Markdown("# Results") - - title = gr.Textbox(label="Video Title",placeholder="title...") - - with gr.Row(): - with gr.Column(scale=4): - video = gr.HTML(html,scale=1) - with gr.Column(): - with gr.Row(): - short_summary = gr.Textbox(label="Gist",placeholder="short summary...",scale=1) - with gr.Row(): - long_summary = gr.Textbox(label="Summary",placeholder="long summary...",scale=2) - - - with gr.Row(): - with gr.Group(): - text = gr.Textbox(label="Full Transcript",placeholder="transcript...",show_label=True) - - with gr.Accordion("Credits and Notes",open=False): - gr.Markdown(""" - 1. Transcipt is generated by openai/whisper-base model by downloading YouTube video.\n - 2. Summary is generated by gmurro/bart-large-finetuned-filtered-spotify-podcast-summ.\n - 3. 
The app is possible because of Hugging Face Transformers.\n - """) - - # Defining the functions to call on clicking the button - sum_btn.click(fn=get_youtube_title, inputs=url, outputs=title, api_name="get_youtube_title", queue=False) - sum_btn.click(fn=summarize_youtube_video, inputs=url, outputs=[text, short_summary, long_summary], api_name="summarize_youtube_video", queue=True) - sum_btn.click(fn=get_video, inputs=url, outputs=video, api_name="get_youtube_video", queue=False) - -demo.queue() -demo.launch(share=False) \ No newline at end of file diff --git a/spaces/passaglia/yomikata-demo/README.md b/spaces/passaglia/yomikata-demo/README.md deleted file mode 100644 index ba453ed14324991cca0a91de1368f9c4c44c163a..0000000000000000000000000000000000000000 --- a/spaces/passaglia/yomikata-demo/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Yomikata -emoji: 🐠 -colorFrom: green -colorTo: red -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/paulengstler/interpretable-vertebral-fracture-diagnosis/netdissect/upsegmodel/prroi_pool/prroi_pool.py b/spaces/paulengstler/interpretable-vertebral-fracture-diagnosis/netdissect/upsegmodel/prroi_pool/prroi_pool.py deleted file mode 100644 index 998b2b80531058fa91ac138e79ae39c5c0174601..0000000000000000000000000000000000000000 --- a/spaces/paulengstler/interpretable-vertebral-fracture-diagnosis/netdissect/upsegmodel/prroi_pool/prroi_pool.py +++ /dev/null @@ -1,28 +0,0 @@ -#! /usr/bin/env python3 -# -*- coding: utf-8 -*- -# File : prroi_pool.py -# Author : Jiayuan Mao, Tete Xiao -# Email : maojiayuan@gmail.com, jasonhsiao97@gmail.com -# Date : 07/13/2018 -# -# This file is part of PreciseRoIPooling. -# Distributed under terms of the MIT license. -# Copyright (c) 2017 Megvii Technology Limited. 
- -import torch.nn as nn - -from .functional import prroi_pool2d - -__all__ = ['PrRoIPool2D'] - - -class PrRoIPool2D(nn.Module): - def __init__(self, pooled_height, pooled_width, spatial_scale): - super().__init__() - - self.pooled_height = int(pooled_height) - self.pooled_width = int(pooled_width) - self.spatial_scale = float(spatial_scale) - - def forward(self, features, rois): - return prroi_pool2d(features, rois, self.pooled_height, self.pooled_width, self.spatial_scale) diff --git a/spaces/pikto/Elite-freegpt-webui/g4f/Provider/Providers/You.py b/spaces/pikto/Elite-freegpt-webui/g4f/Provider/Providers/You.py deleted file mode 100644 index 02a2774ce62bae33612a73272d584dc2acaf3eb0..0000000000000000000000000000000000000000 --- a/spaces/pikto/Elite-freegpt-webui/g4f/Provider/Providers/You.py +++ /dev/null @@ -1,24 +0,0 @@ -import os -import json -import time -import subprocess - -from ...typing import sha256, Dict, get_type_hints - -url = 'https://you.com' -model = 'gpt-3.5-turbo' -supports_stream = True -needs_auth = False - -def _create_completion(model: str, messages: list, stream: bool, **kwargs): - - path = os.path.dirname(os.path.realpath(__file__)) - config = json.dumps({ - 'messages': messages}, separators=(',', ':')) - - cmd = ['python3', f'{path}/helpers/you.py', config] - - p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT) - - for line in iter(p.stdout.readline, b''): - yield line.decode('utf-8') #[:-1] \ No newline at end of file diff --git a/spaces/pixiou/bingo/src/components/theme-toggle.tsx b/spaces/pixiou/bingo/src/components/theme-toggle.tsx deleted file mode 100644 index 67d3f1a2c163ccbeb52c40a7e42f107190237154..0000000000000000000000000000000000000000 --- a/spaces/pixiou/bingo/src/components/theme-toggle.tsx +++ /dev/null @@ -1,31 +0,0 @@ -'use client' - -import * as React from 'react' -import { useTheme } from 'next-themes' - -import { Button } from '@/components/ui/button' -import { IconMoon, IconSun } from '@/components/ui/icons' - -export function ThemeToggle() { - const { setTheme, theme } = useTheme() - const [_, startTransition] = React.useTransition() - - return ( - - ) -} diff --git a/spaces/pkiage/time_series_decomposition_demo/README.md b/spaces/pkiage/time_series_decomposition_demo/README.md deleted file mode 100644 index 8492ff276758835aaedf343d1c3165e182c2d03c..0000000000000000000000000000000000000000 --- a/spaces/pkiage/time_series_decomposition_demo/README.md +++ /dev/null @@ -1,105 +0,0 @@ ---- -title: Time Series Decomposition Demo -emoji: 📈 -colorFrom: green -colorTo: blue -sdk: streamlit -sdk_version: 1.21.0 -app_file: app.py -pinned: false -license: openrail ---- - -# Time series decomposition tool - -Tool demonstrating time series decomposition in Python. - -Assumes uploaded data is clean. 
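For orientation, the classical decomposition this demo performs can be sketched in a few lines with `statsmodels`. The file path and column names below are illustrative assumptions, not files shipped with this repo.

```python
# Minimal sketch of additive seasonal decomposition with statsmodels.
# "data.csv", "date" and "value" are placeholder names for an uploaded series.
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

df = pd.read_csv("data.csv", parse_dates=["date"], index_col="date")
result = seasonal_decompose(df["value"], model="additive", period=12)

# The result exposes trend, seasonal, and residual components as separate series.
print(result.trend.dropna().head())
print(result.seasonal.head())
print(result.resid.dropna().head())
```

In practice, a demo like this would typically let the user pick the `model` ("additive" vs "multiplicative") and the seasonal `period` to match their data.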
- -## Built With - -- [Streamlit](https://streamlit.io/) - - -## Local setup - -### Obtain the repo locally and open its root folder - -#### To potentially contribute - -```shell -git clone https://github.com/pkiage/tool-time-series-decomposition-demo -``` - -or - -```shell -gh repo clone pkiage/tool-time-series-autocorrelation-demo -``` - -#### Just to deploy locally - -Download ZIP - -### (optional) Setup virtual environment: - -```shell -python -m venv venv -``` - -### (optional) Activate virtual environment: - -#### If using Unix based OS run the following in terminal: - -```shell -.\venv\bin\activate -``` - -#### If using Windows run the following in terminal: - -```shell -.\venv\Scripts\activate -``` - -### Install requirements by running the following in terminal: - -#### Required packages - -```shell -pip install -r requirements.txt -``` - -## Build and install local package - -```shell -python setup.py build -``` - -```shell -python setup.py install -``` - -### Run the streamlit app (app.py) by running the following in terminal (from repository root folder): - -```shell -streamlit run src/app.py -``` - -## Hugging Face Tips - -Initial Setup -- [When creating the Spaces Configuration Reference](https://huggingface.co/docs/hub/spaces-config-reference) ensure the [Streamlit Space](https://huggingface.co/docs/hub/spaces-sdks-streamlit) version (sdk_version) specified is supported by HF - -```shell -git remote add space https://huggingface.co/spaces/pkiage/time_series_decomposition_demo - -git push --force space main -``` -- [When syncing with Hugging Face via Github Actions](https://huggingface.co/docs/hub/spaces-github-actions) the [User Access Token](https://huggingface.co/docs/hub/security-tokens) created on Hugging Face (HF) should have write access - - -## Demo Links -- Hugging Face Space: https://huggingface.co/spaces/pkiage/time_series_decomposition_demo -- Streamlit Community Cloud: https://pkiage-tool-time-series-autocorrelation-demo-app-l0umps.streamlit.app/ - - diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/commands/show.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/commands/show.py deleted file mode 100644 index 3f10701f6b28c72b62c9904fec37b96bdd199dcc..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/commands/show.py +++ /dev/null @@ -1,189 +0,0 @@ -import logging -from optparse import Values -from typing import Generator, Iterable, Iterator, List, NamedTuple, Optional - -from pip._vendor.packaging.utils import canonicalize_name - -from pip._internal.cli.base_command import Command -from pip._internal.cli.status_codes import ERROR, SUCCESS -from pip._internal.metadata import BaseDistribution, get_default_environment -from pip._internal.utils.misc import write_output - -logger = logging.getLogger(__name__) - - -class ShowCommand(Command): - """ - Show information about one or more installed packages. - - The output is in RFC-compliant mail header format. 
- """ - - usage = """ - %prog [options] ...""" - ignore_require_venv = True - - def add_options(self) -> None: - self.cmd_opts.add_option( - "-f", - "--files", - dest="files", - action="store_true", - default=False, - help="Show the full list of installed files for each package.", - ) - - self.parser.insert_option_group(0, self.cmd_opts) - - def run(self, options: Values, args: List[str]) -> int: - if not args: - logger.warning("ERROR: Please provide a package name or names.") - return ERROR - query = args - - results = search_packages_info(query) - if not print_results( - results, list_files=options.files, verbose=options.verbose - ): - return ERROR - return SUCCESS - - -class _PackageInfo(NamedTuple): - name: str - version: str - location: str - editable_project_location: Optional[str] - requires: List[str] - required_by: List[str] - installer: str - metadata_version: str - classifiers: List[str] - summary: str - homepage: str - project_urls: List[str] - author: str - author_email: str - license: str - entry_points: List[str] - files: Optional[List[str]] - - -def search_packages_info(query: List[str]) -> Generator[_PackageInfo, None, None]: - """ - Gather details from installed distributions. Print distribution name, - version, location, and installed files. Installed files requires a - pip generated 'installed-files.txt' in the distributions '.egg-info' - directory. - """ - env = get_default_environment() - - installed = {dist.canonical_name: dist for dist in env.iter_all_distributions()} - query_names = [canonicalize_name(name) for name in query] - missing = sorted( - [name for name, pkg in zip(query, query_names) if pkg not in installed] - ) - if missing: - logger.warning("Package(s) not found: %s", ", ".join(missing)) - - def _get_requiring_packages(current_dist: BaseDistribution) -> Iterator[str]: - return ( - dist.metadata["Name"] or "UNKNOWN" - for dist in installed.values() - if current_dist.canonical_name - in {canonicalize_name(d.name) for d in dist.iter_dependencies()} - ) - - for query_name in query_names: - try: - dist = installed[query_name] - except KeyError: - continue - - requires = sorted((req.name for req in dist.iter_dependencies()), key=str.lower) - required_by = sorted(_get_requiring_packages(dist), key=str.lower) - - try: - entry_points_text = dist.read_text("entry_points.txt") - entry_points = entry_points_text.splitlines(keepends=False) - except FileNotFoundError: - entry_points = [] - - files_iter = dist.iter_declared_entries() - if files_iter is None: - files: Optional[List[str]] = None - else: - files = sorted(files_iter) - - metadata = dist.metadata - - yield _PackageInfo( - name=dist.raw_name, - version=str(dist.version), - location=dist.location or "", - editable_project_location=dist.editable_project_location, - requires=requires, - required_by=required_by, - installer=dist.installer, - metadata_version=dist.metadata_version or "", - classifiers=metadata.get_all("Classifier", []), - summary=metadata.get("Summary", ""), - homepage=metadata.get("Home-page", ""), - project_urls=metadata.get_all("Project-URL", []), - author=metadata.get("Author", ""), - author_email=metadata.get("Author-email", ""), - license=metadata.get("License", ""), - entry_points=entry_points, - files=files, - ) - - -def print_results( - distributions: Iterable[_PackageInfo], - list_files: bool, - verbose: bool, -) -> bool: - """ - Print the information from installed distributions found. 
- """ - results_printed = False - for i, dist in enumerate(distributions): - results_printed = True - if i > 0: - write_output("---") - - write_output("Name: %s", dist.name) - write_output("Version: %s", dist.version) - write_output("Summary: %s", dist.summary) - write_output("Home-page: %s", dist.homepage) - write_output("Author: %s", dist.author) - write_output("Author-email: %s", dist.author_email) - write_output("License: %s", dist.license) - write_output("Location: %s", dist.location) - if dist.editable_project_location is not None: - write_output( - "Editable project location: %s", dist.editable_project_location - ) - write_output("Requires: %s", ", ".join(dist.requires)) - write_output("Required-by: %s", ", ".join(dist.required_by)) - - if verbose: - write_output("Metadata-Version: %s", dist.metadata_version) - write_output("Installer: %s", dist.installer) - write_output("Classifiers:") - for classifier in dist.classifiers: - write_output(" %s", classifier) - write_output("Entry-points:") - for entry in dist.entry_points: - write_output(" %s", entry.strip()) - write_output("Project-URLs:") - for project_url in dist.project_urls: - write_output(" %s", project_url) - if list_files: - write_output("Files:") - if dist.files is None: - write_output("Cannot locate RECORD or installed-files.txt") - else: - for line in dist.files: - write_output(" %s", line.strip()) - return results_printed diff --git a/spaces/power2/JoJoGan-powerhow2/op/upfirdn2d.py b/spaces/power2/JoJoGan-powerhow2/op/upfirdn2d.py deleted file mode 100644 index f1bbf96777f2c7267c1fef1733972014684ea22b..0000000000000000000000000000000000000000 --- a/spaces/power2/JoJoGan-powerhow2/op/upfirdn2d.py +++ /dev/null @@ -1,187 +0,0 @@ -import os - -import torch -from torch.autograd import Function -from torch.utils.cpp_extension import load - - -module_path = os.path.dirname(__file__) -upfirdn2d_op = load( - 'upfirdn2d', - sources=[ - os.path.join(module_path, 'upfirdn2d.cpp'), - os.path.join(module_path, 'upfirdn2d_kernel.cu'), - ], -) - - -class UpFirDn2dBackward(Function): - @staticmethod - def forward( - ctx, grad_output, kernel, grad_kernel, up, down, pad, g_pad, in_size, out_size - ): - - up_x, up_y = up - down_x, down_y = down - g_pad_x0, g_pad_x1, g_pad_y0, g_pad_y1 = g_pad - - grad_output = grad_output.reshape(-1, out_size[0], out_size[1], 1) - - grad_input = upfirdn2d_op.upfirdn2d( - grad_output, - grad_kernel, - down_x, - down_y, - up_x, - up_y, - g_pad_x0, - g_pad_x1, - g_pad_y0, - g_pad_y1, - ) - grad_input = grad_input.view(in_size[0], in_size[1], in_size[2], in_size[3]) - - ctx.save_for_backward(kernel) - - pad_x0, pad_x1, pad_y0, pad_y1 = pad - - ctx.up_x = up_x - ctx.up_y = up_y - ctx.down_x = down_x - ctx.down_y = down_y - ctx.pad_x0 = pad_x0 - ctx.pad_x1 = pad_x1 - ctx.pad_y0 = pad_y0 - ctx.pad_y1 = pad_y1 - ctx.in_size = in_size - ctx.out_size = out_size - - return grad_input - - @staticmethod - def backward(ctx, gradgrad_input): - kernel, = ctx.saved_tensors - - gradgrad_input = gradgrad_input.reshape(-1, ctx.in_size[2], ctx.in_size[3], 1) - - gradgrad_out = upfirdn2d_op.upfirdn2d( - gradgrad_input, - kernel, - ctx.up_x, - ctx.up_y, - ctx.down_x, - ctx.down_y, - ctx.pad_x0, - ctx.pad_x1, - ctx.pad_y0, - ctx.pad_y1, - ) - # gradgrad_out = gradgrad_out.view(ctx.in_size[0], ctx.out_size[0], ctx.out_size[1], ctx.in_size[3]) - gradgrad_out = gradgrad_out.view( - ctx.in_size[0], ctx.in_size[1], ctx.out_size[0], ctx.out_size[1] - ) - - return gradgrad_out, None, None, None, None, None, None, None, None - - -class 
UpFirDn2d(Function): - @staticmethod - def forward(ctx, input, kernel, up, down, pad): - up_x, up_y = up - down_x, down_y = down - pad_x0, pad_x1, pad_y0, pad_y1 = pad - - kernel_h, kernel_w = kernel.shape - batch, channel, in_h, in_w = input.shape - ctx.in_size = input.shape - - input = input.reshape(-1, in_h, in_w, 1) - - ctx.save_for_backward(kernel, torch.flip(kernel, [0, 1])) - - out_h = (in_h * up_y + pad_y0 + pad_y1 - kernel_h) // down_y + 1 - out_w = (in_w * up_x + pad_x0 + pad_x1 - kernel_w) // down_x + 1 - ctx.out_size = (out_h, out_w) - - ctx.up = (up_x, up_y) - ctx.down = (down_x, down_y) - ctx.pad = (pad_x0, pad_x1, pad_y0, pad_y1) - - g_pad_x0 = kernel_w - pad_x0 - 1 - g_pad_y0 = kernel_h - pad_y0 - 1 - g_pad_x1 = in_w * up_x - out_w * down_x + pad_x0 - up_x + 1 - g_pad_y1 = in_h * up_y - out_h * down_y + pad_y0 - up_y + 1 - - ctx.g_pad = (g_pad_x0, g_pad_x1, g_pad_y0, g_pad_y1) - - out = upfirdn2d_op.upfirdn2d( - input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1 - ) - # out = out.view(major, out_h, out_w, minor) - out = out.view(-1, channel, out_h, out_w) - - return out - - @staticmethod - def backward(ctx, grad_output): - kernel, grad_kernel = ctx.saved_tensors - - grad_input = UpFirDn2dBackward.apply( - grad_output, - kernel, - grad_kernel, - ctx.up, - ctx.down, - ctx.pad, - ctx.g_pad, - ctx.in_size, - ctx.out_size, - ) - - return grad_input, None, None, None, None - - -def upfirdn2d(input, kernel, up=1, down=1, pad=(0, 0)): - out = UpFirDn2d.apply( - input, kernel, (up, up), (down, down), (pad[0], pad[1], pad[0], pad[1]) - ) - - return out - - -def upfirdn2d_native( - input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1 -): - _, in_h, in_w, minor = input.shape - kernel_h, kernel_w = kernel.shape - - out = input.view(-1, in_h, 1, in_w, 1, minor) - out = F.pad(out, [0, 0, 0, up_x - 1, 0, 0, 0, up_y - 1]) - out = out.view(-1, in_h * up_y, in_w * up_x, minor) - - out = F.pad( - out, [0, 0, max(pad_x0, 0), max(pad_x1, 0), max(pad_y0, 0), max(pad_y1, 0)] - ) - out = out[ - :, - max(-pad_y0, 0) : out.shape[1] - max(-pad_y1, 0), - max(-pad_x0, 0) : out.shape[2] - max(-pad_x1, 0), - :, - ] - - out = out.permute(0, 3, 1, 2) - out = out.reshape( - [-1, 1, in_h * up_y + pad_y0 + pad_y1, in_w * up_x + pad_x0 + pad_x1] - ) - w = torch.flip(kernel, [0, 1]).view(1, 1, kernel_h, kernel_w) - out = F.conv2d(out, w) - out = out.reshape( - -1, - minor, - in_h * up_y + pad_y0 + pad_y1 - kernel_h + 1, - in_w * up_x + pad_x0 + pad_x1 - kernel_w + 1, - ) - out = out.permute(0, 2, 3, 1) - - return out[:, ::down_y, ::down_x, :] - diff --git a/spaces/pranked03/amazon-product-comparer/create_table.py b/spaces/pranked03/amazon-product-comparer/create_table.py deleted file mode 100644 index 37c9b803612aa3963c2a178c74b6009afd173e46..0000000000000000000000000000000000000000 --- a/spaces/pranked03/amazon-product-comparer/create_table.py +++ /dev/null @@ -1,206 +0,0 @@ -import validators -from selectorlib import Extractor -import requests -import json -import time -import csv -from dateutil.parser import parse -import sys, os -import re -from datetime import date, datetime -import numpy as np -import math -import concurrent.futures -import boto3 -import botocore -from io import StringIO -import pandas as pd -import streamlit as st -import streamlit.components.v1 as components -import base64 -import uuid -#import pyperclip -#from IPython.core.display import HTML -from bokeh.plotting import figure -import plotly.express as px -import plotly.graph_objects as go 
-from all_funcs import * - - -def create_table(theurls): - e = Extractor.from_yaml_file('selectors.yml') - all_five_star = [] - all_time_diff = [] - all_hun_days = [] - all_rating = [] - all_verified = [] - all_helped = [] - urls_used = [] - product_names = [] - all_reviews = [] - all_amazon_ratings = [] - all_count_of_day = [] - string = "" - fig = go.Figure() - prime = False - today = parse(date.today().strftime("%Y-%m-%d")) - url_dataframe = pd.DataFrame() - - spin = st.empty() - stat = st.empty() - print(theurls) - for i in theurls: - try: - asin = find_asin(i) - print(asin) - if len(asin) != 10: - raise ValueError - except: - st.error("ASIN NUMBER NOT FOUND IN URL! PLEASE CHECK FORMAT OF URL") - prime = False - break - file_name = asin+'.csv' - print(file_name) - try: - df = s3.get_object(Bucket='productreviewsdata', Key="alldata/"+file_name) - body = df["Body"].read().decode('utf-8') - df_data = pd.read_csv(StringIO(body)) - try: - title = list(set(df_data["product"]))[0] - print(list(set(df_data["title"]))) - if list(set(df_data["title"]))[0] == "-": - st.error(title + " has 0 reviews. Please remove it from your list and try again!") - break - - except IndexError: - string = string + "https://www.amazon.in/product-reviews/"+asin+"\n" - break - stat.info("Getting " + title + "....") - product_names.append(title) - try: - all_amazon_ratings.append(str(list(set(df_data["amazon_rating"]))[0])) - except: - all_amazon_ratings.append("-") - urls_used.append(list(set(df_data["url"]))[0]) - string = string+list(set(df_data["url"]))[0]+"\n" - #st.write(df_data) - if len(df_data)==0: - pass - #string = string + "https://www.amazon.in/product-reviews/"+asin+"\n" - #st.write(string) - else: - fig = create_graph(fig, df_data) - df_len, deltaT, rate, ind_time_diff, ind_rating, ind_verified, ind_helped, count_of_day, count_of_five_star, ind_hun_days = getrate(df_data) - #print(df_len) - all_reviews.append(str(df_len)) - all_time_diff.append(ind_time_diff) - all_rating.append(ind_rating) - all_verified.append(ind_verified) - all_helped.append(ind_helped) - all_count_of_day.append(count_of_day) - all_five_star.append(count_of_five_star) - all_hun_days.append(ind_hun_days) - prime=True - - except botocore.exceptions.ClientError: - st.info("Request sent for " + asin) - create_df = pd.DataFrame({"title":[], "content": [], 'date':[], "author": [], "rating":[], "product":[], "url":[], "verified":[], "helped": [], "amazon_rating": []}) - bucket = 'productreviewsdata' - csv_buffer = StringIO() - create_df.to_csv(csv_buffer, index=False) - res.Object(bucket, 'alldata/'+asin+'.csv').put(Body=csv_buffer.getvalue()) - string = string + "https://www.amazon.in/product-reviews/"+asin+"\n" - prime=False - dataf = pd.DataFrame({'Product': [], - 'Our Rating': [], - 'Total Verified Purchases': [], - 'No. of Verified Purchases in last 100 days':[], - 'No. of Verified Purchases that have 5 stars in the last 100 days':[], - 'Amazon Rating': [], - 'URL': []}) - - if prime and len(all_time_diff) == len(st.session_state["linksFinal"]): - fig.update_layout( - title="Graph of reviews", - xaxis_title="Date", - yaxis_title="No. 
of Reviews", - legend_title="Products", - font=dict( - family="Courier New, monospace", - color="black")) - rates = relative_rates(all_time_diff, all_rating, all_verified, all_helped) - for record in range(0, len(urls_used)): - #dataf.append([product_names[record], all_reviews[record], rates[record], all_amazon_ratings[record]]) - - to_insert = { - 'Product': product_names[record][:70]+"...", - 'Our Rating': rates[record], - 'Total Verified Purchases': all_reviews[record], - 'No. of Verified Purchases in last 100 days': str(all_count_of_day[record]), - 'No. of Verified Purchases that have 5 stars in the last 100 days': str(all_five_star[record]), - 'Amazon Rating': all_amazon_ratings[record], - 'URL': urls_used[record] - } - dataf = dataf.append(to_insert, ignore_index=True) - dataf = dataf.sort_values(by=['Our Rating'], ascending=False) - dataf.set_index('Product', inplace=True) - stat.empty() - #st.table(dataf.style.format({"Total Reviews": "{:.0f}"})) - - st.table(dataf) - st.plotly_chart(fig) - #st.dataframe(dataf) - else: - stat.empty() - #reqs_spin.empty() - spin.info("Your request is being processed...") - - time.sleep(10) - #st.write(string) - return string - -def save_data_in_session(string, prime_session, sessions_here): - if prime_session ==True: - s_check = string.split("\n") - try: - while True: - s_check.remove("") - except ValueError: - pass - print("THIS") - print(s_check) - if len(s_check) != len(st.session_state.linksFinal): - pass - else: - for ses in sessions_here: - ses_check = ses.split("\n") - try: - while True: - ses_check.remove("") - except ValueError: - pass - print("ses_check") - print(ses_check) - if set(s_check) == set(ses_check): - break - else: - print("HIIIIIIIIIIIIII") - string = st.session_state.dataInBucket+",\n"+string - st.success("Session Saved") - res.Object('productreviewsdata', 'sessions/'+st.session_state["iden"]).put(Body=string) - - else: - s_check = string.split("\n") - try: - while True: - s_check.remove("") - except ValueError: - pass - if len(s_check) !=len(st.session_state.linksFinal): - pass - else: - st.success("Session Saved") - res.Object('productreviewsdata', 'sessions/'+st.session_state["iden"]).put(Body=string) - - - diff --git a/spaces/prerna9811/Chord/portaudio/test/patest_sine_channelmaps.c b/spaces/prerna9811/Chord/portaudio/test/patest_sine_channelmaps.c deleted file mode 100644 index 34767017a6bfba3e40825899cc1a6bf763c2a05f..0000000000000000000000000000000000000000 --- a/spaces/prerna9811/Chord/portaudio/test/patest_sine_channelmaps.c +++ /dev/null @@ -1,190 +0,0 @@ -/* - * patest_sine_channelmaps.c - * - * This program uses the PortAudio Portable Audio Library. - * For more information see: http://www.portaudio.com/ - * Copyright (c) 1999-2000 Ross Bencina and Phil Burk - * - * Permission is hereby granted, free of charge, to any person obtaining - * a copy of this software and associated documentation files - * (the "Software"), to deal in the Software without restriction, - * including without limitation the rights to use, copy, modify, merge, - * publish, distribute, sublicense, and/or sell copies of the Software, - * and to permit persons to whom the Software is furnished to do so, - * subject to the following conditions: - * - * The above copyright notice and this permission notice shall be - * included in all copies or substantial portions of the Software. 
- * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, - * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR - * ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF - * CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION - * WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - */ - -/* - * The text above constitutes the entire PortAudio license; however, - * the PortAudio community also makes the following non-binding requests: - * - * Any person wishing to distribute modifications to the Software is - * requested to send the modifications to the original developer so that - * they can be incorporated into the canonical version. It is also - * requested that these non-binding requests be included along with the - * license above. - */ - -/** @file patest_sine_channelmaps.c - @ingroup test_src - @brief Plays sine waves using sme simple channel maps. - Designed for use with CoreAudio, but should made to work with other APIs - @author Bjorn Roche - @author Ross Bencina - @author Phil Burk -*/ - -#include -#include -#include "portaudio.h" - -#ifdef __APPLE__ -#include "pa_mac_core.h" -#endif - -#define NUM_SECONDS (5) -#define SAMPLE_RATE (44100) -#define FRAMES_PER_BUFFER (64) - -#ifndef M_PI -#define M_PI (3.14159265) -#endif - -#define TABLE_SIZE (200) -typedef struct -{ - float sine[TABLE_SIZE]; - int left_phase; - int right_phase; -} -paTestData; - -/* This routine will be called by the PortAudio engine when audio is needed. -** It may called at interrupt level on some machines so don't do anything -** that could mess up the system like calling malloc() or free(). -*/ -static int patestCallback( const void *inputBuffer, void *outputBuffer, - unsigned long framesPerBuffer, - const PaStreamCallbackTimeInfo* timeInfo, - PaStreamCallbackFlags statusFlags, - void *userData ) -{ - paTestData *data = (paTestData*)userData; - float *out = (float*)outputBuffer; - unsigned long i; - - (void) timeInfo; /* Prevent unused variable warnings. */ - (void) statusFlags; - (void) inputBuffer; - - for( i=0; isine[data->left_phase]; /* left */ - *out++ = data->sine[data->right_phase]; /* right */ - data->left_phase += 1; - if( data->left_phase >= TABLE_SIZE ) data->left_phase -= TABLE_SIZE; - data->right_phase += 3; /* higher pitch so we can distinguish left and right. */ - if( data->right_phase >= TABLE_SIZE ) data->right_phase -= TABLE_SIZE; - } - - return paContinue; -} - -/*******************************************************************/ -int main(void); -int main(void) -{ - PaStreamParameters outputParameters; - PaStream *stream; - PaError err; - paTestData data; -#ifdef __APPLE__ - PaMacCoreStreamInfo macInfo; - const SInt32 channelMap[4] = { -1, -1, 0, 1 }; -#endif - int i; - - - printf("PortAudio Test: output sine wave. 
SR = %d, BufSize = %d\n", SAMPLE_RATE, FRAMES_PER_BUFFER); - printf("Output will be mapped to channels 2 and 3 instead of 0 and 1.\n"); - - /* initialise sinusoidal wavetable */ - for( i=0; idefaultLowOutputLatency; -#ifdef __APPLE__ - outputParameters.hostApiSpecificStreamInfo = &macInfo; -#else - outputParameters.hostApiSpecificStreamInfo = NULL; -#endif - - err = Pa_OpenStream( - &stream, - NULL, /* no input */ - &outputParameters, - SAMPLE_RATE, - FRAMES_PER_BUFFER, - paClipOff, /* we won't output out of range samples so don't bother clipping them */ - patestCallback, - &data ); - if( err != paNoError ) goto error; - - err = Pa_StartStream( stream ); - if( err != paNoError ) goto error; - - printf("Play for %d seconds.\n", NUM_SECONDS ); - Pa_Sleep( NUM_SECONDS * 1000 ); - - err = Pa_StopStream( stream ); - if( err != paNoError ) goto error; - - err = Pa_CloseStream( stream ); - if( err != paNoError ) goto error; - - Pa_Terminate(); - printf("Test finished.\n"); - - return err; -error: - Pa_Terminate(); - fprintf( stderr, "An error occurred while using the portaudio stream\n" ); - fprintf( stderr, "Error number: %d\n", err ); - fprintf( stderr, "Error message: %s\n", Pa_GetErrorText( err ) ); - return err; -} diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/PIL/ImageSequence.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/PIL/ImageSequence.py deleted file mode 100644 index c4bb6334acfde7d245c5bb1722b7c2381661e4ca..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/PIL/ImageSequence.py +++ /dev/null @@ -1,76 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# sequence support classes -# -# history: -# 1997-02-20 fl Created -# -# Copyright (c) 1997 by Secret Labs AB. -# Copyright (c) 1997 by Fredrik Lundh. -# -# See the README file for information on usage and redistribution. -# - -## - - -class Iterator: - """ - This class implements an iterator object that can be used to loop - over an image sequence. - - You can use the ``[]`` operator to access elements by index. This operator - will raise an :py:exc:`IndexError` if you try to access a nonexistent - frame. - - :param im: An image object. - """ - - def __init__(self, im): - if not hasattr(im, "seek"): - msg = "im must have seek method" - raise AttributeError(msg) - self.im = im - self.position = getattr(self.im, "_min_frame", 0) - - def __getitem__(self, ix): - try: - self.im.seek(ix) - return self.im - except EOFError as e: - raise IndexError from e # end of sequence - - def __iter__(self): - return self - - def __next__(self): - try: - self.im.seek(self.position) - self.position += 1 - return self.im - except EOFError as e: - raise StopIteration from e - - -def all_frames(im, func=None): - """ - Applies a given function to all frames in an image or a list of images. - The frames are returned as a list of separate images. - - :param im: An image, or a list of images. - :param func: The function to apply to all of the image frames. - :returns: A list of images. 
- """ - if not isinstance(im, list): - im = [im] - - ims = [] - for imSequence in im: - current = imSequence.tell() - - ims += [im_frame.copy() for im_frame in Iterator(imSequence)] - - imSequence.seek(current) - return [func(im) for im in ims] if func else ims diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fastapi/dependencies/models.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fastapi/dependencies/models.py deleted file mode 100644 index 61ef006387781b81c55fb8222449435c851097fa..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fastapi/dependencies/models.py +++ /dev/null @@ -1,58 +0,0 @@ -from typing import Any, Callable, List, Optional, Sequence - -from fastapi._compat import ModelField -from fastapi.security.base import SecurityBase - - -class SecurityRequirement: - def __init__( - self, security_scheme: SecurityBase, scopes: Optional[Sequence[str]] = None - ): - self.security_scheme = security_scheme - self.scopes = scopes - - -class Dependant: - def __init__( - self, - *, - path_params: Optional[List[ModelField]] = None, - query_params: Optional[List[ModelField]] = None, - header_params: Optional[List[ModelField]] = None, - cookie_params: Optional[List[ModelField]] = None, - body_params: Optional[List[ModelField]] = None, - dependencies: Optional[List["Dependant"]] = None, - security_schemes: Optional[List[SecurityRequirement]] = None, - name: Optional[str] = None, - call: Optional[Callable[..., Any]] = None, - request_param_name: Optional[str] = None, - websocket_param_name: Optional[str] = None, - http_connection_param_name: Optional[str] = None, - response_param_name: Optional[str] = None, - background_tasks_param_name: Optional[str] = None, - security_scopes_param_name: Optional[str] = None, - security_scopes: Optional[List[str]] = None, - use_cache: bool = True, - path: Optional[str] = None, - ) -> None: - self.path_params = path_params or [] - self.query_params = query_params or [] - self.header_params = header_params or [] - self.cookie_params = cookie_params or [] - self.body_params = body_params or [] - self.dependencies = dependencies or [] - self.security_requirements = security_schemes or [] - self.request_param_name = request_param_name - self.websocket_param_name = websocket_param_name - self.http_connection_param_name = http_connection_param_name - self.response_param_name = response_param_name - self.background_tasks_param_name = background_tasks_param_name - self.security_scopes = security_scopes - self.security_scopes_param_name = security_scopes_param_name - self.name = name - self.call = call - self.use_cache = use_cache - # Store the path to be able to re-generate a dependable from it in overrides - self.path = path - # Save the cache key at creation to optimize performance - self.cache_key = (self.call, tuple(sorted(set(self.security_scopes or [])))) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/utils.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/utils.py deleted file mode 100644 index 0daf2c0b0eef87ded65a49119460b93630a045a0..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/utils.py +++ /dev/null @@ -1,950 +0,0 @@ -""" Handy utility functions. 
""" - -from __future__ import annotations - -import asyncio -import copy -import functools -import importlib -import inspect -import json -import json.decoder -import os -import pkgutil -import re -import threading -import time -import traceback -import typing -import warnings -from abc import ABC, abstractmethod -from contextlib import contextmanager -from io import BytesIO -from numbers import Number -from pathlib import Path -from types import GeneratorType -from typing import ( - TYPE_CHECKING, - Any, - Callable, - Iterable, - Iterator, - Optional, - TypeVar, -) - -import anyio -import matplotlib -import requests -from typing_extensions import ParamSpec - -import gradio -from gradio.context import Context -from gradio.strings import en - -if TYPE_CHECKING: # Only import for type checking (is False at runtime). - from gradio.blocks import BlockContext, Blocks - from gradio.components import Component - from gradio.routes import App, Request - -JSON_PATH = os.path.join(os.path.dirname(gradio.__file__), "launches.json") - -P = ParamSpec("P") -T = TypeVar("T") - - -def get_package_version() -> str: - try: - package_json_data = ( - pkgutil.get_data(__name__, "package.json").decode("utf-8").strip() # type: ignore - ) - package_data = json.loads(package_json_data) - version = package_data.get("version", "") - return version - except Exception: - return "" - - -def safe_get_lock() -> asyncio.Lock: - """Get asyncio.Lock() without fear of getting an Exception. - - Needed because in reload mode we import the Blocks object outside - the main thread. - """ - try: - asyncio.get_event_loop() - return asyncio.Lock() - except RuntimeError: - return None # type: ignore - - -class BaseReloader(ABC): - @property - @abstractmethod - def running_app(self) -> App: - pass - - def queue_changed(self, demo: Blocks): - return ( - hasattr(self.running_app.blocks, "_queue") and not hasattr(demo, "_queue") - ) or ( - not hasattr(self.running_app.blocks, "_queue") and hasattr(demo, "_queue") - ) - - def swap_blocks(self, demo: Blocks): - assert self.running_app.blocks - # Copy over the blocks to get new components and events but - # not a new queue - if self.running_app.blocks._queue: - self.running_app.blocks._queue.block_fns = demo.fns - demo._queue = self.running_app.blocks._queue - self.running_app.blocks = demo - - -class SourceFileReloader(BaseReloader): - def __init__( - self, - app: App, - watch_dirs: list[str], - watch_file: str, - stop_event: threading.Event, - change_event: threading.Event, - demo_name: str = "demo", - ) -> None: - super().__init__() - self.app = app - self.watch_dirs = watch_dirs - self.watch_file = watch_file - self.stop_event = stop_event - self.change_event = change_event - self.demo_name = demo_name - - @property - def running_app(self) -> App: - return self.app - - def should_watch(self) -> bool: - return not self.stop_event.is_set() - - def stop(self) -> None: - self.stop_event.set() - - def alert_change(self): - self.change_event.set() - - def swap_blocks(self, demo: Blocks): - super().swap_blocks(demo) - self.alert_change() - - -def watchfn(reloader: SourceFileReloader): - """Watch python files in a given module. - - get_changes is taken from uvicorn's default file watcher. - """ - - # The thread running watchfn will be the thread reloading - # the app. 
So we need to modify this thread_data attr here - # so that subsequent calls to reload don't launch the app - from gradio.cli.commands.reload import reload_thread - - reload_thread.running_reload = True - - def get_changes() -> Path | None: - for file in iter_py_files(): - try: - mtime = file.stat().st_mtime - except OSError: # pragma: nocover - continue - - old_time = mtimes.get(file) - if old_time is None: - mtimes[file] = mtime - continue - elif mtime > old_time: - return file - return None - - def iter_py_files() -> Iterator[Path]: - for reload_dir in reload_dirs: - for path in list(reload_dir.rglob("*.py")): - yield path.resolve() - - module = None - reload_dirs = [Path(dir_) for dir_ in reloader.watch_dirs] - import sys - - for dir_ in reload_dirs: - sys.path.insert(0, str(dir_)) - - mtimes = {} - while reloader.should_watch(): - changed = get_changes() - if changed: - print(f"Changes detected in: {changed}") - # To simulate a fresh reload, delete all module references from sys.modules - # for the modules in the package the change came from. - dir_ = next(d for d in reload_dirs if is_in_or_equal(changed, d)) - modules = list(sys.modules) - for k in modules: - v = sys.modules[k] - sourcefile = getattr(v, "__file__", None) - # Do not reload `reload.py` to keep thread data - if ( - sourcefile - and dir_ == Path(inspect.getfile(gradio)).parent - and sourcefile.endswith("reload.py") - ): - continue - if sourcefile and is_in_or_equal(sourcefile, dir_): - del sys.modules[k] - try: - module = importlib.import_module(reloader.watch_file) - module = importlib.reload(module) - except Exception as e: - print( - f"Reloading {reloader.watch_file} failed with the following exception: " - ) - traceback.print_exception(None, value=e, tb=None) - mtimes = {} - continue - - demo = getattr(module, reloader.demo_name) - if reloader.queue_changed(demo): - print( - "Reloading failed. The new demo has a queue and the old one doesn't (or vice versa). " - "Please launch your demo again" - ) - else: - reloader.swap_blocks(demo) - mtimes = {} - - -def colab_check() -> bool: - """ - Check if interface is launching from Google Colab - :return is_colab (bool): True or False - """ - is_colab = False - try: # Check if running interactively using ipython. - from IPython.core.getipython import get_ipython - - from_ipynb = get_ipython() - if "google.colab" in str(from_ipynb): - is_colab = True - except (ImportError, NameError): - pass - return is_colab - - -def kaggle_check() -> bool: - return bool( - os.environ.get("KAGGLE_KERNEL_RUN_TYPE") or os.environ.get("GFOOTBALL_DATA_DIR") - ) - - -def sagemaker_check() -> bool: - try: - import boto3 # type: ignore - - client = boto3.client("sts") - response = client.get_caller_identity() - return "sagemaker" in response["Arn"].lower() - except Exception: - return False - - -def ipython_check() -> bool: - """ - Check if interface is launching from iPython (not colab) - :return is_ipython (bool): True or False - """ - is_ipython = False - try: # Check if running interactively using ipython. 
- from IPython.core.getipython import get_ipython - - if get_ipython() is not None: - is_ipython = True - except (ImportError, NameError): - pass - return is_ipython - - -def get_space() -> str | None: - if os.getenv("SYSTEM") == "spaces": - return os.getenv("SPACE_ID") - return None - - -def is_zero_gpu_space() -> bool: - return os.getenv("SPACES_ZERO_GPU") == "true" - - -def readme_to_html(article: str) -> str: - try: - response = requests.get(article, timeout=3) - if response.status_code == requests.codes.ok: # pylint: disable=no-member - article = response.text - except requests.exceptions.RequestException: - pass - return article - - -def launch_counter() -> None: - try: - if not os.path.exists(JSON_PATH): - launches = {"launches": 1} - with open(JSON_PATH, "w+") as j: - json.dump(launches, j) - else: - with open(JSON_PATH) as j: - launches = json.load(j) - launches["launches"] += 1 - if launches["launches"] in [25, 50, 150, 500, 1000]: - print(en["BETA_INVITE"]) - with open(JSON_PATH, "w") as j: - j.write(json.dumps(launches)) - except Exception: - pass - - -def get_default_args(func: Callable) -> list[Any]: - signature = inspect.signature(func) - return [ - v.default if v.default is not inspect.Parameter.empty else None - for v in signature.parameters.values() - ] - - -def assert_configs_are_equivalent_besides_ids( - config1: dict, config2: dict, root_keys: tuple = ("mode",) -): - """Allows you to test if two different Blocks configs produce the same demo. - - Parameters: - config1 (dict): nested dict with config from the first Blocks instance - config2 (dict): nested dict with config from the second Blocks instance - root_keys (Tuple): an interable consisting of which keys to test for equivalence at - the root level of the config. By default, only "mode" is tested, - so keys like "version" are ignored. 
- """ - config1 = copy.deepcopy(config1) - config2 = copy.deepcopy(config2) - config1 = json.loads(json.dumps(config1)) # convert tuples to lists - config2 = json.loads(json.dumps(config2)) - - for key in root_keys: - if config1[key] != config2[key]: - raise ValueError(f"Configs have different: {key}") - - if len(config1["components"]) != len(config2["components"]): - raise ValueError("# of components are different") - - def assert_same_components(config1_id, config2_id): - c1 = list(filter(lambda c: c["id"] == config1_id, config1["components"])) - if len(c1) == 0: - raise ValueError(f"Could not find component with id {config1_id}") - c1 = c1[0] - c2 = list(filter(lambda c: c["id"] == config2_id, config2["components"])) - if len(c2) == 0: - raise ValueError(f"Could not find component with id {config2_id}") - c2 = c2[0] - c1 = copy.deepcopy(c1) - c1.pop("id") - c2 = copy.deepcopy(c2) - c2.pop("id") - if c1 != c2: - raise ValueError(f"{c1} does not match {c2}") - - def same_children_recursive(children1, chidren2): - for child1, child2 in zip(children1, chidren2): - assert_same_components(child1["id"], child2["id"]) - if "children" in child1 or "children" in child2: - same_children_recursive(child1["children"], child2["children"]) - - children1 = config1["layout"]["children"] - children2 = config2["layout"]["children"] - same_children_recursive(children1, children2) - - for d1, d2 in zip(config1["dependencies"], config2["dependencies"]): - for t1, t2 in zip(d1.pop("targets"), d2.pop("targets")): - assert_same_components(t1[0], t2[0]) - for i1, i2 in zip(d1.pop("inputs"), d2.pop("inputs")): - assert_same_components(i1, i2) - for o1, o2 in zip(d1.pop("outputs"), d2.pop("outputs")): - assert_same_components(o1, o2) - - if d1 != d2: - raise ValueError(f"{d1} does not match {d2}") - - return True - - -def format_ner_list(input_string: str, ner_groups: list[dict[str, str | int]]): - if len(ner_groups) == 0: - return [(input_string, None)] - - output = [] - end = 0 - prev_end = 0 - - for group in ner_groups: - entity, start, end = group["entity_group"], group["start"], group["end"] - output.append((input_string[prev_end:start], None)) - output.append((input_string[start:end], entity)) - prev_end = end - - output.append((input_string[end:], None)) - return output - - -def delete_none(_dict: dict, skip_value: bool = False) -> dict: - """ - Delete keys whose values are None from a dictionary - """ - for key, value in list(_dict.items()): - if skip_value and key == "value": - continue - elif value is None: - del _dict[key] - return _dict - - -def resolve_singleton(_list: list[Any] | Any) -> Any: - if len(_list) == 1: - return _list[0] - else: - return _list - - -def component_or_layout_class(cls_name: str) -> type[Component] | type[BlockContext]: - """ - Returns the component, template, or layout class with the given class name, or - raises a ValueError if not found. 
- - Parameters: - cls_name (str): lower-case string class name of a component - Returns: - cls: the component class - """ - import gradio.blocks - import gradio.components - import gradio.layouts - import gradio.templates - - components = [ - (name, cls) - for name, cls in gradio.components.__dict__.items() - if isinstance(cls, type) - ] - templates = [ - (name, cls) - for name, cls in gradio.templates.__dict__.items() - if isinstance(cls, type) - ] - layouts = [ - (name, cls) - for name, cls in gradio.layouts.__dict__.items() - if isinstance(cls, type) - ] - for name, cls in components + templates + layouts: - if name.lower() == cls_name.replace("_", "") and ( - issubclass(cls, gradio.components.Component) - or issubclass(cls, gradio.blocks.BlockContext) - ): - return cls - raise ValueError(f"No such component or layout: {cls_name}") - - -def run_coro_in_background(func: Callable, *args, **kwargs): - """ - Runs coroutines in background. - - Warning, be careful to not use this function in other than FastAPI scope, because the event_loop has not started yet. - You can use it in any scope reached by FastAPI app. - - correct scope examples: endpoints in routes, Blocks.process_api - incorrect scope examples: Blocks.launch - - Use startup_events in routes.py if you need to run a coro in background in Blocks.launch(). - - - Example: - utils.run_coro_in_background(fn, *args, **kwargs) - - Args: - func: - *args: - **kwargs: - - Returns: - - """ - event_loop = asyncio.get_event_loop() - return event_loop.create_task(func(*args, **kwargs)) - - -def run_sync_iterator_async(iterator): - """Helper for yielding StopAsyncIteration from sync iterators.""" - try: - return next(iterator) - except StopIteration: - # raise a ValueError here because co-routines can't raise StopIteration themselves - raise StopAsyncIteration() from None - - -class SyncToAsyncIterator: - """Treat a synchronous iterator as async one.""" - - def __init__(self, iterator, limiter) -> None: - self.iterator = iterator - self.limiter = limiter - - def __aiter__(self): - return self - - async def __anext__(self): - return await anyio.to_thread.run_sync( - run_sync_iterator_async, self.iterator, limiter=self.limiter - ) - - -async def async_iteration(iterator): - # anext not introduced until 3.10 :( - return await iterator.__anext__() - - -@contextmanager -def set_directory(path: Path | str): - """Context manager that sets the working directory to the given path.""" - origin = Path().absolute() - try: - os.chdir(path) - yield - finally: - os.chdir(origin) - - -@contextmanager -def no_raise_exception(): - """Context manager that suppresses exceptions.""" - try: - yield - except Exception: - pass - - -def sanitize_value_for_csv(value: str | Number) -> str | Number: - """ - Sanitizes a value that is being written to a CSV file to prevent CSV injection attacks. - Reference: https://owasp.org/www-community/attacks/CSV_Injection - """ - if isinstance(value, Number): - return value - unsafe_prefixes = ["=", "+", "-", "@", "\t", "\n"] - unsafe_sequences = [",=", ",+", ",-", ",@", ",\t", ",\n"] - if any(value.startswith(prefix) for prefix in unsafe_prefixes) or any( - sequence in value for sequence in unsafe_sequences - ): - value = f"'{value}" - return value - - -def sanitize_list_for_csv(values: list[Any]) -> list[Any]: - """ - Sanitizes a list of values (or a list of list of values) that is being written to a - CSV file to prevent CSV injection attacks. 
- """ - sanitized_values = [] - for value in values: - if isinstance(value, list): - sanitized_value = [sanitize_value_for_csv(v) for v in value] - sanitized_values.append(sanitized_value) - else: - sanitized_value = sanitize_value_for_csv(value) - sanitized_values.append(sanitized_value) - return sanitized_values - - -def append_unique_suffix(name: str, list_of_names: list[str]): - """Appends a numerical suffix to `name` so that it does not appear in `list_of_names`.""" - set_of_names: set[str] = set(list_of_names) # for O(1) lookup - if name not in set_of_names: - return name - else: - suffix_counter = 1 - new_name = f"{name}_{suffix_counter}" - while new_name in set_of_names: - suffix_counter += 1 - new_name = f"{name}_{suffix_counter}" - return new_name - - -def validate_url(possible_url: str) -> bool: - headers = {"User-Agent": "gradio (https://gradio.app/; team@gradio.app)"} - try: - head_request = requests.head(possible_url, headers=headers) - # some URLs, such as AWS S3 presigned URLs, return a 405 or a 403 for HEAD requests - if head_request.status_code == 405 or head_request.status_code == 403: - return requests.get(possible_url, headers=headers).ok - return head_request.ok - except Exception: - return False - - -def is_update(val): - return isinstance(val, dict) and "update" in val.get("__type__", "") - - -def get_continuous_fn(fn: Callable, every: float) -> Callable: - def continuous_fn(*args): - while True: - output = fn(*args) - if isinstance(output, GeneratorType): - yield from output - else: - yield output - time.sleep(every) - - return continuous_fn - - -def function_wrapper( - f: Callable, - before_fn: Callable | None = None, - before_args: Iterable | None = None, - after_fn: Callable | None = None, - after_args: Iterable | None = None, -): - before_args = [] if before_args is None else before_args - after_args = [] if after_args is None else after_args - if inspect.isasyncgenfunction(f): - - @functools.wraps(f) - async def asyncgen_wrapper(*args, **kwargs): - if before_fn: - before_fn(*before_args) - async for response in f(*args, **kwargs): - yield response - if after_fn: - after_fn(*after_args) - - return asyncgen_wrapper - - elif asyncio.iscoroutinefunction(f): - - @functools.wraps(f) - async def async_wrapper(*args, **kwargs): - if before_fn: - before_fn(*before_args) - response = await f(*args, **kwargs) - if after_fn: - after_fn(*after_args) - return response - - return async_wrapper - - elif inspect.isgeneratorfunction(f): - - @functools.wraps(f) - def gen_wrapper(*args, **kwargs): - if before_fn: - before_fn(*before_args) - yield from f(*args, **kwargs) - if after_fn: - after_fn(*after_args) - - return gen_wrapper - - else: - - @functools.wraps(f) - def wrapper(*args, **kwargs): - if before_fn: - before_fn(*before_args) - response = f(*args, **kwargs) - if after_fn: - after_fn(*after_args) - return response - - return wrapper - - -def get_function_with_locals( - fn: Callable, - blocks: Blocks, - event_id: str | None, - in_event_listener: bool, - request: Request | None, -): - def before_fn(blocks, event_id): - from gradio.context import LocalContext - - LocalContext.blocks.set(blocks) - LocalContext.in_event_listener.set(in_event_listener) - LocalContext.event_id.set(event_id) - LocalContext.request.set(request) - - def after_fn(): - from gradio.context import LocalContext - - LocalContext.in_event_listener.set(False) - LocalContext.request.set(None) - - return function_wrapper( - fn, before_fn=before_fn, before_args=(blocks, event_id), after_fn=after_fn - ) - - 
-async def cancel_tasks(task_ids: set[str]): - matching_tasks = [ - task for task in asyncio.all_tasks() if task.get_name() in task_ids - ] - for task in matching_tasks: - task.cancel() - await asyncio.gather(*matching_tasks, return_exceptions=True) - - -def set_task_name(task, session_hash: str, fn_index: int, batch: bool): - if not batch: - task.set_name(f"{session_hash}_{fn_index}") - - -def get_cancel_function( - dependencies: list[dict[str, Any]] -) -> tuple[Callable, list[int]]: - fn_to_comp = {} - for dep in dependencies: - if Context.root_block: - fn_index = next( - i for i, d in enumerate(Context.root_block.dependencies) if d == dep - ) - fn_to_comp[fn_index] = [ - Context.root_block.blocks[o] for o in dep["outputs"] - ] - - async def cancel(session_hash: str) -> None: - task_ids = {f"{session_hash}_{fn}" for fn in fn_to_comp} - await cancel_tasks(task_ids) - - return ( - cancel, - list(fn_to_comp.keys()), - ) - - -def get_type_hints(fn): - # Importing gradio with the canonical abbreviation. Used in typing._eval_type. - import gradio as gr # noqa: F401 - from gradio import OAuthProfile, Request # noqa: F401 - - if inspect.isfunction(fn) or inspect.ismethod(fn): - pass - elif callable(fn): - fn = fn.__call__ - else: - return {} - - try: - return typing.get_type_hints(fn) - except TypeError: - # On Python 3.9 or earlier, get_type_hints throws a TypeError if the function - # has a type annotation that include "|". We resort to parsing the signature - # manually using inspect.signature. - type_hints = {} - sig = inspect.signature(fn) - for name, param in sig.parameters.items(): - if param.annotation is inspect.Parameter.empty: - continue - if param.annotation == "gr.OAuthProfile | None": - # Special case: we want to inject the OAuthProfile value even on Python 3.9 - type_hints[name] = Optional[OAuthProfile] - if "|" in str(param.annotation): - continue - # To convert the string annotation to a class, we use the - # internal typing._eval_type function. This is not ideal, but - # it's the only way to do it without eval-ing the string. - # Since the API is internal, it may change in the future. 
- try: - type_hints[name] = typing._eval_type( # type: ignore - typing.ForwardRef(param.annotation), globals(), locals() - ) - except (NameError, TypeError): - pass - return type_hints - - -def is_special_typed_parameter(name, parameter_types): - from gradio.helpers import EventData - from gradio.oauth import OAuthProfile - from gradio.routes import Request - - """Checks if parameter has a type hint designating it as a gr.Request, gr.EventData or gr.OAuthProfile.""" - hint = parameter_types.get(name) - if not hint: - return False - is_request = hint == Request - is_oauth_arg = hint in (OAuthProfile, Optional[OAuthProfile]) - is_event_data = inspect.isclass(hint) and issubclass(hint, EventData) - return is_request or is_event_data or is_oauth_arg - - -def check_function_inputs_match(fn: Callable, inputs: list, inputs_as_dict: bool): - """ - Checks if the input component set matches the function - Returns: None if valid, a string error message if mismatch - """ - - signature = inspect.signature(fn) - parameter_types = get_type_hints(fn) - min_args = 0 - max_args = 0 - infinity = -1 - for name, param in signature.parameters.items(): - has_default = param.default != param.empty - if param.kind in [param.POSITIONAL_ONLY, param.POSITIONAL_OR_KEYWORD]: - if not is_special_typed_parameter(name, parameter_types): - if not has_default: - min_args += 1 - max_args += 1 - elif param.kind == param.VAR_POSITIONAL: - max_args = infinity - elif param.kind == param.KEYWORD_ONLY and not has_default: - return f"Keyword-only args must have default values for function {fn}" - arg_count = 1 if inputs_as_dict else len(inputs) - if min_args == max_args and max_args != arg_count: - warnings.warn( - f"Expected {max_args} arguments for function {fn}, received {arg_count}." - ) - if arg_count < min_args: - warnings.warn( - f"Expected at least {min_args} arguments for function {fn}, received {arg_count}." - ) - if max_args != infinity and arg_count > max_args: - warnings.warn( - f"Expected maximum {max_args} arguments for function {fn}, received {arg_count}." 
- ) - - -class TupleNoPrint(tuple): - # To remove printing function return in notebook - def __repr__(self): - return "" - - def __str__(self): - return "" - - -class MatplotlibBackendMananger: - def __enter__(self): - self._original_backend = matplotlib.get_backend() - matplotlib.use("agg") - - def __exit__(self, exc_type, exc_val, exc_tb): - matplotlib.use(self._original_backend) - - -def tex2svg(formula, *args): - with MatplotlibBackendMananger(): - import matplotlib.pyplot as plt - - fontsize = 20 - dpi = 300 - plt.rc("mathtext", fontset="cm") - fig = plt.figure(figsize=(0.01, 0.01)) - fig.text(0, 0, rf"${formula}$", fontsize=fontsize) - output = BytesIO() - fig.savefig( # type: ignore - output, - dpi=dpi, - transparent=True, - format="svg", - bbox_inches="tight", - pad_inches=0.0, - ) - plt.close(fig) - output.seek(0) - xml_code = output.read().decode("utf-8") - svg_start = xml_code.index(".*<\/metadata>", "", svg_code, flags=re.DOTALL) - svg_code = re.sub(r' width="[^"]+"', "", svg_code) - height_match = re.search(r'height="([\d.]+)pt"', svg_code) - if height_match: - height = float(height_match.group(1)) - new_height = height / fontsize # conversion from pt to em - svg_code = re.sub( - r'height="[\d.]+pt"', f'height="{new_height}em"', svg_code - ) - copy_code = f"{formula}" - return f"{copy_code}{svg_code}" - - -def abspath(path: str | Path) -> Path: - """Returns absolute path of a str or Path path, but does not resolve symlinks.""" - path = Path(path) - - if path.is_absolute(): - return path - - # recursively check if there is a symlink within the path - is_symlink = path.is_symlink() or any( - parent.is_symlink() for parent in path.parents - ) - - if is_symlink or path == path.resolve(): # in case path couldn't be resolved - return Path.cwd() / path - else: - return path.resolve() - - -def is_in_or_equal(path_1: str | Path, path_2: str | Path): - """ - True if path_1 is a descendant (i.e. located within) path_2 or if the paths are the - same, returns False otherwise. - Parameters: - path_1: str or Path (should be a file) - path_2: str or Path (can be a file or directory) - """ - path_1, path_2 = abspath(path_1), abspath(path_2) - try: - if str(path_1.relative_to(path_2)).startswith(".."): # prevent path traversal - return False - except ValueError: - return False - return True - - -HTML_TAG_RE = re.compile("<.*?>") - - -def remove_html_tags(raw_html: str | None) -> str: - return re.sub(HTML_TAG_RE, "", raw_html or "") - - -def find_user_stack_level() -> int: - """ - Find the first stack frame not inside Gradio. - """ - frame = inspect.currentframe() - n = 0 - while frame: - fname = inspect.getfile(frame) - if "/gradio/" not in fname.replace(os.sep, "/"): - break - frame = frame.f_back - n += 1 - return n - - -class NamedString(str): - """ - Subclass of str that includes a .name attribute equal to the value of the string itself. This class is used when returning - a value from the `.preprocess()` methods of the File and UploadButton components. Before Gradio 4.0, these methods returned a file - object which was then converted to a string filepath using the `.name` attribute. In Gradio 4.0, these methods now return a str - filepath directly, but to maintain backwards compatibility, we use this class instead of a regular str. 
- """ - - def __init__(self, *args): - super().__init__() - self.name = str(self) if args else "" diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/multipart/tests/__init__.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/multipart/tests/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/_utils/_pep440.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/_utils/_pep440.py deleted file mode 100644 index 73d0afb5e95f099f8b04253177e8a3ab3d80d0c4..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/_utils/_pep440.py +++ /dev/null @@ -1,487 +0,0 @@ -"""Utility to compare pep440 compatible version strings. - -The LooseVersion and StrictVersion classes that distutils provides don't -work; they don't recognize anything like alpha/beta/rc/dev versions. -""" - -# Copyright (c) Donald Stufft and individual contributors. -# All rights reserved. - -# Redistribution and use in source and binary forms, with or without -# modification, are permitted provided that the following conditions are met: - -# 1. Redistributions of source code must retain the above copyright notice, -# this list of conditions and the following disclaimer. - -# 2. Redistributions in binary form must reproduce the above copyright -# notice, this list of conditions and the following disclaimer in the -# documentation and/or other materials provided with the distribution. - -# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" -# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE -# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE -# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE -# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR -# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF -# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS -# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN -# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) -# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE -# POSSIBILITY OF SUCH DAMAGE. 
- -import collections -import itertools -import re - - -__all__ = [ - "parse", "Version", "LegacyVersion", "InvalidVersion", "VERSION_PATTERN", -] - - -# BEGIN packaging/_structures.py - - -class Infinity: - def __repr__(self): - return "Infinity" - - def __hash__(self): - return hash(repr(self)) - - def __lt__(self, other): - return False - - def __le__(self, other): - return False - - def __eq__(self, other): - return isinstance(other, self.__class__) - - def __ne__(self, other): - return not isinstance(other, self.__class__) - - def __gt__(self, other): - return True - - def __ge__(self, other): - return True - - def __neg__(self): - return NegativeInfinity - - -Infinity = Infinity() - - -class NegativeInfinity: - def __repr__(self): - return "-Infinity" - - def __hash__(self): - return hash(repr(self)) - - def __lt__(self, other): - return True - - def __le__(self, other): - return True - - def __eq__(self, other): - return isinstance(other, self.__class__) - - def __ne__(self, other): - return not isinstance(other, self.__class__) - - def __gt__(self, other): - return False - - def __ge__(self, other): - return False - - def __neg__(self): - return Infinity - - -# BEGIN packaging/version.py - - -NegativeInfinity = NegativeInfinity() - -_Version = collections.namedtuple( - "_Version", - ["epoch", "release", "dev", "pre", "post", "local"], -) - - -def parse(version): - """ - Parse the given version string and return either a :class:`Version` object - or a :class:`LegacyVersion` object depending on if the given version is - a valid PEP 440 version or a legacy version. - """ - try: - return Version(version) - except InvalidVersion: - return LegacyVersion(version) - - -class InvalidVersion(ValueError): - """ - An invalid version was found, users should refer to PEP 440. 
- """ - - -class _BaseVersion: - - def __hash__(self): - return hash(self._key) - - def __lt__(self, other): - return self._compare(other, lambda s, o: s < o) - - def __le__(self, other): - return self._compare(other, lambda s, o: s <= o) - - def __eq__(self, other): - return self._compare(other, lambda s, o: s == o) - - def __ge__(self, other): - return self._compare(other, lambda s, o: s >= o) - - def __gt__(self, other): - return self._compare(other, lambda s, o: s > o) - - def __ne__(self, other): - return self._compare(other, lambda s, o: s != o) - - def _compare(self, other, method): - if not isinstance(other, _BaseVersion): - return NotImplemented - - return method(self._key, other._key) - - -class LegacyVersion(_BaseVersion): - - def __init__(self, version): - self._version = str(version) - self._key = _legacy_cmpkey(self._version) - - def __str__(self): - return self._version - - def __repr__(self): - return "".format(repr(str(self))) - - @property - def public(self): - return self._version - - @property - def base_version(self): - return self._version - - @property - def local(self): - return None - - @property - def is_prerelease(self): - return False - - @property - def is_postrelease(self): - return False - - -_legacy_version_component_re = re.compile( - r"(\d+ | [a-z]+ | \.| -)", re.VERBOSE, -) - -_legacy_version_replacement_map = { - "pre": "c", "preview": "c", "-": "final-", "rc": "c", "dev": "@", -} - - -def _parse_version_parts(s): - for part in _legacy_version_component_re.split(s): - part = _legacy_version_replacement_map.get(part, part) - - if not part or part == ".": - continue - - if part[:1] in "0123456789": - # pad for numeric comparison - yield part.zfill(8) - else: - yield "*" + part - - # ensure that alpha/beta/candidate are before final - yield "*final" - - -def _legacy_cmpkey(version): - # We hardcode an epoch of -1 here. A PEP 440 version can only have an epoch - # greater than or equal to 0. This will effectively put the LegacyVersion, - # which uses the defacto standard originally implemented by setuptools, - # as before all PEP 440 versions. - epoch = -1 - - # This scheme is taken from pkg_resources.parse_version setuptools prior to - # its adoption of the packaging library. - parts = [] - for part in _parse_version_parts(version.lower()): - if part.startswith("*"): - # remove "-" before a prerelease tag - if part < "*final": - while parts and parts[-1] == "*final-": - parts.pop() - - # remove trailing zeros from each series of numeric parts - while parts and parts[-1] == "00000000": - parts.pop() - - parts.append(part) - parts = tuple(parts) - - return epoch, parts - - -# Deliberately not anchored to the start and end of the string, to make it -# easier for 3rd party code to reuse -VERSION_PATTERN = r""" - v? - (?: - (?:(?P[0-9]+)!)? # epoch - (?P[0-9]+(?:\.[0-9]+)*) # release segment - (?P
<pre>                                          # pre-release
-            [-_\.]?
-            (?P<pre_l>(a|b|c|rc|alpha|beta|pre|preview))
-            [-_\.]?
-            (?P<pre_n>[0-9]+)?
-        )?
-        (?P<post>                                         # post release
-            (?:-(?P<post_n1>[0-9]+))
-            |
-            (?:
-                [-_\.]?
-                (?P<post_l>post|rev|r)
-                [-_\.]?
-                (?P<post_n2>[0-9]+)?
-            )
-        )?
-        (?P<dev>                                          # dev release
-            [-_\.]?
-            (?P<dev_l>dev)
-            [-_\.]?
-            (?P<dev_n>[0-9]+)?
-        )?
-    )
-    (?:\+(?P<local>[a-z0-9]+(?:[-_\.][a-z0-9]+)*))?       # local version
        -"""
        -
        -
        -class Version(_BaseVersion):
        -
        -    _regex = re.compile(
        -        r"^\s*" + VERSION_PATTERN + r"\s*$",
        -        re.VERBOSE | re.IGNORECASE,
        -    )
        -
        -    def __init__(self, version):
        -        # Validate the version and parse it into pieces
        -        match = self._regex.search(version)
        -        if not match:
        -            raise InvalidVersion("Invalid version: '{0}'".format(version))
        -
        -        # Store the parsed out pieces of the version
        -        self._version = _Version(
        -            epoch=int(match.group("epoch")) if match.group("epoch") else 0,
        -            release=tuple(int(i) for i in match.group("release").split(".")),
        -            pre=_parse_letter_version(
        -                match.group("pre_l"),
        -                match.group("pre_n"),
        -            ),
        -            post=_parse_letter_version(
        -                match.group("post_l"),
        -                match.group("post_n1") or match.group("post_n2"),
        -            ),
        -            dev=_parse_letter_version(
        -                match.group("dev_l"),
        -                match.group("dev_n"),
        -            ),
        -            local=_parse_local_version(match.group("local")),
        -        )
        -
        -        # Generate a key which will be used for sorting
        -        self._key = _cmpkey(
        -            self._version.epoch,
        -            self._version.release,
        -            self._version.pre,
        -            self._version.post,
        -            self._version.dev,
        -            self._version.local,
        -        )
        -
        -    def __repr__(self):
        -        return "".format(repr(str(self)))
        -
        -    def __str__(self):
        -        parts = []
        -
        -        # Epoch
        -        if self._version.epoch != 0:
        -            parts.append("{0}!".format(self._version.epoch))
        -
        -        # Release segment
        -        parts.append(".".join(str(x) for x in self._version.release))
        -
        -        # Pre-release
        -        if self._version.pre is not None:
        -            parts.append("".join(str(x) for x in self._version.pre))
        -
        -        # Post-release
        -        if self._version.post is not None:
        -            parts.append(".post{0}".format(self._version.post[1]))
        -
        -        # Development release
        -        if self._version.dev is not None:
        -            parts.append(".dev{0}".format(self._version.dev[1]))
        -
        -        # Local version segment
        -        if self._version.local is not None:
        -            parts.append(
        -                "+{0}".format(".".join(str(x) for x in self._version.local))
        -            )
        -
        -        return "".join(parts)
        -
        -    @property
        -    def public(self):
        -        return str(self).split("+", 1)[0]
        -
        -    @property
        -    def base_version(self):
        -        parts = []
        -
        -        # Epoch
        -        if self._version.epoch != 0:
        -            parts.append("{0}!".format(self._version.epoch))
        -
        -        # Release segment
        -        parts.append(".".join(str(x) for x in self._version.release))
        -
        -        return "".join(parts)
        -
        -    @property
        -    def local(self):
        -        version_string = str(self)
        -        if "+" in version_string:
        -            return version_string.split("+", 1)[1]
        -
        -    @property
        -    def is_prerelease(self):
        -        return bool(self._version.dev or self._version.pre)
        -
        -    @property
        -    def is_postrelease(self):
        -        return bool(self._version.post)
        -
        -
        -def _parse_letter_version(letter, number):
        -    if letter:
        -        # We assume there is an implicit 0 in a pre-release if there is
        -        # no numeral associated with it.
        -        if number is None:
        -            number = 0
        -
        -        # We normalize any letters to their lower-case form
        -        letter = letter.lower()
        -
        -        # We consider some words to be alternate spellings of other words and
        -        # in those cases we want to normalize the spellings to our preferred
        -        # spelling.
        -        if letter == "alpha":
        -            letter = "a"
        -        elif letter == "beta":
        -            letter = "b"
        -        elif letter in ["c", "pre", "preview"]:
        -            letter = "rc"
        -        elif letter in ["rev", "r"]:
        -            letter = "post"
        -
        -        return letter, int(number)
        -    if not letter and number:
        -        # We assume that if we are given a number but not given a letter,
        -        # then this is using the implicit post release syntax (e.g., 1.0-1)
        -        letter = "post"
        -
        -        return letter, int(number)
        -
        -
        -_local_version_seperators = re.compile(r"[\._-]")
        -
        -
        -def _parse_local_version(local):
        -    """
        -    Takes a string like abc.1.twelve and turns it into ("abc", 1, "twelve").
        -    """
        -    if local is not None:
        -        return tuple(
        -            part.lower() if not part.isdigit() else int(part)
        -            for part in _local_version_seperators.split(local)
        -        )
        -
        -
        -def _cmpkey(epoch, release, pre, post, dev, local):
        -    # When we compare a release version, we want to compare it with all of the
        -    # trailing zeros removed. So we'll use a reverse the list, drop all the now
        -    # leading zeros until we come to something non-zero, then take the rest,
        -    # re-reverse it back into the correct order, and make it a tuple and use
        -    # that for our sorting key.
        -    release = tuple(
        -        reversed(list(
        -            itertools.dropwhile(
        -                lambda x: x == 0,
        -                reversed(release),
        -            )
        -        ))
        -    )
        -
        -    # We need to "trick" the sorting algorithm to put 1.0.dev0 before 1.0a0.
        -    # We'll do this by abusing the pre-segment, but we _only_ want to do this
        -    # if there is no pre- or a post-segment. If we have one of those, then
        -    # the normal sorting rules will handle this case correctly.
        -    if pre is None and post is None and dev is not None:
        -        pre = -Infinity
        -    # Versions without a pre-release (except as noted above) should sort after
        -    # those with one.
        -    elif pre is None:
        -        pre = Infinity
        -
        -    # Versions without a post-segment should sort before those with one.
        -    if post is None:
        -        post = -Infinity
        -
        -    # Versions without a development segment should sort after those with one.
        -    if dev is None:
        -        dev = Infinity
        -
        -    if local is None:
        -        # Versions without a local segment should sort before those with one.
        -        local = -Infinity
        -    else:
        -        # Versions with a local segment need that segment parsed to implement
        -        # the sorting rules in PEP440.
        -        # - Alphanumeric segments sort before numeric segments
        -        # - Alphanumeric segments sort lexicographically
        -        # - Numeric segments sort numerically
        -        # - Shorter versions sort before longer versions when the prefixes
        -        #   match exactly
        -        local = tuple(
        -            (i, "") if isinstance(i, int) else (-Infinity, i)
        -            for i in local
        -        )
        -
        -    return epoch, release, pre, post, dev, local
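
The deleted _pep440.py above is numpy's vendored copy of the old packaging version parser: parse() returns a Version when the string matches VERSION_PATTERN and falls back to LegacyVersion otherwise, while _cmpkey() orders dev, pre, post and local segments according to PEP 440. Below is a minimal usage sketch, not part of the deleted file; the import path is an assumption taken from the diff header (numpy/_utils/_pep440.py).

from numpy._utils import _pep440  # assumed import path, matching the file shown above

versions = ["1.0.dev0", "1.0a1", "1.0rc1", "1.0", "1.0.post1", "1.1"]
parsed = sorted(_pep440.parse(v) for v in versions)

# _cmpkey() sorts dev releases first, then pre-releases, the final release,
# post-releases, and higher releases last:
# ['1.0.dev0', '1.0a1', '1.0rc1', '1.0', '1.0.post1', '1.1']
print([str(v) for v in parsed])
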
        diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/core/_ufunc_config.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/core/_ufunc_config.py
        deleted file mode 100644
        index df821309581671a125e47f34de9289a7f481fda3..0000000000000000000000000000000000000000
        --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/core/_ufunc_config.py
        +++ /dev/null
        @@ -1,466 +0,0 @@
        -"""
        -Functions for changing global ufunc configuration
        -
        -This provides helpers which wrap `umath.geterrobj` and `umath.seterrobj`
        -"""
        -import collections.abc
        -import contextlib
        -import contextvars
        -
        -from .._utils import set_module
        -from .umath import (
        -    UFUNC_BUFSIZE_DEFAULT,
        -    ERR_IGNORE, ERR_WARN, ERR_RAISE, ERR_CALL, ERR_PRINT, ERR_LOG, ERR_DEFAULT,
        -    SHIFT_DIVIDEBYZERO, SHIFT_OVERFLOW, SHIFT_UNDERFLOW, SHIFT_INVALID,
        -)
        -from . import umath
        -
        -__all__ = [
        -    "seterr", "geterr", "setbufsize", "getbufsize", "seterrcall", "geterrcall",
        -    "errstate", '_no_nep50_warning'
        -]
        -
        -_errdict = {"ignore": ERR_IGNORE,
        -            "warn": ERR_WARN,
        -            "raise": ERR_RAISE,
        -            "call": ERR_CALL,
        -            "print": ERR_PRINT,
        -            "log": ERR_LOG}
        -
        -_errdict_rev = {value: key for key, value in _errdict.items()}
        -
        -
        -@set_module('numpy')
        -def seterr(all=None, divide=None, over=None, under=None, invalid=None):
        -    """
        -    Set how floating-point errors are handled.
        -
        -    Note that operations on integer scalar types (such as `int16`) are
        -    handled like floating point, and are affected by these settings.
        -
        -    Parameters
        -    ----------
        -    all : {'ignore', 'warn', 'raise', 'call', 'print', 'log'}, optional
        -        Set treatment for all types of floating-point errors at once:
        -
        -        - ignore: Take no action when the exception occurs.
        -        - warn: Print a `RuntimeWarning` (via the Python `warnings` module).
        -        - raise: Raise a `FloatingPointError`.
        -        - call: Call a function specified using the `seterrcall` function.
        -        - print: Print a warning directly to ``stdout``.
        -        - log: Record error in a Log object specified by `seterrcall`.
        -
        -        The default is not to change the current behavior.
        -    divide : {'ignore', 'warn', 'raise', 'call', 'print', 'log'}, optional
        -        Treatment for division by zero.
        -    over : {'ignore', 'warn', 'raise', 'call', 'print', 'log'}, optional
        -        Treatment for floating-point overflow.
        -    under : {'ignore', 'warn', 'raise', 'call', 'print', 'log'}, optional
        -        Treatment for floating-point underflow.
        -    invalid : {'ignore', 'warn', 'raise', 'call', 'print', 'log'}, optional
        -        Treatment for invalid floating-point operation.
        -
        -    Returns
        -    -------
        -    old_settings : dict
        -        Dictionary containing the old settings.
        -
        -    See also
        -    --------
        -    seterrcall : Set a callback function for the 'call' mode.
        -    geterr, geterrcall, errstate
        -
        -    Notes
        -    -----
        -    The floating-point exceptions are defined in the IEEE 754 standard [1]_:
        -
        -    - Division by zero: infinite result obtained from finite numbers.
        -    - Overflow: result too large to be expressed.
        -    - Underflow: result so close to zero that some precision
        -      was lost.
        -    - Invalid operation: result is not an expressible number, typically
        -      indicates that a NaN was produced.
        -
        -    .. [1] https://en.wikipedia.org/wiki/IEEE_754
        -
        -    Examples
        -    --------
        -    >>> old_settings = np.seterr(all='ignore')  #seterr to known value
        -    >>> np.seterr(over='raise')
        -    {'divide': 'ignore', 'over': 'ignore', 'under': 'ignore', 'invalid': 'ignore'}
        -    >>> np.seterr(**old_settings)  # reset to default
        -    {'divide': 'ignore', 'over': 'raise', 'under': 'ignore', 'invalid': 'ignore'}
        -
        -    >>> np.int16(32000) * np.int16(3)
        -    30464
        -    >>> old_settings = np.seterr(all='warn', over='raise')
        -    >>> np.int16(32000) * np.int16(3)
        -    Traceback (most recent call last):
        -      File "", line 1, in 
        -    FloatingPointError: overflow encountered in scalar multiply
        -
        -    >>> old_settings = np.seterr(all='print')
        -    >>> np.geterr()
        -    {'divide': 'print', 'over': 'print', 'under': 'print', 'invalid': 'print'}
        -    >>> np.int16(32000) * np.int16(3)
        -    30464
        -
        -    """
        -
        -    pyvals = umath.geterrobj()
        -    old = geterr()
        -
        -    if divide is None:
        -        divide = all or old['divide']
        -    if over is None:
        -        over = all or old['over']
        -    if under is None:
        -        under = all or old['under']
        -    if invalid is None:
        -        invalid = all or old['invalid']
        -
        -    maskvalue = ((_errdict[divide] << SHIFT_DIVIDEBYZERO) +
        -                 (_errdict[over] << SHIFT_OVERFLOW) +
        -                 (_errdict[under] << SHIFT_UNDERFLOW) +
        -                 (_errdict[invalid] << SHIFT_INVALID))
        -
        -    pyvals[1] = maskvalue
        -    umath.seterrobj(pyvals)
        -    return old
        -
        -
        -@set_module('numpy')
        -def geterr():
        -    """
        -    Get the current way of handling floating-point errors.
        -
        -    Returns
        -    -------
        -    res : dict
        -        A dictionary with keys "divide", "over", "under", and "invalid",
        -        whose values are from the strings "ignore", "print", "log", "warn",
        -        "raise", and "call". The keys represent possible floating-point
        -        exceptions, and the values define how these exceptions are handled.
        -
        -    See Also
        -    --------
        -    geterrcall, seterr, seterrcall
        -
        -    Notes
        -    -----
        -    For complete documentation of the types of floating-point exceptions and
        -    treatment options, see `seterr`.
        -
        -    Examples
        -    --------
        -    >>> np.geterr()
        -    {'divide': 'warn', 'over': 'warn', 'under': 'ignore', 'invalid': 'warn'}
        -    >>> np.arange(3.) / np.arange(3.)
        -    array([nan,  1.,  1.])
        -
        -    >>> oldsettings = np.seterr(all='warn', over='raise')
        -    >>> np.geterr()
        -    {'divide': 'warn', 'over': 'raise', 'under': 'warn', 'invalid': 'warn'}
        -    >>> np.arange(3.) / np.arange(3.)
        -    array([nan,  1.,  1.])
        -
        -    """
        -    maskvalue = umath.geterrobj()[1]
        -    mask = 7
        -    res = {}
        -    val = (maskvalue >> SHIFT_DIVIDEBYZERO) & mask
        -    res['divide'] = _errdict_rev[val]
        -    val = (maskvalue >> SHIFT_OVERFLOW) & mask
        -    res['over'] = _errdict_rev[val]
        -    val = (maskvalue >> SHIFT_UNDERFLOW) & mask
        -    res['under'] = _errdict_rev[val]
        -    val = (maskvalue >> SHIFT_INVALID) & mask
        -    res['invalid'] = _errdict_rev[val]
        -    return res
        -
        -
        -@set_module('numpy')
        -def setbufsize(size):
        -    """
        -    Set the size of the buffer used in ufuncs.
        -
        -    Parameters
        -    ----------
        -    size : int
        -        Size of buffer.
        -
        -    """
        -    if size > 10e6:
        -        raise ValueError("Buffer size, %s, is too big." % size)
        -    if size < 5:
        -        raise ValueError("Buffer size, %s, is too small." % size)
        -    if size % 16 != 0:
        -        raise ValueError("Buffer size, %s, is not a multiple of 16." % size)
        -
        -    pyvals = umath.geterrobj()
        -    old = getbufsize()
        -    pyvals[0] = size
        -    umath.seterrobj(pyvals)
        -    return old
        -
        -
        -@set_module('numpy')
        -def getbufsize():
        -    """
        -    Return the size of the buffer used in ufuncs.
        -
        -    Returns
        -    -------
        -    getbufsize : int
        -        Size of ufunc buffer in bytes.
        -
        -    """
        -    return umath.geterrobj()[0]
        -
        -
        -@set_module('numpy')
        -def seterrcall(func):
        -    """
        -    Set the floating-point error callback function or log object.
        -
        -    There are two ways to capture floating-point error messages.  The first
        -    is to set the error-handler to 'call', using `seterr`.  Then, set
        -    the function to call using this function.
        -
        -    The second is to set the error-handler to 'log', using `seterr`.
        -    Floating-point errors then trigger a call to the 'write' method of
        -    the provided object.
        -
        -    Parameters
        -    ----------
        -    func : callable f(err, flag) or object with write method
        -        Function to call upon floating-point errors ('call'-mode) or
        -        object whose 'write' method is used to log such message ('log'-mode).
        -
        -        The call function takes two arguments. The first is a string describing
        -        the type of error (such as "divide by zero", "overflow", "underflow",
        -        or "invalid value"), and the second is the status flag.  The flag is a
        -        byte, whose four least-significant bits indicate the type of error, one
        -        of "divide", "over", "under", "invalid"::
        -
        -          [0 0 0 0 divide over under invalid]
        -
        -        In other words, ``flags = divide + 2*over + 4*under + 8*invalid``.
        -
        -        If an object is provided, its write method should take one argument,
        -        a string.
        -
        -    Returns
        -    -------
        -    h : callable, log instance or None
        -        The old error handler.
        -
        -    See Also
        -    --------
        -    seterr, geterr, geterrcall
        -
        -    Examples
        -    --------
        -    Callback upon error:
        -
        -    >>> def err_handler(type, flag):
        -    ...     print("Floating point error (%s), with flag %s" % (type, flag))
        -    ...
        -
        -    >>> saved_handler = np.seterrcall(err_handler)
        -    >>> save_err = np.seterr(all='call')
        -
        -    >>> np.array([1, 2, 3]) / 0.0
        -    Floating point error (divide by zero), with flag 1
        -    array([inf, inf, inf])
        -
        -    >>> np.seterrcall(saved_handler)
-    <function err_handler at 0x...>
        -    >>> np.seterr(**save_err)
        -    {'divide': 'call', 'over': 'call', 'under': 'call', 'invalid': 'call'}
        -
        -    Log error message:
        -
        -    >>> class Log:
        -    ...     def write(self, msg):
        -    ...         print("LOG: %s" % msg)
        -    ...
        -
        -    >>> log = Log()
        -    >>> saved_handler = np.seterrcall(log)
        -    >>> save_err = np.seterr(all='log')
        -
        -    >>> np.array([1, 2, 3]) / 0.0
        -    LOG: Warning: divide by zero encountered in divide
        -    array([inf, inf, inf])
        -
        -    >>> np.seterrcall(saved_handler)
-    <numpy.core.numeric.Log object at 0x...>
        -    >>> np.seterr(**save_err)
        -    {'divide': 'log', 'over': 'log', 'under': 'log', 'invalid': 'log'}
        -
        -    """
        -    if func is not None and not isinstance(func, collections.abc.Callable):
        -        if (not hasattr(func, 'write') or
        -                not isinstance(func.write, collections.abc.Callable)):
        -            raise ValueError("Only callable can be used as callback")
        -    pyvals = umath.geterrobj()
        -    old = geterrcall()
        -    pyvals[2] = func
        -    umath.seterrobj(pyvals)
        -    return old
        -
        -
        -@set_module('numpy')
        -def geterrcall():
        -    """
        -    Return the current callback function used on floating-point errors.
        -
        -    When the error handling for a floating-point error (one of "divide",
        -    "over", "under", or "invalid") is set to 'call' or 'log', the function
        -    that is called or the log instance that is written to is returned by
        -    `geterrcall`. This function or log instance has been set with
        -    `seterrcall`.
        -
        -    Returns
        -    -------
        -    errobj : callable, log instance or None
        -        The current error handler. If no handler was set through `seterrcall`,
        -        ``None`` is returned.
        -
        -    See Also
        -    --------
        -    seterrcall, seterr, geterr
        -
        -    Notes
        -    -----
        -    For complete documentation of the types of floating-point exceptions and
        -    treatment options, see `seterr`.
        -
        -    Examples
        -    --------
        -    >>> np.geterrcall()  # we did not yet set a handler, returns None
        -
        -    >>> oldsettings = np.seterr(all='call')
        -    >>> def err_handler(type, flag):
        -    ...     print("Floating point error (%s), with flag %s" % (type, flag))
        -    >>> oldhandler = np.seterrcall(err_handler)
        -    >>> np.array([1, 2, 3]) / 0.0
        -    Floating point error (divide by zero), with flag 1
        -    array([inf, inf, inf])
        -
        -    >>> cur_handler = np.geterrcall()
        -    >>> cur_handler is err_handler
        -    True
        -
        -    """
        -    return umath.geterrobj()[2]
        -
        -
        -class _unspecified:
        -    pass
        -
        -
        -_Unspecified = _unspecified()
        -
        -
        -@set_module('numpy')
        -class errstate(contextlib.ContextDecorator):
        -    """
        -    errstate(**kwargs)
        -
        -    Context manager for floating-point error handling.
        -
        -    Using an instance of `errstate` as a context manager allows statements in
        -    that context to execute with a known error handling behavior. Upon entering
        -    the context the error handling is set with `seterr` and `seterrcall`, and
        -    upon exiting it is reset to what it was before.
        -
        -    ..  versionchanged:: 1.17.0
        -        `errstate` is also usable as a function decorator, saving
        -        a level of indentation if an entire function is wrapped.
        -        See :py:class:`contextlib.ContextDecorator` for more information.
        -
        -    Parameters
        -    ----------
        -    kwargs : {divide, over, under, invalid}
        -        Keyword arguments. The valid keywords are the possible floating-point
        -        exceptions. Each keyword should have a string value that defines the
        -        treatment for the particular error. Possible values are
        -        {'ignore', 'warn', 'raise', 'call', 'print', 'log'}.
        -
        -    See Also
        -    --------
        -    seterr, geterr, seterrcall, geterrcall
        -
        -    Notes
        -    -----
        -    For complete documentation of the types of floating-point exceptions and
        -    treatment options, see `seterr`.
        -
        -    Examples
        -    --------
        -    >>> olderr = np.seterr(all='ignore')  # Set error handling to known state.
        -
        -    >>> np.arange(3) / 0.
        -    array([nan, inf, inf])
        -    >>> with np.errstate(divide='warn'):
        -    ...     np.arange(3) / 0.
        -    array([nan, inf, inf])
        -
        -    >>> np.sqrt(-1)
        -    nan
        -    >>> with np.errstate(invalid='raise'):
        -    ...     np.sqrt(-1)
        -    Traceback (most recent call last):
        -      File "", line 2, in 
        -    FloatingPointError: invalid value encountered in sqrt
        -
        -    Outside the context the error handling behavior has not changed:
        -
        -    >>> np.geterr()
        -    {'divide': 'ignore', 'over': 'ignore', 'under': 'ignore', 'invalid': 'ignore'}
        -
        -    """
        -
        -    def __init__(self, *, call=_Unspecified, **kwargs):
        -        self.call = call
        -        self.kwargs = kwargs
        -
        -    def __enter__(self):
        -        self.oldstate = seterr(**self.kwargs)
        -        if self.call is not _Unspecified:
        -            self.oldcall = seterrcall(self.call)
        -
        -    def __exit__(self, *exc_info):
        -        seterr(**self.oldstate)
        -        if self.call is not _Unspecified:
        -            seterrcall(self.oldcall)
        -
        -
        -def _setdef():
        -    defval = [UFUNC_BUFSIZE_DEFAULT, ERR_DEFAULT, None]
        -    umath.seterrobj(defval)
        -
        -
        -# set the default values
        -_setdef()
        -
        -
        -NO_NEP50_WARNING = contextvars.ContextVar("_no_nep50_warning", default=False)
        -
        -@set_module('numpy')
        -@contextlib.contextmanager
        -def _no_nep50_warning():
        -    """
        -    Context manager to disable NEP 50 warnings.  This context manager is
        -    only relevant if the NEP 50 warnings are enabled globally (which is not
        -    thread/context safe).
        -
        -    This warning context manager itself is fully safe, however.
        -    """
        -    token = NO_NEP50_WARNING.set(True)
        -    try:
        -        yield
        -    finally:
        -        NO_NEP50_WARNING.reset(token)
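
The deleted _ufunc_config.py above implements numpy's floating-point error-state machinery: seterr()/geterr() pack the per-category policies into a bit mask stored through umath.seterrobj, seterrcall() installs a callback function or log object, and errstate is a context manager (and decorator) that saves and restores both on entry and exit. A short sketch of how the public entry points behave, written for illustration and not taken from the deleted file:

import numpy as np

old = np.seterr(all="ignore")            # returns the previous settings dict
try:
    with np.errstate(divide="raise"):    # temporarily escalate divide-by-zero
        try:
            np.array([1.0]) / np.array([0.0])
        except FloatingPointError as exc:
            print("caught:", exc)
finally:
    np.seterr(**old)                     # restore whatever was active before
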
        diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/io/formats/style/test_html.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/io/formats/style/test_html.py
        deleted file mode 100644
        index 1e345eb82ed3c31e7a5e0f89fa574aea84923dd7..0000000000000000000000000000000000000000
        --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/io/formats/style/test_html.py
        +++ /dev/null
        @@ -1,1009 +0,0 @@
        -from textwrap import (
        -    dedent,
        -    indent,
        -)
        -
        -import numpy as np
        -import pytest
        -
        -from pandas import (
        -    DataFrame,
        -    MultiIndex,
        -    option_context,
        -)
        -
        -jinja2 = pytest.importorskip("jinja2")
        -from pandas.io.formats.style import Styler
        -
        -
        -@pytest.fixture
        -def env():
        -    loader = jinja2.PackageLoader("pandas", "io/formats/templates")
        -    env = jinja2.Environment(loader=loader, trim_blocks=True)
        -    return env
        -
        -
        -@pytest.fixture
        -def styler():
        -    return Styler(DataFrame([[2.61], [2.69]], index=["a", "b"], columns=["A"]))
        -
        -
        -@pytest.fixture
        -def styler_mi():
        -    midx = MultiIndex.from_product([["a", "b"], ["c", "d"]])
        -    return Styler(DataFrame(np.arange(16).reshape(4, 4), index=midx, columns=midx))
        -
        -
        -@pytest.fixture
        -def tpl_style(env):
        -    return env.get_template("html_style.tpl")
        -
        -
        -@pytest.fixture
        -def tpl_table(env):
        -    return env.get_template("html_table.tpl")
        -
        -
        -def test_html_template_extends_options():
        -    # make sure if templates are edited tests are updated as are setup fixtures
        -    # to understand the dependency
        -    with open("pandas/io/formats/templates/html.tpl", encoding="utf-8") as file:
        -        result = file.read()
        -    assert "{% include html_style_tpl %}" in result
        -    assert "{% include html_table_tpl %}" in result
        -
        -
        -def test_exclude_styles(styler):
        -    result = styler.to_html(exclude_styles=True, doctype_html=True)
        -    expected = dedent(
        -        """\
-        <!DOCTYPE html>
-        <html>
-        <head>
-        <meta charset="utf-8">
-        </head>
-        <body>
-        <table>
-          <thead>
-            <tr>
-              <th >&nbsp;</th>
-              <th >A</th>
-            </tr>
-          </thead>
-          <tbody>
-            <tr>
-              <th >a</th>
-              <td >2.610000</td>
-            </tr>
-            <tr>
-              <th >b</th>
-              <td >2.690000</td>
-            </tr>
-          </tbody>
-        </table>
-        </body>
-        </html>
        - - - """ - ) - assert result == expected - - -def test_w3_html_format(styler): - styler.set_uuid("").set_table_styles([{"selector": "th", "props": "att2:v2;"}]).map( - lambda x: "att1:v1;" - ).set_table_attributes('class="my-cls1" style="attr3:v3;"').set_td_classes( - DataFrame(["my-cls2"], index=["a"], columns=["A"]) - ).format( - "{:.1f}" - ).set_caption( - "A comprehensive test" - ) - expected = dedent( - """\ - - - - - - - - - - - - - - - - - - - -
        A comprehensive test
         A
        a2.6
        b2.7
        - """ - ) - assert expected == styler.to_html() - - -def test_colspan_w3(): - # GH 36223 - df = DataFrame(data=[[1, 2]], columns=[["l0", "l0"], ["l1a", "l1b"]]) - styler = Styler(df, uuid="_", cell_ids=False) - assert '
    l0l0
    - - - - - - - - - - - - - - - - -
     A
    a2.610000
    b2.690000
    - - - """ - ) - assert result == expected - - -def test_doctype(styler): - result = styler.to_html(doctype_html=False) - assert "" not in result - assert "" not in result - assert "" not in result - assert "" not in result - - -def test_doctype_encoding(styler): - with option_context("styler.render.encoding", "ASCII"): - result = styler.to_html(doctype_html=True) - assert '' in result - result = styler.to_html(doctype_html=True, encoding="ANSI") - assert '' in result - - -def test_bold_headers_arg(styler): - result = styler.to_html(bold_headers=True) - assert "th {\n font-weight: bold;\n}" in result - result = styler.to_html() - assert "th {\n font-weight: bold;\n}" not in result - - -def test_caption_arg(styler): - result = styler.to_html(caption="foo bar") - assert "
  • foo barfoo bar
    2.6100002.690000abA
    - - - - - - - - - - - - - - - - - - - - - - - - -
     n1a
     n2c
    n1n2 
    ac0
    - """ - ) - result = styler_mi.to_html() - assert result == expected - - -def test_include_css_style_rules_only_for_visible_cells(styler_mi): - # GH 43619 - result = ( - styler_mi.set_uuid("") - .map(lambda v: "color: blue;") - .hide(styler_mi.data.columns[1:], axis="columns") - .hide(styler_mi.data.index[1:], axis="index") - .to_html() - ) - expected_styles = dedent( - """\ - - """ - ) - assert expected_styles in result - - -def test_include_css_style_rules_only_for_visible_index_labels(styler_mi): - # GH 43619 - result = ( - styler_mi.set_uuid("") - .map_index(lambda v: "color: blue;", axis="index") - .hide(styler_mi.data.columns, axis="columns") - .hide(styler_mi.data.index[1:], axis="index") - .to_html() - ) - expected_styles = dedent( - """\ - - """ - ) - assert expected_styles in result - - -def test_include_css_style_rules_only_for_visible_column_labels(styler_mi): - # GH 43619 - result = ( - styler_mi.set_uuid("") - .map_index(lambda v: "color: blue;", axis="columns") - .hide(styler_mi.data.columns[1:], axis="columns") - .hide(styler_mi.data.index, axis="index") - .to_html() - ) - expected_styles = dedent( - """\ - - """ - ) - assert expected_styles in result - - -def test_hiding_index_columns_multiindex_alignment(): - # gh 43644 - midx = MultiIndex.from_product( - [["i0", "j0"], ["i1"], ["i2", "j2"]], names=["i-0", "i-1", "i-2"] - ) - cidx = MultiIndex.from_product( - [["c0"], ["c1", "d1"], ["c2", "d2"]], names=["c-0", "c-1", "c-2"] - ) - df = DataFrame(np.arange(16).reshape(4, 4), index=midx, columns=cidx) - styler = Styler(df, uuid_len=0) - styler.hide(level=1, axis=0).hide(level=0, axis=1) - styler.hide([("j0", "i1", "j2")], axis=0) - styler.hide([("c0", "d1", "d2")], axis=1) - result = styler.to_html() - expected = dedent( - """\ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
     c-1c1d1
     c-2c2d2c2
    i-0i-2   
    i0i2012
    j2456
    j0i28910
    - """ - ) - assert result == expected - - -def test_hiding_index_columns_multiindex_trimming(): - # gh 44272 - df = DataFrame(np.arange(64).reshape(8, 8)) - df.columns = MultiIndex.from_product([[0, 1, 2, 3], [0, 1]]) - df.index = MultiIndex.from_product([[0, 1, 2, 3], [0, 1]]) - df.index.names, df.columns.names = ["a", "b"], ["c", "d"] - styler = Styler(df, cell_ids=False, uuid_len=0) - styler.hide([(0, 0), (0, 1), (1, 0)], axis=1).hide([(0, 0), (0, 1), (1, 0)], axis=0) - with option_context("styler.render.max_rows", 4, "styler.render.max_columns", 4): - result = styler.to_html() - - expected = dedent( - """\ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
     c123
     d1010...
    ab     
    1127282930...
    2035363738...
    143444546...
    3051525354...
    .....................
    - """ - ) - - assert result == expected - - -@pytest.mark.parametrize("type", ["data", "index"]) -@pytest.mark.parametrize( - "text, exp, found", - [ - ("no link, just text", False, ""), - ("subdomain not www: sub.web.com", False, ""), - ("www subdomain: www.web.com other", True, "www.web.com"), - ("scheme full structure: http://www.web.com", True, "http://www.web.com"), - ("scheme no top-level: http://www.web", True, "http://www.web"), - ("no scheme, no top-level: www.web", False, "www.web"), - ("https scheme: https://www.web.com", True, "https://www.web.com"), - ("ftp scheme: ftp://www.web", True, "ftp://www.web"), - ("ftps scheme: ftps://www.web", True, "ftps://www.web"), - ("subdirectories: www.web.com/directory", True, "www.web.com/directory"), - ("Multiple domains: www.1.2.3.4", True, "www.1.2.3.4"), - ("with port: http://web.com:80", True, "http://web.com:80"), - ( - "full net_loc scheme: http://user:pass@web.com", - True, - "http://user:pass@web.com", - ), - ( - "with valid special chars: http://web.com/,.':;~!@#$*()[]", - True, - "http://web.com/,.':;~!@#$*()[]", - ), - ], -) -def test_rendered_links(type, text, exp, found): - if type == "data": - df = DataFrame([text]) - styler = df.style.format(hyperlinks="html") - else: - df = DataFrame([0], index=[text]) - styler = df.style.format_index(hyperlinks="html") - - rendered = f'{found}' - result = styler.to_html() - assert (rendered in result) is exp - assert (text in result) is not exp # test conversion done when expected and not - - -def test_multiple_rendered_links(): - links = ("www.a.b", "http://a.c", "https://a.d", "ftp://a.e") - # pylint: disable-next=consider-using-f-string - df = DataFrame(["text {} {} text {} {}".format(*links)]) - result = df.style.format(hyperlinks="html").to_html() - href = '{0}' - for link in links: - assert href.format(link) in result - assert href.format("text") not in result - - -def test_concat(styler): - other = styler.data.agg(["mean"]).style - styler.concat(other).set_uuid("X") - result = styler.to_html() - fp = "foot0_" - expected = dedent( - f"""\ - - b - 2.690000 - - - mean - 2.650000 - - - - """ - ) - assert expected in result - - -def test_concat_recursion(styler): - df = styler.data - styler1 = styler - styler2 = Styler(df.agg(["mean"]), precision=3) - styler3 = Styler(df.agg(["mean"]), precision=4) - styler1.concat(styler2.concat(styler3)).set_uuid("X") - result = styler.to_html() - # notice that the second concat (last of the output html), - # there are two `foot_` in the id and class - fp1 = "foot0_" - fp2 = "foot0_foot0_" - expected = dedent( - f"""\ - - b - 2.690000 - - - mean - 2.650 - - - mean - 2.6500 - - - - """ - ) - assert expected in result - - -def test_concat_chain(styler): - df = styler.data - styler1 = styler - styler2 = Styler(df.agg(["mean"]), precision=3) - styler3 = Styler(df.agg(["mean"]), precision=4) - styler1.concat(styler2).concat(styler3).set_uuid("X") - result = styler.to_html() - fp1 = "foot0_" - fp2 = "foot1_" - expected = dedent( - f"""\ - - b - 2.690000 - - - mean - 2.650 - - - mean - 2.6500 - - - - """ - ) - assert expected in result - - -def test_concat_combined(): - def html_lines(foot_prefix: str): - assert foot_prefix.endswith("_") or foot_prefix == "" - fp = foot_prefix - return indent( - dedent( - f"""\ - - a - 2.610000 - - - b - 2.690000 - - """ - ), - prefix=" " * 4, - ) - - df = DataFrame([[2.61], [2.69]], index=["a", "b"], columns=["A"]) - s1 = df.style.highlight_max(color="red") - s2 = df.style.highlight_max(color="green") - s3 = 
df.style.highlight_max(color="blue") - s4 = df.style.highlight_max(color="yellow") - - result = s1.concat(s2).concat(s3.concat(s4)).set_uuid("X").to_html() - expected_css = dedent( - """\ - - """ - ) - expected_table = ( - dedent( - """\ - - - - - - - - - """ - ) - + html_lines("") - + html_lines("foot0_") - + html_lines("foot1_") - + html_lines("foot1_foot0_") - + dedent( - """\ - -
     A
    - """ - ) - ) - assert expected_css + expected_table == result - - -def test_to_html_na_rep_non_scalar_data(datapath): - # GH47103 - df = DataFrame([{"a": 1, "b": [1, 2, 3], "c": np.nan}]) - result = df.style.format(na_rep="-").to_html(table_uuid="test") - expected = """\ - - - - - - - - - - - - - - - - - - -
     abc
    01[1, 2, 3]-
    -""" - assert result == expected diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/chardet/big5freq.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/chardet/big5freq.py deleted file mode 100644 index 38f32517aa8f6cf5970f7ceddd1a415289184c3e..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/chardet/big5freq.py +++ /dev/null @@ -1,386 +0,0 @@ -######################## BEGIN LICENSE BLOCK ######################## -# The Original Code is Mozilla Communicator client code. -# -# The Initial Developer of the Original Code is -# Netscape Communications Corporation. -# Portions created by the Initial Developer are Copyright (C) 1998 -# the Initial Developer. All Rights Reserved. -# -# Contributor(s): -# Mark Pilgrim - port to Python -# -# This library is free software; you can redistribute it and/or -# modify it under the terms of the GNU Lesser General Public -# License as published by the Free Software Foundation; either -# version 2.1 of the License, or (at your option) any later version. -# -# This library is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU -# Lesser General Public License for more details. -# -# You should have received a copy of the GNU Lesser General Public -# License along with this library; if not, write to the Free Software -# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA -# 02110-1301 USA -######################### END LICENSE BLOCK ######################### - -# Big5 frequency table -# by Taiwan's Mandarin Promotion Council -# -# -# 128 --> 0.42261 -# 256 --> 0.57851 -# 512 --> 0.74851 -# 1024 --> 0.89384 -# 2048 --> 0.97583 -# -# Ideal Distribution Ratio = 0.74851/(1-0.74851) =2.98 -# Random Distribution Ration = 512/(5401-512)=0.105 -# -# Typical Distribution Ratio about 25% of Ideal one, still much higher than RDR - -BIG5_TYPICAL_DISTRIBUTION_RATIO = 0.75 - -#Char to FreqOrder table -BIG5_TABLE_SIZE = 5376 - -BIG5_CHAR_TO_FREQ_ORDER = ( - 1,1801,1506, 255,1431, 198, 9, 82, 6,5008, 177, 202,3681,1256,2821, 110, # 16 -3814, 33,3274, 261, 76, 44,2114, 16,2946,2187,1176, 659,3971, 26,3451,2653, # 32 -1198,3972,3350,4202, 410,2215, 302, 590, 361,1964, 8, 204, 58,4510,5009,1932, # 48 - 63,5010,5011, 317,1614, 75, 222, 159,4203,2417,1480,5012,3555,3091, 224,2822, # 64 -3682, 3, 10,3973,1471, 29,2787,1135,2866,1940, 873, 130,3275,1123, 312,5013, # 80 -4511,2052, 507, 252, 682,5014, 142,1915, 124, 206,2947, 34,3556,3204, 64, 604, # 96 -5015,2501,1977,1978, 155,1991, 645, 641,1606,5016,3452, 337, 72, 406,5017, 80, # 112 - 630, 238,3205,1509, 263, 939,1092,2654, 756,1440,1094,3453, 449, 69,2987, 591, # 128 - 179,2096, 471, 115,2035,1844, 60, 50,2988, 134, 806,1869, 734,2036,3454, 180, # 144 - 995,1607, 156, 537,2907, 688,5018, 319,1305, 779,2145, 514,2379, 298,4512, 359, # 160 -2502, 90,2716,1338, 663, 11, 906,1099,2553, 20,2441, 182, 532,1716,5019, 732, # 176 -1376,4204,1311,1420,3206, 25,2317,1056, 113, 399, 382,1950, 242,3455,2474, 529, # 192 -3276, 475,1447,3683,5020, 117, 21, 656, 810,1297,2300,2334,3557,5021, 126,4205, # 208 - 706, 456, 150, 613,4513, 71,1118,2037,4206, 145,3092, 85, 835, 486,2115,1246, # 224 -1426, 428, 727,1285,1015, 800, 106, 623, 303,1281,5022,2128,2359, 347,3815, 221, # 240 -3558,3135,5023,1956,1153,4207, 83, 296,1199,3093, 192, 624, 93,5024, 822,1898, # 256 
-2823,3136, 795,2065, 991,1554,1542,1592, 27, 43,2867, 859, 139,1456, 860,4514, # 272 - 437, 712,3974, 164,2397,3137, 695, 211,3037,2097, 195,3975,1608,3559,3560,3684, # 288 -3976, 234, 811,2989,2098,3977,2233,1441,3561,1615,2380, 668,2077,1638, 305, 228, # 304 -1664,4515, 467, 415,5025, 262,2099,1593, 239, 108, 300, 200,1033, 512,1247,2078, # 320 -5026,5027,2176,3207,3685,2682, 593, 845,1062,3277, 88,1723,2038,3978,1951, 212, # 336 - 266, 152, 149, 468,1899,4208,4516, 77, 187,5028,3038, 37, 5,2990,5029,3979, # 352 -5030,5031, 39,2524,4517,2908,3208,2079, 55, 148, 74,4518, 545, 483,1474,1029, # 368 -1665, 217,1870,1531,3138,1104,2655,4209, 24, 172,3562, 900,3980,3563,3564,4519, # 384 - 32,1408,2824,1312, 329, 487,2360,2251,2717, 784,2683, 4,3039,3351,1427,1789, # 400 - 188, 109, 499,5032,3686,1717,1790, 888,1217,3040,4520,5033,3565,5034,3352,1520, # 416 -3687,3981, 196,1034, 775,5035,5036, 929,1816, 249, 439, 38,5037,1063,5038, 794, # 432 -3982,1435,2301, 46, 178,3278,2066,5039,2381,5040, 214,1709,4521, 804, 35, 707, # 448 - 324,3688,1601,2554, 140, 459,4210,5041,5042,1365, 839, 272, 978,2262,2580,3456, # 464 -2129,1363,3689,1423, 697, 100,3094, 48, 70,1231, 495,3139,2196,5043,1294,5044, # 480 -2080, 462, 586,1042,3279, 853, 256, 988, 185,2382,3457,1698, 434,1084,5045,3458, # 496 - 314,2625,2788,4522,2335,2336, 569,2285, 637,1817,2525, 757,1162,1879,1616,3459, # 512 - 287,1577,2116, 768,4523,1671,2868,3566,2526,1321,3816, 909,2418,5046,4211, 933, # 528 -3817,4212,2053,2361,1222,4524, 765,2419,1322, 786,4525,5047,1920,1462,1677,2909, # 544 -1699,5048,4526,1424,2442,3140,3690,2600,3353,1775,1941,3460,3983,4213, 309,1369, # 560 -1130,2825, 364,2234,1653,1299,3984,3567,3985,3986,2656, 525,1085,3041, 902,2001, # 576 -1475, 964,4527, 421,1845,1415,1057,2286, 940,1364,3141, 376,4528,4529,1381, 7, # 592 -2527, 983,2383, 336,1710,2684,1846, 321,3461, 559,1131,3042,2752,1809,1132,1313, # 608 - 265,1481,1858,5049, 352,1203,2826,3280, 167,1089, 420,2827, 776, 792,1724,3568, # 624 -4214,2443,3281,5050,4215,5051, 446, 229, 333,2753, 901,3818,1200,1557,4530,2657, # 640 -1921, 395,2754,2685,3819,4216,1836, 125, 916,3209,2626,4531,5052,5053,3820,5054, # 656 -5055,5056,4532,3142,3691,1133,2555,1757,3462,1510,2318,1409,3569,5057,2146, 438, # 672 -2601,2910,2384,3354,1068, 958,3043, 461, 311,2869,2686,4217,1916,3210,4218,1979, # 688 - 383, 750,2755,2627,4219, 274, 539, 385,1278,1442,5058,1154,1965, 384, 561, 210, # 704 - 98,1295,2556,3570,5059,1711,2420,1482,3463,3987,2911,1257, 129,5060,3821, 642, # 720 - 523,2789,2790,2658,5061, 141,2235,1333, 68, 176, 441, 876, 907,4220, 603,2602, # 736 - 710, 171,3464, 404, 549, 18,3143,2398,1410,3692,1666,5062,3571,4533,2912,4534, # 752 -5063,2991, 368,5064, 146, 366, 99, 871,3693,1543, 748, 807,1586,1185, 22,2263, # 768 - 379,3822,3211,5065,3212, 505,1942,2628,1992,1382,2319,5066, 380,2362, 218, 702, # 784 -1818,1248,3465,3044,3572,3355,3282,5067,2992,3694, 930,3283,3823,5068, 59,5069, # 800 - 585, 601,4221, 497,3466,1112,1314,4535,1802,5070,1223,1472,2177,5071, 749,1837, # 816 - 690,1900,3824,1773,3988,1476, 429,1043,1791,2236,2117, 917,4222, 447,1086,1629, # 832 -5072, 556,5073,5074,2021,1654, 844,1090, 105, 550, 966,1758,2828,1008,1783, 686, # 848 -1095,5075,2287, 793,1602,5076,3573,2603,4536,4223,2948,2302,4537,3825, 980,2503, # 864 - 544, 353, 527,4538, 908,2687,2913,5077, 381,2629,1943,1348,5078,1341,1252, 560, # 880 -3095,5079,3467,2870,5080,2054, 973, 886,2081, 143,4539,5081,5082, 157,3989, 496, # 896 -4224, 57, 840, 540,2039,4540,4541,3468,2118,1445, 
970,2264,1748,1966,2082,4225, # 912 -3144,1234,1776,3284,2829,3695, 773,1206,2130,1066,2040,1326,3990,1738,1725,4226, # 928 - 279,3145, 51,1544,2604, 423,1578,2131,2067, 173,4542,1880,5083,5084,1583, 264, # 944 - 610,3696,4543,2444, 280, 154,5085,5086,5087,1739, 338,1282,3096, 693,2871,1411, # 960 -1074,3826,2445,5088,4544,5089,5090,1240, 952,2399,5091,2914,1538,2688, 685,1483, # 976 -4227,2475,1436, 953,4228,2055,4545, 671,2400, 79,4229,2446,3285, 608, 567,2689, # 992 -3469,4230,4231,1691, 393,1261,1792,2401,5092,4546,5093,5094,5095,5096,1383,1672, # 1008 -3827,3213,1464, 522,1119, 661,1150, 216, 675,4547,3991,1432,3574, 609,4548,2690, # 1024 -2402,5097,5098,5099,4232,3045, 0,5100,2476, 315, 231,2447, 301,3356,4549,2385, # 1040 -5101, 233,4233,3697,1819,4550,4551,5102, 96,1777,1315,2083,5103, 257,5104,1810, # 1056 -3698,2718,1139,1820,4234,2022,1124,2164,2791,1778,2659,5105,3097, 363,1655,3214, # 1072 -5106,2993,5107,5108,5109,3992,1567,3993, 718, 103,3215, 849,1443, 341,3357,2949, # 1088 -1484,5110,1712, 127, 67, 339,4235,2403, 679,1412, 821,5111,5112, 834, 738, 351, # 1104 -2994,2147, 846, 235,1497,1881, 418,1993,3828,2719, 186,1100,2148,2756,3575,1545, # 1120 -1355,2950,2872,1377, 583,3994,4236,2581,2995,5113,1298,3699,1078,2557,3700,2363, # 1136 - 78,3829,3830, 267,1289,2100,2002,1594,4237, 348, 369,1274,2197,2178,1838,4552, # 1152 -1821,2830,3701,2757,2288,2003,4553,2951,2758, 144,3358, 882,4554,3995,2759,3470, # 1168 -4555,2915,5114,4238,1726, 320,5115,3996,3046, 788,2996,5116,2831,1774,1327,2873, # 1184 -3997,2832,5117,1306,4556,2004,1700,3831,3576,2364,2660, 787,2023, 506, 824,3702, # 1200 - 534, 323,4557,1044,3359,2024,1901, 946,3471,5118,1779,1500,1678,5119,1882,4558, # 1216 - 165, 243,4559,3703,2528, 123, 683,4239, 764,4560, 36,3998,1793, 589,2916, 816, # 1232 - 626,1667,3047,2237,1639,1555,1622,3832,3999,5120,4000,2874,1370,1228,1933, 891, # 1248 -2084,2917, 304,4240,5121, 292,2997,2720,3577, 691,2101,4241,1115,4561, 118, 662, # 1264 -5122, 611,1156, 854,2386,1316,2875, 2, 386, 515,2918,5123,5124,3286, 868,2238, # 1280 -1486, 855,2661, 785,2216,3048,5125,1040,3216,3578,5126,3146, 448,5127,1525,5128, # 1296 -2165,4562,5129,3833,5130,4242,2833,3579,3147, 503, 818,4001,3148,1568, 814, 676, # 1312 -1444, 306,1749,5131,3834,1416,1030, 197,1428, 805,2834,1501,4563,5132,5133,5134, # 1328 -1994,5135,4564,5136,5137,2198, 13,2792,3704,2998,3149,1229,1917,5138,3835,2132, # 1344 -5139,4243,4565,2404,3580,5140,2217,1511,1727,1120,5141,5142, 646,3836,2448, 307, # 1360 -5143,5144,1595,3217,5145,5146,5147,3705,1113,1356,4002,1465,2529,2530,5148, 519, # 1376 -5149, 128,2133, 92,2289,1980,5150,4003,1512, 342,3150,2199,5151,2793,2218,1981, # 1392 -3360,4244, 290,1656,1317, 789, 827,2365,5152,3837,4566, 562, 581,4004,5153, 401, # 1408 -4567,2252, 94,4568,5154,1399,2794,5155,1463,2025,4569,3218,1944,5156, 828,1105, # 1424 -4245,1262,1394,5157,4246, 605,4570,5158,1784,2876,5159,2835, 819,2102, 578,2200, # 1440 -2952,5160,1502, 436,3287,4247,3288,2836,4005,2919,3472,3473,5161,2721,2320,5162, # 1456 -5163,2337,2068, 23,4571, 193, 826,3838,2103, 699,1630,4248,3098, 390,1794,1064, # 1472 -3581,5164,1579,3099,3100,1400,5165,4249,1839,1640,2877,5166,4572,4573, 137,4250, # 1488 - 598,3101,1967, 780, 104, 974,2953,5167, 278, 899, 253, 402, 572, 504, 493,1339, # 1504 -5168,4006,1275,4574,2582,2558,5169,3706,3049,3102,2253, 565,1334,2722, 863, 41, # 1520 -5170,5171,4575,5172,1657,2338, 19, 463,2760,4251, 606,5173,2999,3289,1087,2085, # 1536 -1323,2662,3000,5174,1631,1623,1750,4252,2691,5175,2878, 
791,2723,2663,2339, 232, # 1552 -2421,5176,3001,1498,5177,2664,2630, 755,1366,3707,3290,3151,2026,1609, 119,1918, # 1568 -3474, 862,1026,4253,5178,4007,3839,4576,4008,4577,2265,1952,2477,5179,1125, 817, # 1584 -4254,4255,4009,1513,1766,2041,1487,4256,3050,3291,2837,3840,3152,5180,5181,1507, # 1600 -5182,2692, 733, 40,1632,1106,2879, 345,4257, 841,2531, 230,4578,3002,1847,3292, # 1616 -3475,5183,1263, 986,3476,5184, 735, 879, 254,1137, 857, 622,1300,1180,1388,1562, # 1632 -4010,4011,2954, 967,2761,2665,1349, 592,2134,1692,3361,3003,1995,4258,1679,4012, # 1648 -1902,2188,5185, 739,3708,2724,1296,1290,5186,4259,2201,2202,1922,1563,2605,2559, # 1664 -1871,2762,3004,5187, 435,5188, 343,1108, 596, 17,1751,4579,2239,3477,3709,5189, # 1680 -4580, 294,3582,2955,1693, 477, 979, 281,2042,3583, 643,2043,3710,2631,2795,2266, # 1696 -1031,2340,2135,2303,3584,4581, 367,1249,2560,5190,3585,5191,4582,1283,3362,2005, # 1712 - 240,1762,3363,4583,4584, 836,1069,3153, 474,5192,2149,2532, 268,3586,5193,3219, # 1728 -1521,1284,5194,1658,1546,4260,5195,3587,3588,5196,4261,3364,2693,1685,4262, 961, # 1744 -1673,2632, 190,2006,2203,3841,4585,4586,5197, 570,2504,3711,1490,5198,4587,2633, # 1760 -3293,1957,4588, 584,1514, 396,1045,1945,5199,4589,1968,2449,5200,5201,4590,4013, # 1776 - 619,5202,3154,3294, 215,2007,2796,2561,3220,4591,3221,4592, 763,4263,3842,4593, # 1792 -5203,5204,1958,1767,2956,3365,3712,1174, 452,1477,4594,3366,3155,5205,2838,1253, # 1808 -2387,2189,1091,2290,4264, 492,5206, 638,1169,1825,2136,1752,4014, 648, 926,1021, # 1824 -1324,4595, 520,4596, 997, 847,1007, 892,4597,3843,2267,1872,3713,2405,1785,4598, # 1840 -1953,2957,3103,3222,1728,4265,2044,3714,4599,2008,1701,3156,1551, 30,2268,4266, # 1856 -5207,2027,4600,3589,5208, 501,5209,4267, 594,3478,2166,1822,3590,3479,3591,3223, # 1872 - 829,2839,4268,5210,1680,3157,1225,4269,5211,3295,4601,4270,3158,2341,5212,4602, # 1888 -4271,5213,4015,4016,5214,1848,2388,2606,3367,5215,4603, 374,4017, 652,4272,4273, # 1904 - 375,1140, 798,5216,5217,5218,2366,4604,2269, 546,1659, 138,3051,2450,4605,5219, # 1920 -2254, 612,1849, 910, 796,3844,1740,1371, 825,3845,3846,5220,2920,2562,5221, 692, # 1936 - 444,3052,2634, 801,4606,4274,5222,1491, 244,1053,3053,4275,4276, 340,5223,4018, # 1952 -1041,3005, 293,1168, 87,1357,5224,1539, 959,5225,2240, 721, 694,4277,3847, 219, # 1968 -1478, 644,1417,3368,2666,1413,1401,1335,1389,4019,5226,5227,3006,2367,3159,1826, # 1984 - 730,1515, 184,2840, 66,4607,5228,1660,2958, 246,3369, 378,1457, 226,3480, 975, # 2000 -4020,2959,1264,3592, 674, 696,5229, 163,5230,1141,2422,2167, 713,3593,3370,4608, # 2016 -4021,5231,5232,1186, 15,5233,1079,1070,5234,1522,3224,3594, 276,1050,2725, 758, # 2032 -1126, 653,2960,3296,5235,2342, 889,3595,4022,3104,3007, 903,1250,4609,4023,3481, # 2048 -3596,1342,1681,1718, 766,3297, 286, 89,2961,3715,5236,1713,5237,2607,3371,3008, # 2064 -5238,2962,2219,3225,2880,5239,4610,2505,2533, 181, 387,1075,4024, 731,2190,3372, # 2080 -5240,3298, 310, 313,3482,2304, 770,4278, 54,3054, 189,4611,3105,3848,4025,5241, # 2096 -1230,1617,1850, 355,3597,4279,4612,3373, 111,4280,3716,1350,3160,3483,3055,4281, # 2112 -2150,3299,3598,5242,2797,4026,4027,3009, 722,2009,5243,1071, 247,1207,2343,2478, # 2128 -1378,4613,2010, 864,1437,1214,4614, 373,3849,1142,2220, 667,4615, 442,2763,2563, # 2144 -3850,4028,1969,4282,3300,1840, 837, 170,1107, 934,1336,1883,5244,5245,2119,4283, # 2160 -2841, 743,1569,5246,4616,4284, 582,2389,1418,3484,5247,1803,5248, 357,1395,1729, # 2176 
-3717,3301,2423,1564,2241,5249,3106,3851,1633,4617,1114,2086,4285,1532,5250, 482, # 2192 -2451,4618,5251,5252,1492, 833,1466,5253,2726,3599,1641,2842,5254,1526,1272,3718, # 2208 -4286,1686,1795, 416,2564,1903,1954,1804,5255,3852,2798,3853,1159,2321,5256,2881, # 2224 -4619,1610,1584,3056,2424,2764, 443,3302,1163,3161,5257,5258,4029,5259,4287,2506, # 2240 -3057,4620,4030,3162,2104,1647,3600,2011,1873,4288,5260,4289, 431,3485,5261, 250, # 2256 - 97, 81,4290,5262,1648,1851,1558, 160, 848,5263, 866, 740,1694,5264,2204,2843, # 2272 -3226,4291,4621,3719,1687, 950,2479, 426, 469,3227,3720,3721,4031,5265,5266,1188, # 2288 - 424,1996, 861,3601,4292,3854,2205,2694, 168,1235,3602,4293,5267,2087,1674,4622, # 2304 -3374,3303, 220,2565,1009,5268,3855, 670,3010, 332,1208, 717,5269,5270,3603,2452, # 2320 -4032,3375,5271, 513,5272,1209,2882,3376,3163,4623,1080,5273,5274,5275,5276,2534, # 2336 -3722,3604, 815,1587,4033,4034,5277,3605,3486,3856,1254,4624,1328,3058,1390,4035, # 2352 -1741,4036,3857,4037,5278, 236,3858,2453,3304,5279,5280,3723,3859,1273,3860,4625, # 2368 -5281, 308,5282,4626, 245,4627,1852,2480,1307,2583, 430, 715,2137,2454,5283, 270, # 2384 - 199,2883,4038,5284,3606,2727,1753, 761,1754, 725,1661,1841,4628,3487,3724,5285, # 2400 -5286, 587, 14,3305, 227,2608, 326, 480,2270, 943,2765,3607, 291, 650,1884,5287, # 2416 -1702,1226, 102,1547, 62,3488, 904,4629,3489,1164,4294,5288,5289,1224,1548,2766, # 2432 - 391, 498,1493,5290,1386,1419,5291,2056,1177,4630, 813, 880,1081,2368, 566,1145, # 2448 -4631,2291,1001,1035,2566,2609,2242, 394,1286,5292,5293,2069,5294, 86,1494,1730, # 2464 -4039, 491,1588, 745, 897,2963, 843,3377,4040,2767,2884,3306,1768, 998,2221,2070, # 2480 - 397,1827,1195,1970,3725,3011,3378, 284,5295,3861,2507,2138,2120,1904,5296,4041, # 2496 -2151,4042,4295,1036,3490,1905, 114,2567,4296, 209,1527,5297,5298,2964,2844,2635, # 2512 -2390,2728,3164, 812,2568,5299,3307,5300,1559, 737,1885,3726,1210, 885, 28,2695, # 2528 -3608,3862,5301,4297,1004,1780,4632,5302, 346,1982,2222,2696,4633,3863,1742, 797, # 2544 -1642,4043,1934,1072,1384,2152, 896,4044,3308,3727,3228,2885,3609,5303,2569,1959, # 2560 -4634,2455,1786,5304,5305,5306,4045,4298,1005,1308,3728,4299,2729,4635,4636,1528, # 2576 -2610, 161,1178,4300,1983, 987,4637,1101,4301, 631,4046,1157,3229,2425,1343,1241, # 2592 -1016,2243,2570, 372, 877,2344,2508,1160, 555,1935, 911,4047,5307, 466,1170, 169, # 2608 -1051,2921,2697,3729,2481,3012,1182,2012,2571,1251,2636,5308, 992,2345,3491,1540, # 2624 -2730,1201,2071,2406,1997,2482,5309,4638, 528,1923,2191,1503,1874,1570,2369,3379, # 2640 -3309,5310, 557,1073,5311,1828,3492,2088,2271,3165,3059,3107, 767,3108,2799,4639, # 2656 -1006,4302,4640,2346,1267,2179,3730,3230, 778,4048,3231,2731,1597,2667,5312,4641, # 2672 -5313,3493,5314,5315,5316,3310,2698,1433,3311, 131, 95,1504,4049, 723,4303,3166, # 2688 -1842,3610,2768,2192,4050,2028,2105,3731,5317,3013,4051,1218,5318,3380,3232,4052, # 2704 -4304,2584, 248,1634,3864, 912,5319,2845,3732,3060,3865, 654, 53,5320,3014,5321, # 2720 -1688,4642, 777,3494,1032,4053,1425,5322, 191, 820,2121,2846, 971,4643, 931,3233, # 2736 - 135, 664, 783,3866,1998, 772,2922,1936,4054,3867,4644,2923,3234, 282,2732, 640, # 2752 -1372,3495,1127, 922, 325,3381,5323,5324, 711,2045,5325,5326,4055,2223,2800,1937, # 2768 -4056,3382,2224,2255,3868,2305,5327,4645,3869,1258,3312,4057,3235,2139,2965,4058, # 2784 -4059,5328,2225, 258,3236,4646, 101,1227,5329,3313,1755,5330,1391,3314,5331,2924, # 2800 -2057, 893,5332,5333,5334,1402,4305,2347,5335,5336,3237,3611,5337,5338, 878,1325, # 2816 
-1781,2801,4647, 259,1385,2585, 744,1183,2272,4648,5339,4060,2509,5340, 684,1024, # 2832 -4306,5341, 472,3612,3496,1165,3315,4061,4062, 322,2153, 881, 455,1695,1152,1340, # 2848 - 660, 554,2154,4649,1058,4650,4307, 830,1065,3383,4063,4651,1924,5342,1703,1919, # 2864 -5343, 932,2273, 122,5344,4652, 947, 677,5345,3870,2637, 297,1906,1925,2274,4653, # 2880 -2322,3316,5346,5347,4308,5348,4309, 84,4310, 112, 989,5349, 547,1059,4064, 701, # 2896 -3613,1019,5350,4311,5351,3497, 942, 639, 457,2306,2456, 993,2966, 407, 851, 494, # 2912 -4654,3384, 927,5352,1237,5353,2426,3385, 573,4312, 680, 921,2925,1279,1875, 285, # 2928 - 790,1448,1984, 719,2168,5354,5355,4655,4065,4066,1649,5356,1541, 563,5357,1077, # 2944 -5358,3386,3061,3498, 511,3015,4067,4068,3733,4069,1268,2572,3387,3238,4656,4657, # 2960 -5359, 535,1048,1276,1189,2926,2029,3167,1438,1373,2847,2967,1134,2013,5360,4313, # 2976 -1238,2586,3109,1259,5361, 700,5362,2968,3168,3734,4314,5363,4315,1146,1876,1907, # 2992 -4658,2611,4070, 781,2427, 132,1589, 203, 147, 273,2802,2407, 898,1787,2155,4071, # 3008 -4072,5364,3871,2803,5365,5366,4659,4660,5367,3239,5368,1635,3872, 965,5369,1805, # 3024 -2699,1516,3614,1121,1082,1329,3317,4073,1449,3873, 65,1128,2848,2927,2769,1590, # 3040 -3874,5370,5371, 12,2668, 45, 976,2587,3169,4661, 517,2535,1013,1037,3240,5372, # 3056 -3875,2849,5373,3876,5374,3499,5375,2612, 614,1999,2323,3877,3110,2733,2638,5376, # 3072 -2588,4316, 599,1269,5377,1811,3735,5378,2700,3111, 759,1060, 489,1806,3388,3318, # 3088 -1358,5379,5380,2391,1387,1215,2639,2256, 490,5381,5382,4317,1759,2392,2348,5383, # 3104 -4662,3878,1908,4074,2640,1807,3241,4663,3500,3319,2770,2349, 874,5384,5385,3501, # 3120 -3736,1859, 91,2928,3737,3062,3879,4664,5386,3170,4075,2669,5387,3502,1202,1403, # 3136 -3880,2969,2536,1517,2510,4665,3503,2511,5388,4666,5389,2701,1886,1495,1731,4076, # 3152 -2370,4667,5390,2030,5391,5392,4077,2702,1216, 237,2589,4318,2324,4078,3881,4668, # 3168 -4669,2703,3615,3504, 445,4670,5393,5394,5395,5396,2771, 61,4079,3738,1823,4080, # 3184 -5397, 687,2046, 935, 925, 405,2670, 703,1096,1860,2734,4671,4081,1877,1367,2704, # 3200 -3389, 918,2106,1782,2483, 334,3320,1611,1093,4672, 564,3171,3505,3739,3390, 945, # 3216 -2641,2058,4673,5398,1926, 872,4319,5399,3506,2705,3112, 349,4320,3740,4082,4674, # 3232 -3882,4321,3741,2156,4083,4675,4676,4322,4677,2408,2047, 782,4084, 400, 251,4323, # 3248 -1624,5400,5401, 277,3742, 299,1265, 476,1191,3883,2122,4324,4325,1109, 205,5402, # 3264 -2590,1000,2157,3616,1861,5403,5404,5405,4678,5406,4679,2573, 107,2484,2158,4085, # 3280 -3507,3172,5407,1533, 541,1301, 158, 753,4326,2886,3617,5408,1696, 370,1088,4327, # 3296 -4680,3618, 579, 327, 440, 162,2244, 269,1938,1374,3508, 968,3063, 56,1396,3113, # 3312 -2107,3321,3391,5409,1927,2159,4681,3016,5410,3619,5411,5412,3743,4682,2485,5413, # 3328 -2804,5414,1650,4683,5415,2613,5416,5417,4086,2671,3392,1149,3393,4087,3884,4088, # 3344 -5418,1076, 49,5419, 951,3242,3322,3323, 450,2850, 920,5420,1812,2805,2371,4328, # 3360 -1909,1138,2372,3885,3509,5421,3243,4684,1910,1147,1518,2428,4685,3886,5422,4686, # 3376 -2393,2614, 260,1796,3244,5423,5424,3887,3324, 708,5425,3620,1704,5426,3621,1351, # 3392 -1618,3394,3017,1887, 944,4329,3395,4330,3064,3396,4331,5427,3744, 422, 413,1714, # 3408 -3325, 500,2059,2350,4332,2486,5428,1344,1911, 954,5429,1668,5430,5431,4089,2409, # 3424 -4333,3622,3888,4334,5432,2307,1318,2512,3114, 133,3115,2887,4687, 629, 31,2851, # 3440 -2706,3889,4688, 850, 949,4689,4090,2970,1732,2089,4335,1496,1853,5433,4091, 620, # 3456 
-3245, 981,1242,3745,3397,1619,3746,1643,3326,2140,2457,1971,1719,3510,2169,5434, # 3472 -3246,5435,5436,3398,1829,5437,1277,4690,1565,2048,5438,1636,3623,3116,5439, 869, # 3488 -2852, 655,3890,3891,3117,4092,3018,3892,1310,3624,4691,5440,5441,5442,1733, 558, # 3504 -4692,3747, 335,1549,3065,1756,4336,3748,1946,3511,1830,1291,1192, 470,2735,2108, # 3520 -2806, 913,1054,4093,5443,1027,5444,3066,4094,4693, 982,2672,3399,3173,3512,3247, # 3536 -3248,1947,2807,5445, 571,4694,5446,1831,5447,3625,2591,1523,2429,5448,2090, 984, # 3552 -4695,3749,1960,5449,3750, 852, 923,2808,3513,3751, 969,1519, 999,2049,2325,1705, # 3568 -5450,3118, 615,1662, 151, 597,4095,2410,2326,1049, 275,4696,3752,4337, 568,3753, # 3584 -3626,2487,4338,3754,5451,2430,2275, 409,3249,5452,1566,2888,3514,1002, 769,2853, # 3600 - 194,2091,3174,3755,2226,3327,4339, 628,1505,5453,5454,1763,2180,3019,4096, 521, # 3616 -1161,2592,1788,2206,2411,4697,4097,1625,4340,4341, 412, 42,3119, 464,5455,2642, # 3632 -4698,3400,1760,1571,2889,3515,2537,1219,2207,3893,2643,2141,2373,4699,4700,3328, # 3648 -1651,3401,3627,5456,5457,3628,2488,3516,5458,3756,5459,5460,2276,2092, 460,5461, # 3664 -4701,5462,3020, 962, 588,3629, 289,3250,2644,1116, 52,5463,3067,1797,5464,5465, # 3680 -5466,1467,5467,1598,1143,3757,4342,1985,1734,1067,4702,1280,3402, 465,4703,1572, # 3696 - 510,5468,1928,2245,1813,1644,3630,5469,4704,3758,5470,5471,2673,1573,1534,5472, # 3712 -5473, 536,1808,1761,3517,3894,3175,2645,5474,5475,5476,4705,3518,2929,1912,2809, # 3728 -5477,3329,1122, 377,3251,5478, 360,5479,5480,4343,1529, 551,5481,2060,3759,1769, # 3744 -2431,5482,2930,4344,3330,3120,2327,2109,2031,4706,1404, 136,1468,1479, 672,1171, # 3760 -3252,2308, 271,3176,5483,2772,5484,2050, 678,2736, 865,1948,4707,5485,2014,4098, # 3776 -2971,5486,2737,2227,1397,3068,3760,4708,4709,1735,2931,3403,3631,5487,3895, 509, # 3792 -2854,2458,2890,3896,5488,5489,3177,3178,4710,4345,2538,4711,2309,1166,1010, 552, # 3808 - 681,1888,5490,5491,2972,2973,4099,1287,1596,1862,3179, 358, 453, 736, 175, 478, # 3824 -1117, 905,1167,1097,5492,1854,1530,5493,1706,5494,2181,3519,2292,3761,3520,3632, # 3840 -4346,2093,4347,5495,3404,1193,2489,4348,1458,2193,2208,1863,1889,1421,3331,2932, # 3856 -3069,2182,3521, 595,2123,5496,4100,5497,5498,4349,1707,2646, 223,3762,1359, 751, # 3872 -3121, 183,3522,5499,2810,3021, 419,2374, 633, 704,3897,2394, 241,5500,5501,5502, # 3888 - 838,3022,3763,2277,2773,2459,3898,1939,2051,4101,1309,3122,2246,1181,5503,1136, # 3904 -2209,3899,2375,1446,4350,2310,4712,5504,5505,4351,1055,2615, 484,3764,5506,4102, # 3920 - 625,4352,2278,3405,1499,4353,4103,5507,4104,4354,3253,2279,2280,3523,5508,5509, # 3936 -2774, 808,2616,3765,3406,4105,4355,3123,2539, 526,3407,3900,4356, 955,5510,1620, # 3952 -4357,2647,2432,5511,1429,3766,1669,1832, 994, 928,5512,3633,1260,5513,5514,5515, # 3968 -1949,2293, 741,2933,1626,4358,2738,2460, 867,1184, 362,3408,1392,5516,5517,4106, # 3984 -4359,1770,1736,3254,2934,4713,4714,1929,2707,1459,1158,5518,3070,3409,2891,1292, # 4000 -1930,2513,2855,3767,1986,1187,2072,2015,2617,4360,5519,2574,2514,2170,3768,2490, # 4016 -3332,5520,3769,4715,5521,5522, 666,1003,3023,1022,3634,4361,5523,4716,1814,2257, # 4032 - 574,3901,1603, 295,1535, 705,3902,4362, 283, 858, 417,5524,5525,3255,4717,4718, # 4048 -3071,1220,1890,1046,2281,2461,4107,1393,1599, 689,2575, 388,4363,5526,2491, 802, # 4064 -5527,2811,3903,2061,1405,2258,5528,4719,3904,2110,1052,1345,3256,1585,5529, 809, # 4080 -5530,5531,5532, 575,2739,3524, 956,1552,1469,1144,2328,5533,2329,1560,2462,3635, # 
4096 -3257,4108, 616,2210,4364,3180,2183,2294,5534,1833,5535,3525,4720,5536,1319,3770, # 4112 -3771,1211,3636,1023,3258,1293,2812,5537,5538,5539,3905, 607,2311,3906, 762,2892, # 4128 -1439,4365,1360,4721,1485,3072,5540,4722,1038,4366,1450,2062,2648,4367,1379,4723, # 4144 -2593,5541,5542,4368,1352,1414,2330,2935,1172,5543,5544,3907,3908,4724,1798,1451, # 4160 -5545,5546,5547,5548,2936,4109,4110,2492,2351, 411,4111,4112,3637,3333,3124,4725, # 4176 -1561,2674,1452,4113,1375,5549,5550, 47,2974, 316,5551,1406,1591,2937,3181,5552, # 4192 -1025,2142,3125,3182, 354,2740, 884,2228,4369,2412, 508,3772, 726,3638, 996,2433, # 4208 -3639, 729,5553, 392,2194,1453,4114,4726,3773,5554,5555,2463,3640,2618,1675,2813, # 4224 - 919,2352,2975,2353,1270,4727,4115, 73,5556,5557, 647,5558,3259,2856,2259,1550, # 4240 -1346,3024,5559,1332, 883,3526,5560,5561,5562,5563,3334,2775,5564,1212, 831,1347, # 4256 -4370,4728,2331,3909,1864,3073, 720,3910,4729,4730,3911,5565,4371,5566,5567,4731, # 4272 -5568,5569,1799,4732,3774,2619,4733,3641,1645,2376,4734,5570,2938, 669,2211,2675, # 4288 -2434,5571,2893,5572,5573,1028,3260,5574,4372,2413,5575,2260,1353,5576,5577,4735, # 4304 -3183, 518,5578,4116,5579,4373,1961,5580,2143,4374,5581,5582,3025,2354,2355,3912, # 4320 - 516,1834,1454,4117,2708,4375,4736,2229,2620,1972,1129,3642,5583,2776,5584,2976, # 4336 -1422, 577,1470,3026,1524,3410,5585,5586, 432,4376,3074,3527,5587,2594,1455,2515, # 4352 -2230,1973,1175,5588,1020,2741,4118,3528,4737,5589,2742,5590,1743,1361,3075,3529, # 4368 -2649,4119,4377,4738,2295, 895, 924,4378,2171, 331,2247,3076, 166,1627,3077,1098, # 4384 -5591,1232,2894,2231,3411,4739, 657, 403,1196,2377, 542,3775,3412,1600,4379,3530, # 4400 -5592,4740,2777,3261, 576, 530,1362,4741,4742,2540,2676,3776,4120,5593, 842,3913, # 4416 -5594,2814,2032,1014,4121, 213,2709,3413, 665, 621,4380,5595,3777,2939,2435,5596, # 4432 -2436,3335,3643,3414,4743,4381,2541,4382,4744,3644,1682,4383,3531,1380,5597, 724, # 4448 -2282, 600,1670,5598,1337,1233,4745,3126,2248,5599,1621,4746,5600, 651,4384,5601, # 4464 -1612,4385,2621,5602,2857,5603,2743,2312,3078,5604, 716,2464,3079, 174,1255,2710, # 4480 -4122,3645, 548,1320,1398, 728,4123,1574,5605,1891,1197,3080,4124,5606,3081,3082, # 4496 -3778,3646,3779, 747,5607, 635,4386,4747,5608,5609,5610,4387,5611,5612,4748,5613, # 4512 -3415,4749,2437, 451,5614,3780,2542,2073,4388,2744,4389,4125,5615,1764,4750,5616, # 4528 -4390, 350,4751,2283,2395,2493,5617,4391,4126,2249,1434,4127, 488,4752, 458,4392, # 4544 -4128,3781, 771,1330,2396,3914,2576,3184,2160,2414,1553,2677,3185,4393,5618,2494, # 4560 -2895,2622,1720,2711,4394,3416,4753,5619,2543,4395,5620,3262,4396,2778,5621,2016, # 4576 -2745,5622,1155,1017,3782,3915,5623,3336,2313, 201,1865,4397,1430,5624,4129,5625, # 4592 -5626,5627,5628,5629,4398,1604,5630, 414,1866, 371,2595,4754,4755,3532,2017,3127, # 4608 -4756,1708, 960,4399, 887, 389,2172,1536,1663,1721,5631,2232,4130,2356,2940,1580, # 4624 -5632,5633,1744,4757,2544,4758,4759,5634,4760,5635,2074,5636,4761,3647,3417,2896, # 4640 -4400,5637,4401,2650,3418,2815, 673,2712,2465, 709,3533,4131,3648,4402,5638,1148, # 4656 - 502, 634,5639,5640,1204,4762,3649,1575,4763,2623,3783,5641,3784,3128, 948,3263, # 4672 - 121,1745,3916,1110,5642,4403,3083,2516,3027,4132,3785,1151,1771,3917,1488,4133, # 4688 -1987,5643,2438,3534,5644,5645,2094,5646,4404,3918,1213,1407,2816, 531,2746,2545, # 4704 -3264,1011,1537,4764,2779,4405,3129,1061,5647,3786,3787,1867,2897,5648,2018, 120, # 4720 
-4406,4407,2063,3650,3265,2314,3919,2678,3419,1955,4765,4134,5649,3535,1047,2713, # 4736 -1266,5650,1368,4766,2858, 649,3420,3920,2546,2747,1102,2859,2679,5651,5652,2000, # 4752 -5653,1111,3651,2977,5654,2495,3921,3652,2817,1855,3421,3788,5655,5656,3422,2415, # 4768 -2898,3337,3266,3653,5657,2577,5658,3654,2818,4135,1460, 856,5659,3655,5660,2899, # 4784 -2978,5661,2900,3922,5662,4408, 632,2517, 875,3923,1697,3924,2296,5663,5664,4767, # 4800 -3028,1239, 580,4768,4409,5665, 914, 936,2075,1190,4136,1039,2124,5666,5667,5668, # 4816 -5669,3423,1473,5670,1354,4410,3925,4769,2173,3084,4137, 915,3338,4411,4412,3339, # 4832 -1605,1835,5671,2748, 398,3656,4413,3926,4138, 328,1913,2860,4139,3927,1331,4414, # 4848 -3029, 937,4415,5672,3657,4140,4141,3424,2161,4770,3425, 524, 742, 538,3085,1012, # 4864 -5673,5674,3928,2466,5675, 658,1103, 225,3929,5676,5677,4771,5678,4772,5679,3267, # 4880 -1243,5680,4142, 963,2250,4773,5681,2714,3658,3186,5682,5683,2596,2332,5684,4774, # 4896 -5685,5686,5687,3536, 957,3426,2547,2033,1931,2941,2467, 870,2019,3659,1746,2780, # 4912 -2781,2439,2468,5688,3930,5689,3789,3130,3790,3537,3427,3791,5690,1179,3086,5691, # 4928 -3187,2378,4416,3792,2548,3188,3131,2749,4143,5692,3428,1556,2549,2297, 977,2901, # 4944 -2034,4144,1205,3429,5693,1765,3430,3189,2125,1271, 714,1689,4775,3538,5694,2333, # 4960 -3931, 533,4417,3660,2184, 617,5695,2469,3340,3539,2315,5696,5697,3190,5698,5699, # 4976 -3932,1988, 618, 427,2651,3540,3431,5700,5701,1244,1690,5702,2819,4418,4776,5703, # 4992 -3541,4777,5704,2284,1576, 473,3661,4419,3432, 972,5705,3662,5706,3087,5707,5708, # 5008 -4778,4779,5709,3793,4145,4146,5710, 153,4780, 356,5711,1892,2902,4420,2144, 408, # 5024 - 803,2357,5712,3933,5713,4421,1646,2578,2518,4781,4782,3934,5714,3935,4422,5715, # 5040 -2416,3433, 752,5716,5717,1962,3341,2979,5718, 746,3030,2470,4783,4423,3794, 698, # 5056 -4784,1893,4424,3663,2550,4785,3664,3936,5719,3191,3434,5720,1824,1302,4147,2715, # 5072 -3937,1974,4425,5721,4426,3192, 823,1303,1288,1236,2861,3542,4148,3435, 774,3938, # 5088 -5722,1581,4786,1304,2862,3939,4787,5723,2440,2162,1083,3268,4427,4149,4428, 344, # 5104 -1173, 288,2316, 454,1683,5724,5725,1461,4788,4150,2597,5726,5727,4789, 985, 894, # 5120 -5728,3436,3193,5729,1914,2942,3795,1989,5730,2111,1975,5731,4151,5732,2579,1194, # 5136 - 425,5733,4790,3194,1245,3796,4429,5734,5735,2863,5736, 636,4791,1856,3940, 760, # 5152 -1800,5737,4430,2212,1508,4792,4152,1894,1684,2298,5738,5739,4793,4431,4432,2213, # 5168 - 479,5740,5741, 832,5742,4153,2496,5743,2980,2497,3797, 990,3132, 627,1815,2652, # 5184 -4433,1582,4434,2126,2112,3543,4794,5744, 799,4435,3195,5745,4795,2113,1737,3031, # 5200 -1018, 543, 754,4436,3342,1676,4796,4797,4154,4798,1489,5746,3544,5747,2624,2903, # 5216 -4155,5748,5749,2981,5750,5751,5752,5753,3196,4799,4800,2185,1722,5754,3269,3270, # 5232 -1843,3665,1715, 481, 365,1976,1857,5755,5756,1963,2498,4801,5757,2127,3666,3271, # 5248 - 433,1895,2064,2076,5758, 602,2750,5759,5760,5761,5762,5763,3032,1628,3437,5764, # 5264 -3197,4802,4156,2904,4803,2519,5765,2551,2782,5766,5767,5768,3343,4804,2905,5769, # 5280 -4805,5770,2864,4806,4807,1221,2982,4157,2520,5771,5772,5773,1868,1990,5774,5775, # 5296 -5776,1896,5777,5778,4808,1897,4158, 318,5779,2095,4159,4437,5780,5781, 485,5782, # 5312 - 938,3941, 553,2680, 116,5783,3942,3667,5784,3545,2681,2783,3438,3344,2820,5785, # 5328 -3668,2943,4160,1747,2944,2983,5786,5787, 207,5788,4809,5789,4810,2521,5790,3033, # 5344 - 890,3669,3943,5791,1878,3798,3439,5792,2186,2358,3440,1652,5793,5794,5795, 941, # 
5360 -2299, 208,3546,4161,2020, 330,4438,3944,2906,2499,3799,4439,4811,5796,5797,5798, # 5376 -) - diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/rich/prompt.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/rich/prompt.py deleted file mode 100644 index b2cea2b529a3cdf265bb6d2259ead42227958072..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/rich/prompt.py +++ /dev/null @@ -1,376 +0,0 @@ -from typing import Any, Generic, List, Optional, TextIO, TypeVar, Union, overload - -from . import get_console -from .console import Console -from .text import Text, TextType - -PromptType = TypeVar("PromptType") -DefaultType = TypeVar("DefaultType") - - -class PromptError(Exception): - """Exception base class for prompt related errors.""" - - -class InvalidResponse(PromptError): - """Exception to indicate a response was invalid. Raise this within process_response() to indicate an error - and provide an error message. - - Args: - message (Union[str, Text]): Error message. - """ - - def __init__(self, message: TextType) -> None: - self.message = message - - def __rich__(self) -> TextType: - return self.message - - -class PromptBase(Generic[PromptType]): - """Ask the user for input until a valid response is received. This is the base class, see one of - the concrete classes for examples. - - Args: - prompt (TextType, optional): Prompt text. Defaults to "". - console (Console, optional): A Console instance or None to use global console. Defaults to None. - password (bool, optional): Enable password input. Defaults to False. - choices (List[str], optional): A list of valid choices. Defaults to None. - show_default (bool, optional): Show default in prompt. Defaults to True. - show_choices (bool, optional): Show choices in prompt. Defaults to True. - """ - - response_type: type = str - - validate_error_message = "[prompt.invalid]Please enter a valid value" - illegal_choice_message = ( - "[prompt.invalid.choice]Please select one of the available options" - ) - prompt_suffix = ": " - - choices: Optional[List[str]] = None - - def __init__( - self, - prompt: TextType = "", - *, - console: Optional[Console] = None, - password: bool = False, - choices: Optional[List[str]] = None, - show_default: bool = True, - show_choices: bool = True, - ) -> None: - self.console = console or get_console() - self.prompt = ( - Text.from_markup(prompt, style="prompt") - if isinstance(prompt, str) - else prompt - ) - self.password = password - if choices is not None: - self.choices = choices - self.show_default = show_default - self.show_choices = show_choices - - @classmethod - @overload - def ask( - cls, - prompt: TextType = "", - *, - console: Optional[Console] = None, - password: bool = False, - choices: Optional[List[str]] = None, - show_default: bool = True, - show_choices: bool = True, - default: DefaultType, - stream: Optional[TextIO] = None, - ) -> Union[DefaultType, PromptType]: - ... - - @classmethod - @overload - def ask( - cls, - prompt: TextType = "", - *, - console: Optional[Console] = None, - password: bool = False, - choices: Optional[List[str]] = None, - show_default: bool = True, - show_choices: bool = True, - stream: Optional[TextIO] = None, - ) -> PromptType: - ... 
- - @classmethod - def ask( - cls, - prompt: TextType = "", - *, - console: Optional[Console] = None, - password: bool = False, - choices: Optional[List[str]] = None, - show_default: bool = True, - show_choices: bool = True, - default: Any = ..., - stream: Optional[TextIO] = None, - ) -> Any: - """Shortcut to construct and run a prompt loop and return the result. - - Example: - >>> filename = Prompt.ask("Enter a filename") - - Args: - prompt (TextType, optional): Prompt text. Defaults to "". - console (Console, optional): A Console instance or None to use global console. Defaults to None. - password (bool, optional): Enable password input. Defaults to False. - choices (List[str], optional): A list of valid choices. Defaults to None. - show_default (bool, optional): Show default in prompt. Defaults to True. - show_choices (bool, optional): Show choices in prompt. Defaults to True. - stream (TextIO, optional): Optional text file open for reading to get input. Defaults to None. - """ - _prompt = cls( - prompt, - console=console, - password=password, - choices=choices, - show_default=show_default, - show_choices=show_choices, - ) - return _prompt(default=default, stream=stream) - - def render_default(self, default: DefaultType) -> Text: - """Turn the supplied default in to a Text instance. - - Args: - default (DefaultType): Default value. - - Returns: - Text: Text containing rendering of default value. - """ - return Text(f"({default})", "prompt.default") - - def make_prompt(self, default: DefaultType) -> Text: - """Make prompt text. - - Args: - default (DefaultType): Default value. - - Returns: - Text: Text to display in prompt. - """ - prompt = self.prompt.copy() - prompt.end = "" - - if self.show_choices and self.choices: - _choices = "/".join(self.choices) - choices = f"[{_choices}]" - prompt.append(" ") - prompt.append(choices, "prompt.choices") - - if ( - default != ... - and self.show_default - and isinstance(default, (str, self.response_type)) - ): - prompt.append(" ") - _default = self.render_default(default) - prompt.append(_default) - - prompt.append(self.prompt_suffix) - - return prompt - - @classmethod - def get_input( - cls, - console: Console, - prompt: TextType, - password: bool, - stream: Optional[TextIO] = None, - ) -> str: - """Get input from user. - - Args: - console (Console): Console instance. - prompt (TextType): Prompt text. - password (bool): Enable password entry. - - Returns: - str: String from user. - """ - return console.input(prompt, password=password, stream=stream) - - def check_choice(self, value: str) -> bool: - """Check value is in the list of valid choices. - - Args: - value (str): Value entered by user. - - Returns: - bool: True if choice was valid, otherwise False. - """ - assert self.choices is not None - return value.strip() in self.choices - - def process_response(self, value: str) -> PromptType: - """Process response from user, convert to prompt type. - - Args: - value (str): String typed by user. - - Raises: - InvalidResponse: If ``value`` is invalid. - - Returns: - PromptType: The value to be returned from ask method. - """ - value = value.strip() - try: - return_value = self.response_type(value) - except ValueError: - raise InvalidResponse(self.validate_error_message) - - if self.choices is not None and not self.check_choice(value): - raise InvalidResponse(self.illegal_choice_message) - - return return_value # type: ignore - - def on_validate_error(self, value: str, error: InvalidResponse) -> None: - """Called to handle validation error. 
- - Args: - value (str): String entered by user. - error (InvalidResponse): Exception instance the initiated the error. - """ - self.console.print(error) - - def pre_prompt(self) -> None: - """Hook to display something before the prompt.""" - - @overload - def __call__(self, *, stream: Optional[TextIO] = None) -> PromptType: - ... - - @overload - def __call__( - self, *, default: DefaultType, stream: Optional[TextIO] = None - ) -> Union[PromptType, DefaultType]: - ... - - def __call__(self, *, default: Any = ..., stream: Optional[TextIO] = None) -> Any: - """Run the prompt loop. - - Args: - default (Any, optional): Optional default value. - - Returns: - PromptType: Processed value. - """ - while True: - self.pre_prompt() - prompt = self.make_prompt(default) - value = self.get_input(self.console, prompt, self.password, stream=stream) - if value == "" and default != ...: - return default - try: - return_value = self.process_response(value) - except InvalidResponse as error: - self.on_validate_error(value, error) - continue - else: - return return_value - - -class Prompt(PromptBase[str]): - """A prompt that returns a str. - - Example: - >>> name = Prompt.ask("Enter your name") - - - """ - - response_type = str - - -class IntPrompt(PromptBase[int]): - """A prompt that returns an integer. - - Example: - >>> burrito_count = IntPrompt.ask("How many burritos do you want to order") - - """ - - response_type = int - validate_error_message = "[prompt.invalid]Please enter a valid integer number" - - -class FloatPrompt(PromptBase[int]): - """A prompt that returns a float. - - Example: - >>> temperature = FloatPrompt.ask("Enter desired temperature") - - """ - - response_type = float - validate_error_message = "[prompt.invalid]Please enter a number" - - -class Confirm(PromptBase[bool]): - """A yes / no confirmation prompt. 
- - Example: - >>> if Confirm.ask("Continue"): - run_job() - - """ - - response_type = bool - validate_error_message = "[prompt.invalid]Please enter Y or N" - choices: List[str] = ["y", "n"] - - def render_default(self, default: DefaultType) -> Text: - """Render the default as (y) or (n) rather than True/False.""" - yes, no = self.choices - return Text(f"({yes})" if default else f"({no})", style="prompt.default") - - def process_response(self, value: str) -> bool: - """Convert choices to a bool.""" - value = value.strip().lower() - if value not in self.choices: - raise InvalidResponse(self.validate_error_message) - return value == self.choices[0] - - -if __name__ == "__main__": # pragma: no cover - - from pip._vendor.rich import print - - if Confirm.ask("Run [i]prompt[/i] tests?", default=True): - while True: - result = IntPrompt.ask( - ":rocket: Enter a number between [b]1[/b] and [b]10[/b]", default=5 - ) - if result >= 1 and result <= 10: - break - print(":pile_of_poo: [prompt.invalid]Number must be between 1 and 10") - print(f"number={result}") - - while True: - password = Prompt.ask( - "Please enter a password [cyan](must be at least 5 characters)", - password=True, - ) - if len(password) >= 5: - break - print("[prompt.invalid]password too short") - print(f"password={password!r}") - - fruit = Prompt.ask("Enter a fruit", choices=["apple", "orange", "pear"]) - print(f"fruit={fruit!r}") - - else: - print("[b]OK :loudly_crying_face:") diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pkg_resources/_vendor/packaging/version.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pkg_resources/_vendor/packaging/version.py deleted file mode 100644 index 00371e86a87edfc5f8d1d1352360bfae0cce8e65..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pkg_resources/_vendor/packaging/version.py +++ /dev/null @@ -1,535 +0,0 @@ -# This file is dual licensed under the terms of the Apache License, Version -# 2.0, and the BSD License. See the LICENSE file in the root of this repository -# for complete details. 
-from __future__ import absolute_import, division, print_function
-
-import collections
-import itertools
-import re
-
-from ._structures import Infinity, NegativeInfinity
-from ._typing import TYPE_CHECKING
-
-if TYPE_CHECKING:  # pragma: no cover
-    from typing import Callable, Iterator, List, Optional, SupportsInt, Tuple, Union
-
-    from ._structures import InfinityType, NegativeInfinityType
-
-    InfiniteTypes = Union[InfinityType, NegativeInfinityType]
-    PrePostDevType = Union[InfiniteTypes, Tuple[str, int]]
-    SubLocalType = Union[InfiniteTypes, int, str]
-    LocalType = Union[
-        NegativeInfinityType,
-        Tuple[
-            Union[
-                SubLocalType,
-                Tuple[SubLocalType, str],
-                Tuple[NegativeInfinityType, SubLocalType],
-            ],
-            ...,
-        ],
-    ]
-    CmpKey = Tuple[
-        int, Tuple[int, ...], PrePostDevType, PrePostDevType, PrePostDevType, LocalType
-    ]
-    LegacyCmpKey = Tuple[int, Tuple[str, ...]]
-    VersionComparisonMethod = Callable[
-        [Union[CmpKey, LegacyCmpKey], Union[CmpKey, LegacyCmpKey]], bool
-    ]
-
-__all__ = ["parse", "Version", "LegacyVersion", "InvalidVersion", "VERSION_PATTERN"]
-
-
-_Version = collections.namedtuple(
-    "_Version", ["epoch", "release", "dev", "pre", "post", "local"]
-)
-
-
-def parse(version):
-    # type: (str) -> Union[LegacyVersion, Version]
-    """
-    Parse the given version string and return either a :class:`Version` object
-    or a :class:`LegacyVersion` object depending on if the given version is
-    a valid PEP 440 version or a legacy version.
-    """
-    try:
-        return Version(version)
-    except InvalidVersion:
-        return LegacyVersion(version)
-
-
-class InvalidVersion(ValueError):
-    """
-    An invalid version was found, users should refer to PEP 440.
-    """
-
-
-class _BaseVersion(object):
-    _key = None  # type: Union[CmpKey, LegacyCmpKey]
-
-    def __hash__(self):
-        # type: () -> int
-        return hash(self._key)
-
-    def __lt__(self, other):
-        # type: (_BaseVersion) -> bool
-        return self._compare(other, lambda s, o: s < o)
-
-    def __le__(self, other):
-        # type: (_BaseVersion) -> bool
-        return self._compare(other, lambda s, o: s <= o)
-
-    def __eq__(self, other):
-        # type: (object) -> bool
-        return self._compare(other, lambda s, o: s == o)
-
-    def __ge__(self, other):
-        # type: (_BaseVersion) -> bool
-        return self._compare(other, lambda s, o: s >= o)
-
-    def __gt__(self, other):
-        # type: (_BaseVersion) -> bool
-        return self._compare(other, lambda s, o: s > o)
-
-    def __ne__(self, other):
-        # type: (object) -> bool
-        return self._compare(other, lambda s, o: s != o)
-
-    def _compare(self, other, method):
-        # type: (object, VersionComparisonMethod) -> Union[bool, NotImplemented]
-        if not isinstance(other, _BaseVersion):
-            return NotImplemented
-
-        return method(self._key, other._key)
-
-
-class LegacyVersion(_BaseVersion):
-    def __init__(self, version):
-        # type: (str) -> None
-        self._version = str(version)
-        self._key = _legacy_cmpkey(self._version)
-
-    def __str__(self):
-        # type: () -> str
-        return self._version
-
-    def __repr__(self):
-        # type: () -> str
-        return "<LegacyVersion({0})>".format(repr(str(self)))
-
-    @property
-    def public(self):
-        # type: () -> str
-        return self._version
-
-    @property
-    def base_version(self):
-        # type: () -> str
-        return self._version
-
-    @property
-    def epoch(self):
-        # type: () -> int
-        return -1
-
-    @property
-    def release(self):
-        # type: () -> None
-        return None
-
-    @property
-    def pre(self):
-        # type: () -> None
-        return None
-
-    @property
-    def post(self):
-        # type: () -> None
-        return None
-
-    @property
-    def dev(self):
-        # type: () -> None
-        return None
-
-    @property
-    def local(self):
-        # type: () -> None
-        return None
-
-    @property
-    def is_prerelease(self):
-        # type: () -> bool
-        return False
-
-    @property
-    def is_postrelease(self):
-        # type: () -> bool
-        return False
-
-    @property
-    def is_devrelease(self):
-        # type: () -> bool
-        return False
-
-
-_legacy_version_component_re = re.compile(r"(\d+ | [a-z]+ | \.| -)", re.VERBOSE)
-
-_legacy_version_replacement_map = {
-    "pre": "c",
-    "preview": "c",
-    "-": "final-",
-    "rc": "c",
-    "dev": "@",
-}
-
-
-def _parse_version_parts(s):
-    # type: (str) -> Iterator[str]
-    for part in _legacy_version_component_re.split(s):
-        part = _legacy_version_replacement_map.get(part, part)
-
-        if not part or part == ".":
-            continue
-
-        if part[:1] in "0123456789":
-            # pad for numeric comparison
-            yield part.zfill(8)
-        else:
-            yield "*" + part
-
-    # ensure that alpha/beta/candidate are before final
-    yield "*final"
-
-
-def _legacy_cmpkey(version):
-    # type: (str) -> LegacyCmpKey
-
-    # We hardcode an epoch of -1 here. A PEP 440 version can only have a epoch
-    # greater than or equal to 0. This will effectively put the LegacyVersion,
-    # which uses the defacto standard originally implemented by setuptools,
-    # as before all PEP 440 versions.
-    epoch = -1
-
-    # This scheme is taken from pkg_resources.parse_version setuptools prior to
-    # it's adoption of the packaging library.
-    parts = []  # type: List[str]
-    for part in _parse_version_parts(version.lower()):
-        if part.startswith("*"):
-            # remove "-" before a prerelease tag
-            if part < "*final":
-                while parts and parts[-1] == "*final-":
-                    parts.pop()
-
-            # remove trailing zeros from each series of numeric parts
-            while parts and parts[-1] == "00000000":
-                parts.pop()
-
-        parts.append(part)
-
-    return epoch, tuple(parts)
-
-
-# Deliberately not anchored to the start and end of the string, to make it
-# easier for 3rd party code to reuse
-VERSION_PATTERN = r"""
-    v?
-    (?:
-        (?:(?P<epoch>[0-9]+)!)?                           # epoch
-        (?P<release>[0-9]+(?:\.[0-9]+)*)                  # release segment
-        (?P<pre>                                          # pre-release
-            [-_\.]?
-            (?P<pre_l>(a|b|c|rc|alpha|beta|pre|preview))
-            [-_\.]?
-            (?P<pre_n>[0-9]+)?
-        )?
-        (?P<post>                                         # post release
-            (?:-(?P<post_n1>[0-9]+))
-            |
-            (?:
-                [-_\.]?
-                (?P<post_l>post|rev|r)
-                [-_\.]?
-                (?P<post_n2>[0-9]+)?
-            )
-        )?
-        (?P<dev>                                          # dev release
-            [-_\.]?
-            (?P<dev_l>dev)
-            [-_\.]?
-            (?P<dev_n>[0-9]+)?
-        )?
-    )
-    (?:\+(?P<local>[a-z0-9]+(?:[-_\.][a-z0-9]+)*))?       # local version
    -"""
    -
    -
    -class Version(_BaseVersion):
    -
    -    _regex = re.compile(r"^\s*" + VERSION_PATTERN + r"\s*$", re.VERBOSE | re.IGNORECASE)
    -
    -    def __init__(self, version):
    -        # type: (str) -> None
    -
    -        # Validate the version and parse it into pieces
    -        match = self._regex.search(version)
    -        if not match:
    -            raise InvalidVersion("Invalid version: '{0}'".format(version))
    -
    -        # Store the parsed out pieces of the version
    -        self._version = _Version(
    -            epoch=int(match.group("epoch")) if match.group("epoch") else 0,
    -            release=tuple(int(i) for i in match.group("release").split(".")),
    -            pre=_parse_letter_version(match.group("pre_l"), match.group("pre_n")),
    -            post=_parse_letter_version(
    -                match.group("post_l"), match.group("post_n1") or match.group("post_n2")
    -            ),
    -            dev=_parse_letter_version(match.group("dev_l"), match.group("dev_n")),
    -            local=_parse_local_version(match.group("local")),
    -        )
    -
    -        # Generate a key which will be used for sorting
    -        self._key = _cmpkey(
    -            self._version.epoch,
    -            self._version.release,
    -            self._version.pre,
    -            self._version.post,
    -            self._version.dev,
    -            self._version.local,
    -        )
    -
    -    def __repr__(self):
    -        # type: () -> str
    -        return "".format(repr(str(self)))
    -
    -    def __str__(self):
    -        # type: () -> str
    -        parts = []
    -
    -        # Epoch
    -        if self.epoch != 0:
    -            parts.append("{0}!".format(self.epoch))
    -
    -        # Release segment
    -        parts.append(".".join(str(x) for x in self.release))
    -
    -        # Pre-release
    -        if self.pre is not None:
    -            parts.append("".join(str(x) for x in self.pre))
    -
    -        # Post-release
    -        if self.post is not None:
    -            parts.append(".post{0}".format(self.post))
    -
    -        # Development release
    -        if self.dev is not None:
    -            parts.append(".dev{0}".format(self.dev))
    -
    -        # Local version segment
    -        if self.local is not None:
    -            parts.append("+{0}".format(self.local))
    -
    -        return "".join(parts)
    -
    -    @property
    -    def epoch(self):
    -        # type: () -> int
    -        _epoch = self._version.epoch  # type: int
    -        return _epoch
    -
    -    @property
    -    def release(self):
    -        # type: () -> Tuple[int, ...]
    -        _release = self._version.release  # type: Tuple[int, ...]
    -        return _release
    -
    -    @property
    -    def pre(self):
    -        # type: () -> Optional[Tuple[str, int]]
    -        _pre = self._version.pre  # type: Optional[Tuple[str, int]]
    -        return _pre
    -
    -    @property
    -    def post(self):
    -        # type: () -> Optional[Tuple[str, int]]
    -        return self._version.post[1] if self._version.post else None
    -
    -    @property
    -    def dev(self):
    -        # type: () -> Optional[Tuple[str, int]]
    -        return self._version.dev[1] if self._version.dev else None
    -
    -    @property
    -    def local(self):
    -        # type: () -> Optional[str]
    -        if self._version.local:
    -            return ".".join(str(x) for x in self._version.local)
    -        else:
    -            return None
    -
    -    @property
    -    def public(self):
    -        # type: () -> str
    -        return str(self).split("+", 1)[0]
    -
    -    @property
    -    def base_version(self):
    -        # type: () -> str
    -        parts = []
    -
    -        # Epoch
    -        if self.epoch != 0:
    -            parts.append("{0}!".format(self.epoch))
    -
    -        # Release segment
    -        parts.append(".".join(str(x) for x in self.release))
    -
    -        return "".join(parts)
    -
    -    @property
    -    def is_prerelease(self):
    -        # type: () -> bool
    -        return self.dev is not None or self.pre is not None
    -
    -    @property
    -    def is_postrelease(self):
    -        # type: () -> bool
    -        return self.post is not None
    -
    -    @property
    -    def is_devrelease(self):
    -        # type: () -> bool
    -        return self.dev is not None
    -
    -    @property
    -    def major(self):
    -        # type: () -> int
    -        return self.release[0] if len(self.release) >= 1 else 0
    -
    -    @property
    -    def minor(self):
    -        # type: () -> int
    -        return self.release[1] if len(self.release) >= 2 else 0
    -
    -    @property
    -    def micro(self):
    -        # type: () -> int
    -        return self.release[2] if len(self.release) >= 3 else 0
    -
    -
    -def _parse_letter_version(
    -    letter,  # type: str
    -    number,  # type: Union[str, bytes, SupportsInt]
    -):
    -    # type: (...) -> Optional[Tuple[str, int]]
    -
    -    if letter:
    -        # We consider there to be an implicit 0 in a pre-release if there is
    -        # not a numeral associated with it.
    -        if number is None:
    -            number = 0
    -
    -        # We normalize any letters to their lower case form
    -        letter = letter.lower()
    -
    -        # We consider some words to be alternate spellings of other words and
    -        # in those cases we want to normalize the spellings to our preferred
    -        # spelling.
    -        if letter == "alpha":
    -            letter = "a"
    -        elif letter == "beta":
    -            letter = "b"
    -        elif letter in ["c", "pre", "preview"]:
    -            letter = "rc"
    -        elif letter in ["rev", "r"]:
    -            letter = "post"
    -
    -        return letter, int(number)
    -    if not letter and number:
    -        # We assume if we are given a number, but we are not given a letter
    -        # then this is using the implicit post release syntax (e.g. 1.0-1)
    -        letter = "post"
    -
    -        return letter, int(number)
    -
    -    return None
    -
    -
    -_local_version_separators = re.compile(r"[\._-]")
    -
    -
    -def _parse_local_version(local):
    -    # type: (str) -> Optional[LocalType]
    -    """
    -    Takes a string like abc.1.twelve and turns it into ("abc", 1, "twelve").
    -    """
    -    if local is not None:
    -        return tuple(
    -            part.lower() if not part.isdigit() else int(part)
    -            for part in _local_version_separators.split(local)
    -        )
    -    return None
    -
    -
    -def _cmpkey(
    -    epoch,  # type: int
    -    release,  # type: Tuple[int, ...]
    -    pre,  # type: Optional[Tuple[str, int]]
    -    post,  # type: Optional[Tuple[str, int]]
    -    dev,  # type: Optional[Tuple[str, int]]
    -    local,  # type: Optional[Tuple[SubLocalType]]
    -):
    -    # type: (...) -> CmpKey
    -
    -    # When we compare a release version, we want to compare it with all of the
    -    # trailing zeros removed. So we'll use a reverse the list, drop all the now
    -    # leading zeros until we come to something non zero, then take the rest
    -    # re-reverse it back into the correct order and make it a tuple and use
    -    # that for our sorting key.
    -    _release = tuple(
    -        reversed(list(itertools.dropwhile(lambda x: x == 0, reversed(release))))
    -    )
    -
    -    # We need to "trick" the sorting algorithm to put 1.0.dev0 before 1.0a0.
    -    # We'll do this by abusing the pre segment, but we _only_ want to do this
    -    # if there is not a pre or a post segment. If we have one of those then
    -    # the normal sorting rules will handle this case correctly.
    -    if pre is None and post is None and dev is not None:
    -        _pre = NegativeInfinity  # type: PrePostDevType
    -    # Versions without a pre-release (except as noted above) should sort after
    -    # those with one.
    -    elif pre is None:
    -        _pre = Infinity
    -    else:
    -        _pre = pre
    -
    -    # Versions without a post segment should sort before those with one.
    -    if post is None:
    -        _post = NegativeInfinity  # type: PrePostDevType
    -
    -    else:
    -        _post = post
    -
    -    # Versions without a development segment should sort after those with one.
    -    if dev is None:
    -        _dev = Infinity  # type: PrePostDevType
    -
    -    else:
    -        _dev = dev
    -
    -    if local is None:
    -        # Versions without a local segment should sort before those with one.
    -        _local = NegativeInfinity  # type: LocalType
    -    else:
    -        # Versions with a local segment need that segment parsed to implement
    -        # the sorting rules in PEP440.
    -        # - Alpha numeric segments sort before numeric segments
    -        # - Alpha numeric segments sort lexicographically
    -        # - Numeric segments sort numerically
    -        # - Shorter versions sort before longer versions when the prefixes
    -        #   match exactly
    -        _local = tuple(
    -            (i, "") if isinstance(i, int) else (NegativeInfinity, i) for i in local
    -        )
    -
    -    return epoch, _release, _pre, _post, _dev, _local
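The module removed above is pip's vendored copy of `packaging.version`: `parse()` returns a PEP 440 `Version` when the string matches `VERSION_PATTERN` and falls back to `LegacyVersion` otherwise, while `_cmpkey()` builds the sort key that puts dev releases before pre-releases, pre-releases before the final release, and post releases after it. A minimal sketch of that ordering, assuming the standalone `packaging` distribution is installed (recent releases dropped `LegacyVersion`, so only the PEP 440 path is shown):

    # Minimal sketch, assuming `pip install packaging`. Sorting with Version as
    # the key reproduces the ordering encoded by _cmpkey() above.
    from packaging.version import InvalidVersion, Version

    candidates = ["1.0.post1", "1.0", "1.0rc1", "1.0a1", "1.0.dev0", "1.1"]
    print(sorted(candidates, key=Version))
    # ['1.0.dev0', '1.0a1', '1.0rc1', '1.0', '1.0.post1', '1.1']

    try:
        Version("not a version")
    except InvalidVersion:
        # The vendored copy above would fall back to LegacyVersion here instead.
        print("not a PEP 440 version")
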
    diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/formatters/groff.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/formatters/groff.py
    deleted file mode 100644
    index 687fd5496717b31588cf766ae5d77f60e8ecd8d4..0000000000000000000000000000000000000000
    --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/formatters/groff.py
    +++ /dev/null
    @@ -1,170 +0,0 @@
    -"""
    -    pygments.formatters.groff
    -    ~~~~~~~~~~~~~~~~~~~~~~~~~
    -
    -    Formatter for groff output.
    -
    -    :copyright: Copyright 2006-2023 by the Pygments team, see AUTHORS.
    -    :license: BSD, see LICENSE for details.
    -"""
    -
    -import math
    -from pygments.formatter import Formatter
    -from pygments.util import get_bool_opt, get_int_opt
    -
    -__all__ = ['GroffFormatter']
    -
    -
    -class GroffFormatter(Formatter):
    -    """
    -    Format tokens with groff escapes to change their color and font style.
    -
    -    .. versionadded:: 2.11
    -
    -    Additional options accepted:
    -
    -    `style`
    -        The style to use, can be a string or a Style subclass (default:
    -        ``'default'``).
    -
    -    `monospaced`
    -        If set to true, monospace font will be used (default: ``true``).
    -
    -    `linenos`
    -        If set to true, print the line numbers (default: ``false``).
    -
    -    `wrap`
    -        Wrap lines to the specified number of characters. Disabled if set to 0
    -        (default: ``0``).
    -    """
    -
    -    name = 'groff'
    -    aliases = ['groff','troff','roff']
    -    filenames = []
    -
    -    def __init__(self, **options):
    -        Formatter.__init__(self, **options)
    -
    -        self.monospaced = get_bool_opt(options, 'monospaced', True)
    -        self.linenos = get_bool_opt(options, 'linenos', False)
    -        self._lineno = 0
    -        self.wrap = get_int_opt(options, 'wrap', 0)
    -        self._linelen = 0
    -
    -        self.styles = {}
    -        self._make_styles()
    -
    -
    -    def _make_styles(self):
    -        regular = '\\f[CR]' if self.monospaced else '\\f[R]'
    -        bold = '\\f[CB]' if self.monospaced else '\\f[B]'
    -        italic = '\\f[CI]' if self.monospaced else '\\f[I]'
    -
    -        for ttype, ndef in self.style:
    -            start = end = ''
    -            if ndef['color']:
    -                start += '\\m[%s]' % ndef['color']
    -                end = '\\m[]' + end
    -            if ndef['bold']:
    -                start += bold
    -                end = regular + end
    -            if ndef['italic']:
    -                start += italic
    -                end = regular + end
    -            if ndef['bgcolor']:
    -                start += '\\M[%s]' % ndef['bgcolor']
    -                end = '\\M[]' + end
    -
    -            self.styles[ttype] = start, end
    -
    -
    -    def _define_colors(self, outfile):
    -        colors = set()
    -        for _, ndef in self.style:
    -            if ndef['color'] is not None:
    -                colors.add(ndef['color'])
    -
    -        for color in sorted(colors):
    -            outfile.write('.defcolor ' + color + ' rgb #' + color + '\n')
    -
    -
    -    def _write_lineno(self, outfile):
    -        self._lineno += 1
    -        outfile.write("%s% 4d " % (self._lineno != 1 and '\n' or '', self._lineno))
    -
    -
    -    def _wrap_line(self, line):
    -        length = len(line.rstrip('\n'))
    -        space = '     ' if self.linenos else ''
    -        newline = ''
    -
    -        if length > self.wrap:
    -            for i in range(0, math.floor(length / self.wrap)):
    -                chunk = line[i*self.wrap:i*self.wrap+self.wrap]
    -                newline += (chunk + '\n' + space)
    -            remainder = length % self.wrap
    -            if remainder > 0:
    -                newline += line[-remainder-1:]
    -                self._linelen = remainder
    -        elif self._linelen + length > self.wrap:
    -            newline = ('\n' + space) + line
    -            self._linelen = length
    -        else:
    -            newline = line
    -            self._linelen += length
    -
    -        return newline
    -
    -
    -    def _escape_chars(self, text):
    -        text = text.replace('\\', '\\[u005C]'). \
    -                    replace('.', '\\[char46]'). \
    -                    replace('\'', '\\[u0027]'). \
    -                    replace('`', '\\[u0060]'). \
    -                    replace('~', '\\[u007E]')
    -        copy = text
    -
    -        for char in copy:
    -            if len(char) != len(char.encode()):
    -                uni = char.encode('unicode_escape') \
    -                    .decode()[1:] \
    -                    .replace('x', 'u00') \
    -                    .upper()
    -                text = text.replace(char, '\\[u' + uni[1:] + ']')
    -
    -        return text
    -
    -
    -    def format_unencoded(self, tokensource, outfile):
    -        self._define_colors(outfile)
    -
    -        outfile.write('.nf\n\\f[CR]\n')
    -
    -        if self.linenos:
    -            self._write_lineno(outfile)
    -
    -        for ttype, value in tokensource:
    -            while ttype not in self.styles:
    -                ttype = ttype.parent
    -            start, end = self.styles[ttype]
    -
    -            for line in value.splitlines(True):
    -                if self.wrap > 0:
    -                    line = self._wrap_line(line)
    -
    -                if start and end:
    -                    text = self._escape_chars(line.rstrip('\n'))
    -                    if text != '':
    -                        outfile.write(''.join((start, text, end)))
    -                else:
    -                    outfile.write(self._escape_chars(line.rstrip('\n')))
    -
    -                if line.endswith('\n'):
    -                    if self.linenos:
    -                        self._write_lineno(outfile)
    -                        self._linelen = 0
    -                    else:
    -                        outfile.write('\n')
    -                        self._linelen = 0
    -
    -        outfile.write('\n.fi')
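The deleted `GroffFormatter` turns Pygments tokens into groff/troff escapes (`\f` for fonts, `\m`/`\M` for colours) inside an `.nf`/`.fi` block, with the `style`, `monospaced`, `linenos` and `wrap` options described in its docstring. A hedged usage sketch, assuming Pygments 2.11 or newer is installed:

    # Minimal sketch, assuming Pygments >= 2.11 (the release that added
    # GroffFormatter). The options mirror the docstring above.
    from pygments import highlight
    from pygments.formatters import GroffFormatter
    from pygments.lexers import PythonLexer

    code = "def greet(name):\n    return 'hello ' + name\n"
    formatter = GroffFormatter(linenos=True, wrap=72)  # monospaced defaults to True
    print(highlight(code, PythonLexer(), formatter))

The result is troff markup rather than ANSI text, so it is normally fed to something like `groff -Tutf8` to render it in a terminal.
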
    diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/toolz/__init__.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/toolz/__init__.py
    deleted file mode 100644
    index ba49a662fc77c6b8eb3b9ef18ca0c8375a6dd31c..0000000000000000000000000000000000000000
    --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/toolz/__init__.py
    +++ /dev/null
    @@ -1,26 +0,0 @@
    -from .itertoolz import *
    -
    -from .functoolz import *
    -
    -from .dicttoolz import *
    -
    -from .recipes import *
    -
    -from functools import partial, reduce
    -
    -sorted = sorted
    -
    -map = map
    -
    -filter = filter
    -
    -# Aliases
    -comp = compose
    -
    -from . import curried, sandbox
    -
    -functoolz._sigs.create_signature_registry()
    -
    -from ._version import get_versions
    -__version__ = get_versions()['version']
    -del get_versions
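The deleted `toolz/__init__.py` simply re-exports `itertoolz`, `functoolz` and `dicttoolz`, aliases `comp = compose`, and pulls in the `curried` namespace. A small sketch of how those re-exported names combine, assuming the `toolz` package is installed:

    # Minimal sketch, assuming `pip install toolz`. compose applies right-to-left,
    # pipe reads left-to-right; both keep the evens, double them and sum to 40.
    from toolz import compose, pipe
    from toolz.curried import filter, map

    total = compose(sum, map(lambda x: x * 2), filter(lambda x: x % 2 == 0))
    print(total(range(10)))  # 40
    print(pipe(range(10), filter(lambda x: x % 2 == 0), map(lambda x: x * 2), sum))  # 40
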
    diff --git a/spaces/pycui/RealChar/client/web/src/components/Header/style.css b/spaces/pycui/RealChar/client/web/src/components/Header/style.css
    deleted file mode 100644
    index b3658985aff24730dc3c3f621597237717588518..0000000000000000000000000000000000000000
    --- a/spaces/pycui/RealChar/client/web/src/components/Header/style.css
    +++ /dev/null
    @@ -1,18 +0,0 @@
    -header {
    -  margin-top: 50px;
    -  margin-bottom: 20px;
    -  display: flex;
    -  justify-content: space-between;
    -  align-items: center;
    -  width: 100%;
    -}
    -
    -.logo-container {
    -  text-align: center;
    -  width: 100%;
    -}
    -
    -.auth-container {
    -  position: absolute;
    -  right: 3vw;
    -}
    diff --git "a/spaces/qingxu98/gpt-academic/crazy_functions/\344\270\213\350\275\275arxiv\350\256\272\346\226\207\347\277\273\350\257\221\346\221\230\350\246\201.py" "b/spaces/qingxu98/gpt-academic/crazy_functions/\344\270\213\350\275\275arxiv\350\256\272\346\226\207\347\277\273\350\257\221\346\221\230\350\246\201.py"
    deleted file mode 100644
    index 8b4a5037a21d326ddcdcc7ee5dd6082d949c5a55..0000000000000000000000000000000000000000
    --- "a/spaces/qingxu98/gpt-academic/crazy_functions/\344\270\213\350\275\275arxiv\350\256\272\346\226\207\347\277\273\350\257\221\346\221\230\350\246\201.py"
    +++ /dev/null
    @@ -1,191 +0,0 @@
    -from toolbox import update_ui, get_log_folder
    -from toolbox import write_history_to_file, promote_file_to_downloadzone
    -from toolbox import CatchException, report_execption, get_conf
    -import re, requests, unicodedata, os
    -from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
    -def download_arxiv_(url_pdf):
    -    if 'arxiv.org' not in url_pdf:
    -        if ('.' in url_pdf) and ('/' not in url_pdf):
    -            new_url = 'https://arxiv.org/abs/'+url_pdf
    -            print('下载编号:', url_pdf, '自动定位:', new_url)
    -            # download_arxiv_(new_url)
    -            return download_arxiv_(new_url)
    -        else:
    -            print('不能识别的URL!')
    -            return None
    -    if 'abs' in url_pdf:
    -        url_pdf = url_pdf.replace('abs', 'pdf')
    -        url_pdf = url_pdf + '.pdf'
    -
    -    url_abs = url_pdf.replace('.pdf', '').replace('pdf', 'abs')
    -    title, other_info = get_name(_url_=url_abs)
    -
    -    paper_id = title.split()[0]  # '[1712.00559]'
    -    if '2' in other_info['year']:
    -        title = other_info['year'] + ' ' + title
    -
    -    known_conf = ['NeurIPS', 'NIPS', 'Nature', 'Science', 'ICLR', 'AAAI']
    -    for k in known_conf:
    -        if k in other_info['comment']:
    -            title = k + ' ' + title
    -
    -    download_dir = get_log_folder(plugin_name='arxiv')
    -    os.makedirs(download_dir, exist_ok=True)
    -
    -    title_str = title.replace('?', '?')\
    -        .replace(':', ':')\
    -        .replace('\"', '“')\
    -        .replace('\n', '')\
    -        .replace('  ', ' ')\
    -        .replace('  ', ' ')
    -
    -    requests_pdf_url = url_pdf
    -    file_path = download_dir+title_str
    -
    -    print('下载中')
    -    proxies, = get_conf('proxies')
    -    r = requests.get(requests_pdf_url, proxies=proxies)
    -    with open(file_path, 'wb+') as f:
    -        f.write(r.content)
    -    print('下载完成')
    -
    -    # print('输出下载命令:','aria2c -o \"%s\" %s'%(title_str,url_pdf))
    -    # subprocess.call('aria2c --all-proxy=\"172.18.116.150:11084\" -o \"%s\" %s'%(download_dir+title_str,url_pdf), shell=True)
    -
    -    x = "%s  %s %s.bib" % (paper_id, other_info['year'], other_info['authors'])
    -    x = x.replace('?', '?')\
    -        .replace(':', ':')\
    -        .replace('\"', '“')\
    -        .replace('\n', '')\
    -        .replace('  ', ' ')\
    -        .replace('  ', ' ')
    -    return file_path, other_info
    -
    -
    -def get_name(_url_):
    -    import os
    -    from bs4 import BeautifulSoup
    -    print('正在获取文献名!')
    -    print(_url_)
    -
    -    # arxiv_recall = {}
    -    # if os.path.exists('./arxiv_recall.pkl'):
    -    #     with open('./arxiv_recall.pkl', 'rb') as f:
    -    #         arxiv_recall = pickle.load(f)
    -
    -    # if _url_ in arxiv_recall:
    -    #     print('在缓存中')
    -    #     return arxiv_recall[_url_]
    -
    -    proxies, = get_conf('proxies')
    -    res = requests.get(_url_, proxies=proxies)
    -
    -    bs = BeautifulSoup(res.text, 'html.parser')
    -    other_details = {}
    -
    -    # get year
    -    try:
    -        year = bs.find_all(class_='dateline')[0].text
    -        year = re.search(r'(\d{4})', year, re.M | re.I).group(1)
    -        other_details['year'] = year
    -        abstract = bs.find_all(class_='abstract mathjax')[0].text
    -        other_details['abstract'] = abstract
    -    except:
    -        other_details['year'] = ''
    -        print('年份获取失败')
    -
    -    # get author
    -    try:
    -        authors = bs.find_all(class_='authors')[0].text
    -        authors = authors.split('Authors:')[1]
    -        other_details['authors'] = authors
    -    except:
    -        other_details['authors'] = ''
    -        print('authors获取失败')
    -
    -    # get comment
    -    try:
    -        comment = bs.find_all(class_='metatable')[0].text
    -        real_comment = None
    -        for item in comment.replace('\n', ' ').split('   '):
    -            if 'Comments' in item:
    -                real_comment = item
    -        if real_comment is not None:
    -            other_details['comment'] = real_comment
    -        else:
    -            other_details['comment'] = ''
    -    except:
    -        other_details['comment'] = ''
    -        print('年份获取失败')
    -
    -    title_str = BeautifulSoup(
    -        res.text, 'html.parser').find('title').contents[0]
    -    print('获取成功:', title_str)
    -    # arxiv_recall[_url_] = (title_str+'.pdf', other_details)
    -    # with open('./arxiv_recall.pkl', 'wb') as f:
    -    #     pickle.dump(arxiv_recall, f)
    -
    -    return title_str+'.pdf', other_details
    -
    -
    -
    -@CatchException
    -def 下载arxiv论文并翻译摘要(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
    -
    -    CRAZY_FUNCTION_INFO = "下载arxiv论文并翻译摘要,函数插件作者[binary-husky]。正在提取摘要并下载PDF文档……"
    -    import glob
    -    import os
    -
-    # Basic information: what the plugin does and who contributed it
    -    chatbot.append(["函数插件功能?", CRAZY_FUNCTION_INFO])
-    yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
    -
-    # Try to import dependencies; if any are missing, suggest how to install them
    -    try:
    -        import bs4
    -    except:
    -        report_execption(chatbot, history, 
    -            a = f"解析项目: {txt}", 
    -            b = f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade beautifulsoup4```。")
-        yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
    -        return
    -
-    # Clear the history to avoid overflowing the input
    -    history = []
    -
-    # Extract the abstract and download the PDF
    -    try:
    -        pdf_path, info = download_arxiv_(txt)
    -    except:
    -        report_execption(chatbot, history, 
    -            a = f"解析项目: {txt}", 
    -            b = f"下载pdf文件未成功")
-        yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
    -        return
    -    
-    # Translate the abstract, etc.
    -    i_say =            f"请你阅读以下学术论文相关的材料,提取摘要,翻译为中文。材料如下:{str(info)}"
    -    i_say_show_user =  f'请你阅读以下学术论文相关的材料,提取摘要,翻译为中文。论文:{pdf_path}'
    -    chatbot.append((i_say_show_user, "[Local Message] waiting gpt response."))
-    yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
    -    msg = '正常'
    -    # ** gpt request **
-    # Single thread: fetch the paper's meta information
    -    gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
    -        inputs=i_say,
    -        inputs_show_user=i_say_show_user,
    -        llm_kwargs=llm_kwargs,
    -        chatbot=chatbot, history=[],
    -        sys_prompt="Your job is to collect information from materials and translate to Chinese。",
    -    )
    -
    -    chatbot[-1] = (i_say_show_user, gpt_say)
    -    history.append(i_say_show_user); history.append(gpt_say)
-    yield from update_ui(chatbot=chatbot, history=history, msg=msg) # refresh the UI
    -    res = write_history_to_file(history)
    -    promote_file_to_downloadzone(res, chatbot=chatbot)
    -    promote_file_to_downloadzone(pdf_path, chatbot=chatbot)
    -
    -    chatbot.append(("完成了吗?", res + "\n\nPDF文件也已经下载"))
-    yield from update_ui(chatbot=chatbot, history=history, msg=msg) # refresh the UI
    -
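The deleted plugin normalises an arXiv ID or `abs` URL to the matching `pdf` URL, scrapes the title, year, authors and comments from the abstract page with BeautifulSoup, downloads the PDF through the configured proxy, and then asks the model to translate the abstract. A standalone sketch of just the download/metadata step, assuming `requests` and `beautifulsoup4` are available and no proxy is needed; `fetch_arxiv_pdf` is a hypothetical helper, not part of the plugin:

    # Hedged sketch of the download step, mirroring download_arxiv_'s
    # abs-URL -> pdf-URL rewrite. Assumes direct network access (no proxy).
    import re

    import requests
    from bs4 import BeautifulSoup

    def fetch_arxiv_pdf(arxiv_id, out_path):
        abs_url = "https://arxiv.org/abs/" + arxiv_id
        pdf_url = "https://arxiv.org/pdf/" + arxiv_id + ".pdf"

        page = requests.get(abs_url, timeout=30)
        page.raise_for_status()
        soup = BeautifulSoup(page.text, "html.parser")
        title = soup.find("title").text
        dateline = soup.find(class_="dateline")
        match = re.search(r"(\d{4})", dateline.text) if dateline else None
        year = match.group(1) if match else ""

        pdf = requests.get(pdf_url, timeout=60)
        pdf.raise_for_status()
        with open(out_path, "wb") as f:
            f.write(pdf.content)
        return "%s (%s)" % (title, year)

    # e.g. fetch_arxiv_pdf("1712.00559", "paper.pdf")
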
    diff --git a/spaces/r3gm/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/uvr5_pack/demucs/audio.py b/spaces/r3gm/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/uvr5_pack/demucs/audio.py
    deleted file mode 100644
    index b29f156e4afb5fbda32c35777022caeadf50d711..0000000000000000000000000000000000000000
    --- a/spaces/r3gm/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/uvr5_pack/demucs/audio.py
    +++ /dev/null
    @@ -1,172 +0,0 @@
    -# Copyright (c) Facebook, Inc. and its affiliates.
    -# All rights reserved.
    -#
    -# This source code is licensed under the license found in the
    -# LICENSE file in the root directory of this source tree.
    -import json
    -import subprocess as sp
    -from pathlib import Path
    -
    -import julius
    -import numpy as np
    -import torch
    -
    -from .utils import temp_filenames
    -
    -
    -def _read_info(path):
    -    stdout_data = sp.check_output([
    -        'ffprobe', "-loglevel", "panic",
    -        str(path), '-print_format', 'json', '-show_format', '-show_streams'
    -    ])
    -    return json.loads(stdout_data.decode('utf-8'))
    -
    -
    -class AudioFile:
    -    """
    -    Allows to read audio from any format supported by ffmpeg, as well as resampling or
    -    converting to mono on the fly. See :method:`read` for more details.
    -    """
    -    def __init__(self, path: Path):
    -        self.path = Path(path)
    -        self._info = None
    -
    -    def __repr__(self):
    -        features = [("path", self.path)]
    -        features.append(("samplerate", self.samplerate()))
    -        features.append(("channels", self.channels()))
    -        features.append(("streams", len(self)))
    -        features_str = ", ".join(f"{name}={value}" for name, value in features)
    -        return f"AudioFile({features_str})"
    -
    -    @property
    -    def info(self):
    -        if self._info is None:
    -            self._info = _read_info(self.path)
    -        return self._info
    -
    -    @property
    -    def duration(self):
    -        return float(self.info['format']['duration'])
    -
    -    @property
    -    def _audio_streams(self):
    -        return [
    -            index for index, stream in enumerate(self.info["streams"])
    -            if stream["codec_type"] == "audio"
    -        ]
    -
    -    def __len__(self):
    -        return len(self._audio_streams)
    -
    -    def channels(self, stream=0):
    -        return int(self.info['streams'][self._audio_streams[stream]]['channels'])
    -
    -    def samplerate(self, stream=0):
    -        return int(self.info['streams'][self._audio_streams[stream]]['sample_rate'])
    -
    -    def read(self,
    -             seek_time=None,
    -             duration=None,
    -             streams=slice(None),
    -             samplerate=None,
    -             channels=None,
    -             temp_folder=None):
    -        """
    -        Slightly more efficient implementation than stempeg,
    -        in particular, this will extract all stems at once
    -        rather than having to loop over one file multiple times
    -        for each stream.
    -
    -        Args:
    -            seek_time (float):  seek time in seconds or None if no seeking is needed.
    -            duration (float): duration in seconds to extract or None to extract until the end.
    -            streams (slice, int or list): streams to extract, can be a single int, a list or
    -                a slice. If it is a slice or list, the output will be of size [S, C, T]
    -                with S the number of streams, C the number of channels and T the number of samples.
    -                If it is an int, the output will be [C, T].
    -            samplerate (int): if provided, will resample on the fly. If None, no resampling will
    -                be done. Original sampling rate can be obtained with :method:`samplerate`.
    -            channels (int): if 1, will convert to mono. We do not rely on ffmpeg for that
    -                as ffmpeg automatically scale by +3dB to conserve volume when playing on speakers.
    -                See https://sound.stackexchange.com/a/42710.
    -                Our definition of mono is simply the average of the two channels. Any other
    -                value will be ignored.
    -            temp_folder (str or Path or None): temporary folder to use for decoding.
    -
    -
    -        """
    -        streams = np.array(range(len(self)))[streams]
    -        single = not isinstance(streams, np.ndarray)
    -        if single:
    -            streams = [streams]
    -
    -        if duration is None:
    -            target_size = None
    -            query_duration = None
    -        else:
    -            target_size = int((samplerate or self.samplerate()) * duration)
    -            query_duration = float((target_size + 1) / (samplerate or self.samplerate()))
    -
    -        with temp_filenames(len(streams)) as filenames:
    -            command = ['ffmpeg', '-y']
    -            command += ['-loglevel', 'panic']
    -            if seek_time:
    -                command += ['-ss', str(seek_time)]
    -            command += ['-i', str(self.path)]
    -            for stream, filename in zip(streams, filenames):
    -                command += ['-map', f'0:{self._audio_streams[stream]}']
    -                if query_duration is not None:
    -                    command += ['-t', str(query_duration)]
    -                command += ['-threads', '1']
    -                command += ['-f', 'f32le']
    -                if samplerate is not None:
    -                    command += ['-ar', str(samplerate)]
    -                command += [filename]
    -
    -            sp.run(command, check=True)
    -            wavs = []
    -            for filename in filenames:
    -                wav = np.fromfile(filename, dtype=np.float32)
    -                wav = torch.from_numpy(wav)
    -                wav = wav.view(-1, self.channels()).t()
    -                if channels is not None:
    -                    wav = convert_audio_channels(wav, channels)
    -                if target_size is not None:
    -                    wav = wav[..., :target_size]
    -                wavs.append(wav)
    -        wav = torch.stack(wavs, dim=0)
    -        if single:
    -            wav = wav[0]
    -        return wav
    -
    -
    -def convert_audio_channels(wav, channels=2):
    -    """Convert audio to the given number of channels."""
    -    *shape, src_channels, length = wav.shape
    -    if src_channels == channels:
    -        pass
    -    elif channels == 1:
    -        # Case 1:
    -        # The caller asked 1-channel audio, but the stream have multiple
    -        # channels, downmix all channels.
    -        wav = wav.mean(dim=-2, keepdim=True)
    -    elif src_channels == 1:
    -        # Case 2:
    -        # The caller asked for multiple channels, but the input file have
    -        # one single channel, replicate the audio over all channels.
    -        wav = wav.expand(*shape, channels, length)
    -    elif src_channels >= channels:
    -        # Case 3:
    -        # The caller asked for multiple channels, and the input file have
    -        # more channels than requested. In that case return the first channels.
    -        wav = wav[..., :channels, :]
    -    else:
    -        # Case 4: What is a reasonable choice here?
    -        raise ValueError('The audio file has less channels than requested but is not mono.')
    -    return wav
    -
    -
    -def convert_audio(wav, from_samplerate, to_samplerate, channels):
    -    wav = convert_audio_channels(wav, channels)
    -    return julius.resample_frac(wav, from_samplerate, to_samplerate)
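The deleted demucs `AudioFile` helper shells out to `ffprobe`/`ffmpeg`, so any format ffmpeg can decode is readable, with optional on-the-fly resampling and a mono downmix that simply averages the channels (as `convert_audio_channels` documents). A hedged usage sketch, assuming ffmpeg and ffprobe are on PATH; the `demucs.audio` import path is an assumption for illustration, since in this repo the file lived under `uvr5_pack/demucs`:

    # Minimal sketch against the API shown above (AudioFile.read and
    # convert_audio). The import path is an assumption, not this repo's layout.
    import torch
    from demucs.audio import AudioFile, convert_audio

    f = AudioFile("song.mp3")                    # anything ffmpeg can decode
    print(f.samplerate(), f.channels(), len(f))  # stream 0 metadata

    # First 10 seconds of stream 0, resampled to 16 kHz mono on the fly.
    wav = f.read(streams=0, seek_time=0.0, duration=10.0, samplerate=16000, channels=1)
    print(wav.shape)                             # roughly (1, 160000)

    # Convert an existing tensor: stereo 44.1 kHz -> mono 16 kHz.
    stereo = torch.randn(2, 44100)
    print(convert_audio(stereo, 44100, 16000, channels=1).shape)
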
    diff --git a/spaces/r3gm/Aesthetic_RVC_Inference_HF/lib/tools/gui/guidml.py b/spaces/r3gm/Aesthetic_RVC_Inference_HF/lib/tools/gui/guidml.py
    deleted file mode 100644
    index f883e25cd2c981d8a469ff5d965a2dceeb2d963e..0000000000000000000000000000000000000000
    --- a/spaces/r3gm/Aesthetic_RVC_Inference_HF/lib/tools/gui/guidml.py
    +++ /dev/null
    @@ -1,710 +0,0 @@
    -"""
-Updates after 0416:
-    Use the `half` option from config
-    Rebuild the npy instead of requiring it to be filled in
-    v2 support
-    Support for models without f0
-    Fixes
-
-    int16:
-    Added support for running without an index
-    Switched the f0 algorithm to harvest (apparently the only thing that affects CPU usage); without this change the results are not good
    -"""
    -import os, sys, traceback, re
    -
    -import json
    -
    -now_dir = os.getcwd()
    -sys.path.append(now_dir)
    -from assets.configs.config import Config
    -
    -Config = Config()
    -
    -import torch_directml
    -import PySimpleGUI as sg
    -import sounddevice as sd
    -import noisereduce as nr
    -import numpy as np
    -from fairseq import checkpoint_utils
    -import librosa, torch, pyworld, faiss, time, threading
    -import torch.nn.functional as F
    -import torchaudio.transforms as tat
    -import scipy.signal as signal
    -
    -
    -# import matplotlib.pyplot as plt
    -from lib.infer.infer_pack.models import (
    -    SynthesizerTrnMs256NSFsid,
    -    SynthesizerTrnMs256NSFsid_nono,
    -    SynthesizerTrnMs768NSFsid,
    -    SynthesizerTrnMs768NSFsid_nono,
    -)
    -from assets.i18n.i18n import I18nAuto
    -
    -i18n = I18nAuto()
    -device = torch_directml.device(torch_directml.default_device())
    -current_dir = os.getcwd()
    -
    -
    -class RVC:
    -    def __init__(
    -        self, key, hubert_path, pth_path, index_path, npy_path, index_rate
    -    ) -> None:
    -        """
-        Initialization
    -        """
    -        try:
    -            self.f0_up_key = key
    -            self.time_step = 160 / 16000 * 1000
    -            self.f0_min = 50
    -            self.f0_max = 1100
    -            self.f0_mel_min = 1127 * np.log(1 + self.f0_min / 700)
    -            self.f0_mel_max = 1127 * np.log(1 + self.f0_max / 700)
    -            self.sr = 16000
    -            self.window = 160
    -            if index_rate != 0:
    -                self.index = faiss.read_index(index_path)
    -                # self.big_npy = np.load(npy_path)
    -                self.big_npy = self.index.reconstruct_n(0, self.index.ntotal)
    -                print("index search enabled")
    -            self.index_rate = index_rate
    -            model_path = hubert_path
    -            print("load model(s) from {}".format(model_path))
    -            models, saved_cfg, task = checkpoint_utils.load_model_ensemble_and_task(
    -                [model_path],
    -                suffix="",
    -            )
    -            self.model = models[0]
    -            self.model = self.model.to(device)
    -            if Config.is_half:
    -                self.model = self.model.half()
    -            else:
    -                self.model = self.model.float()
    -            self.model.eval()
    -            cpt = torch.load(pth_path, map_location="cpu")
    -            self.tgt_sr = cpt["config"][-1]
    -            cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0]  # n_spk
    -            self.if_f0 = cpt.get("f0", 1)
    -            self.version = cpt.get("version", "v1")
    -            if self.version == "v1":
    -                if self.if_f0 == 1:
    -                    self.net_g = SynthesizerTrnMs256NSFsid(
    -                        *cpt["config"], is_half=Config.is_half
    -                    )
    -                else:
    -                    self.net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"])
    -            elif self.version == "v2":
    -                if self.if_f0 == 1:
    -                    self.net_g = SynthesizerTrnMs768NSFsid(
    -                        *cpt["config"], is_half=Config.is_half
    -                    )
    -                else:
    -                    self.net_g = SynthesizerTrnMs768NSFsid_nono(*cpt["config"])
    -            del self.net_g.enc_q
    -            print(self.net_g.load_state_dict(cpt["weight"], strict=False))
    -            self.net_g.eval().to(device)
    -            if Config.is_half:
    -                self.net_g = self.net_g.half()
    -            else:
    -                self.net_g = self.net_g.float()
    -        except:
    -            print(traceback.format_exc())
    -
    -    def get_f0(self, x, f0_up_key, inp_f0=None):
    -        x_pad = 1
    -        f0_min = 50
    -        f0_max = 1100
    -        f0_mel_min = 1127 * np.log(1 + f0_min / 700)
    -        f0_mel_max = 1127 * np.log(1 + f0_max / 700)
    -        f0, t = pyworld.harvest(
    -            x.astype(np.double),
    -            fs=self.sr,
    -            f0_ceil=f0_max,
    -            f0_floor=f0_min,
    -            frame_period=10,
    -        )
    -        f0 = pyworld.stonemask(x.astype(np.double), f0, t, self.sr)
    -        f0 = signal.medfilt(f0, 3)
    -        f0 *= pow(2, f0_up_key / 12)
    -        # with open("test.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()]))
-        tf0 = self.sr // self.window  # number of f0 points per second
    -        if inp_f0 is not None:
    -            delta_t = np.round(
    -                (inp_f0[:, 0].max() - inp_f0[:, 0].min()) * tf0 + 1
    -            ).astype("int16")
    -            replace_f0 = np.interp(
    -                list(range(delta_t)), inp_f0[:, 0] * 100, inp_f0[:, 1]
    -            )
    -            shape = f0[x_pad * tf0 : x_pad * tf0 + len(replace_f0)].shape[0]
    -            f0[x_pad * tf0 : x_pad * tf0 + len(replace_f0)] = replace_f0[:shape]
    -        # with open("test_opt.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()]))
    -        f0bak = f0.copy()
    -        f0_mel = 1127 * np.log(1 + f0 / 700)
    -        f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / (
    -            f0_mel_max - f0_mel_min
    -        ) + 1
    -        f0_mel[f0_mel <= 1] = 1
    -        f0_mel[f0_mel > 255] = 255
    -        f0_coarse = np.rint(f0_mel).astype(np.int)
    -        return f0_coarse, f0bak  # 1-0
    -
    -    def infer(self, feats: torch.Tensor) -> np.ndarray:
    -        """
-        Inference function
    -        """
    -        audio = feats.clone().cpu().numpy()
    -        assert feats.dim() == 1, feats.dim()
    -        feats = feats.view(1, -1)
    -        padding_mask = torch.BoolTensor(feats.shape).fill_(False)
    -        if Config.is_half:
    -            feats = feats.half()
    -        else:
    -            feats = feats.float()
    -        inputs = {
    -            "source": feats.to(device),
    -            "padding_mask": padding_mask.to(device),
    -            "output_layer": 9 if self.version == "v1" else 12,
    -        }
    -        torch.cuda.synchronize()
    -        with torch.no_grad():
    -            logits = self.model.extract_features(**inputs)
    -            feats = (
    -                self.model.final_proj(logits[0]) if self.version == "v1" else logits[0]
    -            )
    -
-        #### index search optimization
    -        try:
    -            if (
    -                hasattr(self, "index")
    -                and hasattr(self, "big_npy")
    -                and self.index_rate != 0
    -            ):
    -                npy = feats[0].cpu().numpy().astype("float32")
    -                score, ix = self.index.search(npy, k=8)
    -                weight = np.square(1 / score)
    -                weight /= weight.sum(axis=1, keepdims=True)
    -                npy = np.sum(self.big_npy[ix] * np.expand_dims(weight, axis=2), axis=1)
    -                if Config.is_half:
    -                    npy = npy.astype("float16")
    -                feats = (
    -                    torch.from_numpy(npy).unsqueeze(0).to(device) * self.index_rate
    -                    + (1 - self.index_rate) * feats
    -                )
    -            else:
    -                print("index search FAIL or disabled")
    -        except:
    -            traceback.print_exc()
    -            print("index search FAIL")
    -        feats = F.interpolate(feats.permute(0, 2, 1), scale_factor=2).permute(0, 2, 1)
    -        torch.cuda.synchronize()
    -        print(feats.shape)
    -        if self.if_f0 == 1:
    -            pitch, pitchf = self.get_f0(audio, self.f0_up_key)
-            p_len = min(feats.shape[1], 13000, pitch.shape[0])  # too long would run out of GPU memory
    -        else:
    -            pitch, pitchf = None, None
-            p_len = min(feats.shape[1], 13000)  # too long would run out of GPU memory
    -        torch.cuda.synchronize()
    -        # print(feats.shape,pitch.shape)
    -        feats = feats[:, :p_len, :]
    -        if self.if_f0 == 1:
    -            pitch = pitch[:p_len]
    -            pitchf = pitchf[:p_len]
    -            pitch = torch.LongTensor(pitch).unsqueeze(0).to(device)
    -            pitchf = torch.FloatTensor(pitchf).unsqueeze(0).to(device)
    -        p_len = torch.LongTensor([p_len]).to(device)
    -        ii = 0  # sid
    -        sid = torch.LongTensor([ii]).to(device)
    -        with torch.no_grad():
    -            if self.if_f0 == 1:
    -                infered_audio = (
    -                    self.net_g.infer(feats, p_len, pitch, pitchf, sid)[0][0, 0]
    -                    .data.cpu()
    -                    .float()
    -                )
    -            else:
    -                infered_audio = (
    -                    self.net_g.infer(feats, p_len, sid)[0][0, 0].data.cpu().float()
    -                )
    -        torch.cuda.synchronize()
    -        return infered_audio
    -
    -
    -class GUIConfig:
    -    def __init__(self) -> None:
    -        self.hubert_path: str = ""
    -        self.pth_path: str = ""
    -        self.index_path: str = ""
    -        self.npy_path: str = ""
    -        self.pitch: int = 12
    -        self.samplerate: int = 44100
    -        self.block_time: float = 1.0  # s
    -        self.buffer_num: int = 1
    -        self.threhold: int = -30
    -        self.crossfade_time: float = 0.08
    -        self.extra_time: float = 0.04
    -        self.I_noise_reduce = False
    -        self.O_noise_reduce = False
    -        self.index_rate = 0.3
    -
    -
    -class GUI:
    -    def __init__(self) -> None:
    -        self.config = GUIConfig()
    -        self.flag_vc = False
    -
    -        self.launcher()
    -
    -    def load(self):
    -        (
    -            input_devices,
    -            output_devices,
    -            input_devices_indices,
    -            output_devices_indices,
    -        ) = self.get_devices()
    -        try:
    -            with open("values1.json", "r") as j:
    -                data = json.load(j)
    -        except:
    -            with open("values1.json", "w") as j:
    -                data = {
    -                    "pth_path": "",
    -                    "index_path": "",
    -                    "sg_input_device": input_devices[
    -                        input_devices_indices.index(sd.default.device[0])
    -                    ],
    -                    "sg_output_device": output_devices[
    -                        output_devices_indices.index(sd.default.device[1])
    -                    ],
    -                    "threhold": "-45",
    -                    "pitch": "0",
    -                    "index_rate": "0",
    -                    "block_time": "1",
    -                    "crossfade_length": "0.04",
    -                    "extra_time": "1",
    -                }
    -        return data
    -
    -    def launcher(self):
    -        data = self.load()
    -        sg.theme("LightBlue3")
    -        input_devices, output_devices, _, _ = self.get_devices()
    -        layout = [
    -            [
    -                sg.Frame(
    -                    title=i18n("Load model"),
    -                    layout=[
    -                        [
    -                            sg.Input(
    -                                default_text="hubert_base.pt",
    -                                key="hubert_path",
    -                                disabled=True,
    -                            ),
    -                            sg.FileBrowse(
    -                                i18n("Hubert Model"),
    -                                initial_folder=os.path.join(os.getcwd()),
    -                                file_types=(("pt files", "*.pt"),),
    -                            ),
    -                        ],
    -                        [
    -                            sg.Input(
    -                                default_text=data.get("pth_path", ""),
    -                                key="pth_path",
    -                            ),
    -                            sg.FileBrowse(
    -                                i18n("Select the .pth file"),
    -                                initial_folder=os.path.join(os.getcwd(), "weights"),
    -                                file_types=(("weight files", "*.pth"),),
    -                            ),
    -                        ],
    -                        [
    -                            sg.Input(
    -                                default_text=data.get("index_path", ""),
    -                                key="index_path",
    -                            ),
    -                            sg.FileBrowse(
    -                                i18n("Select the .index file"),
    -                                initial_folder=os.path.join(os.getcwd(), "logs"),
    -                                file_types=(("index files", "*.index"),),
    -                            ),
    -                        ],
    -                        [
    -                            sg.Input(
    -                                default_text="你不需要填写这个You don't need write this.",
    -                                key="npy_path",
    -                                disabled=True,
    -                            ),
    -                            sg.FileBrowse(
    -                                i18n("Select the .npy file"),
    -                                initial_folder=os.path.join(os.getcwd(), "logs"),
    -                                file_types=(("feature files", "*.npy"),),
    -                            ),
    -                        ],
    -                    ],
    -                )
    -            ],
    -            [
    -                sg.Frame(
    -                    layout=[
    -                        [
    -                            sg.Text(i18n("Input device")),
    -                            sg.Combo(
    -                                input_devices,
    -                                key="sg_input_device",
    -                                default_value=data.get("sg_input_device", ""),
    -                            ),
    -                        ],
    -                        [
    -                            sg.Text(i18n("Output device")),
    -                            sg.Combo(
    -                                output_devices,
    -                                key="sg_output_device",
    -                                default_value=data.get("sg_output_device", ""),
    -                            ),
    -                        ],
    -                    ],
    -                    title=i18n("Audio device (please use the same type of driver)"),
    -                )
    -            ],
    -            [
    -                sg.Frame(
    -                    layout=[
    -                        [
    -                            sg.Text(i18n("Response threshold")),
    -                            sg.Slider(
    -                                range=(-60, 0),
    -                                key="threhold",
    -                                resolution=1,
    -                                orientation="h",
    -                                default_value=data.get("threhold", ""),
    -                            ),
    -                        ],
    -                        [
    -                            sg.Text(i18n("Pitch settings")),
    -                            sg.Slider(
    -                                range=(-24, 24),
    -                                key="pitch",
    -                                resolution=1,
    -                                orientation="h",
    -                                default_value=data.get("pitch", ""),
    -                            ),
    -                        ],
    -                        [
    -                            sg.Text(i18n("Index Rate")),
    -                            sg.Slider(
    -                                range=(0.0, 1.0),
    -                                key="index_rate",
    -                                resolution=0.01,
    -                                orientation="h",
    -                                default_value=data.get("index_rate", ""),
    -                            ),
    -                        ],
    -                    ],
    -                    title=i18n("General settings"),
    -                ),
    -                sg.Frame(
    -                    layout=[
    -                        [
    -                            sg.Text(i18n("Sample length")),
    -                            sg.Slider(
    -                                range=(0.1, 3.0),
    -                                key="block_time",
    -                                resolution=0.1,
    -                                orientation="h",
    -                                default_value=data.get("block_time", ""),
    -                            ),
    -                        ],
    -                        [
    -                            sg.Text(i18n("Fade length")),
    -                            sg.Slider(
    -                                range=(0.01, 0.15),
    -                                key="crossfade_length",
    -                                resolution=0.01,
    -                                orientation="h",
    -                                default_value=data.get("crossfade_length", ""),
    -                            ),
    -                        ],
    -                        [
    -                            sg.Text(i18n("Extra推理时长")),
    -                            sg.Slider(
    -                                range=(0.05, 3.00),
    -                                key="extra_time",
    -                                resolution=0.01,
    -                                orientation="h",
    -                                default_value=data.get("extra_time", ""),
    -                            ),
    -                        ],
    -                        [
    -                            sg.Checkbox(i18n("Input noise reduction"), key="I_noise_reduce"),
    -                            sg.Checkbox(i18n("Output noise reduction"), key="O_noise_reduce"),
    -                        ],
    -                    ],
    -                    title=i18n("Performance settings"),
    -                ),
    -            ],
    -            [
    -                sg.Button(i18n("开始音频Convert"), key="start_vc"),
    -                sg.Button(i18n("停止音频Convert"), key="stop_vc"),
    -                sg.Text(i18n("Inference time (ms):")),
    -                sg.Text("0", key="infer_time"),
    -            ],
    -        ]
    -        self.window = sg.Window("RVC - GUI", layout=layout)
    -        self.event_handler()
    -
    -    def event_handler(self):
    -        while True:
    -            event, values = self.window.read()
    -            if event == sg.WINDOW_CLOSED:
    -                self.flag_vc = False
    -                exit()
    -            if event == "start_vc" and self.flag_vc == False:
    -                if self.set_values(values) == True:
    -                    print("using_cuda:" + str(torch.cuda.is_available()))
    -                    self.start_vc()
    -                    settings = {
    -                        "pth_path": values["pth_path"],
    -                        "index_path": values["index_path"],
    -                        "sg_input_device": values["sg_input_device"],
    -                        "sg_output_device": values["sg_output_device"],
    -                        "threhold": values["threhold"],
    -                        "pitch": values["pitch"],
    -                        "index_rate": values["index_rate"],
    -                        "block_time": values["block_time"],
    -                        "crossfade_length": values["crossfade_length"],
    -                        "extra_time": values["extra_time"],
    -                    }
    -                    with open("values1.json", "w") as j:
    -                        json.dump(settings, j)
    -            if event == "stop_vc" and self.flag_vc == True:
    -                self.flag_vc = False
    -
    -    def set_values(self, values):
    -        if len(values["pth_path"].strip()) == 0:
    -            sg.popup(i18n("Select the pth file"))
    -            return False
    -        if len(values["index_path"].strip()) == 0:
    -            sg.popup(i18n("Select the index file"))
    -            return False
    -        pattern = re.compile("[^\x00-\x7F]+")
    -        if pattern.findall(values["hubert_path"]):
    -            sg.popup(i18n("The hubert model path must not contain Chinese characters"))
    -            return False
    -        if pattern.findall(values["pth_path"]):
    -            sg.popup(i18n("The pth file path must not contain Chinese characters."))
    -            return False
    -        if pattern.findall(values["index_path"]):
    -            sg.popup(i18n("The index file path must not contain Chinese characters."))
    -            return False
    -        self.set_devices(values["sg_input_device"], values["sg_output_device"])
    -        self.config.hubert_path = os.path.join(current_dir, "hubert_base.pt")
    -        self.config.pth_path = values["pth_path"]
    -        self.config.index_path = values["index_path"]
    -        self.config.npy_path = values["npy_path"]
    -        self.config.threhold = values["threhold"]
    -        self.config.pitch = values["pitch"]
    -        self.config.block_time = values["block_time"]
    -        self.config.crossfade_time = values["crossfade_length"]
    -        self.config.extra_time = values["extra_time"]
    -        self.config.I_noise_reduce = values["I_noise_reduce"]
    -        self.config.O_noise_reduce = values["O_noise_reduce"]
    -        self.config.index_rate = values["index_rate"]
    -        return True
    -
    -    def start_vc(self):
    -        torch.cuda.empty_cache()
    -        self.flag_vc = True
    -        self.block_frame = int(self.config.block_time * self.config.samplerate)
    -        self.crossfade_frame = int(self.config.crossfade_time * self.config.samplerate)
    -        self.sola_search_frame = int(0.012 * self.config.samplerate)
-        self.delay_frame = int(0.01 * self.config.samplerate)  # reserve 0.02 s of look-ahead
    -        self.extra_frame = int(self.config.extra_time * self.config.samplerate)
    -        self.rvc = None
    -        self.rvc = RVC(
    -            self.config.pitch,
    -            self.config.hubert_path,
    -            self.config.pth_path,
    -            self.config.index_path,
    -            self.config.npy_path,
    -            self.config.index_rate,
    -        )
    -        self.input_wav: np.ndarray = np.zeros(
    -            self.extra_frame
    -            + self.crossfade_frame
    -            + self.sola_search_frame
    -            + self.block_frame,
    -            dtype="float32",
    -        )
    -        self.output_wav: torch.Tensor = torch.zeros(
    -            self.block_frame, device=device, dtype=torch.float32
    -        )
    -        self.sola_buffer: torch.Tensor = torch.zeros(
    -            self.crossfade_frame, device=device, dtype=torch.float32
    -        )
    -        self.fade_in_window: torch.Tensor = torch.linspace(
    -            0.0, 1.0, steps=self.crossfade_frame, device=device, dtype=torch.float32
    -        )
    -        self.fade_out_window: torch.Tensor = 1 - self.fade_in_window
    -        self.resampler1 = tat.Resample(
    -            orig_freq=self.config.samplerate, new_freq=16000, dtype=torch.float32
    -        )
    -        self.resampler2 = tat.Resample(
    -            orig_freq=self.rvc.tgt_sr,
    -            new_freq=self.config.samplerate,
    -            dtype=torch.float32,
    -        )
    -        thread_vc = threading.Thread(target=self.soundinput)
    -        thread_vc.start()
    -
    -    def soundinput(self):
    -        """
-        Receive audio input.
    -        """
    -        with sd.Stream(
    -            channels=2,
    -            callback=self.audio_callback,
    -            blocksize=self.block_frame,
    -            samplerate=self.config.samplerate,
    -            dtype="float32",
    -        ):
    -            while self.flag_vc:
    -                time.sleep(self.config.block_time)
    -                print("Audio block passed.")
    -        print("ENDing VC")
    -
    -    def audio_callback(
    -        self, indata: np.ndarray, outdata: np.ndarray, frames, times, status
    -    ):
    -        """
-        Audio processing callback.
    -        """
    -        start_time = time.perf_counter()
    -        indata = librosa.to_mono(indata.T)
    -        if self.config.I_noise_reduce:
    -            indata[:] = nr.reduce_noise(y=indata, sr=self.config.samplerate)
    -
    -        """noise gate"""
    -        frame_length = 2048
    -        hop_length = 1024
    -        rms = librosa.feature.rms(
    -            y=indata, frame_length=frame_length, hop_length=hop_length
    -        )
    -        db_threhold = librosa.amplitude_to_db(rms, ref=1.0)[0] < self.config.threhold
    -        # print(rms.shape,db.shape,db)
    -        for i in range(db_threhold.shape[0]):
    -            if db_threhold[i]:
    -                indata[i * hop_length : (i + 1) * hop_length] = 0
    -        self.input_wav[:] = np.append(self.input_wav[self.block_frame :], indata)
    -
    -        # infer
    -        print("input_wav:" + str(self.input_wav.shape))
    -        # print('infered_wav:'+str(infer_wav.shape))
    -        infer_wav: torch.Tensor = self.resampler2(
    -            self.rvc.infer(self.resampler1(torch.from_numpy(self.input_wav)))
    -        )[-self.crossfade_frame - self.sola_search_frame - self.block_frame :].to(
    -            device
    -        )
    -        print("infer_wav:" + str(infer_wav.shape))
    -
    -        # SOLA algorithm from https://github.com/yxlllc/DDSP-SVC
    -        cor_nom = F.conv1d(
    -            infer_wav[None, None, : self.crossfade_frame + self.sola_search_frame],
    -            self.sola_buffer[None, None, :],
    -        )
    -        cor_den = torch.sqrt(
    -            F.conv1d(
    -                infer_wav[None, None, : self.crossfade_frame + self.sola_search_frame]
    -                ** 2,
    -                torch.ones(1, 1, self.crossfade_frame, device=device),
    -            )
    -            + 1e-8
    -        )
    -        sola_offset = torch.argmax(cor_nom[0, 0] / cor_den[0, 0])
    -        print("sola offset: " + str(int(sola_offset)))
    -
    -        # crossfade
    -        self.output_wav[:] = infer_wav[sola_offset : sola_offset + self.block_frame]
    -        self.output_wav[: self.crossfade_frame] *= self.fade_in_window
    -        self.output_wav[: self.crossfade_frame] += self.sola_buffer[:]
    -        if sola_offset < self.sola_search_frame:
    -            self.sola_buffer[:] = (
    -                infer_wav[
    -                    -self.sola_search_frame
    -                    - self.crossfade_frame
    -                    + sola_offset : -self.sola_search_frame
    -                    + sola_offset
    -                ]
    -                * self.fade_out_window
    -            )
    -        else:
    -            self.sola_buffer[:] = (
    -                infer_wav[-self.crossfade_frame :] * self.fade_out_window
    -            )
    -
    -        if self.config.O_noise_reduce:
    -            outdata[:] = np.tile(
    -                nr.reduce_noise(
    -                    y=self.output_wav[:].cpu().numpy(), sr=self.config.samplerate
    -                ),
    -                (2, 1),
    -            ).T
    -        else:
    -            outdata[:] = self.output_wav[:].repeat(2, 1).t().cpu().numpy()
    -        total_time = time.perf_counter() - start_time
    -        self.window["infer_time"].update(int(total_time * 1000))
    -        print("infer time:" + str(total_time))
    -
-    def get_devices(self, update: bool = True):
-        """Get the list of audio input/output devices."""
    -        if update:
    -            sd._terminate()
    -            sd._initialize()
    -        devices = sd.query_devices()
    -        hostapis = sd.query_hostapis()
    -        for hostapi in hostapis:
    -            for device_idx in hostapi["devices"]:
    -                devices[device_idx]["hostapi_name"] = hostapi["name"]
    -        input_devices = [
    -            f"{d['name']} ({d['hostapi_name']})"
    -            for d in devices
    -            if d["max_input_channels"] > 0
    -        ]
    -        output_devices = [
    -            f"{d['name']} ({d['hostapi_name']})"
    -            for d in devices
    -            if d["max_output_channels"] > 0
    -        ]
    -        input_devices_indices = [
    -            d["index"] if "index" in d else d["name"]
    -            for d in devices
    -            if d["max_input_channels"] > 0
    -        ]
    -        output_devices_indices = [
    -            d["index"] if "index" in d else d["name"]
    -            for d in devices
    -            if d["max_output_channels"] > 0
    -        ]
    -        return (
    -            input_devices,
    -            output_devices,
    -            input_devices_indices,
    -            output_devices_indices,
    -        )
    -
-    def set_devices(self, input_device, output_device):
-        """Set the input and output devices."""
    -        (
    -            input_devices,
    -            output_devices,
    -            input_device_indices,
    -            output_device_indices,
    -        ) = self.get_devices()
    -        sd.default.device[0] = input_device_indices[input_devices.index(input_device)]
    -        sd.default.device[1] = output_device_indices[
    -            output_devices.index(output_device)
    -        ]
    -        print("input device:" + str(sd.default.device[0]) + ":" + str(input_device))
    -        print("output device:" + str(sd.default.device[1]) + ":" + str(output_device))
    -
    -
    -gui = GUI()
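The splice in audio_callback above is a SOLA-style (synchronous overlap-add) join: the cor_nom / cor_den correlation finds the offset at which the new inference block best lines up with the tail of the previous one, and fade_in_window / fade_out_window crossfade the two. Below is a minimal, self-contained NumPy sketch of the same idea (it is not part of the deleted file, and the names sola_splice, prev_tail and new_block are invented for illustration) so the offset search and crossfade can be run in isolation.

import numpy as np

def sola_splice(prev_tail, new_block, search_len, fade_len):
    # Pick the offset into new_block whose first fade_len samples best match
    # prev_tail (normalised correlation), then crossfade so the blocks join smoothly.
    best_offset, best_score = 0, -np.inf
    for off in range(search_len + 1):
        seg = new_block[off : off + fade_len]
        score = np.dot(seg, prev_tail) / np.sqrt(np.sum(seg * seg) + 1e-8)
        if score > best_score:
            best_offset, best_score = off, score
    fade_in = np.linspace(0.0, 1.0, fade_len)
    out = new_block[best_offset:].copy()
    out[:fade_len] = out[:fade_len] * fade_in + prev_tail * (1.0 - fade_in)
    return out, best_offset

# Two 220 Hz blocks that are deliberately misaligned by 7 samples.
sr, fade_len, search_len = 16000, 64, 32
t = np.arange(2048) / sr
prev_tail = np.sin(2 * np.pi * 220 * t[:fade_len])
new_block = np.sin(2 * np.pi * 220 * (t - 7 / sr))
spliced, offset = sola_splice(prev_tail, new_block, search_len, fade_len)
print("chosen offset:", offset)  # expected: 7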
    diff --git a/spaces/r3gm/RVC_HF/infer/lib/infer_pack/modules/F0Predictor/PMF0Predictor.py b/spaces/r3gm/RVC_HF/infer/lib/infer_pack/modules/F0Predictor/PMF0Predictor.py
    deleted file mode 100644
    index 06f2b79f5e5c6f2049bf8220c29ae20c3f82d524..0000000000000000000000000000000000000000
    --- a/spaces/r3gm/RVC_HF/infer/lib/infer_pack/modules/F0Predictor/PMF0Predictor.py
    +++ /dev/null
    @@ -1,98 +0,0 @@
    -import numpy as np
    -import parselmouth
    -
    -from infer.lib.infer_pack.modules.F0Predictor.F0Predictor import F0Predictor
    -
    -
    -class PMF0Predictor(F0Predictor):
    -    def __init__(self, hop_length=512, f0_min=50, f0_max=1100, sampling_rate=44100):
    -        self.hop_length = hop_length
    -        self.f0_min = f0_min
    -        self.f0_max = f0_max
    -        self.sampling_rate = sampling_rate
    -
    -    def interpolate_f0(self, f0):
    -        """
-        Interpolate the F0 contour over unvoiced (zero) frames.
    -        """
    -
    -        data = np.reshape(f0, (f0.size, 1))
    -
    -        vuv_vector = np.zeros((data.size, 1), dtype=np.float32)
    -        vuv_vector[data > 0.0] = 1.0
    -        vuv_vector[data <= 0.0] = 0.0
    -
    -        ip_data = data
    -
    -        frame_number = data.size
    -        last_value = 0.0
    -        for i in range(frame_number):
    -            if data[i] <= 0.0:
    -                j = i + 1
    -                for j in range(i + 1, frame_number):
    -                    if data[j] > 0.0:
    -                        break
    -                if j < frame_number - 1:
    -                    if last_value > 0.0:
    -                        step = (data[j] - data[i - 1]) / float(j - i)
    -                        for k in range(i, j):
    -                            ip_data[k] = data[i - 1] + step * (k - i + 1)
    -                    else:
    -                        for k in range(i, j):
    -                            ip_data[k] = data[j]
    -                else:
    -                    for k in range(i, frame_number):
    -                        ip_data[k] = last_value
    -            else:
-                ip_data[i] = data[i]  # this copy may be unnecessary
    -                last_value = data[i]
    -
    -        return ip_data[:, 0], vuv_vector[:, 0]
    -
    -    def compute_f0(self, wav, p_len=None):
    -        x = wav
    -        if p_len is None:
    -            p_len = x.shape[0] // self.hop_length
    -        else:
    -            assert abs(p_len - x.shape[0] // self.hop_length) < 4, "pad length error"
    -        time_step = self.hop_length / self.sampling_rate * 1000
    -        f0 = (
    -            parselmouth.Sound(x, self.sampling_rate)
    -            .to_pitch_ac(
    -                time_step=time_step / 1000,
    -                voicing_threshold=0.6,
    -                pitch_floor=self.f0_min,
    -                pitch_ceiling=self.f0_max,
    -            )
    -            .selected_array["frequency"]
    -        )
    -
    -        pad_size = (p_len - len(f0) + 1) // 2
    -        if pad_size > 0 or p_len - len(f0) - pad_size > 0:
    -            f0 = np.pad(f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant")
    -        f0, uv = self.interpolate_f0(f0)
    -        return f0
    -
    -    def compute_f0_uv(self, wav, p_len=None):
    -        x = wav
    -        if p_len is None:
    -            p_len = x.shape[0] // self.hop_length
    -        else:
    -            assert abs(p_len - x.shape[0] // self.hop_length) < 4, "pad length error"
    -        time_step = self.hop_length / self.sampling_rate * 1000
    -        f0 = (
    -            parselmouth.Sound(x, self.sampling_rate)
    -            .to_pitch_ac(
    -                time_step=time_step / 1000,
    -                voicing_threshold=0.6,
    -                pitch_floor=self.f0_min,
    -                pitch_ceiling=self.f0_max,
    -            )
    -            .selected_array["frequency"]
    -        )
    -
    -        pad_size = (p_len - len(f0) + 1) // 2
    -        if pad_size > 0 or p_len - len(f0) - pad_size > 0:
    -            f0 = np.pad(f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant")
    -        f0, uv = self.interpolate_f0(f0)
    -        return f0, uv
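The interpolate_f0 helper above bridges unvoiced frames (reported as 0 Hz by Praat) from the neighbouring voiced frames and also returns a voiced/unvoiced mask. The tiny sketch below is not from the deleted file; it uses np.interp as a rough stand-in, so interior values can differ slightly from the loop-based fill above, but the edge clamping and the vuv mask behave the same way.

import numpy as np

f0 = np.array([0.0, 110.0, 0.0, 0.0, 130.0, 0.0], dtype=np.float32)  # 0 Hz marks an unvoiced frame
vuv = (f0 > 0).astype(np.float32)   # voiced/unvoiced mask, returned alongside the filled contour
voiced = np.flatnonzero(f0 > 0)
filled = np.interp(np.arange(f0.size), voiced, f0[voiced])
print(filled)  # roughly [110. 110. 116.7 123.3 130. 130.]; edges clamp to the nearest voiced value
print(vuv)     # [0. 1. 0. 0. 1. 0.]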
    diff --git a/spaces/radames/MusicGen-Continuation/audiocraft/quantization/base.py b/spaces/radames/MusicGen-Continuation/audiocraft/quantization/base.py
    deleted file mode 100644
    index 1b16c130d266fbd021d3fc29bb9f98c33dd3c588..0000000000000000000000000000000000000000
    --- a/spaces/radames/MusicGen-Continuation/audiocraft/quantization/base.py
    +++ /dev/null
    @@ -1,107 +0,0 @@
    -# Copyright (c) Meta Platforms, Inc. and affiliates.
    -# All rights reserved.
    -#
    -# This source code is licensed under the license found in the
    -# LICENSE file in the root directory of this source tree.
    -
    -"""
    -Base class for all quantizers.
    -"""
    -
    -from dataclasses import dataclass, field
    -import typing as tp
    -
    -import torch
    -from torch import nn
    -
    -
    -@dataclass
    -class QuantizedResult:
    -    x: torch.Tensor
    -    codes: torch.Tensor
    -    bandwidth: torch.Tensor  # bandwidth in kb/s used, per batch item.
    -    penalty: tp.Optional[torch.Tensor] = None
    -    metrics: dict = field(default_factory=dict)
    -
    -
    -class BaseQuantizer(nn.Module):
    -    """Base class for quantizers.
    -    """
    -
    -    def forward(self, x: torch.Tensor, frame_rate: int) -> QuantizedResult:
    -        """
    -        Given input tensor x, returns first the quantized (or approximately quantized)
    -        representation along with quantized codes, bandwidth, and any penalty term for the loss.
    -        Finally, this returns a dict of metrics to update logging etc.
    -        Frame rate must be passed so that the bandwidth is properly computed.
    -        """
    -        raise NotImplementedError()
    -
    -    def encode(self, x: torch.Tensor) -> torch.Tensor:
    -        """Encode a given input tensor with the specified sample rate at the given bandwidth.
    -        """
    -        raise NotImplementedError()
    -
    -    def decode(self, codes: torch.Tensor) -> torch.Tensor:
    -        """Decode the given codes to the quantized representation.
    -        """
    -        raise NotImplementedError()
    -
    -    @property
    -    def total_codebooks(self):
    -        """Total number of codebooks.
    -        """
    -        raise NotImplementedError()
    -
    -    @property
    -    def num_codebooks(self):
    -        """Number of active codebooks.
    -        """
    -        raise NotImplementedError()
    -
    -    def set_num_codebooks(self, n: int):
    -        """Set the number of active codebooks.
    -        """
    -        raise NotImplementedError()
    -
    -
    -class DummyQuantizer(BaseQuantizer):
    -    """Fake quantizer that actually does not perform any quantization.
    -    """
    -    def __init__(self):
    -        super().__init__()
    -
    -    def forward(self, x: torch.Tensor, frame_rate: int):
    -        q = x.unsqueeze(1)
    -        return QuantizedResult(x, q, torch.tensor(q.numel() * 32 * frame_rate / 1000 / len(x)).to(x))
    -
    -    def encode(self, x: torch.Tensor) -> torch.Tensor:
    -        """Encode a given input tensor with the specified sample rate at the given bandwidth.
    -        In the case of the DummyQuantizer, the codes are actually identical
    -        to the input and resulting quantized representation as no quantization is done.
    -        """
    -        return x.unsqueeze(1)
    -
    -    def decode(self, codes: torch.Tensor) -> torch.Tensor:
    -        """Decode the given codes to the quantized representation.
    -        In the case of the DummyQuantizer, the codes are actually identical
    -        to the input and resulting quantized representation as no quantization is done.
    -        """
    -        return codes.squeeze(1)
    -
    -    @property
    -    def total_codebooks(self):
    -        """Total number of codebooks.
    -        """
    -        return 1
    -
    -    @property
    -    def num_codebooks(self):
    -        """Total number of codebooks.
    -        """
    -        return self.total_codebooks
    -
    -    def set_num_codebooks(self, n: int):
    -        """Set the number of active codebooks.
    -        """
    -        raise AttributeError("Cannot override the number of codebooks for the dummy quantizer")
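To make the interface above concrete, here is a short sketch of the contract it defines: encode() and decode() of the DummyQuantizer are a lossless round-trip, and forward() packs the input, the codes and a bandwidth estimate into a QuantizedResult. The snippet is a stand-alone illustration, not audiocraft code; it re-declares QuantizedResult locally so it runs without the package installed (with audiocraft present you would import the real classes instead).

import typing as tp
from dataclasses import dataclass, field

import torch


@dataclass
class QuantizedResult:  # local stand-in mirroring the dataclass above
    x: torch.Tensor
    codes: torch.Tensor
    bandwidth: torch.Tensor  # kb/s, per batch item
    penalty: tp.Optional[torch.Tensor] = None
    metrics: dict = field(default_factory=dict)


x = torch.randn(2, 8, 50)            # (batch, channels, frames)
frame_rate = 50                      # frames per second
codes = x.unsqueeze(1)               # what DummyQuantizer.encode returns
decoded = codes.squeeze(1)           # what DummyQuantizer.decode returns
bandwidth = torch.tensor(codes.numel() * 32 * frame_rate / 1000 / len(x))
result = QuantizedResult(x, codes, bandwidth)
assert torch.equal(decoded, x)       # the dummy round-trip is lossless
print(tuple(result.codes.shape), float(result.bandwidth), "kb/s")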
    diff --git a/spaces/radames/transformers-js-sveltekit-static-example-app/_app/immutable/chunks/singletons.0131ad8a.js b/spaces/radames/transformers-js-sveltekit-static-example-app/_app/immutable/chunks/singletons.0131ad8a.js
    deleted file mode 100644
    index 7c6a3b2b086553a40c6939690da365d8f7fa2fdd..0000000000000000000000000000000000000000
    --- a/spaces/radames/transformers-js-sveltekit-static-example-app/_app/immutable/chunks/singletons.0131ad8a.js
    +++ /dev/null
    @@ -1 +0,0 @@
    -import{n as d,s as v}from"./scheduler.e108d1fd.js";const u=[];function p(e,t=d){let n;const o=new Set;function r(s){if(v(e,s)&&(e=s,n)){const c=!u.length;for(const i of o)i[1](),u.push(i,e);if(c){for(let i=0;i{o.delete(i),o.size===0&&n&&(n(),n=null)}}return{set:r,update:l,subscribe:a}}var g;const E=((g=globalThis.__sveltekit_e6msii)==null?void 0:g.base)??"";var k;const w=((k=globalThis.__sveltekit_e6msii)==null?void 0:k.assets)??E,A="1692221078782",y="sveltekit:snapshot",I="sveltekit:scroll",x="sveltekit:index",_={tap:1,hover:2,viewport:3,eager:4,off:-1};function O(e){let t=e.baseURI;if(!t){const n=e.getElementsByTagName("base");t=n.length?n[0].href:e.URL}return t}function U(){return{x:pageXOffset,y:pageYOffset}}function f(e,t){return e.getAttribute(`data-sveltekit-${t}`)}const b={..._,"":_.hover};function m(e){let t=e.assignedSlot??e.parentNode;return(t==null?void 0:t.nodeType)===11&&(t=t.host),t}function L(e,t){for(;e&&e!==t;){if(e.nodeName.toUpperCase()==="A"&&e.hasAttribute("href"))return e;e=m(e)}}function N(e,t){let n;try{n=new URL(e instanceof SVGAElement?e.href.baseVal:e.href,document.baseURI)}catch{}const o=e instanceof SVGAElement?e.target.baseVal:e.target,r=!n||!!o||S(n,t)||(e.getAttribute("rel")||"").split(/\s+/).includes("external"),l=(n==null?void 0:n.origin)===location.origin&&e.hasAttribute("download");return{url:n,external:r,target:o,download:l}}function P(e){let t=null,n=null,o=null,r=null,l=null,a=null,s=e;for(;s&&s!==document.documentElement;)o===null&&(o=f(s,"preload-code")),r===null&&(r=f(s,"preload-data")),t===null&&(t=f(s,"keepfocus")),n===null&&(n=f(s,"noscroll")),l===null&&(l=f(s,"reload")),a===null&&(a=f(s,"replacestate")),s=m(s);function c(i){switch(i){case"":case"true":return!0;case"off":case"false":return!1;default:return null}}return{preload_code:b[o??"off"],preload_data:b[r??"off"],keep_focus:c(t),noscroll:c(n),reload:c(l),replace_state:c(a)}}function h(e){const t=p(e);let n=!0;function o(){n=!0,t.update(a=>a)}function r(a){n=!1,t.set(a)}function l(a){let s;return t.subscribe(c=>{(s===void 0||n&&c!==s)&&a(s=c)})}return{notify:o,set:r,subscribe:l}}function R(){const{set:e,subscribe:t}=p(!1);let n;async function o(){clearTimeout(n);try{const r=await fetch(`${w}/_app/version.json`,{headers:{pragma:"no-cache","cache-control":"no-cache"}});if(!r.ok)return!1;const a=(await r.json()).version!==A;return a&&(e(!0),clearTimeout(n)),a}catch{return!1}}return{subscribe:t,check:o}}function S(e,t){return e.origin!==location.origin||!e.pathname.startsWith(t)}function V(e){e.client}const Y={url:h({}),page:h({}),navigating:p(null),updated:R()};export{x as I,_ as P,I as S,y as a,N as b,P as c,Y as d,E as e,L as f,O as g,V as h,S as i,U as s};
    diff --git a/spaces/raedeXanto/academic-chatgpt-beta/CARS DISNEY - PIXAR kotsikos2001 tool What Makes this Game Different from Other Racing Games.md b/spaces/raedeXanto/academic-chatgpt-beta/CARS DISNEY - PIXAR kotsikos2001 tool What Makes this Game Different from Other Racing Games.md
    deleted file mode 100644
    index 81613f28041eae2db008bd0e238c60d6732171b7..0000000000000000000000000000000000000000
    --- a/spaces/raedeXanto/academic-chatgpt-beta/CARS DISNEY - PIXAR kotsikos2001 tool What Makes this Game Different from Other Racing Games.md	
    +++ /dev/null
    @@ -1,140 +0,0 @@
    -
    -

    CARS DISNEY - PIXAR kotsikos2001 tool

    -

If you are a fan of the Cars movie franchise by Disney and Pixar, you might have wondered what it would be like to create your own cars and race them on different tracks. Well, wonder no more, because there is a tool that lets you do just that. It is called CARS DISNEY - PIXAR kotsikos2001 tool, and it is free software that allows you to design, customize, and drive your own cars in a 3D environment inspired by the movies. In this article, we will tell you everything you need to know about this amazing tool, including its features, how to use it, and what benefits it can bring to you.

    -

    Introduction

    -

    CARS DISNEY - PIXAR kotsikos2001 tool is a software that was created by a fan of the Cars movies, who goes by the name of kotsikos2001. He developed this tool as a hobby project, using Blender, Python, and Unreal Engine. He wanted to share his passion for cars and animation with other fans, and give them a chance to experience the world of Cars in a new way. He released his tool for free on his website, where you can also find tutorials, updates, and feedback from other users.

    -

    CARS DISNEY - PIXAR kotsikos2001 tool


Download » https://tinourl.com/2uL0JS



    -

    What is CARS DISNEY - PIXAR kotsikos2001 tool?

    -

    CARS DISNEY - PIXAR kotsikos2001 tool is a software that lets you create your own cars and race them on various tracks. You can choose from different models of cars, such as Lightning McQueen, Mater, Sally, Cruz Ramirez, Jackson Storm, and many more. You can also customize their appearance, color, decals, wheels, spoilers, etc. You can then select from different tracks, such as Radiator Springs, Florida Speedway, Tokyo Driftway, Route 66, etc. You can also adjust the weather, time of day, traffic, obstacles, etc. You can then drive your car using your keyboard or a gamepad. You can race against other cars controlled by the computer or by other players online. You can also explore the tracks freely or do stunts and tricks.

    -

    Why use CARS DISNEY - PIXAR kotsikos2001 tool?

    -

    CARS DISNEY - PIXAR kotsikos2001 tool is a fun and educational software that can appeal to anyone who loves cars and animation. It is especially suitable for children who are fans of the Cars movies. By using this tool, they can:

    -
      -
    • Express their creativity and imagination by designing their own cars
    • -
    • Improve their driving skills and knowledge by learning about speed, acceleration, braking, steering, etc.
    • -
    • Learn about the world of cars and racing by discovering different types of vehicles, tracks, locations, etc.
    • -
    • Have fun with their friends and family by playing together online or offline
    • -
    -

    Features of CARS DISNEY - PIXAR kotsikos2001 tool

    -

    CARS DISNEY - PIXAR kotsikos2001 tool has many features that make it an enjoyable and versatile software. Some of these features are:

    -

    Easy to use interface

    -

    The tool has a simple and intuitive interface that allows you to easily navigate through the menus and options. You can access the main menu by pressing Esc on your keyboard or Start on your gamepad. From there, you can select one of the four modes: Create Car Mode (where you can design your own car), Select Car Mode (where you can choose from existing cars), Select Track Mode (where you can choose from existing tracks), or Race Mode (where you can start racing). You can also change the settings of the game (such as sound volume, graphics quality, language) or exit the game.

    -

    Customizable cars and tracks

    -

    The tool gives you a lot of freedom and flexibility to create your own cars and tracks. You can use various tools and options to modify every aspect of your car or track. For example:

    -
      -
    • You can change the shape of your car by using different parts (such as bodywork, windows, lights, grill, etc.)
    • -
    • You can change the color of your car by using different paints (such as metallic, matte, glossy, etc.)
    • -
    • You can add decals to your car by using different stickers (such as logos, numbers, flags, etc.)
    • -
    • You can change the wheels of your car by using different rims (such as alloy, steel, chrome, etc.)
    • -
    • You can add spoilers to your car by using different wings (such as low, high, curved, etc.)
    • -
    • You can change the track layout by using different segments (such as straight, curved, slope, loop, etc.)
    • -
    • You can change the track environment by using different scenery (such as buildings, trees, rocks, signs, etc.)
    • -
    • You can change the track conditions by using different weather (such as sunny, rainy, snowy, foggy, etc.)
    • -
    -

    Realistic physics and graphics

    -

    The tool uses Unreal Engine 4 to render realistic physics and graphics for the cars and tracks. The cars behave according to their weight, speed, friction, aerodynamics, etc. The tracks have realistic lighting, shadows, reflections, textures, etc. The tool also supports high-resolution displays and VR headsets for an immersive experience.

    -

    Cars Disney Pixar movie download
    -Cars Disney Pixar kotsikos2001 tool crack
    -Cars Disney Pixar official site
    -Cars Disney Pixar games online
    -Cars Disney Pixar characters names
    -Cars Disney Pixar coloring pages
    -Cars Disney Pixar merchandise
    -Cars Disney Pixar soundtrack
    -Cars Disney Pixar trivia
    -Cars Disney Pixar quotes
    -Cars Disney Pixar wallpapers
    -Cars Disney Pixar toys
    -Cars Disney Pixar posters
    -Cars Disney Pixar theme park
    -Cars Disney Pixar costumes
    -Cars Disney Pixar cake
    -Cars Disney Pixar invitations
    -Cars Disney Pixar party supplies
    -Cars Disney Pixar birthday ideas
    -Cars Disney Pixar stickers
    -Cars Disney Pixar decals
    -Cars Disney Pixar bedding
    -Cars Disney Pixar curtains
    -Cars Disney Pixar rugs
    -Cars Disney Pixar lamps
    -Cars Disney Pixar backpacks
    -Cars Disney Pixar lunch boxes
    -Cars Disney Pixar water bottles
    -Cars Disney Pixar watches
    -Cars Disney Pixar jewelry
    -Cars Disney Pixar clothing
    -Cars Disney Pixar shoes
    -Cars Disney Pixar hats
    -Cars Disney Pixar jackets
    -Cars Disney Pixar pajamas
    -Cars Disney Pixar slippers
    -Cars Disney Pixar socks
    -Cars Disney Pixar underwear
    -Cars Disney Pixar masks
    -Cars Disney Pixar puzzles
    -Cars Disney Pixar books
    -Cars Disney Pixar comics
    -Cars Disney Pixar magazines
    -Cars Disney Pixar DVDs
    -Cars Disney Pixar Blu-rays
    -Cars Disney Pixar video games
    -Cars Disney Pixar console games
    -Cars Disney Pixar mobile games
    -Cars Disney Pixar board games
    -Cars Disney Pixar card games

    -

    Fun and educational gameplay

    -

    The tool offers fun and educational gameplay for all ages. You can drive your car using your keyboard or a gamepad. You can control the speed, acceleration, braking, steering, etc. You can also use nitro boosters or drift techniques to gain an advantage over your opponents. You can race against other cars controlled by the computer or by other players online. You can also explore the tracks freely or do stunts and tricks. You can earn points and trophies for completing races or challenges. You can also learn about the world of cars and racing by reading facts and trivia about different types of vehicles, tracks, locations, etc.

    -

    How to use CARS DISNEY - PIXAR kotsikos2001 tool

    -

    CARS DISNEY - PIXAR kotsikos2001 tool is easy to use for anyone who has a computer and an internet connection. Here are the steps to use it:

    -

    Download and install the tool

    -

    The first step is to download and install the tool on your computer. You can do this by visiting the official website of kotsikos2001 at https://kotsikos2001.com/cars-disney-pixar-tool/ There you will find the download link for the latest version of the tool. You will also find the system requirements for running the tool. Make sure that your computer meets them before downloading. The download file size is about 2 GB. Once you have downloaded the file, you need to unzip it and run the setup.exe file. Follow the instructions on screen to complete the installation.

    -

    Launch the tool and select a mode

    -

The second step is to launch the tool and select a mode. You can do this by double-clicking on the desktop icon or finding it in your start menu. The tool will open in full screen mode. You will see a splash screen with the logo of CARS DISNEY - PIXAR kotsikos2001 tool. After a few seconds, the main menu appears with four options: Create Car Mode, Select Car Mode, Select Track Mode, and Race Mode. You can use your mouse, keyboard, or gamepad to navigate through the menu and select an option. You can also press Esc or Start to access the settings menu or exit the game.

    -

    Choose your car and track

    -

    The third step is to choose your car and track. Depending on which mode you selected, you will have different options to do this. For example:

    -
      -
    • If you selected Create Car Mode, you will see a 3D model of a car that you can customize using various tools and options. You can rotate, zoom, or move the car using your mouse or your gamepad. You can also use the tabs on the left side of the screen to access different parts of the car (such as bodywork, paint, decals, wheels, spoilers, etc.). You can use the sliders, buttons, or color pickers on the right side of the screen to modify each part of the car. You can also use the buttons on the bottom of the screen to save, load, or reset your car. Once you are happy with your car, you can press Enter or A to confirm it and proceed to Select Track Mode.
    • -
    • If you selected Select Car Mode, you will see a list of cars that you can choose from. You can use your mouse or your keyboard or your gamepad to scroll through the list and select a car. You can also use the buttons on the bottom of the screen to filter the cars by category (such as movie characters, real cars, fantasy cars, etc.). You can also use the buttons on the top of the screen to sort the cars by name, speed, acceleration, handling, etc. Once you have selected a car, you can press Enter or A to confirm it and proceed to Select Track Mode.
    • -
    • If you selected Select Track Mode, you will see a list of tracks that you can choose from. You can use your mouse or your keyboard or your gamepad to scroll through the list and select a track. You can also use the buttons on the bottom of the screen to filter the tracks by category (such as movie locations, real locations, fantasy locations, etc.). You can also use the buttons on the top of the screen to sort the tracks by name, length, difficulty, etc. Once you have selected a track, you can press Enter or A to confirm it and proceed to Race Mode.
    • -
    -

    Start racing and enjoy

    -

    The fourth and final step is to start racing and enjoy. Once you have chosen your car and track, you will see a loading screen with some tips and facts about them. After a few seconds, you will see the race screen with your car and other cars on the track. You can use your keyboard or your gamepad to drive your car. You can use the arrow keys or the left stick to steer your car. You can use the space bar or the right trigger to accelerate your car. You can use the left shift key or the left trigger to brake or reverse your car. You can use the Z key or the X button to activate the nitro booster. You can use the X key or the A button to drift your car. You can also use the C key or the Y button to change the camera view. the game or access the settings menu or exit the game. You can race for as long as you want or until you finish the lap or the time limit. You can also see your position, lap time, speed, and nitro level on the screen. You can also hear the sound effects and the voice of your car or other cars. You can have fun racing and exploring the track or doing stunts and tricks. You can also earn points and trophies for completing races or challenges. You can also learn about the world of cars and racing by reading facts and trivia about different types of vehicles, tracks, locations, etc.

    -

    Conclusion

    -

    CARS DISNEY - PIXAR kotsikos2001 tool is a free software that lets you create your own cars and race them on various tracks. It is a fun and educational software that can appeal to anyone who loves cars and animation. It is especially suitable for children who are fans of the Cars movies. By using this tool, they can express their creativity and imagination by designing their own cars, improve their driving skills and knowledge by learning about speed, acceleration, braking, steering, etc., learn about the world of cars and racing by discovering different types of vehicles, tracks, locations, etc., and have fun with their friends and family by playing together online or offline. The tool has many features that make it an enjoyable and versatile software, such as easy to use interface, customizable cars and tracks, realistic physics and graphics, fun and educational gameplay. The tool is easy to use for anyone who has a computer and an internet connection. You just need to download and install the tool on your computer, launch the tool and select a mode, choose your car and track, and start racing and enjoying.

    -

    FAQs

    -

    Here are some frequently asked questions about CARS DISNEY - PIXAR kotsikos2001 tool:

    -
      -
    • Q: Is CARS DISNEY - PIXAR kotsikos2001 tool safe to use?
    • -
    • A: Yes, CARS DISNEY - PIXAR kotsikos2001 tool is safe to use. It does not contain any viruses, malware, spyware, or adware. It does not collect any personal information or data from your computer. It does not require any registration or payment to use. It does not interfere with any other programs or applications on your computer.
    • -
    • Q: Is CARS DISNEY - PIXAR kotsikos2001 tool compatible with my computer?
    • -
    • A: CARS DISNEY - PIXAR kotsikos2001 tool is compatible with most computers that run Windows 7 or higher. However, you need to make sure that your computer meets the minimum system requirements for running the tool. These are:
    • -
        -
      • CPU: Intel Core i3-2100 or AMD FX-6300
      • -
      • RAM: 4 GB
      • -
      • GPU: NVIDIA GeForce GTX 750 Ti or AMD Radeon R7 260X
      • -
      • Storage: 5 GB
      • -
      • Internet: Broadband connection
      • -
      -
    • If your computer does not meet these requirements, you may experience lagging, crashing, or freezing while using the tool.
    • -
    • Q: How can I update CARS DISNEY - PIXAR kotsikos2001 tool?
    • -
    • A: CARS DISNEY - PIXAR kotsikos2001 tool is regularly updated by its developer, kotsikos2001. He adds new features, cars, tracks, etc. to the tool based on his own ideas or feedback from users. You can check for updates by visiting the official website of kotsikos2001 at https://kotsikos2001.com/cars-disney-pixar-tool/ There you will find the latest version of the tool and a changelog of what's new. You can also follow kotsikos2001 on his social media accounts such as Facebook, Twitter, YouTube, etc. where he posts updates and news about the tool. To update the tool, you just need to download the latest version and install it over the previous one.
    • -
    • Q: How can I contact CARS DISNEY - PIXAR kotsikos2001 tool developer?
    • -
    • A: If you have any questions, comments, suggestions, or issues about CARS DISNEY - PIXAR kotsikos2001 tool, you can contact its developer, kotsikos2001, by using one of these methods:
    • -
        -
      • Email: kotsikos2001@gmail.com
      • -
      • Website: https://kotsikos2001.com/contact/
      • -
      • Facebook: https://www.facebook.com/kotsikos2001/
      • -
      • Twitter: https://twitter.com/kotsikos2001/
      • -
      • YouTube: https://www.youtube.com/channel/UCwZm8f9nZ6y9cYXlqJxwz4g/
      • -
      -
    • kotsikos2001 is very friendly and responsive to his users. He will try to answer your messages as soon as possible.
    • -
    • Q: How can I support CARS DISNEY - PIXAR kotsikos2001 tool developer?
    • -
    • A: If you like CARS DISNEY - PIXAR kotsikos2001 tool and want to support its developer, kotsikos2001, you can do so by using one of these methods:
    • -
        -
      • Donate: You can donate any amount of money to kotsikos2001 via PayPal at https://www.paypal.me/kotsikos2001/ Your donation will help him cover the costs of developing and maintaining the tool.
      • -
      • Share: You can share CARS DISNEY - PIXAR kotsikos2001 tool with your friends and family who might be interested in it. You can also share your creations and experiences with the tool on social media platforms such as Facebook, Twitter, YouTube, etc. You can also leave a positive review or rating for the tool on its website or other platforms where it is available.
      • -
      • Thank: You can thank kotsikos2001 for creating this amazing tool by sending him a message of appreciation or gratitude via email or social media. You can also thank him by giving him feedback or suggestions on how to improve the tool.
      • -
      -

      0a6ba089eb
      -
      -
\ No newline at end of file
diff --git a/spaces/ramiin2/AutoGPT/tests/test_json_parser.py b/spaces/ramiin2/AutoGPT/tests/test_json_parser.py
deleted file mode 100644
index 41c90a6f66c0b0468f1443de80033cc4f268eca0..0000000000000000000000000000000000000000
--- a/spaces/ramiin2/AutoGPT/tests/test_json_parser.py
+++ /dev/null
@@ -1,111 +0,0 @@
-import unittest
-
-import tests.context
-from autogpt.json_utils.json_fix_llm import fix_and_parse_json
-
-
-class TestParseJson(unittest.TestCase):
-    def test_valid_json(self):
-        # Test that a valid JSON string is parsed correctly
-        json_str = '{"name": "John", "age": 30, "city": "New York"}'
-        obj = fix_and_parse_json(json_str)
-        self.assertEqual(obj, {"name": "John", "age": 30, "city": "New York"})
-
-    def test_invalid_json_minor(self):
-        # Test that an invalid JSON string can be fixed with gpt
-        json_str = '{"name": "John", "age": 30, "city": "New York",}'
-        with self.assertRaises(Exception):
-            fix_and_parse_json(json_str, try_to_fix_with_gpt=False)
-
-    def test_invalid_json_major_with_gpt(self):
-        # Test that an invalid JSON string raises an error when try_to_fix_with_gpt is False
-        json_str = 'BEGIN: "name": "John" - "age": 30 - "city": "New York" :END'
-        with self.assertRaises(Exception):
-            fix_and_parse_json(json_str, try_to_fix_with_gpt=False)
-
-    def test_invalid_json_major_without_gpt(self):
-        # Test that a REALLY invalid JSON string raises an error when try_to_fix_with_gpt is False
-        json_str = 'BEGIN: "name": "John" - "age": 30 - "city": "New York" :END'
-        # Assert that this raises an exception:
-        with self.assertRaises(Exception):
-            fix_and_parse_json(json_str, try_to_fix_with_gpt=False)
-
-    def test_invalid_json_leading_sentence_with_gpt(self):
-        # Test that a REALLY invalid JSON string raises an error when try_to_fix_with_gpt is False
-        json_str = """I suggest we start by browsing the repository to find any issues that we can fix.
-
-{
-    "command": {
-        "name": "browse_website",
-        "args":{
-            "url": "https://github.com/Torantulino/Auto-GPT"
-        }
-    },
-    "thoughts":
-    {
-        "text": "I suggest we start browsing the repository to find any issues that we can fix.",
-        "reasoning": "Browsing the repository will give us an idea of the current state of the codebase and identify any issues that we can address to improve the repo.",
-        "plan": "- Look through the repository to find any issues.\n- Investigate any issues to determine what needs to be fixed\n- Identify possible solutions to fix the issues\n- Open Pull Requests with fixes",
-        "criticism": "I should be careful while browsing so as not to accidentally introduce any new bugs or issues.",
-        "speak": "I will start browsing the repository to find any issues we can fix."
-    }
-}"""
-        good_obj = {
-            "command": {
-                "name": "browse_website",
-                "args": {"url": "https://github.com/Torantulino/Auto-GPT"},
-            },
-            "thoughts": {
-                "text": "I suggest we start browsing the repository to find any issues that we can fix.",
-                "reasoning": "Browsing the repository will give us an idea of the current state of the codebase and identify any issues that we can address to improve the repo.",
-                "plan": "- Look through the repository to find any issues.\n- Investigate any issues to determine what needs to be fixed\n- Identify possible solutions to fix the issues\n- Open Pull Requests with fixes",
-                "criticism": "I should be careful while browsing so as not to accidentally introduce any new bugs or issues.",
-                "speak": "I will start browsing the repository to find any issues we can fix.",
-            },
-        }
-        # Assert that this raises an exception:
-        self.assertEqual(
-            fix_and_parse_json(json_str, try_to_fix_with_gpt=False), good_obj
-        )
-
-    def test_invalid_json_leading_sentence_with_gpt(self):
-        # Test that a REALLY invalid JSON string raises an error when try_to_fix_with_gpt is False
-        json_str = """I will first need to browse the repository (https://github.com/Torantulino/Auto-GPT) and identify any potential bugs that need fixing. I will use the "browse_website" command for this.
-
-{
-    "command": {
-        "name": "browse_website",
-        "args":{
-            "url": "https://github.com/Torantulino/Auto-GPT"
-        }
-    },
-    "thoughts":
-    {
-        "text": "Browsing the repository to identify potential bugs",
-        "reasoning": "Before fixing bugs, I need to identify what needs fixing. I will use the 'browse_website' command to analyze the repository.",
-        "plan": "- Analyze the repository for potential bugs and areas of improvement",
-        "criticism": "I need to ensure I am thorough and pay attention to detail while browsing the repository.",
-        "speak": "I am browsing the repository to identify potential bugs."
-    }
-}"""
-        good_obj = {
-            "command": {
-                "name": "browse_website",
-                "args": {"url": "https://github.com/Torantulino/Auto-GPT"},
-            },
-            "thoughts": {
-                "text": "Browsing the repository to identify potential bugs",
-                "reasoning": "Before fixing bugs, I need to identify what needs fixing. I will use the 'browse_website' command to analyze the repository.",
-                "plan": "- Analyze the repository for potential bugs and areas of improvement",
-                "criticism": "I need to ensure I am thorough and pay attention to detail while browsing the repository.",
-                "speak": "I am browsing the repository to identify potential bugs.",
-            },
-        }
-        # Assert that this raises an exception:
-        self.assertEqual(
-            fix_and_parse_json(json_str, try_to_fix_with_gpt=False), good_obj
-        )
-
-
-if __name__ == "__main__":
-    unittest.main()
diff --git a/spaces/ramiin2/AutoGPT/tests/unit/test_chat.py b/spaces/ramiin2/AutoGPT/tests/unit/test_chat.py
deleted file mode 100644
index 774f4103762c28d5a02e89c14b224fae0bc0756a..0000000000000000000000000000000000000000
--- a/spaces/ramiin2/AutoGPT/tests/unit/test_chat.py
+++ /dev/null
@@ -1,86 +0,0 @@
-# Generated by CodiumAI
-import time
-import unittest
-from unittest.mock import patch
-
-from autogpt.chat import create_chat_message, generate_context
-
-
-class TestChat(unittest.TestCase):
-    # Tests that the function returns a dictionary with the correct keys and values when valid strings are provided for role and content.
-    def test_happy_path_role_content(self):
-        result = create_chat_message("system", "Hello, world!")
-        self.assertEqual(result, {"role": "system", "content": "Hello, world!"})
-
-    # Tests that the function returns a dictionary with the correct keys and values when empty strings are provided for role and content.
-    def test_empty_role_content(self):
-        result = create_chat_message("", "")
-        self.assertEqual(result, {"role": "", "content": ""})
-
-    # Tests the behavior of the generate_context function when all input parameters are empty.
-    @patch("time.strftime")
-    def test_generate_context_empty_inputs(self, mock_strftime):
-        # Mock the time.strftime function to return a fixed value
-        mock_strftime.return_value = "Sat Apr 15 00:00:00 2023"
-        # Arrange
-        prompt = ""
-        relevant_memory = ""
-        full_message_history = []
-        model = "gpt-3.5-turbo-0301"
-
-        # Act
-        result = generate_context(prompt, relevant_memory, full_message_history, model)
-
-        # Assert
-        expected_result = (
-            -1,
-            47,
-            3,
-            [
-                {"role": "system", "content": ""},
-                {
-                    "role": "system",
-                    "content": f"The current time and date is {time.strftime('%c')}",
-                },
-                {
-                    "role": "system",
-                    "content": f"This reminds you of these events from your past:\n\n\n",
-                },
-            ],
-        )
-        self.assertEqual(result, expected_result)
-
-    # Tests that the function successfully generates a current_context given valid inputs.
-    def test_generate_context_valid_inputs(self):
-        # Given
-        prompt = "What is your favorite color?"
-        relevant_memory = "You once painted your room blue."
-        full_message_history = [
-            create_chat_message("user", "Hi there!"),
-            create_chat_message("assistant", "Hello! How can I assist you today?"),
-            create_chat_message("user", "Can you tell me a joke?"),
-            create_chat_message(
-                "assistant",
-                "Why did the tomato turn red? Because it saw the salad dressing!",
-            ),
-            create_chat_message("user", "Haha, that's funny."),
-        ]
-        model = "gpt-3.5-turbo-0301"
-
-        # When
-        result = generate_context(prompt, relevant_memory, full_message_history, model)
-
-        # Then
-        self.assertIsInstance(result[0], int)
-        self.assertIsInstance(result[1], int)
-        self.assertIsInstance(result[2], int)
-        self.assertIsInstance(result[3], list)
-        self.assertGreaterEqual(result[0], 0)
-        self.assertGreaterEqual(result[1], 0)
-        self.assertGreaterEqual(result[2], 0)
-        self.assertGreaterEqual(
-            len(result[3]), 3
-        )  # current_context should have at least 3 messages
-        self.assertLessEqual(
-            result[1], 2048
-        )  # token limit for GPT-3.5-turbo-0301 is 2048 tokens
diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Abacom FrontDesigner V3 0 En De Fr ISO [PORTABLE].md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Abacom FrontDesigner V3 0 En De Fr ISO [PORTABLE].md
deleted file mode 100644
index f2304a44b7a49c5f84b2ca62c748ea4384c89d2b..0000000000000000000000000000000000000000
--- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Abacom FrontDesigner V3 0 En De Fr ISO [PORTABLE].md
+++ /dev/null
@@ -1,115 +0,0 @@
-
-

      Abacom FrontDesigner v3 0 En De Fr ISO: A Review

      - -

      If you are looking for a software that can help you design professional front panels for your electronic projects, you might want to check out Abacom FrontDesigner v3 0 En De Fr ISO. This software is a powerful tool that offers many features and functions to create stunning front panels with ease.

      -

      Abacom FrontDesigner v3 0 En De Fr ISO


      DOWNLOAD ►►►►► https://urlgoal.com/2uCJGI



      - -

      What is Abacom FrontDesigner v3 0 En De Fr ISO?

      - -

      Abacom FrontDesigner v3 0 En De Fr ISO is a software that allows you to create front panels for your electronic devices. It is compatible with Windows operating systems and supports English, German and French languages. It comes as an ISO file that you can burn to a CD or mount to a virtual drive.

      - -

      What are the features of Abacom FrontDesigner v3 0 En De Fr ISO?

      - -

      Abacom FrontDesigner v3 0 En De Fr ISO has many features that make it a versatile and user-friendly software for front panel design. Some of the features are:

      - -
        -
      • Comfortable drawing functions for rectangles, polygons, ellipses, labels, drillings and more.
      • -
      • Predefined and user-editable library of symbols and labels.
      • -
      • A scale-assistant that creates scales for switches, potentiometers and instruments.
      • -
      • Measurement options that simplify drilling and cutting.
      • -
      • A mirrored printout to transparent film that gives a long-life panel design.
      • -
• A new HPGL export that creates PLT files (plain-text plotter commands) so you can mill and engrave your front panel; a short illustrative sketch of the format follows this list.
      • -
      • Specialized functions for rotation, stretching, mirroring, drilling, milling and more.
      • -
      • Rounded and interpolated contours and chamfers.
      • -
      • Dockable tools and grid and capture options.
      • -
      - -
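The HPGL export mentioned in the list above writes plain-text plotter commands into a .plt file. As a rough, hypothetical illustration only (this is not output generated by FrontDesigner, and the coordinates, pen number and file name are invented), a minimal outline for a 100 mm x 50 mm panel could be produced like this, using the common HPGL unit of 0.025 mm:

# Hypothetical .plt contents: IN initialises the plotter, SP selects a tool,
# PU/PD move with the tool raised or lowered; 4000 units = 100 mm at 0.025 mm per unit.
commands = [
    "IN;",
    "SP1;",
    "PU0,0;",
    "PD4000,0;",
    "PD4000,2000;",
    "PD0,2000;",
    "PD0,0;",
    "PU;SP0;",
]
with open("front_panel_outline.plt", "w") as f:
    f.write("\n".join(commands) + "\n")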

      How to use Abacom FrontDesigner v3 0 En De Fr ISO?

      - -

      To use Abacom FrontDesigner v3 0 En De Fr ISO, you need to download the ISO file from the official website or from a trusted source. Then, you need to burn the ISO file to a CD or mount it to a virtual drive using a software like Daemon Tools. After that, you can install the software on your computer and start designing your front panels. You can use the help file or the online manual to learn how to use the software effectively.

      - -

      What are the benefits of Abacom FrontDesigner v3 0 En De Fr ISO?

      - -

      Abacom FrontDesigner v3 0 En De Fr ISO has many benefits for electronic hobbyists and professionals who want to create custom front panels for their devices. Some of the benefits are:

      - -
        -
      • It saves time and money by allowing you to design your own front panels instead of buying them from specialized dealers.
      • -
      • It gives you more control and creativity over your front panel design by offering many options and features.
      • -
      • It improves the appearance and functionality of your electronic devices by providing good-looking and fitting front panels.
      • -
      • It supports multiple languages and formats so you can use it in different countries and situations.
      • -
      • It is easy to use and learn with a user-friendly interface and comprehensive documentation.
      • -
      - -

      Conclusion

      - -

      Abacom FrontDesigner v3 0 En De Fr ISO is a software that can help you design professional front panels for your electronic projects. It has many features and functions that make it a powerful tool for front panel design. It is compatible with Windows operating systems and supports English, German and French languages. It comes as an ISO file that you can burn to a CD or mount to a virtual drive. It has many benefits for electronic hobbyists and professionals who want to create custom front panels for their devices. If you are interested in Abacom FrontDesigner v3 0 En De Fr ISO, you can download it from the official website or from a trusted source.

      -

      -

      How to download Abacom FrontDesigner v3 0 En De Fr ISO?

      - -

      To download Abacom FrontDesigner v3 0 En De Fr ISO, you need to visit the official website of Abacom or a trusted source that provides the ISO file. You need to make sure that the file is safe and virus-free before downloading it. You also need to have enough space on your computer or external drive to store the ISO file. The file size is about 46.93 MB.

      - -

      How to install Abacom FrontDesigner v3 0 En De Fr ISO?

      - -

      To install Abacom FrontDesigner v3 0 En De Fr ISO, you need to have a CD burner or a virtual drive software on your computer. You can use a software like Nero or Daemon Tools to burn the ISO file to a CD or mount it to a virtual drive. Then, you can run the setup.exe file from the CD or the virtual drive and follow the instructions on the screen. You may need to enter a password or a serial number to complete the installation. The password or the serial number can be found on the website or the source where you downloaded the ISO file.

      - -

      How to update Abacom FrontDesigner v3 0 En De Fr ISO?

      - -

      To update Abacom FrontDesigner v3 0 En De Fr ISO, you need to check the official website of Abacom or the source where you downloaded the ISO file for any new versions or patches. You can also use the update function in the software to check for updates automatically. If there is a new version or a patch available, you can download it and install it over your existing version. You may need to enter a password or a serial number again to update the software.

      -

      What are the drawbacks of Abacom FrontDesigner v3 0 En De Fr ISO?

      - -

      Although Abacom FrontDesigner v3 0 En De Fr ISO is a great software for front panel design, it also has some drawbacks that you should be aware of before using it. Some of the drawbacks are:

      - -
        -
      • It is not compatible with Mac or Linux operating systems, so you need to have a Windows computer to use it.
      • -
      • It is not free, so you need to pay a license fee to use it. The license fee is 49.90 EUR for a single user license and 99.90 EUR for a multi user license.
      • -
      • It may not support all types of printers or engravers, so you need to check the compatibility before printing or engraving your front panel.
      • -
      • It may not have all the symbols or labels that you need, so you may need to create your own or import them from other sources.
      • -
      • It may have some bugs or errors that can affect the performance or quality of your front panel design.
      • -
      - -

      What are the alternatives to Abacom FrontDesigner v3 0 En De Fr ISO?

      - -

      If you are not satisfied with Abacom FrontDesigner v3 0 En De Fr ISO or you want to try other software for front panel design, you can check out some of the alternatives that are available online. Some of the alternatives are:

      - -
        -
      • Front Panel Express: This is a software that allows you to design and order custom front panels online. It has a user-friendly interface and a large library of symbols and labels. It also offers CNC machining and engraving services.
      • -
      • Schaeffer AG: This is a company that offers front panel design and manufacturing services. You can use their online software to design your front panel and then order it from them. They have high-quality materials and processes.
      • -
      • PanelBuilder32: This is a software that allows you to design and print front panels for Allen-Bradley industrial control products. It has a simple interface and a database of symbols and labels. It also supports multiple languages and formats.
      • -
      - -

      Conclusion

      - -

      Abacom FrontDesigner v3 0 En De Fr ISO is a software that can help you design professional front panels for your electronic projects. It has many features and functions that make it a powerful tool for front panel design. It is compatible with Windows operating systems and supports English, German and French languages. It comes as an ISO file that you can burn to a CD or mount to a virtual drive. It has many benefits for electronic hobbyists and professionals who want to create custom front panels for their devices. However, it also has some drawbacks that you should be aware of before using it. You can also check out some of the alternatives that are available online if you want to try other software for front panel design. If you are interested in Abacom FrontDesigner v3 0 En De Fr ISO, you can download it from the official website or from a trusted source.

      -

      How to uninstall Abacom FrontDesigner v3 0 En De Fr ISO?

      - -

      If you want to uninstall Abacom FrontDesigner v3 0 En De Fr ISO from your computer, you can follow these steps:

1. Go to the Start menu and click on Control Panel.
2. Click on Programs and Features or Add or Remove Programs.
3. Find Abacom FrontDesigner v3 0 En De Fr ISO in the list of installed programs and click on it.
4. Click on Uninstall or Remove and follow the instructions on the screen.
5. Restart your computer if prompted.

You can also use a third-party tool like Revo Uninstaller or CCleaner to uninstall Abacom FrontDesigner v3 0 En De Fr ISO more thoroughly and remove any leftover files or registry entries.
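As an illustrative aside that is not part of Abacom's own instructions: on Windows you can check whether an uninstall entry is still registered with a few lines of Python using the standard-library winreg module. The sketch below assumes the entry's DisplayName contains "FrontDesigner", which may differ on your system.

```python
# Hypothetical sketch: list leftover uninstall registry entries whose
# DisplayName mentions "FrontDesigner" (the exact name is an assumption).
# Windows-only; uses only the Python standard library.
import winreg

UNINSTALL_KEY = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall"

def find_uninstall_entries(name_fragment="FrontDesigner"):
    matches = []
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, UNINSTALL_KEY) as root:
        subkey_count = winreg.QueryInfoKey(root)[0]  # number of subkeys
        for i in range(subkey_count):
            subkey_name = winreg.EnumKey(root, i)
            with winreg.OpenKey(root, subkey_name) as subkey:
                try:
                    display_name, _ = winreg.QueryValueEx(subkey, "DisplayName")
                except FileNotFoundError:
                    continue  # this entry has no DisplayName value
                if name_fragment.lower() in str(display_name).lower():
                    matches.append((subkey_name, display_name))
    return matches

if __name__ == "__main__":
    for key_name, display_name in find_uninstall_entries():
        print(f"{key_name}: {display_name}")
```

If this prints nothing after the uninstall, the program is no longer registered; note that 32-bit installers may also register under the WOW6432Node branch of the same path.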


      How to get help or support for Abacom FrontDesigner v3 0 En De Fr ISO?


      If you need help or support for Abacom FrontDesigner v3 0 En De Fr ISO, you can use the following resources:

• Help file: You can access the help file from the software by clicking on Help or pressing F1. The help file contains detailed information and instructions on how to use the software and its features.
• Online manual: You can access the online manual from the official website of Abacom by clicking on Products, then FrontDesigner, then Manual. The online manual is similar to the help file but it also contains screenshots and examples.
• Forum: You can access the forum from the official website of Abacom by clicking on Forum. The forum is a place where you can ask questions, share tips, report bugs, request features, or discuss anything related to Abacom FrontDesigner v3 0 En De Fr ISO or other Abacom products. You need to register and log in to post on the forum.
• Email: You can send an email to info@abacom-online.de or use the contact form on their website. You can expect a reply within 24 hours.
• Phone: You can call them at +49 (0) 40 / 180 48 108 from Monday to Friday between 9:00 and 17:00 (CET). They speak English, German and French.
• Fax: You can fax them at +49 (0) 40 / 180 48 109. They accept faxes in English, German and French.

      Conclusion


Abacom FrontDesigner v3 0 En De Fr ISO is a program that can help you design professional front panels for your electronic projects. It has many features and functions that make it a powerful tool for front panel design. It is compatible with Windows operating systems and supports English, German and French. It comes as an ISO file that you can burn to a CD or mount to a virtual drive. It offers many benefits for electronic hobbyists and professionals who want to create custom front panels for their devices, but it also has some drawbacks that you should be aware of before using it. You can also check out some of the alternatives available online if you want to try other software for front panel design. If you are interested in Abacom FrontDesigner v3 0 En De Fr ISO, you can download it from the official website or from a trusted source, and you can contact Abacom for any help or support that you may need.
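One practical note that is not part of Abacom's own material: if you download the ISO from a mirror rather than the official website, it is worth comparing its checksum against a value published by a source you trust, assuming such a checksum is available at all. Below is a minimal Python sketch using the standard hashlib module; the file name is only a placeholder.

```python
# Minimal sketch: compute the SHA-256 checksum of a downloaded ISO so it can be
# compared against a value published by a trusted source (if one exists).
# The file name below is a placeholder, not the real distribution name.
import hashlib

def sha256_of_file(path, chunk_size=1 << 20):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in 1 MiB chunks so a large ISO image never has to fit in memory.
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    print(sha256_of_file("FrontDesigner_v3_setup.iso"))
```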

      \ No newline at end of file diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Daceasyaccountingnetworkserialcrackkeygen 2021.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Daceasyaccountingnetworkserialcrackkeygen 2021.md deleted file mode 100644 index dd88e124c42dc26970cf202b86112cc7cd895aca..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Daceasyaccountingnetworkserialcrackkeygen 2021.md +++ /dev/null @@ -1,6 +0,0 @@ -

      daceasyaccountingnetworkserialcrackkeygen


      Download >>> https://urlgoal.com/2uCKKs




      diff --git a/spaces/riccorl/relik-entity-linking/relik/inference/data/window/manager.py b/spaces/riccorl/relik-entity-linking/relik/inference/data/window/manager.py deleted file mode 100644 index 420609b1827f13bb332780554e3e20421908f6e9..0000000000000000000000000000000000000000 --- a/spaces/riccorl/relik-entity-linking/relik/inference/data/window/manager.py +++ /dev/null @@ -1,262 +0,0 @@ -import collections -import itertools -from dataclasses import dataclass -from typing import List, Optional, Set, Tuple - -from relik.inference.data.tokenizers.base_tokenizer import BaseTokenizer -from relik.reader.data.relik_reader_sample import RelikReaderSample - - -@dataclass -class Window: - doc_id: int - window_id: int - text: str - tokens: List[str] - doc_topic: Optional[str] - offset: int - token2char_start: dict - token2char_end: dict - window_candidates: Optional[List[str]] = None - - -class WindowManager: - def __init__(self, tokenizer: BaseTokenizer) -> None: - self.tokenizer = tokenizer - - def tokenize(self, document: str) -> Tuple[List[str], List[Tuple[int, int]]]: - tokenized_document = self.tokenizer(document) - tokens = [] - tokens_char_mapping = [] - for token in tokenized_document: - tokens.append(token.text) - tokens_char_mapping.append((token.start_char, token.end_char)) - return tokens, tokens_char_mapping - - def create_windows( - self, - document: str, - window_size: int, - stride: int, - doc_id: int = 0, - doc_topic: str = None, - ) -> List[RelikReaderSample]: - document_tokens, tokens_char_mapping = self.tokenize(document) - if doc_topic is None: - doc_topic = document_tokens[0] if len(document_tokens) > 0 else "" - document_windows = [] - if len(document_tokens) <= window_size: - text = document - # relik_reader_sample = RelikReaderSample() - document_windows.append( - # Window( - RelikReaderSample( - doc_id=doc_id, - window_id=0, - text=text, - tokens=document_tokens, - doc_topic=doc_topic, - offset=0, - token2char_start={ - str(i): tokens_char_mapping[i][0] - for i in range(len(document_tokens)) - }, - token2char_end={ - str(i): tokens_char_mapping[i][1] - for i in range(len(document_tokens)) - }, - ) - ) - else: - for window_id, i in enumerate(range(0, len(document_tokens), stride)): - # if the last stride is smaller than the window size, then we can - # include more tokens form the previous window. 
- if i != 0 and i + window_size > len(document_tokens): - overflowing_tokens = i + window_size - len(document_tokens) - if overflowing_tokens >= stride: - break - i -= overflowing_tokens - - involved_token_indices = list( - range(i, min(i + window_size, len(document_tokens) - 1)) - ) - window_tokens = [document_tokens[j] for j in involved_token_indices] - window_text_start = tokens_char_mapping[involved_token_indices[0]][0] - window_text_end = tokens_char_mapping[involved_token_indices[-1]][1] - text = document[window_text_start:window_text_end] - document_windows.append( - # Window( - RelikReaderSample( - # dict( - doc_id=doc_id, - window_id=window_id, - text=text, - tokens=window_tokens, - doc_topic=doc_topic, - offset=window_text_start, - token2char_start={ - str(i): tokens_char_mapping[ti][0] - for i, ti in enumerate(involved_token_indices) - }, - token2char_end={ - str(i): tokens_char_mapping[ti][1] - for i, ti in enumerate(involved_token_indices) - }, - # ) - ) - ) - return document_windows - - def merge_windows( - self, windows: List[RelikReaderSample] - ) -> List[RelikReaderSample]: - windows_by_doc_id = collections.defaultdict(list) - for window in windows: - windows_by_doc_id[window.doc_id].append(window) - - merged_window_by_doc = { - doc_id: self.merge_doc_windows(doc_windows) - for doc_id, doc_windows in windows_by_doc_id.items() - } - - return list(merged_window_by_doc.values()) - - def merge_doc_windows(self, windows: List[RelikReaderSample]) -> RelikReaderSample: - if len(windows) == 1: - return windows[0] - - if len(windows) > 0 and getattr(windows[0], "offset", None) is not None: - windows = sorted(windows, key=(lambda x: x.offset)) - - window_accumulator = windows[0] - - for next_window in windows[1:]: - window_accumulator = self._merge_window_pair( - window_accumulator, next_window - ) - - return window_accumulator - - def _merge_tokens( - self, window1: RelikReaderSample, window2: RelikReaderSample - ) -> Tuple[list, dict, dict]: - w1_tokens = window1.tokens[1:-1] - w2_tokens = window2.tokens[1:-1] - - # find intersection - tokens_intersection = None - for k in reversed(range(1, len(w1_tokens))): - if w1_tokens[-k:] == w2_tokens[:k]: - tokens_intersection = k - break - assert tokens_intersection is not None, ( - f"{window1.doc_id} - {window1.sent_id} - {window1.offset}" - + f" {window2.doc_id} - {window2.sent_id} - {window2.offset}\n" - + f"w1 tokens: {w1_tokens}\n" - + f"w2 tokens: {w2_tokens}\n" - ) - - final_tokens = ( - [window1.tokens[0]] # CLS - + w1_tokens - + w2_tokens[tokens_intersection:] - + [window1.tokens[-1]] # SEP - ) - - w2_starting_offset = len(w1_tokens) - tokens_intersection - - def merge_char_mapping(t2c1: dict, t2c2: dict) -> dict: - final_t2c = dict() - final_t2c.update(t2c1) - for t, c in t2c2.items(): - t = int(t) - if t < tokens_intersection: - continue - final_t2c[str(t + w2_starting_offset)] = c - return final_t2c - - return ( - final_tokens, - merge_char_mapping(window1.token2char_start, window2.token2char_start), - merge_char_mapping(window1.token2char_end, window2.token2char_end), - ) - - def _merge_span_annotation( - self, span_annotation1: List[list], span_annotation2: List[list] - ) -> List[list]: - uniq_store = set() - final_span_annotation_store = [] - for span_annotation in itertools.chain(span_annotation1, span_annotation2): - span_annotation_id = tuple(span_annotation) - if span_annotation_id not in uniq_store: - uniq_store.add(span_annotation_id) - final_span_annotation_store.append(span_annotation) - return 
sorted(final_span_annotation_store, key=lambda x: x[0]) - - def _merge_predictions( - self, - window1: RelikReaderSample, - window2: RelikReaderSample, - ) -> Tuple[Set[Tuple[int, int, str]], dict]: - merged_predictions = window1.predicted_window_labels_chars.union( - window2.predicted_window_labels_chars - ) - - span_title_probabilities = dict() - # probabilities - for span_prediction, predicted_probs in itertools.chain( - window1.probs_window_labels_chars.items(), - window2.probs_window_labels_chars.items(), - ): - if span_prediction not in span_title_probabilities: - span_title_probabilities[span_prediction] = predicted_probs - - return merged_predictions, span_title_probabilities - - def _merge_window_pair( - self, - window1: RelikReaderSample, - window2: RelikReaderSample, - ) -> RelikReaderSample: - merging_output = dict() - - if getattr(window1, "doc_id", None) is not None: - assert window1.doc_id == window2.doc_id - - if getattr(window1, "offset", None) is not None: - assert ( - window1.offset < window2.offset - ), f"window 2 offset ({window2.offset}) is smaller that window 1 offset({window1.offset})" - - merging_output["doc_id"] = window1.doc_id - merging_output["offset"] = window2.offset - - m_tokens, m_token2char_start, m_token2char_end = self._merge_tokens( - window1, window2 - ) - - window_labels = None - if getattr(window1, "window_labels", None) is not None: - window_labels = self._merge_span_annotation( - window1.window_labels, window2.window_labels - ) - ( - predicted_window_labels_chars, - probs_window_labels_chars, - ) = self._merge_predictions( - window1, - window2, - ) - - merging_output.update( - dict( - tokens=m_tokens, - token2char_start=m_token2char_start, - token2char_end=m_token2char_end, - window_labels=window_labels, - predicted_window_labels_chars=predicted_window_labels_chars, - probs_window_labels_chars=probs_window_labels_chars, - ) - ) - - return RelikReaderSample(**merging_output) diff --git a/spaces/ritwikbiswas/incoder-complete/static/frame.html b/spaces/ritwikbiswas/incoder-complete/static/frame.html deleted file mode 100644 index d67837cdb8a27c29d4c5ec7969efb628a3cc5842..0000000000000000000000000000000000000000 --- a/spaces/ritwikbiswas/incoder-complete/static/frame.html +++ /dev/null @@ -1 +0,0 @@ - \ No newline at end of file diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/core/bbox/iou_calculators/iou2d_calculator.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/core/bbox/iou_calculators/iou2d_calculator.py deleted file mode 100644 index b71a5557ea129aaf72e39305524236e4419c3327..0000000000000000000000000000000000000000 --- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/core/bbox/iou_calculators/iou2d_calculator.py +++ /dev/null @@ -1,260 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from .builder import IOU_CALCULATORS - - -def cast_tensor_type(x, scale=1., dtype=None): - if dtype == 'fp16': - # scale is for preventing overflows - x = (x / scale).half() - return x - - -def fp16_clamp(x, min=None, max=None): - if not x.is_cuda and x.dtype == torch.float16: - # clamp for cpu float16, tensor fp16 has no clamp implementation - return x.float().clamp(min, max).half() - - return x.clamp(min, max) - - -@IOU_CALCULATORS.register_module() -class BboxOverlaps2D: - """2D Overlaps (e.g. 
IoUs, GIoUs) Calculator.""" - - def __init__(self, scale=1., dtype=None): - self.scale = scale - self.dtype = dtype - - def __call__(self, bboxes1, bboxes2, mode='iou', is_aligned=False): - """Calculate IoU between 2D bboxes. - - Args: - bboxes1 (Tensor): bboxes have shape (m, 4) in - format, or shape (m, 5) in format. - bboxes2 (Tensor): bboxes have shape (n, 4) in - format, shape (n, 5) in format, or be - empty. - mode (str): "iou" (intersection over union), "iof" (intersection - over foreground), or "giou" (generalized intersection over - union). - is_aligned (bool, optional): If True, then m and n must be equal. - Default False. - - Returns: - Tensor: shape (m, n) if ``is_aligned `` is False else shape (m,) - """ - assert bboxes1.size(-1) in [0, 4, 5] - assert bboxes2.size(-1) in [0, 4, 5] - if bboxes2.size(-1) == 5: - bboxes2 = bboxes2[..., :4] - if bboxes1.size(-1) == 5: - bboxes1 = bboxes1[..., :4] - - if self.dtype == 'fp16': - # change tensor type to save cpu and cuda memory and keep speed - bboxes1 = cast_tensor_type(bboxes1, self.scale, self.dtype) - bboxes2 = cast_tensor_type(bboxes2, self.scale, self.dtype) - overlaps = bbox_overlaps(bboxes1, bboxes2, mode, is_aligned) - if not overlaps.is_cuda and overlaps.dtype == torch.float16: - # resume cpu float32 - overlaps = overlaps.float() - return overlaps - - return bbox_overlaps(bboxes1, bboxes2, mode, is_aligned) - - def __repr__(self): - """str: a string describing the module""" - repr_str = self.__class__.__name__ + f'(' \ - f'scale={self.scale}, dtype={self.dtype})' - return repr_str - - -def bbox_overlaps(bboxes1, bboxes2, mode='iou', is_aligned=False, eps=1e-6): - """Calculate overlap between two set of bboxes. - - FP16 Contributed by https://github.com/open-mmlab/mmdetection/pull/4889 - Note: - Assume bboxes1 is M x 4, bboxes2 is N x 4, when mode is 'iou', - there are some new generated variable when calculating IOU - using bbox_overlaps function: - - 1) is_aligned is False - area1: M x 1 - area2: N x 1 - lt: M x N x 2 - rb: M x N x 2 - wh: M x N x 2 - overlap: M x N x 1 - union: M x N x 1 - ious: M x N x 1 - - Total memory: - S = (9 x N x M + N + M) * 4 Byte, - - When using FP16, we can reduce: - R = (9 x N x M + N + M) * 4 / 2 Byte - R large than (N + M) * 4 * 2 is always true when N and M >= 1. - Obviously, N + M <= N * M < 3 * N * M, when N >=2 and M >=2, - N + 1 < 3 * N, when N or M is 1. - - Given M = 40 (ground truth), N = 400000 (three anchor boxes - in per grid, FPN, R-CNNs), - R = 275 MB (one times) - - A special case (dense detection), M = 512 (ground truth), - R = 3516 MB = 3.43 GB - - When the batch size is B, reduce: - B x R - - Therefore, CUDA memory runs out frequently. - - Experiments on GeForce RTX 2080Ti (11019 MiB): - - | dtype | M | N | Use | Real | Ideal | - |:----:|:----:|:----:|:----:|:----:|:----:| - | FP32 | 512 | 400000 | 8020 MiB | -- | -- | - | FP16 | 512 | 400000 | 4504 MiB | 3516 MiB | 3516 MiB | - | FP32 | 40 | 400000 | 1540 MiB | -- | -- | - | FP16 | 40 | 400000 | 1264 MiB | 276MiB | 275 MiB | - - 2) is_aligned is True - area1: N x 1 - area2: N x 1 - lt: N x 2 - rb: N x 2 - wh: N x 2 - overlap: N x 1 - union: N x 1 - ious: N x 1 - - Total memory: - S = 11 x N * 4 Byte - - When using FP16, we can reduce: - R = 11 x N * 4 / 2 Byte - - So do the 'giou' (large than 'iou'). - - Time-wise, FP16 is generally faster than FP32. - - When gpu_assign_thr is not -1, it takes more time on cpu - but not reduce memory. - There, we can reduce half the memory and keep the speed. 
- - If ``is_aligned`` is ``False``, then calculate the overlaps between each - bbox of bboxes1 and bboxes2, otherwise the overlaps between each aligned - pair of bboxes1 and bboxes2. - - Args: - bboxes1 (Tensor): shape (B, m, 4) in format or empty. - bboxes2 (Tensor): shape (B, n, 4) in format or empty. - B indicates the batch dim, in shape (B1, B2, ..., Bn). - If ``is_aligned`` is ``True``, then m and n must be equal. - mode (str): "iou" (intersection over union), "iof" (intersection over - foreground) or "giou" (generalized intersection over union). - Default "iou". - is_aligned (bool, optional): If True, then m and n must be equal. - Default False. - eps (float, optional): A value added to the denominator for numerical - stability. Default 1e-6. - - Returns: - Tensor: shape (m, n) if ``is_aligned`` is False else shape (m,) - - Example: - >>> bboxes1 = torch.FloatTensor([ - >>> [0, 0, 10, 10], - >>> [10, 10, 20, 20], - >>> [32, 32, 38, 42], - >>> ]) - >>> bboxes2 = torch.FloatTensor([ - >>> [0, 0, 10, 20], - >>> [0, 10, 10, 19], - >>> [10, 10, 20, 20], - >>> ]) - >>> overlaps = bbox_overlaps(bboxes1, bboxes2) - >>> assert overlaps.shape == (3, 3) - >>> overlaps = bbox_overlaps(bboxes1, bboxes2, is_aligned=True) - >>> assert overlaps.shape == (3, ) - - Example: - >>> empty = torch.empty(0, 4) - >>> nonempty = torch.FloatTensor([[0, 0, 10, 9]]) - >>> assert tuple(bbox_overlaps(empty, nonempty).shape) == (0, 1) - >>> assert tuple(bbox_overlaps(nonempty, empty).shape) == (1, 0) - >>> assert tuple(bbox_overlaps(empty, empty).shape) == (0, 0) - """ - - assert mode in ['iou', 'iof', 'giou'], f'Unsupported mode {mode}' - # Either the boxes are empty or the length of boxes' last dimension is 4 - assert (bboxes1.size(-1) == 4 or bboxes1.size(0) == 0) - assert (bboxes2.size(-1) == 4 or bboxes2.size(0) == 0) - - # Batch dim must be the same - # Batch dim: (B1, B2, ... 
Bn) - assert bboxes1.shape[:-2] == bboxes2.shape[:-2] - batch_shape = bboxes1.shape[:-2] - - rows = bboxes1.size(-2) - cols = bboxes2.size(-2) - if is_aligned: - assert rows == cols - - if rows * cols == 0: - if is_aligned: - return bboxes1.new(batch_shape + (rows, )) - else: - return bboxes1.new(batch_shape + (rows, cols)) - - area1 = (bboxes1[..., 2] - bboxes1[..., 0]) * ( - bboxes1[..., 3] - bboxes1[..., 1]) - area2 = (bboxes2[..., 2] - bboxes2[..., 0]) * ( - bboxes2[..., 3] - bboxes2[..., 1]) - - if is_aligned: - lt = torch.max(bboxes1[..., :2], bboxes2[..., :2]) # [B, rows, 2] - rb = torch.min(bboxes1[..., 2:], bboxes2[..., 2:]) # [B, rows, 2] - - wh = fp16_clamp(rb - lt, min=0) - overlap = wh[..., 0] * wh[..., 1] - - if mode in ['iou', 'giou']: - union = area1 + area2 - overlap - else: - union = area1 - if mode == 'giou': - enclosed_lt = torch.min(bboxes1[..., :2], bboxes2[..., :2]) - enclosed_rb = torch.max(bboxes1[..., 2:], bboxes2[..., 2:]) - else: - lt = torch.max(bboxes1[..., :, None, :2], - bboxes2[..., None, :, :2]) # [B, rows, cols, 2] - rb = torch.min(bboxes1[..., :, None, 2:], - bboxes2[..., None, :, 2:]) # [B, rows, cols, 2] - - wh = fp16_clamp(rb - lt, min=0) - overlap = wh[..., 0] * wh[..., 1] - - if mode in ['iou', 'giou']: - union = area1[..., None] + area2[..., None, :] - overlap - else: - union = area1[..., None] - if mode == 'giou': - enclosed_lt = torch.min(bboxes1[..., :, None, :2], - bboxes2[..., None, :, :2]) - enclosed_rb = torch.max(bboxes1[..., :, None, 2:], - bboxes2[..., None, :, 2:]) - - eps = union.new_tensor([eps]) - union = torch.max(union, eps) - ious = overlap / union - if mode in ['iou', 'iof']: - return ious - # calculate gious - enclose_wh = fp16_clamp(enclosed_rb - enclosed_lt, min=0) - enclose_area = enclose_wh[..., 0] * enclose_wh[..., 1] - enclose_area = torch.max(enclose_area, eps) - gious = ious - (enclose_area - union) / enclose_area - return gious diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/datasets/samplers/class_aware_sampler.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/datasets/samplers/class_aware_sampler.py deleted file mode 100644 index c52708eb8b98d85b3fac3ee55c7519be60681896..0000000000000000000000000000000000000000 --- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/datasets/samplers/class_aware_sampler.py +++ /dev/null @@ -1,176 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import math - -import torch -from mmcv.runner import get_dist_info -from torch.utils.data import Sampler - -from mmdet.core.utils import sync_random_seed - - -class ClassAwareSampler(Sampler): - r"""Sampler that restricts data loading to the label of the dataset. - - A class-aware sampling strategy to effectively tackle the - non-uniform class distribution. The length of the training data is - consistent with source data. Simple improvements based on `Relay - Backpropagation for Effective Learning of Deep Convolutional - Neural Networks `_ - - The implementation logic is referred to - https://github.com/Sense-X/TSD/blob/master/mmdet/datasets/samplers/distributed_classaware_sampler.py - - Args: - dataset: Dataset used for sampling. - samples_per_gpu (int): When model is :obj:`DistributedDataParallel`, - it is the number of training samples on each GPU. - When model is :obj:`DataParallel`, it is - `num_gpus * samples_per_gpu`. - Default : 1. - num_replicas (optional): Number of processes participating in - distributed training. - rank (optional): Rank of the current process within num_replicas. 
- seed (int, optional): random seed used to shuffle the sampler if - ``shuffle=True``. This number should be identical across all - processes in the distributed group. Default: 0. - num_sample_class (int): The number of samples taken from each - per-label list. Default: 1 - """ - - def __init__(self, - dataset, - samples_per_gpu=1, - num_replicas=None, - rank=None, - seed=0, - num_sample_class=1): - _rank, _num_replicas = get_dist_info() - if num_replicas is None: - num_replicas = _num_replicas - if rank is None: - rank = _rank - - self.dataset = dataset - self.num_replicas = num_replicas - self.samples_per_gpu = samples_per_gpu - self.rank = rank - self.epoch = 0 - # Must be the same across all workers. If None, will use a - # random seed shared among workers - # (require synchronization among all workers) - self.seed = sync_random_seed(seed) - - # The number of samples taken from each per-label list - assert num_sample_class > 0 and isinstance(num_sample_class, int) - self.num_sample_class = num_sample_class - # Get per-label image list from dataset - assert hasattr(dataset, 'get_cat2imgs'), \ - 'dataset must have `get_cat2imgs` function' - self.cat_dict = dataset.get_cat2imgs() - - self.num_samples = int( - math.ceil( - len(self.dataset) * 1.0 / self.num_replicas / - self.samples_per_gpu)) * self.samples_per_gpu - self.total_size = self.num_samples * self.num_replicas - - # get number of images containing each category - self.num_cat_imgs = [len(x) for x in self.cat_dict.values()] - # filter labels without images - self.valid_cat_inds = [ - i for i, length in enumerate(self.num_cat_imgs) if length != 0 - ] - self.num_classes = len(self.valid_cat_inds) - - def __iter__(self): - # deterministically shuffle based on epoch - g = torch.Generator() - g.manual_seed(self.epoch + self.seed) - - # initialize label list - label_iter_list = RandomCycleIter(self.valid_cat_inds, generator=g) - # initialize each per-label image list - data_iter_dict = dict() - for i in self.valid_cat_inds: - data_iter_dict[i] = RandomCycleIter(self.cat_dict[i], generator=g) - - def gen_cat_img_inds(cls_list, data_dict, num_sample_cls): - """Traverse the categories and extract `num_sample_cls` image - indexes of the corresponding categories one by one.""" - id_indices = [] - for _ in range(len(cls_list)): - cls_idx = next(cls_list) - for _ in range(num_sample_cls): - id = next(data_dict[cls_idx]) - id_indices.append(id) - return id_indices - - # deterministically shuffle based on epoch - num_bins = int( - math.ceil(self.total_size * 1.0 / self.num_classes / - self.num_sample_class)) - indices = [] - for i in range(num_bins): - indices += gen_cat_img_inds(label_iter_list, data_iter_dict, - self.num_sample_class) - - # fix extra samples to make it evenly divisible - if len(indices) >= self.total_size: - indices = indices[:self.total_size] - else: - indices += indices[:(self.total_size - len(indices))] - assert len(indices) == self.total_size - - # subsample - offset = self.num_samples * self.rank - indices = indices[offset:offset + self.num_samples] - assert len(indices) == self.num_samples - - return iter(indices) - - def __len__(self): - return self.num_samples - - def set_epoch(self, epoch): - self.epoch = epoch - - -class RandomCycleIter: - """Shuffle the list and do it again after the list have traversed. 
- - The implementation logic is referred to - https://github.com/wutong16/DistributionBalancedLoss/blob/master/mllt/datasets/loader/sampler.py - - Example: - >>> label_list = [0, 1, 2, 4, 5] - >>> g = torch.Generator() - >>> g.manual_seed(0) - >>> label_iter_list = RandomCycleIter(label_list, generator=g) - >>> index = next(label_iter_list) - Args: - data (list or ndarray): The data that needs to be shuffled. - generator: An torch.Generator object, which is used in setting the seed - for generating random numbers. - """ # noqa: W605 - - def __init__(self, data, generator=None): - self.data = data - self.length = len(data) - self.index = torch.randperm(self.length, generator=generator).numpy() - self.i = 0 - self.generator = generator - - def __iter__(self): - return self - - def __len__(self): - return len(self.data) - - def __next__(self): - if self.i == self.length: - self.index = torch.randperm( - self.length, generator=self.generator).numpy() - self.i = 0 - idx = self.data[self.index[self.i]] - self.i += 1 - return idx diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/projects/instance_segment_anything/models/focalnet_dino/models/dino/convnext.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/projects/instance_segment_anything/models/focalnet_dino/models/dino/convnext.py deleted file mode 100644 index 76eeeb2a15d9379968db53fc59fbf0f9a996f0bb..0000000000000000000000000000000000000000 --- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/projects/instance_segment_anything/models/focalnet_dino/models/dino/convnext.py +++ /dev/null @@ -1,252 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. - -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - - -from functools import partial -import torch -import torch.nn as nn -import torch.nn.functional as F -from timm.models.layers import trunc_normal_, DropPath - -from .util.misc import NestedTensor -# from timm.models.registry import register_model - -class Block(nn.Module): - r""" ConvNeXt Block. There are two equivalent implementations: - (1) DwConv -> LayerNorm (channels_first) -> 1x1 Conv -> GELU -> 1x1 Conv; all in (N, C, H, W) - (2) DwConv -> Permute to (N, H, W, C); LayerNorm (channels_last) -> Linear -> GELU -> Linear; Permute back - We use (2) as we find it slightly faster in PyTorch - - Args: - dim (int): Number of input channels. - drop_path (float): Stochastic depth rate. Default: 0.0 - layer_scale_init_value (float): Init value for Layer Scale. Default: 1e-6. - """ - def __init__(self, dim, drop_path=0., layer_scale_init_value=1e-6): - super().__init__() - self.dwconv = nn.Conv2d(dim, dim, kernel_size=7, padding=3, groups=dim) # depthwise conv - self.norm = LayerNorm(dim, eps=1e-6) - self.pwconv1 = nn.Linear(dim, 4 * dim) # pointwise/1x1 convs, implemented with linear layers - self.act = nn.GELU() - self.pwconv2 = nn.Linear(4 * dim, dim) - self.gamma = nn.Parameter(layer_scale_init_value * torch.ones((dim)), - requires_grad=True) if layer_scale_init_value > 0 else None - self.drop_path = DropPath(drop_path) if drop_path > 0. 
else nn.Identity() - - def forward(self, x): - input = x - x = self.dwconv(x) - x = x.permute(0, 2, 3, 1) # (N, C, H, W) -> (N, H, W, C) - x = self.norm(x) - x = self.pwconv1(x) - x = self.act(x) - x = self.pwconv2(x) - if self.gamma is not None: - x = self.gamma * x - x = x.permute(0, 3, 1, 2) # (N, H, W, C) -> (N, C, H, W) - - x = input + self.drop_path(x) - return x - -class ConvNeXt(nn.Module): - r""" ConvNeXt - A PyTorch impl of : `A ConvNet for the 2020s` - - https://arxiv.org/pdf/2201.03545.pdf - - Args: - in_chans (int): Number of input image channels. Default: 3 - num_classes (int): Number of classes for classification head. Default: 1000 - depths (tuple(int)): Number of blocks at each stage. Default: [3, 3, 9, 3] - dims (int): Feature dimension at each stage. Default: [96, 192, 384, 768] - drop_path_rate (float): Stochastic depth rate. Default: 0. - layer_scale_init_value (float): Init value for Layer Scale. Default: 1e-6. - head_init_scale (float): Init scaling value for classifier weights and biases. Default: 1. - """ - def __init__(self, in_chans=3, num_classes=1000, - depths=[3, 3, 9, 3], dims=[96, 192, 384, 768], drop_path_rate=0., - layer_scale_init_value=1e-6, head_init_scale=1., - out_indices=[0, 1, 2, 3] - ): - super().__init__() - self.dims = dims - - self.downsample_layers = nn.ModuleList() # stem and 3 intermediate downsampling conv layers - stem = nn.Sequential( - nn.Conv2d(in_chans, dims[0], kernel_size=4, stride=4), - LayerNorm(dims[0], eps=1e-6, data_format="channels_first") - ) - self.downsample_layers.append(stem) - for i in range(3): - downsample_layer = nn.Sequential( - LayerNorm(dims[i], eps=1e-6, data_format="channels_first"), - nn.Conv2d(dims[i], dims[i+1], kernel_size=2, stride=2), - ) - self.downsample_layers.append(downsample_layer) - - self.stages = nn.ModuleList() # 4 feature resolution stages, each consisting of multiple residual blocks - dp_rates=[x.item() for x in torch.linspace(0, drop_path_rate, sum(depths))] - cur = 0 - for i in range(4): - stage = nn.Sequential( - *[Block(dim=dims[i], drop_path=dp_rates[cur + j], - layer_scale_init_value=layer_scale_init_value) for j in range(depths[i])] - ) - self.stages.append(stage) - cur += depths[i] - - self.out_indices = out_indices - - norm_layer = partial(LayerNorm, eps=1e-6, data_format="channels_first") - for i_layer in range(4): - layer = norm_layer(dims[i_layer]) - layer_name = f'norm{i_layer}' - self.add_module(layer_name, layer) - - # self.norm = nn.LayerNorm(dims[-1], eps=1e-6) # final norm layer - # self.head = nn.Linear(dims[-1], num_classes) - - # self.apply(self._init_weights) - # self.head.weight.data.mul_(head_init_scale) - # self.head.bias.data.mul_(head_init_scale) - - def _init_weights(self, m): - if isinstance(m, (nn.Conv2d, nn.Linear)): - trunc_normal_(m.weight, std=.02) - nn.init.constant_(m.bias, 0) - - def forward_features(self, x): - outs = [] - for i in range(4): - x = self.downsample_layers[i](x) - x = self.stages[i](x) - if i in self.out_indices: - norm_layer = getattr(self, f'norm{i}') - x_out = norm_layer(x) - outs.append(x_out) - # return self.norm(x.mean([-2, -1])) # global average pooling, (N, C, H, W) -> (N, C) - return tuple(outs) - - # def forward(self, x): - # x = self.forward_features(x) - # return x - - - def forward(self, tensor_list: NestedTensor): - x = tensor_list.tensors - outs = self.forward_features(x) - - # collect for nesttensors - outs_dict = {} - for idx, out_i in enumerate(outs): - m = tensor_list.mask - assert m is not None - mask = 
F.interpolate(m[None].float(), size=out_i.shape[-2:]).to(torch.bool)[0] - outs_dict[idx] = NestedTensor(out_i, mask) - - return outs_dict - -class LayerNorm(nn.Module): - r""" LayerNorm that supports two data formats: channels_last (default) or channels_first. - The ordering of the dimensions in the inputs. channels_last corresponds to inputs with - shape (batch_size, height, width, channels) while channels_first corresponds to inputs - with shape (batch_size, channels, height, width). - """ - def __init__(self, normalized_shape, eps=1e-6, data_format="channels_last"): - super().__init__() - self.weight = nn.Parameter(torch.ones(normalized_shape)) - self.bias = nn.Parameter(torch.zeros(normalized_shape)) - self.eps = eps - self.data_format = data_format - if self.data_format not in ["channels_last", "channels_first"]: - raise NotImplementedError - self.normalized_shape = (normalized_shape, ) - - def forward(self, x): - if self.data_format == "channels_last": - return F.layer_norm(x, self.normalized_shape, self.weight, self.bias, self.eps) - elif self.data_format == "channels_first": - u = x.mean(1, keepdim=True) - s = (x - u).pow(2).mean(1, keepdim=True) - x = (x - u) / torch.sqrt(s + self.eps) - x = self.weight[:, None, None] * x + self.bias[:, None, None] - return x - - -model_urls = { - "convnext_tiny_1k": "https://dl.fbaipublicfiles.com/convnext/convnext_tiny_1k_224_ema.pth", - "convnext_small_1k": "https://dl.fbaipublicfiles.com/convnext/convnext_small_1k_224_ema.pth", - "convnext_base_1k": "https://dl.fbaipublicfiles.com/convnext/convnext_base_1k_224_ema.pth", - "convnext_large_1k": "https://dl.fbaipublicfiles.com/convnext/convnext_large_1k_224_ema.pth", - "convnext_base_22k": "https://dl.fbaipublicfiles.com/convnext/convnext_base_22k_224.pth", - "convnext_large_22k": "https://dl.fbaipublicfiles.com/convnext/convnext_large_22k_224.pth", - "convnext_xlarge_22k": "https://dl.fbaipublicfiles.com/convnext/convnext_xlarge_22k_224.pth", -} - -# @register_model -# def convnext_tiny(pretrained=False, **kwargs): -# model = ConvNeXt(depths=[3, 3, 9, 3], dims=[96, 192, 384, 768], **kwargs) -# if pretrained: -# url = model_urls['convnext_tiny_1k'] -# checkpoint = torch.hub.load_state_dict_from_url(url=url, map_location="cpu", check_hash=True) -# model.load_state_dict(checkpoint["model"]) -# return model - -# @register_model -# def convnext_small(pretrained=False, **kwargs): -# model = ConvNeXt(depths=[3, 3, 27, 3], dims=[96, 192, 384, 768], **kwargs) -# if pretrained: -# url = model_urls['convnext_small_1k'] -# checkpoint = torch.hub.load_state_dict_from_url(url=url, map_location="cpu", check_hash=True) -# model.load_state_dict(checkpoint["model"]) -# return model - -# @register_model -# def convnext_base(pretrained=False, in_22k=False, **kwargs): -# model = ConvNeXt(depths=[3, 3, 27, 3], dims=[128, 256, 512, 1024], **kwargs) -# if pretrained: -# url = model_urls['convnext_base_22k'] if in_22k else model_urls['convnext_base_1k'] -# checkpoint = torch.hub.load_state_dict_from_url(url=url, map_location="cpu", check_hash=True) -# model.load_state_dict(checkpoint["model"]) -# return model - -# @register_model -# def convnext_large(pretrained=False, in_22k=False, **kwargs): -# model = ConvNeXt(depths=[3, 3, 27, 3], dims=[192, 384, 768, 1536], **kwargs) -# if pretrained: -# url = model_urls['convnext_large_22k'] if in_22k else model_urls['convnext_large_1k'] -# checkpoint = torch.hub.load_state_dict_from_url(url=url, map_location="cpu", check_hash=True) -# model.load_state_dict(checkpoint["model"]) -# 
return model - -# @register_model -# def convnext_xlarge(pretrained=False, in_22k=False, **kwargs): -# model = ConvNeXt(depths=[3, 3, 27, 3], dims=[256, 512, 1024, 2048], **kwargs) -# if pretrained: -# url = model_urls['convnext_xlarge_22k'] if in_22k else model_urls['convnext_xlarge_1k'] -# checkpoint = torch.hub.load_state_dict_from_url(url=url, map_location="cpu", check_hash=True) -# model.load_state_dict(checkpoint["model"]) -# return model - -def build_convnext(modelname, pretrained,backbone_dir=None, **kw): - assert modelname in ['convnext_xlarge_22k'] - - model_para_dict = { - 'convnext_xlarge_22k': dict( - depths=[3, 3, 27, 3], - dims=[256, 512, 1024, 2048], - ), - } - kw_cgf = model_para_dict[modelname] - kw_cgf.update(kw) - model = ConvNeXt(**kw_cgf) - if pretrained: - url = model_urls[modelname] - checkpoint = torch.hub.load_state_dict_from_url(url=url, model_dir=backbone_dir, map_location="cpu", check_hash=True) - _tmp_st_output = model.load_state_dict(checkpoint["model"], strict=False) - print(str(_tmp_st_output)) - - return model \ No newline at end of file diff --git a/spaces/rohan13/coursera-qa-bot/docs/05_Resources/01_books-articles/01__resources.html b/spaces/rohan13/coursera-qa-bot/docs/05_Resources/01_books-articles/01__resources.html deleted file mode 100644 index 30a165190d176dd990d465837e123fb1424da4b5..0000000000000000000000000000000000000000 --- a/spaces/rohan13/coursera-qa-bot/docs/05_Resources/01_books-articles/01__resources.html +++ /dev/null @@ -1,130 +0,0 @@ - - -

Here are some of my favorite books and articles about 3D Printing:

Books

3D Printing will Rock the World by John Hornick

Fabricated by Hod Lipson & Melba Kurman

Makers by Chris Anderson

Articles

3D Opportunity by Mark Cotteleer & Jim Joyce

3-D Printing will Change the World by Richard D'Aveni

Are you Ready for 3D Printing? by Daniel Cohen, Kathy George & Colin Shaw

The Printed World by The Economist

What Lies Ahead for 3-D Printing? by Elizabeth Royte

      -
      - - - diff --git a/spaces/rorallitri/biomedical-language-models/logs/Facebook Computer Login In What You Need to Know Before You Sign In.md b/spaces/rorallitri/biomedical-language-models/logs/Facebook Computer Login In What You Need to Know Before You Sign In.md deleted file mode 100644 index a659d706fb1d6bf33e4672ce081c093d97195ef1..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Facebook Computer Login In What You Need to Know Before You Sign In.md +++ /dev/null @@ -1,26 +0,0 @@ - -

      Whether you have multiple Facebook accounts or share a computer with friends and family, you'll need to know how to switch Facebook accounts. Thankfully, the social network makes it easy to quickly switch between profiles using the same browser.

      -

      facebook computer login in


      Download >>>>> https://tinurll.com/2uzotp



      -

      Because you have the option to always enter your password when switching profiles, this feature is useful for family members who share a computer. Facebook also allows you to add up to 10 accounts using the Account Switcher feature.

      -

iCloud Login: How to Sign Into iCloud for Data Backup & Sync. Check the iCloud login guide in this post and sign into iCloud with your Apple ID to back up and sync photos, videos, and files with this free cloud storage service.

      -

Alisa is a professional English editor with four years of experience. She loves writing and focuses on sharing detailed solutions and thoughts on computer problems, data recovery and backup, digital gadgets, tech news, and more. Through her articles, users can easily get their problems solved and find what they need. In her spare time, she likes basketball, badminton, tennis, cycling, running, and singing. She is very funny and energetic in life, and always brings her friends lots of laughs.

      -

      If you're using Facebook on your phone's browser, not through the app, locate the three parallel lines icon on the top right corner of your screen. Just like before, scroll down to the bottom and tap "Log Out." A prompt will pop up asking if you want the browser to save your login info when you log out. Choose "Save and Log Out" or "Don't Save and Log Out" to complete the process.

      -

      To eliminate all existing saved passwords, click Remove all. To eliminate specific saved passwords, locate the site within the Site column and click on it once to highlight it in blue. Then click the Remove button below. You can also remove all saved passwords by clicking the Remove All button. If you wish, deselect the option to Remember logins for sites. This will prevent passwords from being saved in the future. In older versions of Firefox, this option is in the Privacy tab instead of Security.

      -

      -

      To eliminate all existing saved passwords, click Remove all. To eliminate specific saved passwords, click View Saved Passwords and delete just those associated with weblogin.bu.edu. If you wish, deselect the option to Remember passwords. This will prevent passwords from being saved in the future. In older versions of Firefox, this option is in the Privacy tab instead of Security.

      -

      My account was hacked several weeks ago. The person was able to change my email address to his and he changed my name on my facebook profile and made it his name. All of my friends can see his name under my picture and it is creepy. I tried to report this with no help. I created a new page and now that has been disabled. I know it is tied to the hacking. How can I get around this?

      -

Facebook suddenly vanished from my computer on the 10th of December 2020 without warning and asked me to open a new account. When I did, they told me there was already someone with the same name as me on that account, and I had no way of telling them that it was me, so I have no way of getting into Facebook. I am 90 and not too bright with these computers, but not dim either. Will someone please help?

      -

      Note that if you use the same login for your business and personal Facebook accounts, it also means your personal login credentials were compromised. So it is imperative that you change your password and, ideally, turn on two-factor authentication.

      -

      The first step is to delete the app from your smartphone or tablet. Remember that deleting the Facebook app doesn't delete your account -- you can still access it from the browser and other apps might still use Facebook as a login.

      -

When you try to open Facebook by typing www.facebook.com into Google Chrome or Safari, Facebook will automatically detect that you are using a mobile device (phone or tablet) and redirect you to the mobile version of Facebook.

      -

If you want to access the full functionality and features of Facebook, you can either visit Facebook on your computer or use the workarounds provided below to open the Facebook desktop version on your mobile device.

      -

One of my clients cannot log in to Facebook using the Firefox browser. In fact, when attempting to log in to his Facebook account, he receives an alert message saying "Your Computer Needs To Be Cleaned". After following the suggested steps to download ESET Online Scanner and clean his computer, the same message appears again and the user still cannot log in to his Facebook account.

      -

The alert message "Your Computer Needs To Be Cleaned" is displayed because Facebook has implemented a malware checkpoint on its platform in order to prevent malicious activity on your FB account. So if you want to bypass this alert message, you have to follow the suggested steps and clean your computer using ESET Online Scanner.

      -

But in some cases ESET Online Scanner doesn't run (open) in the Firefox browser, so you have to use Internet Explorer in order to run ESET Online Scanner without problems. After scanning and cleaning, if you still receive the "Your Computer Needs To Be Cleaned" alert message in Firefox or another browser (e.g. Chrome), you have to clear your browser history in order to log in to your FB account again.

      -

8. After making sure that your computer is clean, try to log in to your FB account again. If you still receive the same alert message ("Your Computer Needs To Be Cleaned"), then proceed to step 2 and clear your Internet browser's history.

      -

      Unlike iOS, Android allows you to play around with the data stored by your installed applications. Since most apps store your login details in these data files, clearing these files can log you out from your chosen apps.

      -

      Instead of a direct messaging platform in the native Facebook app, Facebook Messenger exists as a separate application so users can chat one-on-one or in a private group setting. When using Facebook.com on a desktop computer, the messenger is accessible through the native Facebook website.

      -

Step 5: Now you can see which devices have your login; it also lists which devices are active right now. Beside each listed device, tap the three-dot vertical icon, and from there you can log out of that device. Moreover, you can also secure your account on particular devices.

      -

To check which devices are logged in to Facebook, go to your Facebook settings page on the web and click on the Password and Security link. You may need to choose See More from the drop-down menu to see them all. Next, log out of all sessions by clicking the Log Out Of All Sessions button at the bottom of the list, or use the menu icons (three dots) on the right to remove entries one by one (including your current one). Then return to the main Login and Safety page and select Change password to update your password at the same time.

      \ No newline at end of file diff --git a/spaces/sanchit-gandhi/musicgen-negative-prompting/README.md b/spaces/sanchit-gandhi/musicgen-negative-prompting/README.md deleted file mode 100644 index d3fc351a7e8cf60b675a8b5bd5007473fc63e604..0000000000000000000000000000000000000000 --- a/spaces/sanchit-gandhi/musicgen-negative-prompting/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Musicgen Negative Prompting -emoji: 👁 -colorFrom: gray -colorTo: pink -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/scedlatioru/img-to-music/example/Adobe Flash Player Update Version 31.0.0.108 !EXCLUSIVE!.md b/spaces/scedlatioru/img-to-music/example/Adobe Flash Player Update Version 31.0.0.108 !EXCLUSIVE!.md deleted file mode 100644 index 96ab1d8ecc800d723839625ce15250005e904b33..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Adobe Flash Player Update Version 31.0.0.108 !EXCLUSIVE!.md +++ /dev/null @@ -1,6 +0,0 @@ -

      Adobe Flash Player: Update Version 31.0.0.108


      Download Zip ►►► https://gohhs.com/2uEzfN



      -
      -Install updates from vendor's website. Vulnerable software versions. Adobe Flash Player: 30.0.0.113, 30.0.0.134, 30.0.0.154, 31.0.0.108, ... 4d29de3e1b
      -
      -
      -

      diff --git a/spaces/scedlatioru/img-to-music/example/Driver Genius Professional V10.0.0.712 Crk Download FREE Pc.md b/spaces/scedlatioru/img-to-music/example/Driver Genius Professional V10.0.0.712 Crk Download FREE Pc.md deleted file mode 100644 index 2f0363308dca039edb1cdd8ea7205f863328d8d4..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Driver Genius Professional V10.0.0.712 Crk Download FREE Pc.md +++ /dev/null @@ -1,54 +0,0 @@ -

      Driver Genius Professional v10.0.0.712 crk download pc


      Download ►►► https://gohhs.com/2uEzWs



      - -com/driver-manager-downloads) shows what you can have Driver Genius Professional Download.The table below lists the .com/driver-manager-downloads) will automatically download the latest driver after the installation.Note - You can free download Driver Genius Professional. This software was scanned by our virus scan and antivirus software and the results are below. The green circle shows that Driver Genius Professional is a safe and healthy software. - -Downloads - -Other downloads related to Driver Genius Professional are listed below. - -Shareware Connection periodically updates pricing and software information on this site. Some of the software links supplied direct by the publisher are affiliate links. We may be compensated when you click on them.I have recently been working with the codegen engine of the compiler, in order to check if and how they would have handled various issues that arose during the compilation of the comp.lang.python standard library. - -One of the issues that arose, was that the Jython codegen did not perform a simple run-time check on how deep the context is nested, and then performing an object lookup, and seeing if there were any elements in the iterator for which it would not have checked by then. - -I took advantage of this to implement a similar check for CPython, which is the current version of the python compiler, that is run-time, and also tells you the list of elements that were only discovered at the time of the exception, and which are not going to be reported at all. - -Let's see how the code looks like.Q: - -how to separate components in different divs - -I have this component: - -@Component( - - selector:'menu', - - templateUrl:'menu.html' - -) - -export class MenuComponent implements OnInit { - - menu_items: any[]; - - constructor(private menuData: MenuService) - - - - ngOnInit() - - this.menuData.getMenuList().subscribe(menu_items => - - console.log(menu_items); - - this.menu_items = menu_items; - - ); - - - -In my main template I have this code: - -< 4fefd39f24
      -
      -
      -

      diff --git a/spaces/scedlatioru/img-to-music/example/Sapphire Plugin Sony Vegas Crack 11.md b/spaces/scedlatioru/img-to-music/example/Sapphire Plugin Sony Vegas Crack 11.md deleted file mode 100644 index 4e9197a5a34ce428c26dad5455921a798297478f..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Sapphire Plugin Sony Vegas Crack 11.md +++ /dev/null @@ -1,6 +0,0 @@ -

      sapphire plugin sony vegas crack 11


      Download ››› https://gohhs.com/2uEz4o



      - -It depends on you, dude. Trojan is deadly virus, anyone can get remote access to your system through this virus. Trojan horse (computing) But nowadays ... 1fdad05405
      -
      -
      -

      diff --git a/spaces/sczhou/CodeFormer/CodeFormer/facelib/detection/yolov5face/utils/datasets.py b/spaces/sczhou/CodeFormer/CodeFormer/facelib/detection/yolov5face/utils/datasets.py deleted file mode 100644 index e672b136f56fd6b05038e24377908361a54fe519..0000000000000000000000000000000000000000 --- a/spaces/sczhou/CodeFormer/CodeFormer/facelib/detection/yolov5face/utils/datasets.py +++ /dev/null @@ -1,35 +0,0 @@ -import cv2 -import numpy as np - - -def letterbox(img, new_shape=(640, 640), color=(114, 114, 114), auto=True, scale_fill=False, scaleup=True): - # Resize image to a 32-pixel-multiple rectangle https://github.com/ultralytics/yolov3/issues/232 - shape = img.shape[:2] # current shape [height, width] - if isinstance(new_shape, int): - new_shape = (new_shape, new_shape) - - # Scale ratio (new / old) - r = min(new_shape[0] / shape[0], new_shape[1] / shape[1]) - if not scaleup: # only scale down, do not scale up (for better test mAP) - r = min(r, 1.0) - - # Compute padding - ratio = r, r # width, height ratios - new_unpad = int(round(shape[1] * r)), int(round(shape[0] * r)) - dw, dh = new_shape[1] - new_unpad[0], new_shape[0] - new_unpad[1] # wh padding - if auto: # minimum rectangle - dw, dh = np.mod(dw, 64), np.mod(dh, 64) # wh padding - elif scale_fill: # stretch - dw, dh = 0.0, 0.0 - new_unpad = (new_shape[1], new_shape[0]) - ratio = new_shape[1] / shape[1], new_shape[0] / shape[0] # width, height ratios - - dw /= 2 # divide padding into 2 sides - dh /= 2 - - if shape[::-1] != new_unpad: # resize - img = cv2.resize(img, new_unpad, interpolation=cv2.INTER_LINEAR) - top, bottom = int(round(dh - 0.1)), int(round(dh + 0.1)) - left, right = int(round(dw - 0.1)), int(round(dw + 0.1)) - img = cv2.copyMakeBorder(img, top, bottom, left, right, cv2.BORDER_CONSTANT, value=color) # add border - return img, ratio, (dw, dh) diff --git a/spaces/senfu/tiny_gaze/app.py b/spaces/senfu/tiny_gaze/app.py deleted file mode 100644 index 3703e2db0009fea1686d779101b431c47248e5e9..0000000000000000000000000000000000000000 --- a/spaces/senfu/tiny_gaze/app.py +++ /dev/null @@ -1,7 +0,0 @@ -import gradio as gr - -def greet(name): - return "Hello " + name + "!!" - -iface = gr.Interface(fn=greet, inputs="text", outputs="text") -iface.launch() diff --git a/spaces/shahzaibelbert/CHATGPT-Detector/app.py b/spaces/shahzaibelbert/CHATGPT-Detector/app.py deleted file mode 100644 index b3deae3b56925e396288237629bdbf9fe253e6f8..0000000000000000000000000000000000000000 --- a/spaces/shahzaibelbert/CHATGPT-Detector/app.py +++ /dev/null @@ -1,29 +0,0 @@ -import os -import gradio as gr -from transformers import pipeline - -auth_token = os.environ.get("access_token") -pipeline_en = pipeline(task="text-classification", model="Hello-SimpleAI/chatgpt-detector-roberta",use_auth_token=auth_token) - - -def predict_en(text): - res = pipeline_en(text)[0] - label = res['label'] - score = round(res['score']*100, 2) - return "%d%% chance"%score, label - - -with gr.Blocks() as demo: - gr.Markdown("AI Content Sentinel") - with gr.Tab("Check Your Content For AI Plagiarism"): - gr.Markdown(""" - Note: Providing more text to the `Text` box can make the prediction more accurate! 
- """) - t1 = gr.Textbox(lines=5, label='Paste the text you want to check',value="Paste Your Content Here") - button1 = gr.Button("👀 See results") - score1 = gr.Textbox(lines=1, label='There is a') - label1 = gr.Textbox(lines=1, label='That this text is written by a') - - button1.click(predict_en, inputs=[t1], outputs=[score1, label1]) - -demo.launch() \ No newline at end of file diff --git a/spaces/shikunl/prismer/prismer/experts/obj_detection/unidet/modeling/meta_arch/unified_rcnn.py b/spaces/shikunl/prismer/prismer/experts/obj_detection/unidet/modeling/meta_arch/unified_rcnn.py deleted file mode 100644 index 1dae23d759d14dbd0f6170dd0df698b2bcf1485c..0000000000000000000000000000000000000000 --- a/spaces/shikunl/prismer/prismer/experts/obj_detection/unidet/modeling/meta_arch/unified_rcnn.py +++ /dev/null @@ -1,92 +0,0 @@ -import logging -import numpy as np -import torch -import json -from torch import nn - -from detectron2.structures import ImageList -from detectron2.utils.events import get_event_storage -from detectron2.utils.logger import log_first_n - -from detectron2.modeling.backbone import build_backbone -from detectron2.modeling.postprocessing import detector_postprocess -from detectron2.modeling.meta_arch.build import META_ARCH_REGISTRY -from detectron2.modeling.meta_arch import GeneralizedRCNN -from detectron2.modeling.proposal_generator import build_proposal_generator -from detectron2.modeling.roi_heads import build_roi_heads - - -@META_ARCH_REGISTRY.register() -class UnifiedRCNN(GeneralizedRCNN): - def __init__(self, cfg): - super().__init__(cfg) - self.unified_eval = cfg.MULTI_DATASET.UNIFIED_EVAL - self.datasets = cfg.MULTI_DATASET.DATASETS - self.num_datasets = len(self.datasets) - self.dataset_name_to_id = {k: i for i, k in enumerate(self.datasets)} - self.eval_dataset = -1 - self.cpu_post_process = cfg.CPU_POST_PROCESS # due to memory issue on mask - - label_map = json.load( - open(cfg.MULTI_DATASET.UNIFIED_LABEL_FILE, 'r'))['label_map'] - self.label_map = { - self.datasets.index(d): torch.tensor(x).long().to( - torch.device(cfg.MODEL.DEVICE)) \ - for d, x in label_map.items() if d in self.datasets} - - def forward(self, batched_inputs): - if not self.training: - return self.inference(batched_inputs) - images = self.preprocess_image(batched_inputs) - gt_instances = [x["instances"].to(self.device) for x in batched_inputs] - - for i in range(len(gt_instances)): - dataset_source = batched_inputs[i]['dataset_source'] - gt_instances[i]._dataset_source = dataset_source - gt_instances[i].gt_classes = \ - self.label_map[dataset_source][gt_instances[i].gt_classes] - - features = self.backbone(images.tensor) # #lvl - proposals, proposal_losses = self.proposal_generator( - images, features, gt_instances) - - _, detector_losses = self.roi_heads( - images, features, proposals, gt_instances) - if self.vis_period > 0: - storage = get_event_storage() - if storage.iter % self.vis_period == 0: - self.visualize_training(batched_inputs, proposals) - - losses = {} - losses.update(proposal_losses) - losses.update(detector_losses) - return losses - - def inference(self, batched_inputs, detected_instances=None, - do_postprocess=True): - # support eval_dataset and cpu post process - assert not self.training - assert detected_instances is None - images = self.preprocess_image(batched_inputs) - features = self.backbone(images.tensor) - proposals, _ = self.proposal_generator(images, features, None) - results, _ = self.roi_heads( - images, features, proposals, None, eval_dataset=self.eval_dataset) - - 
if do_postprocess: - if self.cpu_post_process: - for r in results: - r = r.to('cpu') - return GeneralizedRCNN._postprocess( - results, batched_inputs, images.image_sizes) - else: - return results - - def set_eval_dataset(self, dataset_name): - meta_datase_name = dataset_name[:dataset_name.find('_')] - if self.unified_eval: - self.eval_dataset = -1 - else: - self.eval_dataset = \ - self.dataset_name_to_id[meta_datase_name] - diff --git a/spaces/shimizukawa/python-no-senpai/store.py b/spaces/shimizukawa/python-no-senpai/store.py deleted file mode 100644 index 894374363d6a26eac92b97ff8ae2f8f51aaaedd9..0000000000000000000000000000000000000000 --- a/spaces/shimizukawa/python-no-senpai/store.py +++ /dev/null @@ -1,89 +0,0 @@ -import argparse -from itertools import islice -from pathlib import Path - -from tqdm import tqdm -import torch -from langchain.text_splitter import RecursiveCharacterTextSplitter -from langchain.embeddings import HuggingFaceEmbeddings -from langchain.vectorstores import Qdrant - -from loaders import get_loader, LOADER_NAMES -from config import DB_CONFIG - - -CHUNK_SIZE = 500 - - -def get_text_chunk(docs): - text_splitter = RecursiveCharacterTextSplitter( - chunk_size=CHUNK_SIZE, chunk_overlap=0 - ) - texts = text_splitter.split_documents(docs) - return texts - - -def batched(iterable, *, size=100): - "Batch data into tuples of length n. The last batch may be shorter." - # batched('ABCDEFG', 3) --> ABC DEF G - if size < 1: - raise ValueError('n must be at least one') - it = iter(iterable) - while batch := tuple(islice(it, size)): - yield batch - - -def store(texts): - model_name = "intfloat/multilingual-e5-large" - model_kwargs = {"device": "cuda:0" if torch.cuda.is_available() else "cpu"} - encode_kwargs = {"normalize_embeddings": False} - embeddings = HuggingFaceEmbeddings( - model_name=model_name, - model_kwargs=model_kwargs, - encode_kwargs=encode_kwargs, - ) - db_url, db_api_key, db_collection_name = DB_CONFIG - for batch in tqdm(batched(texts, size=100)): - _ = Qdrant.from_documents( - batch, - embeddings, - url=db_url, - api_key=db_api_key, - collection_name=db_collection_name, - ) - - -def get_parser(): - p = argparse.ArgumentParser() - p.add_argument("index", type=str) - p.add_argument("inputfile", metavar="INPUTFILE", type=str) - p.add_argument("-l", "--loader", type=str, choices=LOADER_NAMES, required=True) - return p - - -def index_annotated_docs(docs, index): - for doc in docs: - doc.metadata["index"] = index - yield doc - - -def main(): - """ - $ python store.py --loader wikipage "index" "FILE_PATH" - $ python store.py -l wikipage wiki data/wiki.json - $ python store.py -l rtdhtmlpage django ./docs.djangoproject.com/ - """ - p = get_parser() - args = p.parse_args() - loader = get_loader( - args.loader, - inputfile=Path(args.inputfile), - ) - - docs = loader.lazy_load() - texts = get_text_chunk(index_annotated_docs(docs, args.index)) - store(texts) - - -if __name__ == "__main__": - main() diff --git a/spaces/shuhulhandoo/face-swap/scripts/faceswap.sh b/spaces/shuhulhandoo/face-swap/scripts/faceswap.sh deleted file mode 100644 index 9ba6be3e2f88c918eb59bd41314a27cd868e931d..0000000000000000000000000000000000000000 --- a/spaces/shuhulhandoo/face-swap/scripts/faceswap.sh +++ /dev/null @@ -1 +0,0 @@ -python main.py --src imgs/test6.jpg --dst imgs/test4.jpg --out results/output6_4.jpg --correct_color diff --git a/spaces/skf15963/summary/fengshen/examples/zen2_finetune/fengshen_token_level_ft_task.py 
b/spaces/skf15963/summary/fengshen/examples/zen2_finetune/fengshen_token_level_ft_task.py deleted file mode 100644 index 619847c1555311226be69d7d0558368dfd048546..0000000000000000000000000000000000000000 --- a/spaces/skf15963/summary/fengshen/examples/zen2_finetune/fengshen_token_level_ft_task.py +++ /dev/null @@ -1,678 +0,0 @@ -# coding=utf-8 -# Copyright 2021 The IDEA Authors. All rights reserved. - -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at - -# http://www.apache.org/licenses/LICENSE-2.0 - -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -from fengshen.models.zen2.modeling import ZenForTokenClassification -from fengshen.metric.metric import SeqEntityScore -from fengshen.models.zen2.tokenization import BertTokenizer -from fengshen.models.zen2.ngram_utils import ZenNgramDict -from pytorch_lightning.callbacks import LearningRateMonitor -from dataclasses import dataclass -import logging -import math -import numpy as np -import os -import json -import torch -import pytorch_lightning as pl -import argparse -from pytorch_lightning.callbacks import ModelCheckpoint -from torch.utils.data import Dataset, DataLoader - -import torch.nn.functional as F -logging.basicConfig(format='%(asctime)s - %(levelname)s - %(name)s - %(message)s', - datefmt='%m/%d/%Y %H:%M:%S', - level=logging.ERROR) -logger = logging.getLogger(__name__) - - -class InputExample(object): - """A single training/test example for simple sequence classification.""" - - def __init__(self, guid, text_a, text_b=None, label=None): - """Constructs a InputExample. - - Args: - guid: Unique id for the example. - text_a: string. The untokenized text of the first sequence. For single - sequence tasks, only this sequence must be specified. - text_b: (Optional) string. The untokenized text of the second sequence. - Only must be specified for sequence pair tasks. - label: (Optional) string. The label of the example. This should be - specified for train and dev examples, but not for test examples. 
- """ - self.guid = guid - self.text_a = text_a - self.text_b = text_b - self.label = label - - -class InputFeatures(object): - """A single set of features of data.""" - - def __init__(self, input_ids, input_mask, segment_ids, label_id, ngram_ids, ngram_positions, ngram_lengths, - ngram_tuples, ngram_seg_ids, ngram_masks, valid_ids=None, label_mask=None, b_use_valid_filter=False): - self.input_ids = input_ids - self.input_mask = input_mask - self.segment_ids = segment_ids - self.label_id = label_id - self.valid_ids = valid_ids - self.label_mask = label_mask - - self.ngram_ids = ngram_ids - self.ngram_positions = ngram_positions - self.ngram_lengths = ngram_lengths - self.ngram_tuples = ngram_tuples - self.ngram_seg_ids = ngram_seg_ids - self.ngram_masks = ngram_masks - - self.b_use_valid_filter = b_use_valid_filter - - -def convert_examples_to_features(examples, label_map, max_seq_length, tokenizer, ngram_dict): - """Loads a data file into a list of `InputBatch`s.""" - - # label_map = {label: i for i, label in enumerate(label_list, 1)} - # label_map["[PAD]"] = 0 - - features = [] - b_use_valid_filter = False - for (ex_index, example) in enumerate(examples): - textlist = example.text_a - labellist = example.label - tokens = [] - labels = [] - valid = [] - label_mask = [] - for i, word in enumerate(textlist): - token = tokenizer.tokenize(word) - if len(tokens) + len(token) > max_seq_length - 2: - break - tokens.extend(token) - label_1 = labellist[i] - for m in range(len(token)): - if m == 0: - labels.append(label_1) - valid.append(1) - label_mask.append(1) - else: - valid.append(0) - b_use_valid_filter = True - ntokens = [] - segment_ids = [] - label_ids = [] - ntokens.append("[CLS]") - segment_ids.append(0) - valid.insert(0, 1) - label_mask.insert(0, 1) - label_ids.append(label_map["[CLS]"]) - for i, token in enumerate(tokens): - ntokens.append(token) - segment_ids.append(0) - if len(labels) > i: - label_ids.append(label_map[labels[i]]) - ntokens.append("[SEP]") - segment_ids.append(0) - valid.append(1) - label_mask.append(1) - label_ids.append(label_map["[SEP]"]) - input_ids = tokenizer.convert_tokens_to_ids(ntokens) - input_mask = [1] * len(input_ids) - label_mask = [1] * len(label_ids) - while len(input_ids) < max_seq_length: - input_ids.append(0) - input_mask.append(0) - segment_ids.append(0) - label_ids.append(0) - valid.append(1) - label_mask.append(0) - while len(label_ids) < max_seq_length: - label_ids.append(0) - label_mask.append(0) - assert len(input_ids) == max_seq_length - assert len(input_mask) == max_seq_length - assert len(segment_ids) == max_seq_length - assert len(label_ids) == max_seq_length - assert len(valid) == max_seq_length - assert len(label_mask) == max_seq_length - - # ----------- code for ngram BEGIN----------- - ngram_matches = [] - # Filter the ngram segment from 2 to 7 to check whether there is a ngram - max_gram_n = ngram_dict.max_ngram_len - for p in range(2, max_gram_n): - for q in range(0, len(tokens) - p + 1): - character_segment = tokens[q:q + p] - # j is the starting position of the ngram - # i is the length of the current ngram - character_segment = tuple(character_segment) - if character_segment in ngram_dict.ngram_to_id_dict: - ngram_index = ngram_dict.ngram_to_id_dict[character_segment] - ngram_freq = ngram_dict.ngram_to_freq_dict[character_segment] - ngram_matches.append([ngram_index, q, p, character_segment, ngram_freq]) - - ngram_matches = sorted(ngram_matches, key=lambda s: s[0]) - - max_ngram_in_seq_proportion = math.ceil((len(tokens) / 
max_seq_length) * ngram_dict.max_ngram_in_seq) - if len(ngram_matches) > max_ngram_in_seq_proportion: - ngram_matches = ngram_matches[:max_ngram_in_seq_proportion] - - ngram_ids = [ngram[0] for ngram in ngram_matches] - ngram_positions = [ngram[1] for ngram in ngram_matches] - ngram_lengths = [ngram[2] for ngram in ngram_matches] - ngram_tuples = [ngram[3] for ngram in ngram_matches] - ngram_freqs = [ngram[4] for ngram in ngram_matches] - ngram_seg_ids = [0 if position < (len(tokens) + 2) else 1 for position in ngram_positions] - - ngram_mask_array = np.zeros(ngram_dict.max_ngram_in_seq, dtype=np.bool) - ngram_mask_array[:len(ngram_ids)] = 1 - - # record the masked positions - ngram_positions_matrix = np.zeros(shape=(max_seq_length, ngram_dict.max_ngram_in_seq), dtype=np.int32) - for i in range(len(ngram_ids)): - ngram_positions_matrix[ngram_positions[i]:ngram_positions[i] + ngram_lengths[i], i] = ngram_freqs[i] - ngram_positions_matrix = torch.from_numpy(ngram_positions_matrix.astype(np.float)) - ngram_positions_matrix = torch.div(ngram_positions_matrix, torch.stack( - [torch.sum(ngram_positions_matrix, 1)] * ngram_positions_matrix.size(1)).t() + 1e-10) - ngram_positions_matrix = ngram_positions_matrix.numpy() - - # Zero-pad up to the max ngram in seq length. - padding = [0] * (ngram_dict.max_ngram_in_seq - len(ngram_ids)) - ngram_ids += padding - ngram_lengths += padding - ngram_seg_ids += padding - - # ----------- code for ngram END----------- - - if ex_index < 5: - logger.info("*** Example ***") - logger.info("guid: %s" % (example.guid)) - logger.info("tokens: %s" % " ".join([str(x) for x in tokens])) - logger.info("input_ids: %s" % " ".join([str(x) for x in input_ids])) - logger.info("input_mask: %s" % " ".join([str(x) for x in input_mask])) - logger.info("segment_ids: %s" % " ".join([str(x) for x in segment_ids])) - logger.info("label: %s (id = %s)" % (",".join([str(x) for x in example.label]), ",".join([str(x) for x in label_ids]))) - logger.info("valid: %s" % " ".join([str(x) for x in valid])) - logger.info("b_use_valid_filter: %s" % str(b_use_valid_filter)) - logger.info("ngram_ids: %s" % " ".join([str(x) for x in ngram_ids])) - logger.info("ngram_positions: %s" % " ".join([str(x) for x in ngram_positions])) - logger.info("ngram_lengths: %s" % " ".join([str(x) for x in ngram_lengths])) - logger.info("ngram_tuples: %s" % " ".join([str(x) for x in ngram_tuples])) - logger.info("ngram_seg_ids: %s" % " ".join([str(x) for x in ngram_seg_ids])) - - features.append( - InputFeatures(input_ids=input_ids, - input_mask=input_mask, - segment_ids=segment_ids, - label_id=label_ids, - ngram_ids=ngram_ids, - ngram_positions=ngram_positions_matrix, - ngram_lengths=ngram_lengths, - ngram_tuples=ngram_tuples, - ngram_seg_ids=ngram_seg_ids, - ngram_masks=ngram_mask_array, - valid_ids=valid, - label_mask=label_mask, - b_use_valid_filter=b_use_valid_filter)) - return features - - -class DataProcessor(object): - """Base class for data converters for sequence classification data sets.""" - - def get_examples(self, data_path, set_type, quotechar=' '): - """See base class.""" - return self._create_examples( - self._read_tsv(data_path, self.get_quotechar()), set_type) - - def _create_examples(self, lines, set_type): - examples = [] - for i, (sentence, label) in enumerate(lines): - guid = "%s-%s" % (set_type, i) - text_a = sentence - label = label - examples.append(InputExample(guid=guid, text_a=text_a, label=label)) - return examples - - def get_labels(self): - """Gets the list of labels for this data 
set.""" - raise NotImplementedError() - - def get_quotechar(self): - return ' ' - - @classmethod - def _read_tsv(cls, input_file, quotechar=None): - ''' - read file - return format : - [ ['EU', 'B-ORG'], ['rejects', 'O'], ['German', 'B-MISC'], ['call', 'O'], ['to', 'O'], ['boycott', 'O'], ['British', 'B-MISC'], ['lamb', 'O'], ['.', 'O'] ] - ''' - f = open(input_file) - data = [] - sentence = [] - label = [] - for line in f: - if len(line) == 0 or line.startswith('-DOCSTART') or line[0] == "\n": - if len(sentence) > 0: - data.append((sentence, label)) - sentence = [] - label = [] - continue - splits = line.split(quotechar) - sentence.append(splits[0]) - label.append(splits[-1][:-1]) - - if len(sentence) > 0: - data.append((sentence, label)) - sentence = [] - label = [] - return data - - -class MSRAProcessor(DataProcessor): - """Processor for the msra data set.""" - - def get_labels(self): - return ['B-NR', 'B-NS', 'B-NT', 'E-NR', 'E-NS', 'E-NT', 'M-NR', - 'M-NS', 'M-NT', 'O', 'S-NR', 'S-NS', 'S-NT', '[CLS]', '[SEP]'] - - -class OntoNotes4Processor(DataProcessor): - """Processor for the OntoNotes4 data set.""" - - def get_labels(self): - return ['B-GPE', 'B-LOC', 'B-ORG', 'B-PER', 'E-GPE', 'E-LOC', - 'E-ORG', 'E-PER', 'M-GPE', 'M-LOC', 'M-ORG', 'M-PER', 'O', - 'S-GPE', 'S-LOC', 'S-ORG', 'S-PER', '[CLS]', '[SEP]'] - - -class WeiboProcessor(DataProcessor): - """Processor for the Weibo data set.""" - - def get_labels(self): - return ['B-GPE.NAM', 'B-GPE.NOM', 'B-LOC.NAM', 'B-LOC.NOM', - 'B-ORG.NAM', 'B-ORG.NOM', 'B-PER.NAM', 'B-PER.NOM', 'E-GPE.NAM', - 'E-GPE.NOM', 'E-LOC.NAM', 'E-LOC.NOM', 'E-ORG.NAM', 'E-ORG.NOM', - 'E-PER.NAM', 'E-PER.NOM', 'M-GPE.NAM', 'M-LOC.NAM', 'M-LOC.NOM', - 'M-ORG.NAM', 'M-ORG.NOM', 'M-PER.NAM', 'M-PER.NOM', 'O', - 'S-GPE.NAM', 'S-LOC.NOM', 'S-PER.NAM', 'S-PER.NOM', '[CLS]', '[SEP]'] - - -class ResumeProcessor(DataProcessor): - """Processor for the resume data set.""" - - def get_labels(self): - return ['B-CONT', 'B-EDU', 'B-LOC', 'B-NAME', 'B-ORG', 'B-PRO', - 'B-RACE', 'B-TITLE', 'E-CONT', 'E-EDU', 'E-LOC', 'E-NAME', - 'E-ORG', 'E-PRO', 'E-RACE', 'E-TITLE', 'M-CONT', 'M-EDU', - 'M-LOC', 'M-NAME', 'M-ORG', 'M-PRO', 'M-RACE', 'M-TITLE', - 'O', 'S-NAME', 'S-ORG', 'S-RACE', '[CLS]', '[SEP]'] - - -class CMeEEProcessor(DataProcessor): - """Processor for the CMeEE data set.""" - - def get_quotechar(self): - return '\t' - - def get_labels(self): - return ['B-临床表现', 'B-医学检验项目', 'B-医疗程序', 'B-医疗设备', - 'B-微生物类', 'B-疾病', 'B-科室', 'B-药物', 'B-身体', 'I-临床表现', - 'I-医学检验项目', 'I-医疗程序', 'I-医疗设备', 'I-微生物类', - 'I-疾病', 'I-科室', 'I-药物', 'I-身体', 'O', '[CLS]', '[SEP]'] - - -class CLUENERProcessor(DataProcessor): - """Processor for the CLUENER data set.""" - - def get_quotechar(self): - return '\t' - - def get_labels(self): - return ['B-书名', 'B-公司', 'B-地址', 'B-姓名', 'B-政府', 'B-景点', - 'B-游戏', 'B-电影', 'B-组织机构', 'B-职位', 'I-书名', 'I-公司', - 'I-地址', 'I-姓名', 'I-政府', 'I-景点', 'I-游戏', 'I-电影', - 'I-组织机构', 'I-职位', 'O', '[CLS]', '[SEP]'] - - -class TaskDataset(Dataset): - def __init__(self, data_path, processor, mode='train'): - super().__init__() - self.data = self.load_data(data_path, processor, mode) - - def __len__(self): - return len(self.data) - - def __getitem__(self, index): - return self.data[index] - - def load_data(self, data_path, processor, mode): - if mode == "train": - examples = processor.get_examples(data_path, mode) - elif mode == "test": - examples = processor.get_examples(data_path, mode) - elif mode == "dev": - examples = processor.get_examples(data_path, mode) - return examples - - 
-@dataclass -class TaskCollator: - args = None - tokenizer = None - ngram_dict = None - label2id = None - - def __call__(self, samples): - features = convert_examples_to_features(samples, self.label2id, self.args.max_seq_length, self.tokenizer, self.ngram_dict) - # logger.info(" Num examples = %d", len(samples)) - - input_ids = torch.tensor([f.input_ids for f in features], dtype=torch.long) - input_mask = torch.tensor([f.input_mask for f in features], dtype=torch.long) - segment_ids = torch.tensor([f.segment_ids for f in features], dtype=torch.long) - label_ids = torch.tensor([f.label_id for f in features], dtype=torch.long) - valid_ids = torch.tensor([f.valid_ids for f in features], dtype=torch.long) - - ngram_ids = torch.tensor([f.ngram_ids for f in features], dtype=torch.long) - ngram_positions = torch.tensor([f.ngram_positions for f in features], dtype=torch.long) - # ngram_lengths = torch.tensor([f.ngram_lengths for f in features], dtype=torch.long) - # ngram_seg_ids = torch.tensor([f.ngram_seg_ids for f in features], dtype=torch.long) - # ngram_masks = torch.tensor([f.ngram_masks for f in features], dtype=torch.long) - - # label_mask = torch.tensor([f.label_mask for f in features], dtype=torch.long) - b_use_valid_filter = torch.tensor([f.b_use_valid_filter for f in features], dtype=torch.bool) - # 取第一个出来? - # b_use_valid_filter = b_use_valid_filter.detach().cpu().numpy()[0] - b_use_valid_filter = b_use_valid_filter[0] - return { - 'input_ids': input_ids, - 'input_ngram_ids': ngram_ids, - 'ngram_position_matrix': ngram_positions, - 'attention_mask': input_mask, - 'token_type_ids': segment_ids, - 'labels': label_ids, - 'valid_ids': valid_ids, - 'b_use_valid_filter': b_use_valid_filter, - } - - -class TaskDataModel(pl.LightningDataModule): - @staticmethod - def add_data_specific_args(parent_args): - parser = parent_args.add_argument_group('TASK NAME DataModel') - parser.add_argument('--data_dir', default='./data', type=str) - parser.add_argument('--num_workers', default=8, type=int) - parser.add_argument('--train_data', default='train.json', type=str) - parser.add_argument('--valid_data', default='dev.json', type=str) - parser.add_argument('--test_data', default='test.json', type=str) - parser.add_argument('--train_batchsize', default=16, type=int) - parser.add_argument('--valid_batchsize', default=32, type=int) - parser.add_argument('--max_seq_length', default=128, type=int) - - parser.add_argument('--texta_name', default='text', type=str) - parser.add_argument('--textb_name', default='sentence2', type=str) - parser.add_argument('--label_name', default='label', type=str) - parser.add_argument('--id_name', default='id', type=str) - - parser.add_argument('--dataset_name', default=None, type=str) - parser.add_argument('--vocab_file', - type=str, default=None, - help="Vocabulary mapping/file BERT was pretrainined on") - parser.add_argument("--do_lower_case", - action='store_true', - help="Set this flag if you are using an uncased model.") - parser.add_argument('--task_name', default='weibo', type=str) - - return parent_args - - def __init__(self, args): - super().__init__() - self.train_batchsize = args.train_batchsize - self.valid_batchsize = args.valid_batchsize - self.collator = TaskCollator() - self.collator.args = args - self.collator.tokenizer = BertTokenizer.from_pretrained(args.pretrained_model_path, do_lower_case=args.do_lower_case) - self.collator.ngram_dict = ZenNgramDict.from_pretrained(args.pretrained_model_path, tokenizer=self.collator.tokenizer) - - processors = { - 'weibo': 
WeiboProcessor, - 'resume': ResumeProcessor, - 'msra': MSRAProcessor, - 'ontonotes4': OntoNotes4Processor, - 'cmeee': CMeEEProcessor, - 'cluener': CLUENERProcessor, - } - if args.task_name not in processors: - raise ValueError("Task not found: %s" % (args.task_name)) - processor = processors[args.task_name]() - # 生成id映射 - label_list = processor.get_labels() - label2id = {label: i for i, label in enumerate(label_list, 1)} - label2id["[PAD]"] = 0 - self.id2label = {v: k for k, v in label2id.items()} - self.collator.label2id = label2id - - if args.dataset_name is None: - self.train_data = TaskDataset(os.path.join( - args.data_dir, args.train_data), processor, mode='train') - self.valid_data = TaskDataset(os.path.join( - args.data_dir, args.valid_data), processor, mode='dev') - self.test_data = TaskDataset(os.path.join( - args.data_dir, args.test_data), processor, mode='test') - - else: - import datasets - ds = datasets.load_dataset(args.dataset_name) - self.train_data = ds['train'] - self.valid_data = ds['validation'] - self.test_data = ds['test'] - self.save_hyperparameters(args) - - def train_dataloader(self): - return DataLoader(self.train_data, shuffle=True, batch_size=self.train_batchsize, pin_memory=False, - collate_fn=self.collator) - - def val_dataloader(self): - return DataLoader(self.valid_data, shuffle=False, batch_size=self.valid_batchsize, pin_memory=False, - collate_fn=self.collator) - - def predict_dataloader(self): - return DataLoader(self.test_data, shuffle=False, batch_size=self.valid_batchsize, pin_memory=False, - collate_fn=self.collator) - - -class LitModel(pl.LightningModule): - - @staticmethod - def add_model_specific_args(parent_args): - parser = parent_args.add_argument_group('BaseModel') - parser.add_argument('--markup', default='bios', type=str) - parser.add_argument('--middle_prefix', default='I-', type=str) - return parent_args - - def __init__(self, args, id2label): - super().__init__() - # config = ZenConfig(os.path.join(args.pretrained_model_path, 'config.json')) - self.model = ZenForTokenClassification.from_pretrained(args.pretrained_model_path, num_labels=len(id2label)) - self.seq_entity_score = SeqEntityScore(id2label, markup=args.markup, middle_prefix=args.middle_prefix) - self.train_seq_entity_score = SeqEntityScore(id2label, markup=args.markup, middle_prefix=args.middle_prefix) - self.id2label = id2label - self.label2id = {v: k for k, v in id2label.items()} - self.save_hyperparameters(args) - - def setup(self, stage) -> None: - if stage == 'fit': - train_loader = self.trainer._data_connector._train_dataloader_source.dataloader() - - # Calculate total steps - if self.trainer.max_epochs > 0: - world_size = self.trainer.world_size - tb_size = self.hparams.train_batchsize * max(1, world_size) - ab_size = self.trainer.accumulate_grad_batches - self.total_steps = (len(train_loader.dataset) * - self.trainer.max_epochs // tb_size) // ab_size - else: - self.total_steps = self.trainer.max_steps // self.trainer.accumulate_grad_batches - - print('Total steps: {}' .format(self.total_steps)) - - def training_step(self, batch, batch_idx): - outputs = self.model(**batch) - loss = outputs.loss - # logits = outputs.logits - # preds = torch.argmax(F.log_softmax(logits, dim=2), dim=2) - # preds = preds.detach().cpu().numpy() - # labels = batch['labels'].detach().cpu().numpy() - # num_labels = len(self.label2id) - # y_true = [] - # y_pred = [] - # for i, label in enumerate(labels): - # temp_1 = [] - # temp_2 = [] - # for j, m in enumerate(label): - # if j == 0: - # continue - 
# elif labels[i][j] == num_labels - 1: - # y_true.append(temp_1) - # y_pred.append(temp_2) - # break - # else: - # temp_1.append(self.id2label[labels[i][j]]) - # temp_2.append(self.id2label[preds[i][j]]) - - # self.train_seq_entity_score.update(y_true, y_pred) - # result = self.train_seq_entity_score.result() - # self.train_seq_entity_score.reset() - self.log('train_loss', loss) - - return loss - - def validation_step(self, batch, batch_idx): - outputs = self.model(**batch) - loss = outputs.loss - logits = outputs.logits - preds = torch.argmax(F.log_softmax(logits, dim=2), dim=2) - preds = preds.detach().cpu().numpy() - labels = batch['labels'].detach().cpu().numpy() - num_labels = len(self.label2id) - y_true = [] - y_pred = [] - for i, label in enumerate(labels): - temp_1 = [] - temp_2 = [] - for j, m in enumerate(label): - if j == 0: - continue - elif labels[i][j] == num_labels - 1: - y_true.append(temp_1) - y_pred.append(temp_2) - break - else: - temp_1.append(self.id2label[labels[i][j]]) - temp_2.append(self.id2label[preds[i][j]]) - - self.seq_entity_score.update(y_true, y_pred) - self.log('val_loss', loss) - - def validation_epoch_end(self, outputs): - # compute metric for all process - score_dict, _ = self.seq_entity_score.result() - if self.trainer._accelerator_connector.cluster_environment.global_rank() == 0: - print('score_dict:\n', score_dict) - # reset the metric after once validation - self.seq_entity_score.reset() - for k, v in score_dict.items(): - self.log('val_{}'.format(k), v) - - def configure_optimizers(self): - from fengshen.models.model_utils import configure_optimizers - return configure_optimizers(self) - - -class TaskModelCheckpoint: - @staticmethod - def add_argparse_args(parent_args): - parser = parent_args.add_argument_group('BaseModel') - - parser.add_argument('--monitor', default='train_loss', type=str) - parser.add_argument('--mode', default='min', type=str) - parser.add_argument('--dirpath', default='./log/', type=str) - parser.add_argument( - '--filename', default='model-{epoch:02d}-{train_loss:.4f}', type=str) - - parser.add_argument('--save_top_k', default=3, type=float) - parser.add_argument('--every_n_train_steps', default=100, type=float) - parser.add_argument('--save_weights_only', default=True, type=bool) - - return parent_args - - def __init__(self, args): - self.callbacks = ModelCheckpoint(monitor=args.monitor, - save_top_k=args.save_top_k, - mode=args.mode, - every_n_train_steps=args.every_n_train_steps, - save_weights_only=args.save_weights_only, - dirpath=args.dirpath, - filename=args.filename) - - -def save_test(data, args, data_model): - with open(args.output_save_path, 'w', encoding='utf-8') as f: - idx = 0 - for i in range(len(data)): - batch = data[i] - for sample in batch: - tmp_result = dict() - label_id = np.argmax(sample.numpy()) - tmp_result['id'] = data_model.test_data.data[idx]['id'] - tmp_result['label'] = data_model.id2label[label_id] - json_data = json.dumps(tmp_result, ensure_ascii=False) - f.write(json_data+'\n') - idx += 1 - print('save the result to '+args.output_save_path) - - -def main(): - total_parser = argparse.ArgumentParser("TASK NAME") - total_parser.add_argument('--pretrained_model_path', default='', type=str) - total_parser.add_argument('--output_save_path', - default='./predict.json', type=str) - # * Args for data preprocessing - total_parser = TaskDataModel.add_data_specific_args(total_parser) - # * Args for training - total_parser = pl.Trainer.add_argparse_args(total_parser) - total_parser = 
TaskModelCheckpoint.add_argparse_args(total_parser) - - # * Args for base model - from fengshen.models.model_utils import add_module_args - total_parser = add_module_args(total_parser) - total_parser = LitModel.add_model_specific_args(total_parser) - - args = total_parser.parse_args() - - checkpoint_callback = TaskModelCheckpoint(args).callbacks - lr_monitor = LearningRateMonitor(logging_interval='step') - trainer = pl.Trainer.from_argparse_args(args, - callbacks=[checkpoint_callback, lr_monitor] - ) - - data_model = TaskDataModel(args) - id2label = data_model.id2label - print('id2label:', id2label) - model = LitModel(args, id2label) - trainer.fit(model, data_model) - - -if __name__ == "__main__": - main() diff --git a/spaces/sky24h/Controllable_Multi-domain_Semantic_Artwork_Synthesis/README.md b/spaces/sky24h/Controllable_Multi-domain_Semantic_Artwork_Synthesis/README.md deleted file mode 100644 index f863b7b976e8c8eee39ef6a50c6a64235c84e8be..0000000000000000000000000000000000000000 --- a/spaces/sky24h/Controllable_Multi-domain_Semantic_Artwork_Synthesis/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: Controllable Multi-domain Semantic Artwork Synthesis -emoji: 🖼️ -colorFrom: gray -colorTo: pink -sdk: docker -pinned: false -license: cc-by-nc-4.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/.github/ISSUE_TEMPLATE.md b/spaces/sriramelango/Social_Classification_Public/fairseq/.github/ISSUE_TEMPLATE.md deleted file mode 100644 index 5c4c4493e4a8e5386b927e4f4554df925955d129..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/.github/ISSUE_TEMPLATE.md +++ /dev/null @@ -1,3 +0,0 @@ -## 👉 [Please follow one of these issue templates](https://github.com/pytorch/fairseq/issues/new/choose) 👈 - -Note: to keep the backlog clean and actionable, issues may be immediately closed if they do not follow one of the above issue templates. diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/constrained_decoding/README.md b/spaces/sriramelango/Social_Classification_Public/fairseq/examples/constrained_decoding/README.md deleted file mode 100644 index e04b8b6a018214c8233fa87fd91d46a6dd1519d4..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/constrained_decoding/README.md +++ /dev/null @@ -1,123 +0,0 @@ -# (Vectorized) Lexically constrained decoding with dynamic beam allocation - -This page provides instructions for how to use lexically constrained decoding in Fairseq. -Fairseq implements the code described in the following papers: - -* [Fast Lexically Constrained Decoding With Dynamic Beam Allocation](https://www.aclweb.org/anthology/N18-1119/) (Post & Vilar, 2018) -* [Improved Lexically Constrained Decoding for Translation and Monolingual Rewriting](https://www.aclweb.org/anthology/N19-1090/) (Hu et al., 2019) - -## Quick start - -Constrained search is enabled by adding the command-line argument `--constraints` to `fairseq-interactive`. -Constraints are appended to each line of input, separated by tabs. Each constraint (one or more tokens) -is a separate field. - -The following command, using [Fairseq's WMT19 German--English model](https://github.com/pytorch/fairseq/blob/main/examples/wmt19/README.md), -translates the sentence *Die maschinelle Übersetzung ist schwer zu kontrollieren.* with the constraints -"hard" and "to influence". 
- - echo -e "Die maschinelle Übersetzung ist schwer zu kontrollieren.\thard\ttoinfluence" \ - | normalize.py | tok.py \ - | fairseq-interactive /path/to/model \ - --path /path/to/model/model1.pt \ - --bpe fastbpe \ - --bpe-codes /path/to/model/bpecodes \ - --constraints \ - -s de -t en \ - --beam 10 - -(tok.py and normalize.py can be found in the same directory as this README; they are just shortcuts around Fairseq's WMT19 preprocessing). -This will generate the following output: - - [snip] - S-0 Die masch@@ in@@ elle Über@@ setzung ist schwer zu kontrollieren . - W-0 1.844 seconds - C-0 hard - C-0 influence - H-0 -1.5333266258239746 Mach@@ ine trans@@ lation is hard to influence . - D-0 -1.5333266258239746 Machine translation is hard to influence . - P-0 -0.5434 -0.1423 -0.1930 -0.1415 -0.2346 -1.8031 -0.1701 -11.7727 -0.1815 -0.1511 - -By default, constraints are generated in the order supplied, with any number (zero or more) of tokens generated -between constraints. If you wish for the decoder to order the constraints, then use `--constraints unordered`. -Note that you may want to use a larger beam. - -## Implementation details - -The heart of the implementation is in `fairseq/search.py`, which adds a `LexicallyConstrainedBeamSearch` instance. -This instance of beam search tracks the progress of each hypothesis in the beam through the set of constraints -provided for each input sentence. It does this using one of two classes, both found in `fairseq/token_generation_contstraints.py`: - -* OrderedConstraintState: assumes the `C` input constraints will be generated in the provided order -* UnorderedConstraintState: tries to apply `C` (phrasal) constraints in all `C!` orders - -## Differences from Sockeye - -There are a number of [differences from Sockeye's implementation](https://awslabs.github.io/sockeye/inference.html#lexical-constraints). - -* Generating constraints in the order supplied (the default option here) is not available in Sockeye. -* Due to an improved beam allocation method, there is no need to prune the beam. -* Again due to better allocation, beam sizes as low as 10 or even 5 are often sufficient. -* [The vector extensions described in Hu et al.](https://github.com/edwardjhu/sockeye/tree/trie_constraints) (NAACL 2019) were never merged - into the main Sockeye branch. 
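The tab-separated input format described in the Quick start is easy to produce programmatically. The following is a minimal sketch, not part of the original README: it wraps the same `fairseq-interactive` call shown above in a small Python helper, assumes the model directory layout from that example (`model1.pt`, `bpecodes`), and assumes the sentence has already been normalized and tokenized (the `normalize.py | tok.py` step is omitted). Parsing of the `D-*` output lines follows the sample output above.

```python
import subprocess

def translate_with_constraints(sentence, constraints, model_dir="/path/to/model",
                               ordered=True, beam=10, src="de", tgt="en"):
    """Translate one pre-tokenized sentence with lexical constraints.

    The input line is the source sentence followed by each constraint,
    all joined by tabs -- the format fed to fairseq-interactive above.
    """
    line = "\t".join([sentence] + list(constraints))
    cmd = [
        "fairseq-interactive", model_dir,
        "--path", f"{model_dir}/model1.pt",
        "--bpe", "fastbpe",
        "--bpe-codes", f"{model_dir}/bpecodes",
        "-s", src, "-t", tgt,
        "--beam", str(beam),
        "--constraints",
    ]
    if not ordered:
        # let the decoder satisfy the constraints in any order
        cmd.append("unordered")
    out = subprocess.run(cmd, input=line, text=True,
                         capture_output=True, check=True)
    # D-* lines hold the detokenized hypotheses (tab-separated: id, score, text)
    return [l.split("\t", 2)[-1] for l in out.stdout.splitlines()
            if l.startswith("D-")]

if __name__ == "__main__":
    print(translate_with_constraints(
        "Die maschinelle Übersetzung ist schwer zu kontrollieren.",
        ["hard", "to influence"]))
```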
- -## Citation - -The paper first describing lexical constraints for seq2seq decoding is: - -```bibtex -@inproceedings{hokamp-liu-2017-lexically, - title = "Lexically Constrained Decoding for Sequence Generation Using Grid Beam Search", - author = "Hokamp, Chris and - Liu, Qun", - booktitle = "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", - month = jul, - year = "2017", - address = "Vancouver, Canada", - publisher = "Association for Computational Linguistics", - url = "https://www.aclweb.org/anthology/P17-1141", - doi = "10.18653/v1/P17-1141", - pages = "1535--1546", -} -``` - -The fairseq implementation uses the extensions described in - -```bibtex -@inproceedings{post-vilar-2018-fast, - title = "Fast Lexically Constrained Decoding with Dynamic Beam Allocation for Neural Machine Translation", - author = "Post, Matt and - Vilar, David", - booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)", - month = jun, - year = "2018", - address = "New Orleans, Louisiana", - publisher = "Association for Computational Linguistics", - url = "https://www.aclweb.org/anthology/N18-1119", - doi = "10.18653/v1/N18-1119", - pages = "1314--1324", -} -``` - -and - -```bibtex -@inproceedings{hu-etal-2019-improved, - title = "Improved Lexically Constrained Decoding for Translation and Monolingual Rewriting", - author = "Hu, J. Edward and - Khayrallah, Huda and - Culkin, Ryan and - Xia, Patrick and - Chen, Tongfei and - Post, Matt and - Van Durme, Benjamin", - booktitle = "Proceedings of the 2019 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)", - month = jun, - year = "2019", - address = "Minneapolis, Minnesota", - publisher = "Association for Computational Linguistics", - url = "https://www.aclweb.org/anthology/N19-1090", - doi = "10.18653/v1/N19-1090", - pages = "839--850", -} -``` diff --git a/spaces/stamps-labs/stamp2vec/models.py b/spaces/stamps-labs/stamp2vec/models.py deleted file mode 100644 index 660abdd9499eaad4af4e9082d1e4b992f6d17667..0000000000000000000000000000000000000000 --- a/spaces/stamps-labs/stamp2vec/models.py +++ /dev/null @@ -1,135 +0,0 @@ -import torch -import torch.nn as nn - -from constants import * - -""" - Class for custom activation. -""" -class SymReLU(nn.Module): - def __init__(self, inplace: bool = False): - super().__init__() - self.inplace = inplace - - def forward(self, input): - return torch.min(torch.max(input, -torch.ones_like(input)), torch.ones_like(input)) - - def extra_repr(self) -> str: - inplace_str = 'inplace=True' if self.inplace else '' - return inplace_str - - -""" - Class implementing YOLO-Stamp architecture described in https://link.springer.com/article/10.1134/S1054661822040046. 
-""" -class YOLOStamp(nn.Module): - def __init__( - self, - anchors=ANCHORS, - in_channels=3, - ): - super().__init__() - - self.register_buffer('anchors', torch.tensor(anchors)) - - self.act = SymReLU() - self.pool = nn.MaxPool2d(kernel_size=2, stride=2) - self.conv1 = nn.Conv2d(in_channels=in_channels, out_channels=8, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) - self.norm1 = nn.BatchNorm2d(num_features=8) - self.conv2 = nn.Conv2d(in_channels=8, out_channels=16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) - self.norm2 = nn.BatchNorm2d(num_features=16) - self.conv3 = nn.Conv2d(in_channels=16, out_channels=16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) - self.norm3 = nn.BatchNorm2d(num_features=16) - self.conv4 = nn.Conv2d(in_channels=16, out_channels=16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) - self.norm4 = nn.BatchNorm2d(num_features=16) - self.conv5 = nn.Conv2d(in_channels=16, out_channels=16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) - self.norm5 = nn.BatchNorm2d(num_features=16) - self.conv6 = nn.Conv2d(in_channels=16, out_channels=24, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) - self.norm6 = nn.BatchNorm2d(num_features=24) - self.conv7 = nn.Conv2d(in_channels=24, out_channels=24, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) - self.norm7 = nn.BatchNorm2d(num_features=24) - self.conv8 = nn.Conv2d(in_channels=24, out_channels=48, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) - self.norm8 = nn.BatchNorm2d(num_features=48) - self.conv9 = nn.Conv2d(in_channels=48, out_channels=48, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) - self.norm9 = nn.BatchNorm2d(num_features=48) - self.conv10 = nn.Conv2d(in_channels=48, out_channels=48, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) - self.norm10 = nn.BatchNorm2d(num_features=48) - self.conv11 = nn.Conv2d(in_channels=48, out_channels=64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) - self.norm11 = nn.BatchNorm2d(num_features=64) - self.conv12 = nn.Conv2d(in_channels=64, out_channels=256, kernel_size=(1, 1), stride=(1, 1), padding=(0, 0)) - self.norm12 = nn.BatchNorm2d(num_features=256) - self.conv13 = nn.Conv2d(in_channels=256, out_channels=len(anchors) * 5, kernel_size=(1, 1), stride=(1, 1), padding=(0, 0)) - - def forward(self, x, head=True): - x = x.type(self.conv1.weight.dtype) - x = self.act(self.pool(self.norm1(self.conv1(x)))) - x = self.act(self.pool(self.norm2(self.conv2(x)))) - x = self.act(self.pool(self.norm3(self.conv3(x)))) - x = self.act(self.pool(self.norm4(self.conv4(x)))) - x = self.act(self.pool(self.norm5(self.conv5(x)))) - x = self.act(self.norm6(self.conv6(x))) - x = self.act(self.norm7(self.conv7(x))) - x = self.act(self.pool(self.norm8(self.conv8(x)))) - x = self.act(self.norm9(self.conv9(x))) - x = self.act(self.norm10(self.conv10(x))) - x = self.act(self.norm11(self.conv11(x))) - x = self.act(self.norm12(self.conv12(x))) - x = self.conv13(x) - nb, _, nh, nw= x.shape - x = x.permute(0, 2, 3, 1).view(nb, nh, nw, self.anchors.shape[0], 5) - return x - - -class Encoder(torch.nn.Module): - ''' - Encoder Class - Values: - im_chan: the number of channels of the output image, a scalar - hidden_dim: the inner dimension, a scalar - ''' - - def __init__(self, im_chan=3, output_chan=Z_DIM, hidden_dim=ENC_HIDDEN_DIM): - super(Encoder, self).__init__() - self.z_dim = output_chan - self.disc = torch.nn.Sequential( - self.make_disc_block(im_chan, hidden_dim), - self.make_disc_block(hidden_dim, hidden_dim * 2), - self.make_disc_block(hidden_dim * 2, hidden_dim * 
4), - self.make_disc_block(hidden_dim * 4, hidden_dim * 8), - self.make_disc_block(hidden_dim * 8, output_chan * 2, final_layer=True), - ) - - def make_disc_block(self, input_channels, output_channels, kernel_size=4, stride=2, final_layer=False): - ''' - Function to return a sequence of operations corresponding to a encoder block of the VAE, - corresponding to a convolution, a batchnorm (except for in the last layer), and an activation - Parameters: - input_channels: how many channels the input feature representation has - output_channels: how many channels the output feature representation should have - kernel_size: the size of each convolutional filter, equivalent to (kernel_size, kernel_size) - stride: the stride of the convolution - final_layer: whether we're on the final layer (affects activation and batchnorm) - ''' - if not final_layer: - return torch.nn.Sequential( - torch.nn.Conv2d(input_channels, output_channels, kernel_size, stride), - torch.nn.BatchNorm2d(output_channels), - torch.nn.LeakyReLU(0.2, inplace=True), - ) - else: - return torch.nn.Sequential( - torch.nn.Conv2d(input_channels, output_channels, kernel_size, stride), - ) - - def forward(self, image): - ''' - Function for completing a forward pass of the Encoder: Given an image tensor, - returns a 1-dimension tensor representing fake/real. - Parameters: - image: a flattened image tensor with dimension (im_dim) - ''' - disc_pred = self.disc(image) - encoding = disc_pred.view(len(disc_pred), -1) - # The stddev output is treated as the log of the variance of the normal - # distribution by convention and for numerical stability - return encoding[:, :self.z_dim], encoding[:, self.z_dim:].exp() \ No newline at end of file diff --git a/spaces/starlit7/USPoliticsTTS/text/ngu_dialect.py b/spaces/starlit7/USPoliticsTTS/text/ngu_dialect.py deleted file mode 100644 index f0b431b9338f8f363446f56f6e2ca272c46e2f7a..0000000000000000000000000000000000000000 --- a/spaces/starlit7/USPoliticsTTS/text/ngu_dialect.py +++ /dev/null @@ -1,29 +0,0 @@ -import re -import opencc - - -dialects = {'SZ': 'suzhou', 'WX': 'wuxi', 'CZ': 'changzhou', 'HZ': 'hangzhou', - 'SX': 'shaoxing', 'NB': 'ningbo', 'JJ': 'jingjiang', 'YX': 'yixing', - 'JD': 'jiading', 'ZR': 'zhenru', 'PH': 'pinghu', 'TX': 'tongxiang', - 'JS': 'jiashan', 'XS': 'xiashi', 'LP': 'linping', 'XS': 'xiaoshan', - 'FY': 'fuyang', 'RA': 'ruao', 'CX': 'cixi', 'SM': 'sanmen', 'TT': 'tiantai'} - -converters = {} - -for dialect in dialects.values(): - try: - converters[dialect] = opencc.OpenCC(dialect) - except: - pass - - -def ngu_dialect_to_ipa(text, dialect): - dialect = dialects[dialect] - text = converters[dialect].convert(text).replace('$',' ') - text = re.sub(r'[、;:]', ',', text) - text = re.sub(r'\s*,\s*', ', ', text) - text = re.sub(r'\s*。\s*', '. ', text) - text = re.sub(r'\s*?\s*', '? ', text) - text = re.sub(r'\s*!\s*', '! 
', text) - text = re.sub(r'\s*$', '', text) - return text diff --git a/spaces/stomexserde/gpt4-ui/Adrian Gurvitz Classic Flac.md b/spaces/stomexserde/gpt4-ui/Adrian Gurvitz Classic Flac.md deleted file mode 100644 index 8c67d36dce891a2963617b2a0db3d2fb79ecbd3c..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Adrian Gurvitz Classic Flac.md +++ /dev/null @@ -1,43 +0,0 @@ -## Adrian Gurvitz Classic Flac - - - -**LINK • [https://urluso.com/2tx1TL](https://urluso.com/2tx1TL)** - - - -# Adrian Gurvitz - Classic: A Review of the 1982 Album - - - -Adrian Gurvitz is a British singer-songwriter and guitarist who rose to fame in the late 1970s and early 1980s with his solo albums and collaborations with other artists. One of his most successful albums was Classic, released in 1982 by Rdeg Records. The album features eight tracks of pop rock and soft rock, with catchy melodies, smooth vocals and guitar solos. The title track, Classic, was a hit single that reached number 8 on the UK Singles Chart and number 15 on the US Billboard Hot 100. The song is a romantic ballad that expresses Gurvitz's love for music and his desire to write a classic song for his lover. - - - -The album also includes other notable songs, such as No Fears in the Night, a upbeat rocker that showcases Gurvitz's guitar skills; Living Ain't Easy Without You, a tender love song with a piano accompaniment; Hello New York, a tribute to the city that inspired Gurvitz; Your Dream, a motivational anthem that encourages listeners to pursue their dreams; Breakdown, a bluesy track that deals with emotional turmoil; No One Can Take Your Place, a heartfelt declaration of loyalty; and End the Story, a dramatic finale that closes the album with a powerful chorus. The album also features a bonus track, Runaway, which was originally recorded by Gurvitz's previous band, The Baker Gurvitz Army. - - - -Classic is a well-crafted album that showcases Gurvitz's talent as a songwriter and musician. The album has a timeless appeal that transcends the trends of its era. It is available for download in MP3 and FLAC formats from various online platforms[^2^] [^3^]. For fans of pop rock and soft rock, Classic is an album worth listening to. - - - -## Adrian Gurvitz - A Brief Biography - - - -Adrian Gurvitz was born on June 26, 1949 in Stoke Newington, North London. His father was a tour manager for bands like Cliff Richard and the Shadows and the Kinks, and his mother was a singer. He started playing guitar at the age of eight and by age 15, he was touring with artists like Screaming Lord Sutch, Billie Davis and Crispian St. Peters. [^1^] [^4^] - - - -In 1967, he joined Rupert's People, a band that had a hit in Europe with "Reflections of Charles Brown". He then formed the Gun with his brother Paul Gurvitz and drummer Louis Farrell in 1968. The Gun had a top 10 hit in the UK with "Race with the Devil", a hard rock song that influenced many bands in the genre. The Gun released two albums, Gun and Gunsight, before disbanding in 1970. [^1^] [^5^] - - - -Gurvitz then started his solo career, which turned into Three Man Army, a power trio with his brother Paul and various drummers, including Buddy Miles and Carmine Appice. Three Man Army released four albums between 1971 and 1974, blending rock, blues and funk. In 1974, Gurvitz joined forces with legendary drummer Ginger Baker to form the Baker Gurvitz Army, a progressive rock band that also featured vocalist Snips and keyboardist Peter Lemer. 
The Baker Gurvitz Army released three albums between 1974 and 1976, as well as a live album in 1977. [^1^] [^5^] - - - -After the Baker Gurvitz Army split up, Gurvitz moved to Los Angeles and resumed his solo career. He released several albums in the late 70s and early 80s, including Sweet Vendetta, Il Assassino and No Compromise. In 1982, he had his biggest solo hit with "Classic", a soft rock ballad that reached number 8 on the UK Singles Chart and number 15 on the US Billboard Hot 100. The song was also featured on his album Classic, which was produced by David Paich and Jeff Porcaro of Toto. [^1^] [^2^] - - 1b8d091108 \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Chick Corea A Work In Progress Pdf 91.md b/spaces/stomexserde/gpt4-ui/Examples/Chick Corea A Work In Progress Pdf 91.md deleted file mode 100644 index dbad34a0c76b4479f29e8faab30d0d735a6cdb2d..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Chick Corea A Work In Progress Pdf 91.md +++ /dev/null @@ -1,24 +0,0 @@ - -

      Chick Corea A Work In Progress Pdf 91: A Treasure for Musicians

      -

      If you are a musician who wants to learn from one of the most influential and virtuosic jazz pianists of all time, you should check out Chick Corea A Work In Progress Pdf 91. This is a book that Chick Corea wrote before his passing in 2021, where he shares his insights, tips, exercises, and philosophy on being a musician. It is a document of musical knowledge unlike any other, and it is available exclusively at Chick's official store[^1^].

      -

      In this book, Chick answers often-asked questions such as:

      -

      Chick Corea A Work In Progress Pdf 91


      Download File 🌟 https://urlgoal.com/2uIbdA



      -
        -
      • What is the single most important element in making good music?
      • -
      • How can one gain the ability to completely originate one's own music?
      • -
      • How much time and effort should go into getting a single musical product?
      • -
      • What's the best way to evaluate one's own live performance?
      • -
      • What can one do about a "difficult" audience?
      • -
      • Can others' opinions on your music serve some useful purpose?
      • -
      • How to learn an instrument effectively?
      • -
      -

      And much more. Chick also gives examples of his own compositions, improvisations, and practice routines, as well as anecdotes from his illustrious career. He covers topics such as creativity, communication, expression, technique, harmony, melody, rhythm, style, and genre. He also explains his concept of the "musician hat", which is the role and responsibility of a musician in society.

      -

      The book is available in English and Spanish-language editions, and it comes in PDF format. You can download it instantly after purchasing it from Chick's website for $20.00[^1^]. It is a great investment for any musician who wants to improve their skills and understanding of music.

      -

      Chick Corea A Work In Progress Pdf 91 is a treasure that Chick left for us, his "music mind". It is a rare opportunity to learn from a master who dedicated his life to music and inspired generations of musicians. Don't miss this chance to get your copy today!

      - -

      Chick Corea was born in Chelsea, Massachusetts, on June 12, 1941. He began playing piano at age four, and was exposed to jazz music by his father, a trumpet player. He studied classical piano and composition at Columbia University and the Juilliard School, but soon dropped out to pursue a career in jazz. He was influenced by bebop pioneers such as Bud Powell, Charlie Parker, and Dizzy Gillespie, as well as by classical composers such as Mozart, Bach, and Chopin.

      -

      -

      Chick Corea's career spanned more than five decades and encompassed a wide range of musical genres and styles. He played with some of the most prominent jazz musicians of his time, such as Miles Davis, Stan Getz, Herbie Mann, Blue Mitchell, Cal Tjader, and Gary Burton. He also formed his own influential groups, such as Circle, Return to Forever, the Elektric Band, the Akoustic Band, Origin, and the Chick Corea New Trio. He explored various forms of jazz, from straight-ahead to avant-garde, from fusion to acoustic, from Latin to classical. He also composed music for orchestra, chamber ensemble, solo piano, and children.

      -

      Chick Corea was one of the most-nominated artists in the history of the Grammy Awards, with 71 nominations and 27 wins. He also received three Latin Grammy Awards and numerous other honors and accolades. He was a DownBeat Hall of Famer and an NEA Jazz Master. He was widely regarded as a keyboard virtuoso and a prolific composer. He was also a generous mentor and collaborator who shared his musical wisdom and passion with many musicians of different generations and backgrounds.

      -
      -
      \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Dreambox Control Center 2.96 Download Full Version !!EXCLUSIVE!!.md b/spaces/stomexserde/gpt4-ui/Examples/Dreambox Control Center 2.96 Download Full Version !!EXCLUSIVE!!.md deleted file mode 100644 index 0b210ed5fc87bb240da1900e7f4ffda7bd378ec2..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Dreambox Control Center 2.96 Download Full Version !!EXCLUSIVE!!.md +++ /dev/null @@ -1,33 +0,0 @@ -
      -

      Dreambox Control Center 2.96: A Handy Tool for Enigma2 Receivers

      -

Dreambox Control Center (DCC) is a popular software tool for managing your Enigma2 receiver from a computer. With DCC, you can handle network management, use the built-in telnet and FTP clients, download recordings, manage MP3 playlists, and more. DCC is compatible with most Enigma2 receivers, such as Dreambox, Vu+, Gigablue, etc.

      -

      Dreambox Control Center 2.96 Download Full Version


      Download File ○○○ https://urlgoal.com/2uI94k



      -

      In this article, we will show you how to download and install DCC 2.96, the latest version of the software. We will also explain some of the features and benefits of using DCC 2.96 for your Enigma2 receiver.

      -

      How to Download and Install DCC 2.96

      -

      Downloading and installing DCC 2.96 is very easy and straightforward. Here are the steps you need to follow:

      -
        -
      1. Click here to download the zip folder containing DCC 2.96[^1^]. Alternatively, you can also download it from SoundCloud [^2^] or SoundCloud [^3^].
      2. -
      3. Extract the zip folder to a location of your choice on your computer.
      4. -
      5. Run the DCC.exe file as administrator.
      6. -
      7. Enter your Enigma2 receiver's IP address, username, and password in the corresponding fields.
      8. -
      9. Click Connect to establish a connection between your computer and your receiver.
      10. -
      11. You can now use DCC 2.96 to manage your Enigma2 receiver.
      12. -
      -

      Features and Benefits of DCC 2.96

      -

      DCC 2.96 is a powerful and versatile tool that offers many features and benefits for Enigma2 users. Some of them are:

      -
        -
      • You can easily access and modify various settings of your receiver, such as network configuration, satellite list, channel list, EPG settings, etc.
      • -
• You can transfer files between your computer and your receiver using the FTP client feature. You can also upload plugins, skins, scripts, etc. to your receiver this way (a scripted FTP equivalent is sketched just after this list).
      • -
      • You can download recordings from your receiver to your computer using the Download Recordings feature. You can also play them on your computer using VLC player or other media players.
      • -
      • You can create and edit MP3 playlists on your receiver using the MP3 Playlist feature. You can also play them on your receiver or on your computer using VLC player or other media players.
      • -
      • You can use the Telnet Client feature to execute commands on your receiver via a terminal window. You can also use this feature to install or uninstall packages, update software, reboot or shutdown your receiver, etc.
      • -
      • You can use the Network Management feature to scan and ping your network devices, such as routers, switches, etc. You can also use this feature to check the status of your internet connection.
      • -
      • You can use the Backup/Restore feature to backup or restore your receiver's settings, channel list, plugins, etc. You can also use this feature to flash new images to your receiver.
      • -
      • You can use the Screen Capture feature to take screenshots of your receiver's screen and save them on your computer.
      • -
      • You can use the Log Viewer feature to view and save various logs from your receiver, such as boot log, system log, crash log, etc.
      • -
      • You can use the Script Manager feature to run various scripts on your receiver, such as CCcam script, Oscam script, etc.
      • -
      -
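As a concrete illustration of the file-transfer feature above: DCC's FTP client talks to the FTP server that Enigma2 receivers typically run, so the same upload can also be scripted outside of DCC. The snippet below is only a rough sketch and not part of DCC itself; the IP address, credentials, file name, and target directory are placeholders you would replace with your receiver's actual values.

```python
from ftplib import FTP

RECEIVER_IP = "192.168.1.50"   # placeholder: your receiver's IP address
USERNAME = "root"              # placeholder: login depends on your image
PASSWORD = ""                  # placeholder: set your receiver's password

def upload_file(local_path, remote_dir="/tmp"):
    """Upload a file (e.g. a plugin package) to the receiver over FTP."""
    with FTP(RECEIVER_IP) as ftp:
        ftp.login(USERNAME, PASSWORD)
        ftp.cwd(remote_dir)
        with open(local_path, "rb") as fh:
            # store the file under its own base name in the target directory
            ftp.storbinary(f"STOR {local_path.split('/')[-1]}", fh)

if __name__ == "__main__":
    upload_file("my-plugin.ipk")
```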

DCC 2.96 is must-have software for any Enigma2 user who wants more control over their receiver and more convenience in managing it. It is easy to use and has a user-friendly interface. It is also free and regularly updated by its developer.

      -

      If you have any questions or feedback about DCC 2.96, feel free to leave a comment below or contact us via email.

      -
      -
      \ No newline at end of file diff --git a/spaces/sunshineatnoon/TextureScraping/swapae/evaluation/swap_visualization_evaluator.py b/spaces/sunshineatnoon/TextureScraping/swapae/evaluation/swap_visualization_evaluator.py deleted file mode 100644 index 73989dea8025f950d12dd0e66cafebce884eb488..0000000000000000000000000000000000000000 --- a/spaces/sunshineatnoon/TextureScraping/swapae/evaluation/swap_visualization_evaluator.py +++ /dev/null @@ -1,91 +0,0 @@ -import os -from PIL import Image -import numpy as np -import torch -from swapae.evaluation import BaseEvaluator -import swapae.util as util - - -class SwapVisualizationEvaluator(BaseEvaluator): - @staticmethod - def modify_commandline_options(parser, is_train): - parser.add_argument("--swap_num_columns", type=int, default=4, - help="number of images to be shown in the swap visualization grid. Setting this value will result in 4x4 swapping grid, with additional row and col for showing original images.") - parser.add_argument("--swap_num_images", type=int, default=16, - help="total number of images to perform swapping. In the end, (swap_num_images / swap_num_columns) grid will be saved to disk") - return parser - - def gather_images(self, dataset): - all_images = [] - num_images_to_gather = max(self.opt.swap_num_columns, self.opt.num_gpus) - exhausted = False - while len(all_images) < num_images_to_gather: - try: - data = next(dataset) - except StopIteration: - print("Exhausted the dataset at %s" % (self.opt.dataroot)) - exhausted = True - break - for i in range(data["real_A"].size(0)): - all_images.append(data["real_A"][i:i+1]) - if "real_B" in data: - all_images.append(data["real_B"][i:i+1]) - if len(all_images) >= num_images_to_gather: - break - if len(all_images) == 0: - return None, None, True - return all_images, exhausted - - def generate_mix_grid(self, model, images): - sps, gls = [], [] - for image in images: - assert image.size(0) == 1 - sp, gl = model(image.expand(self.opt.num_gpus, -1, -1, -1), command="encode") - sp = sp[:1] - gl = gl[:1] - sps.append(sp) - gls.append(gl) - gl = torch.cat(gls, dim=0) - - def put_img(img, canvas, row, col): - h, w = img.shape[0], img.shape[1] - start_x = int(self.opt.load_size * col + (self.opt.load_size - w) * 0.5) - start_y = int(self.opt.load_size * row + (self.opt.load_size - h) * 0.5) - canvas[start_y:start_y + h, start_x: start_x + w] = img - grid_w = self.opt.load_size * (gl.size(0) + 1) - grid_h = self.opt.load_size * (gl.size(0) + 1) - grid_img = np.ones((grid_h, grid_w, 3), dtype=np.uint8) - #images_np = util.tensor2im(images, tile=False) - for i, image in enumerate(images): - image_np = util.tensor2im(image, tile=False)[0] - put_img(image_np, grid_img, 0, i + 1) - put_img(image_np, grid_img, i + 1, 0) - - for i, sp in enumerate(sps): - sp_for_current_row = sp.repeat(gl.size(0), 1, 1, 1) - mix_row = model(sp_for_current_row, gl, command="decode") - mix_row = util.tensor2im(mix_row, tile=False) - for j, mix in enumerate(mix_row): - put_img(mix, grid_img, i + 1, j + 1) - - final_grid = Image.fromarray(grid_img) - return final_grid - - def evaluate(self, model, dataset, nsteps): - nsteps = self.opt.resume_iter if nsteps is None else str(round(nsteps / 1000)) + "k" - savedir = os.path.join(self.output_dir(), "%s_%s" % (self.target_phase, nsteps)) - os.makedirs(savedir, exist_ok=True) - webpage_title = "Swap Visualization of %s. iter=%s. 
phase=%s" % \ - (self.opt.name, str(nsteps), self.target_phase) - webpage = util.HTML(savedir, webpage_title) - num_repeats = int(np.ceil(self.opt.swap_num_images / max(self.opt.swap_num_columns, self.opt.num_gpus))) - for i in range(num_repeats): - images, should_break = self.gather_images(dataset) - if images is None: - break - mix_grid = self.generate_mix_grid(model, images) - webpage.add_images([mix_grid], ["%04d.png" % i]) - if should_break: - break - webpage.save() - return {} diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Amped Five Full Download ((INSTALL))golkes.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Amped Five Full Download ((INSTALL))golkes.md deleted file mode 100644 index d8cc4ecb045b5f640936400e2dad6e4e0d018df7..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Amped Five Full Download ((INSTALL))golkes.md +++ /dev/null @@ -1,9 +0,0 @@ - -

Next time I read a blog, hopefully it doesn't fail me as much as this particular one. I mean, yes, it was my choice to read, but I actually thought you'd have something useful to talk about. All I hear is a bunch of complaining about something you could possibly fix if you weren't too busy seeking attention.

      -

      amped five full downloadgolkes


      Download Filehttps://cinurl.com/2uEYGU



      -

      -

combopancy.com WordPress themes: premium business theme
combopancy.com is one of the most flexible and feature-rich premium business WordPress themes, suited to all kinds of businesses and multipurpose needs. Its theme files are well organized for better performance.
It is designed and developed with a combination of Web 2.0 and Web 1.0 technologies and can be fully customised to meet your exact needs.
      -
      -
      \ No newline at end of file diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Economiaprincipiosyaplicaciones3raedicionmochonybecker.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Economiaprincipiosyaplicaciones3raedicionmochonybecker.md deleted file mode 100644 index 8d4ee261abb24f31b89c87d79fb812237b4065af..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Economiaprincipiosyaplicaciones3raedicionmochonybecker.md +++ /dev/null @@ -1,6 +0,0 @@ -

      economiaprincipiosyaplicaciones3raedicionmochonybecker


      Download Filehttps://cinurl.com/2uEYXr



      -
      - d5da3c52bf
      -
      -
      -

      diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/MAGIX.Video.Pro.X3.v10.0.12.2-EQUiNOX Download !EXCLUSIVE!.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/MAGIX.Video.Pro.X3.v10.0.12.2-EQUiNOX Download !EXCLUSIVE!.md deleted file mode 100644 index 8b867196dba3ae190ce46089f8afc6158b340ddc..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/MAGIX.Video.Pro.X3.v10.0.12.2-EQUiNOX Download !EXCLUSIVE!.md +++ /dev/null @@ -1,8 +0,0 @@ - -

It lets the user edit, upload and download files and create, convert, burn, edit, upload and download audio and video files stored locally on the PC. You can create a backup of your audio and video files with this tool, and you can add a logo to a video file; once added, the logo is shown whenever the clip plays.
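In practice, the logo feature is simply an image composited over every frame of the clip. As a rough illustration only (this is not MAGIX itself: it assumes the ffmpeg command-line tool is installed and uses hypothetical file names), the same effect can be sketched in Python with ffmpeg's overlay filter:

```python
# Hypothetical sketch: overlay a logo on a video with ffmpeg (assumed installed).
# File names are placeholders, not files referenced by this article.
import subprocess

def add_logo(video_in: str, logo_png: str, video_out: str) -> None:
    """Composite logo_png over video_in, 10 px from the top-left corner."""
    subprocess.run(
        [
            "ffmpeg",
            "-i", video_in,                       # source clip
            "-i", logo_png,                       # logo image (alpha supported)
            "-filter_complex", "overlay=10:10",   # place logo at x=10, y=10
            "-codec:a", "copy",                   # keep the original audio stream
            video_out,
        ],
        check=True,                               # raise if ffmpeg fails
    )

add_logo("clip.mp4", "logo.png", "clip_with_logo.mp4")
```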

      -

Recently the game "Undertale" appeared, and you may be wondering how to download it. Not many sites let you download it, but downloading Undertale is easy with this software, which you can get from the button below.

      -

      MAGIX.Video.Pro.X3.v10.0.12.2-EQUiNOX download


      Download Ziphttps://cinurl.com/2uEZ5X



      -

Even the best video editing tools are only as effective as the knowledge of the people using them, and many professionals learn everything they can about their editing software, and about editing video in general, before they become fully productive. Not everyone, however, needs to spend months or years learning an editor. In this video, viewers get an overview of what to look for when buying a video editing program and when choosing video editing tutorials, courses, articles and videos.

      -

If you want the latest TF2 mod packs, check out this easy and efficient downloader. It is safe, reliable and free, and if you need a TF2 mod pack right now it is an excellent choice. Just follow the instructions and the software will handle the download for you; it is very user-friendly. All you need to do is open the game, scroll to the level you want and choose your download, and the mod pack will be downloaded directly from the TF2 servers, so you receive it just in time.

      -
      -
      \ No newline at end of file diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Xforce Keygen Inventor Professional 2018 32 Bit Windows.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Xforce Keygen Inventor Professional 2018 32 Bit Windows.md deleted file mode 100644 index b447d6347cdc521f6e1cb7fc8b6118d9c2540902..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Xforce Keygen Inventor Professional 2018 32 Bit Windows.md +++ /dev/null @@ -1,6 +0,0 @@ - -

X-Force keygen for Autodesk project 2017, 64-bit + 32-bit, for all model types, even full-architecture devices with separate memory sockets, and for customers who are using a processor from an earlier version. Related terms: keygen, key generator, key ring, patch, patcher.

      -

      xforce keygen Inventor Professional 2018 32 bit windows


      Download ››› https://cinurl.com/2uEYgT



      -

Download the Autodesk Inventor 2016 crack free for 32-bit and 64-bit, plus the Autodesk Inventor 2012 and 2014 cracks for the latest versions of Microsoft Windows. AutoCAD is an application that allows designers to build designs, and the X-Force keygen can generate or remove license keys for it; the only difference is that X-Force can be used across all versions of AutoCAD, including AutoCAD 2010, 2011, 2012, 2014, 2016, 2019 and 2020 in both 32-bit and 64-bit editions. Free downloads of the X-Force keygen and the corresponding cracks are available for each of these releases.

      -
      -
      \ No newline at end of file diff --git a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmseg/models/segmentors/encoder_decoder.py b/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmseg/models/segmentors/encoder_decoder.py deleted file mode 100644 index 98392ac04c4c44a7f4e7b1c0808266875877dd1f..0000000000000000000000000000000000000000 --- a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmseg/models/segmentors/encoder_decoder.py +++ /dev/null @@ -1,298 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F - -from annotator.uniformer.mmseg.core import add_prefix -from annotator.uniformer.mmseg.ops import resize -from .. import builder -from ..builder import SEGMENTORS -from .base import BaseSegmentor - - -@SEGMENTORS.register_module() -class EncoderDecoder(BaseSegmentor): - """Encoder Decoder segmentors. - - EncoderDecoder typically consists of backbone, decode_head, auxiliary_head. - Note that auxiliary_head is only used for deep supervision during training, - which could be dumped during inference. - """ - - def __init__(self, - backbone, - decode_head, - neck=None, - auxiliary_head=None, - train_cfg=None, - test_cfg=None, - pretrained=None): - super(EncoderDecoder, self).__init__() - self.backbone = builder.build_backbone(backbone) - if neck is not None: - self.neck = builder.build_neck(neck) - self._init_decode_head(decode_head) - self._init_auxiliary_head(auxiliary_head) - - self.train_cfg = train_cfg - self.test_cfg = test_cfg - - self.init_weights(pretrained=pretrained) - - assert self.with_decode_head - - def _init_decode_head(self, decode_head): - """Initialize ``decode_head``""" - self.decode_head = builder.build_head(decode_head) - self.align_corners = self.decode_head.align_corners - self.num_classes = self.decode_head.num_classes - - def _init_auxiliary_head(self, auxiliary_head): - """Initialize ``auxiliary_head``""" - if auxiliary_head is not None: - if isinstance(auxiliary_head, list): - self.auxiliary_head = nn.ModuleList() - for head_cfg in auxiliary_head: - self.auxiliary_head.append(builder.build_head(head_cfg)) - else: - self.auxiliary_head = builder.build_head(auxiliary_head) - - def init_weights(self, pretrained=None): - """Initialize the weights in backbone and heads. - - Args: - pretrained (str, optional): Path to pre-trained weights. - Defaults to None. 
- """ - - super(EncoderDecoder, self).init_weights(pretrained) - self.backbone.init_weights(pretrained=pretrained) - self.decode_head.init_weights() - if self.with_auxiliary_head: - if isinstance(self.auxiliary_head, nn.ModuleList): - for aux_head in self.auxiliary_head: - aux_head.init_weights() - else: - self.auxiliary_head.init_weights() - - def extract_feat(self, img): - """Extract features from images.""" - x = self.backbone(img) - if self.with_neck: - x = self.neck(x) - return x - - def encode_decode(self, img, img_metas): - """Encode images with backbone and decode into a semantic segmentation - map of the same size as input.""" - x = self.extract_feat(img) - out = self._decode_head_forward_test(x, img_metas) - out = resize( - input=out, - size=img.shape[2:], - mode='bilinear', - align_corners=self.align_corners) - return out - - def _decode_head_forward_train(self, x, img_metas, gt_semantic_seg): - """Run forward function and calculate loss for decode head in - training.""" - losses = dict() - loss_decode = self.decode_head.forward_train(x, img_metas, - gt_semantic_seg, - self.train_cfg) - - losses.update(add_prefix(loss_decode, 'decode')) - return losses - - def _decode_head_forward_test(self, x, img_metas): - """Run forward function and calculate loss for decode head in - inference.""" - seg_logits = self.decode_head.forward_test(x, img_metas, self.test_cfg) - return seg_logits - - def _auxiliary_head_forward_train(self, x, img_metas, gt_semantic_seg): - """Run forward function and calculate loss for auxiliary head in - training.""" - losses = dict() - if isinstance(self.auxiliary_head, nn.ModuleList): - for idx, aux_head in enumerate(self.auxiliary_head): - loss_aux = aux_head.forward_train(x, img_metas, - gt_semantic_seg, - self.train_cfg) - losses.update(add_prefix(loss_aux, f'aux_{idx}')) - else: - loss_aux = self.auxiliary_head.forward_train( - x, img_metas, gt_semantic_seg, self.train_cfg) - losses.update(add_prefix(loss_aux, 'aux')) - - return losses - - def forward_dummy(self, img): - """Dummy forward function.""" - seg_logit = self.encode_decode(img, None) - - return seg_logit - - def forward_train(self, img, img_metas, gt_semantic_seg): - """Forward function for training. - - Args: - img (Tensor): Input images. - img_metas (list[dict]): List of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - `mmseg/datasets/pipelines/formatting.py:Collect`. - gt_semantic_seg (Tensor): Semantic segmentation masks - used if the architecture supports semantic segmentation task. - - Returns: - dict[str, Tensor]: a dictionary of loss components - """ - - x = self.extract_feat(img) - - losses = dict() - - loss_decode = self._decode_head_forward_train(x, img_metas, - gt_semantic_seg) - losses.update(loss_decode) - - if self.with_auxiliary_head: - loss_aux = self._auxiliary_head_forward_train( - x, img_metas, gt_semantic_seg) - losses.update(loss_aux) - - return losses - - # TODO refactor - def slide_inference(self, img, img_meta, rescale): - """Inference by sliding-window with overlap. - - If h_crop > h_img or w_crop > w_img, the small patch will be used to - decode without padding. 
- """ - - h_stride, w_stride = self.test_cfg.stride - h_crop, w_crop = self.test_cfg.crop_size - batch_size, _, h_img, w_img = img.size() - num_classes = self.num_classes - h_grids = max(h_img - h_crop + h_stride - 1, 0) // h_stride + 1 - w_grids = max(w_img - w_crop + w_stride - 1, 0) // w_stride + 1 - preds = img.new_zeros((batch_size, num_classes, h_img, w_img)) - count_mat = img.new_zeros((batch_size, 1, h_img, w_img)) - for h_idx in range(h_grids): - for w_idx in range(w_grids): - y1 = h_idx * h_stride - x1 = w_idx * w_stride - y2 = min(y1 + h_crop, h_img) - x2 = min(x1 + w_crop, w_img) - y1 = max(y2 - h_crop, 0) - x1 = max(x2 - w_crop, 0) - crop_img = img[:, :, y1:y2, x1:x2] - crop_seg_logit = self.encode_decode(crop_img, img_meta) - preds += F.pad(crop_seg_logit, - (int(x1), int(preds.shape[3] - x2), int(y1), - int(preds.shape[2] - y2))) - - count_mat[:, :, y1:y2, x1:x2] += 1 - assert (count_mat == 0).sum() == 0 - if torch.onnx.is_in_onnx_export(): - # cast count_mat to constant while exporting to ONNX - count_mat = torch.from_numpy( - count_mat.cpu().detach().numpy()).to(device=img.device) - preds = preds / count_mat - if rescale: - preds = resize( - preds, - size=img_meta[0]['ori_shape'][:2], - mode='bilinear', - align_corners=self.align_corners, - warning=False) - return preds - - def whole_inference(self, img, img_meta, rescale): - """Inference with full image.""" - - seg_logit = self.encode_decode(img, img_meta) - if rescale: - # support dynamic shape for onnx - if torch.onnx.is_in_onnx_export(): - size = img.shape[2:] - else: - size = img_meta[0]['ori_shape'][:2] - seg_logit = resize( - seg_logit, - size=size, - mode='bilinear', - align_corners=self.align_corners, - warning=False) - - return seg_logit - - def inference(self, img, img_meta, rescale): - """Inference with slide/whole style. - - Args: - img (Tensor): The input image of shape (N, 3, H, W). - img_meta (dict): Image info dict where each dict has: 'img_shape', - 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - `mmseg/datasets/pipelines/formatting.py:Collect`. - rescale (bool): Whether rescale back to original shape. - - Returns: - Tensor: The output segmentation map. - """ - - assert self.test_cfg.mode in ['slide', 'whole'] - ori_shape = img_meta[0]['ori_shape'] - assert all(_['ori_shape'] == ori_shape for _ in img_meta) - if self.test_cfg.mode == 'slide': - seg_logit = self.slide_inference(img, img_meta, rescale) - else: - seg_logit = self.whole_inference(img, img_meta, rescale) - output = F.softmax(seg_logit, dim=1) - flip = img_meta[0]['flip'] - if flip: - flip_direction = img_meta[0]['flip_direction'] - assert flip_direction in ['horizontal', 'vertical'] - if flip_direction == 'horizontal': - output = output.flip(dims=(3, )) - elif flip_direction == 'vertical': - output = output.flip(dims=(2, )) - - return output - - def simple_test(self, img, img_meta, rescale=True): - """Simple test with single image.""" - seg_logit = self.inference(img, img_meta, rescale) - seg_pred = seg_logit.argmax(dim=1) - if torch.onnx.is_in_onnx_export(): - # our inference backend only support 4D output - seg_pred = seg_pred.unsqueeze(0) - return seg_pred - seg_pred = seg_pred.cpu().numpy() - # unravel batch dim - seg_pred = list(seg_pred) - return seg_pred - - def aug_test(self, imgs, img_metas, rescale=True): - """Test with augmentations. - - Only rescale=True is supported. 
- """ - # aug_test rescale all imgs back to ori_shape for now - assert rescale - # to save memory, we get augmented seg logit inplace - seg_logit = self.inference(imgs[0], img_metas[0], rescale) - for i in range(1, len(imgs)): - cur_seg_logit = self.inference(imgs[i], img_metas[i], rescale) - seg_logit += cur_seg_logit - seg_logit /= len(imgs) - seg_pred = seg_logit.argmax(dim=1) - seg_pred = seg_pred.cpu().numpy() - # unravel batch dim - seg_pred = list(seg_pred) - return seg_pred diff --git a/spaces/szukevin/VISOR-GPT/train/tencentpretrain/utils/image_tokenizer.py b/spaces/szukevin/VISOR-GPT/train/tencentpretrain/utils/image_tokenizer.py deleted file mode 100644 index 06e398a8f1f8b21012d643f26818455a1e405b8f..0000000000000000000000000000000000000000 --- a/spaces/szukevin/VISOR-GPT/train/tencentpretrain/utils/image_tokenizer.py +++ /dev/null @@ -1,80 +0,0 @@ -import yaml -import torch -import torch.nn.functional as F -from omegaconf import OmegaConf -from einops import rearrange -from taming.models.vqgan import VQModel, GumbelVQ -from taming.models.cond_transformer import Net2NetTransformer -from PIL import Image -from torchvision.utils import make_grid, save_image -from math import sqrt, log -#https://github.com/lucidrains/DALLE-pytorch/blob/main/dalle_pytorch/vae.py#L160 - -def load_vqgan(config, ckpt_path=None, is_gumbel=False, is_transformer=False): - if is_gumbel: - model = GumbelVQ(**config.model.params) - elif is_transformer: - model = Net2NetTransformer(**config.model.params) - else: - model = VQModel(**config.model.params) - if ckpt_path is not None: - sd = torch.load(ckpt_path, map_location="cpu")["state_dict"] - missing, unexpected = model.load_state_dict(sd, strict=False) - - if is_transformer: - model = model.first_stage_model - return model - - -def preprocess_vqgan(x): - x = 2.*x - 1. - return x - - -def build_vqgan_model(args): - config = OmegaConf.load(args.vqgan_config_path) - vqgan_model = load_vqgan(config, ckpt_path=args.vqgan_model_path, - is_transformer=args.image_tokenizer["is_transformer"], - is_gumbel=args.image_tokenizer["is_gumbel"]) - return vqgan_model - - -def image_tokenize(vqgan_model, image, is_gumbel=False): - image = torch.stack([preprocess_vqgan(image)], 0) - with torch.no_grad(): - _, _, [_, _, indices] = vqgan_model.encode(image) - if is_gumbel: - image_tokens = rearrange(indices, 'b h w -> b (h w)', b = 1).flatten().tolist() - else: - image_tokens = rearrange(indices, '(b n) -> b n', b = 1).flatten().tolist() - - return image_tokens - - -def image_tokenize_batch(vqgan_model, images, is_gumbel=False): - image_src = torch.stack([preprocess_vqgan(image) for image in images], 0) - with torch.no_grad(): - _, _, [_, _, indices] = vqgan_model.encode(image_src) - if is_gumbel: - image_tokens = rearrange(indices, 'b h w -> b (h w)', b = len(images)).tolist() - else: - image_tokens = rearrange(indices, '(b n) -> b n', b = len(images)).tolist() - - return image_tokens - - -def image_detokenize(vqgan_model, image_tokens, image_vocab_size=1024, is_gumbel=False, save_path=None): - with torch.no_grad(): - b, n = 1, len(image_tokens) - one_hot_indices = F.one_hot(torch.tensor([image_tokens]), num_classes = image_vocab_size).float().to(vqgan_model.device) - z = one_hot_indices @ vqgan_model.quantize.embed.weight if is_gumbel \ - else (one_hot_indices @ vqgan_model.quantize.embedding.weight) - z = rearrange(z, 'b (h w) c -> b c h w', h = int(sqrt(n))).to(vqgan_model.device) - img = vqgan_model.decode(z) - img = (img.clamp(-1., 1.) 
+ 1) * 0.5 - - if save_path: - save_image(img, save_path, normalize=False) - return img - - diff --git a/spaces/tabeina/bingo1/src/components/chat-panel.tsx b/spaces/tabeina/bingo1/src/components/chat-panel.tsx deleted file mode 100644 index 56b2112bd75ba08134383871177851fa2e3f43a4..0000000000000000000000000000000000000000 --- a/spaces/tabeina/bingo1/src/components/chat-panel.tsx +++ /dev/null @@ -1,153 +0,0 @@ -'use client' - -import * as React from 'react' -import Image from 'next/image' -import Textarea from 'react-textarea-autosize' -import { useAtomValue } from 'jotai' -import { useEnterSubmit } from '@/lib/hooks/use-enter-submit' -import { cn } from '@/lib/utils' - -import BrushIcon from '@/assets/images/brush.svg' -import ChatIcon from '@/assets/images/chat.svg' -import VisualSearchIcon from '@/assets/images/visual-search.svg' -import SendIcon from '@/assets/images/send.svg' -import PinIcon from '@/assets/images/pin.svg' -import PinFillIcon from '@/assets/images/pin-fill.svg' - -import { useBing } from '@/lib/hooks/use-bing' -import { voiceListenAtom } from '@/state' -import Voice from './voice' -import { ChatImage } from './chat-image' -import { ChatAttachments } from './chat-attachments' - -export interface ChatPanelProps - extends Pick< - ReturnType, - | 'generating' - | 'input' - | 'setInput' - | 'sendMessage' - | 'resetConversation' - | 'isSpeaking' - | 'attachmentList' - | 'uploadImage' - | 'setAttachmentList' - > { - id?: string - className?: string -} - -export function ChatPanel({ - isSpeaking, - generating, - input, - setInput, - className, - sendMessage, - resetConversation, - attachmentList, - uploadImage, - setAttachmentList -}: ChatPanelProps) { - const inputRef = React.useRef(null) - const {formRef, onKeyDown} = useEnterSubmit() - const [focused, setFocused] = React.useState(false) - const [active, setActive] = React.useState(false) - const [pin, setPin] = React.useState(false) - const [tid, setTid] = React.useState() - const voiceListening = useAtomValue(voiceListenAtom) - - const setBlur = React.useCallback(() => { - clearTimeout(tid) - setActive(false) - const _tid = setTimeout(() => setFocused(false), 2000); - setTid(_tid) - }, [tid]) - - const setFocus = React.useCallback(() => { - setFocused(true) - setActive(true) - clearTimeout(tid) - inputRef.current?.focus() - }, [tid]) - - React.useEffect(() => { - if (input) { - setFocus() - } - }, [input, setFocus]) - - return ( -
      { - e.preventDefault() - if (generating) { - return; - } - if (!input?.trim()) { - return - } - setInput('') - setPin(false) - await sendMessage(input) - }} - ref={formRef} - > -
      -
      -
      -
      -
      -
      -
      - -
      -
      -
      -
      - chat -