diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/Seven-Days-Korean-Movie-Download-NEW.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/Seven-Days-Korean-Movie-Download-NEW.md deleted file mode 100644 index ea13ab43881297914faf3b3d19471ae23d76ad85..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/Seven-Days-Korean-Movie-Download-NEW.md +++ /dev/null @@ -1,84 +0,0 @@ -## Seven Days Korean Movie Download - - - - - - ![Seven Days Korean Movie Download NEW!](https://pic2.iqiyipic.com/image/20210603/0d/f7/v_159396427_m_601_zh-CN_m1_260_360.jpg) - - - - - -**Click Here ->>> [https://www.google.com/url?q=https%3A%2F%2Furluss.com%2F2txKQs&sa=D&sntz=1&usg=AOvVaw3fc6\_OWnNAEWxloP1aXB2q](https://www.google.com/url?q=https%3A%2F%2Furluss.com%2F2txKQs&sa=D&sntz=1&usg=AOvVaw3fc6\_OWnNAEWxloP1aXB2q)** - - - - - - - - - - - - - -# Seven Days Korean Movie Download: A Gripping Crime Thriller Starring Yunjin Kim - - - -If you are looking for a suspenseful and captivating movie to watch, you might want to check out Seven Days, a 2007 South Korean crime thriller film directed by Won Shin-yun, starring Yunjin Kim and Park Hee-soon. The film had 2,107,849 admissions nationwide and was the 9th most-attended domestic film of 2007. [1] It also won several awards, including Best Actress for Yunjin Kim and Best Supporting Actor for Park Hee-soon at the Grand Bell Awards and the Korean Film Awards. [2] - - - -The plot of Seven Days revolves around Yoo Ji-yeon (Yunjin Kim), a prominent lawyer who has never lost a case. One day, her daughter is kidnapped by a mysterious man who demands that she defend a five-time convicted felon who is appealing his conviction for rape and murder. Ji-yeon has only seven days before his trial ends to prove his innocence and save her daughter. Along the way, she uncovers a web of corruption, conspiracy and secrets that put her life and career in danger. - - - -Seven Days is a fast-paced and thrilling movie that will keep you on the edge of your seat. The film boasts of excellent performances by the lead actors, especially Yunjin Kim, who portrays the desperate and determined mother with great skill and emotion. The film also features impressive cinematography, editing, music and sound effects that enhance the mood and tension of the story. The film has been praised by critics and audiences alike for its clever plot twists, realistic characters and gripping action scenes. [3] - - - -If you want to watch Seven Days online, you can find it on iQ.com, a streaming platform that offers a variety of Asian movies and dramas with English subtitles. You can also download the movie to watch offline on your device. To access iQ.com, you need to register for a free account and verify your email address. You can then enjoy watching Seven Days and other amazing content on iQ.com. [4] - - - -Don't miss this opportunity to watch Seven Days Korean movie download online for free on iQ.com. You will not regret it! - - - -[1] https://en.wikipedia.org/wiki/Seven\_Days\_(2007\_film) - - [2] https://www.imdb.com/title/tt0997229/awards - - [3] https://www.imdb.com/title/tt0997229/reviews - - [4] https://www.iq.com/album/seven-days-2007-bmk341bglo?lang=en\_us - - - -Here is the continuation of the article: - - - -Seven Days is not only a thrilling movie, but also a meaningful one. It explores the themes of justice, morality, family and sacrifice. It raises questions about how far one would go to save a loved one, and what price one would pay for doing so. 
It also shows the corruption and injustice that exist in the legal system and the society. It challenges the viewers to think about their own values and choices in difficult situations. - - - -The film has also been remade in Bollywood as Jazbaa, starring Aishwarya Rai Bachchan and Irrfan Khan. The remake follows the same plot as the original, but with some changes to suit the Indian context and audience. The remake was released in 2015 and received mixed reviews from critics and viewers. Some praised the performances and the direction, while others criticized the screenplay and the music. [5] - - - -Whether you watch the original or the remake, Seven Days is a movie that will not disappoint you. It is a movie that will keep you hooked from start to finish. It is a movie that will make you feel and think. It is a movie that you should not miss. - - - -[5] https://en.wikipedia.org/wiki/Jazbaa - - dfd1c89656 - - - - - diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Ahoura Bold Font Free A Balanced and Eye-Catching Font for Multiple Applications.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Ahoura Bold Font Free A Balanced and Eye-Catching Font for Multiple Applications.md deleted file mode 100644 index e82454e9867abfac753207b06b0052b2deac271f..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Ahoura Bold Font Free A Balanced and Eye-Catching Font for Multiple Applications.md +++ /dev/null @@ -1,128 +0,0 @@ - -

Ahoura Bold Font Free: A Modern and Elegant Arabic Typeface

-

If you are looking for a font that can combine modernity and elegance, simplicity and sophistication, clarity and beauty, then you might want to check out Ahoura Bold Font. This font is a unique and innovative Arabic typeface that was designed by Naghi Naghashian, a renowned Iranian typographer and graphic designer. In this article, we will explore what makes Ahoura Bold Font so special, how you can benefit from using it, and how you can download and use it for free.

-

Ahoura Bold Font Free


Download »»» https://byltly.com/2uKz3F



-

The Design and Features of Ahoura Bold Font

-

The Inspiration and Innovation behind Ahoura Bold Font

-

Ahoura Bold Font is not just another Arabic font. It is the result of careful research and analysis of Arabic characters and their structure, as well as a contribution to the modernization of Arabic typography. According to the designer, Naghi Naghashian, Ahoura Bold Font was created with today's ever-changing technology in mind, without compromising the calligraphic tradition and cultural identity of Arabic script. He says:

-
"The Ahoura innovation is a contribution to modernisation of Arabic typography; gives the Arabic font letters real typographic arrangement and provides for more typographic flexibility. This step was necessary after more than two hundred years of relative stagnation in Arabic font design."
-

As such, Ahoura Bold Font is a low-contrast neo-geometric sans serif font that is defined by minimalism, geometry, and purity of form. It has a balanced width, generous x-height, and short ascenders and descenders, giving it a simple and clean look. It also uses the highest degree of geometric clarity along with the necessary amount of calligraphic references, creating a harmonious balance between contemporary aesthetics and traditional elegance.

-

The Styles and Weights of Ahoura Bold Font

-

Ahoura Bold Font is part of the Ahoura font family, which consists of six styles and three weights. The styles are normal and italic, while the weights are light, regular, and bold. Each style has its own character and mood, but they all share the same design principles and quality. Here are some examples of how each style looks like:

- - - - - - - - -
| Style | Example |
| --- | --- |
| Ahoura Light | Ahoura Light |
| Ahoura Light Italic | Ahoura Light Italic |
| Ahoura Regular | Ahoura Regular |
| Ahoura Italic | Ahoura Italic |
| Ahoura Bold | Ahoura Bold |
| Ahoura Bold Italic | Ahoura Bold Italic |
-

The OpenType Features and Language Support of Ahoura Bold Font

-

Ahoura Bold Font is not only beautiful but also functional. It comes with various OpenType features that enhance its typographic performance and flexibility. Some of these features are:

- -

In addition to these features, Ahoura Bold Font also supports multiple languages that use Arabic script, such as Arabic, Persian, Urdu, Kurdish, Pashto, Sindhi, Balochi, Uyghur, Kazakh, Kyrgyz, Tajik, Turkmen, Uzbek, etc.

-

The Benefits and Applications of Ahoura Bold Font

-

The Legibility and Versatility of Ahoura Bold Font

-

One of the main benefits of using Ahoura Bold Font is its legibility. This font is designed to be easily readable not only in large sizes but also in small sizes. It is also suitable for various applications such as print or digital media. Whether you want to use it for headlines or body text, logos or posters, websites or apps, books or magazines, Ahoura Bold Font can handle them all. Moreover, this font can be artificially obliqued or skewed with software tools such as InDesign or Illustrator without losing its quality or effect.

-

The Aesthetic and Cultural Appeal of Ahoura Bold Font

-

Another benefit of using Ahoura Bold Font is its aesthetic appeal. This font has a unique and distinctive character that can make your typography stand out from the crowd. It can also convey a sense of modernity and elegance that can match your design style or theme. Furthermore, this font has a cultural appeal that can reflect your identity or message. By using this font, you can show your respect for the Arabic script tradition while also embracing the contemporary trends in typography.

-

The Compatibility and Accessibility of Ahoura Bold Font

-

A final benefit of using Ahoura Bold Font is its compatibility and accessibility. This font is compatible with most software applications that support OpenType fonts, such as Microsoft Word, InDesign, and Illustrator. It is also easy to obtain and install, as the next sections explain.

How to Download and Use Ahoura Bold Font for Free

-

The Sources and Licenses of Ahoura Bold Font

-

If you are interested in downloading and using Ahoura Bold Font for free, you might be wondering where to find it and what are the terms and conditions of using it. Well, there are several sources where you can download Ahoura Bold Font for free, such as:

- -

However, before you download and use Ahoura Bold Font for free, you should be aware of the licenses and restrictions that apply to it. According to the designer, Naghi Naghashian, Ahoura Bold Font is free for personal use only. This means that you can use it for your own projects or hobbies, but not for any commercial or professional purposes. If you want to use Ahoura Bold Font for commercial or professional purposes, you need to purchase a license from the designer's website:

-

-

The Installation and Usage of Ahoura Bold Font

-

After you have downloaded Ahoura Bold Font for free, you need to install it on your computer so that you can use it with your software applications. The installation process may vary depending on your operating system, but here are some general steps that you can follow:

-
    -
  1. Extract the font files from the .zip folder that you have downloaded.
  2. -
  3. Right-click on the font files that you want to install and click Install.
  4. -
  5. If you are prompted to allow the program to make changes to your computer, click Yes.
  6. -
  7. Wait for the installation to complete.
  8. -
  9. Open your software application and look for Ahoura Bold Font in the font list.
  10. -
-

If you need more detailed instructions on how to install fonts on your computer, you can refer to this article:

-

The Tips and Tricks for Optimizing Ahoura Bold Font

-

Now that you have installed Ahoura Bold Font on your computer, you might want to know how to optimize it for your design projects. Here are some tips and tricks that you can use to make the most out of this font:

- -

Conclusion

-

Ahoura Bold Font is a modern and elegant Arabic typeface that can enhance your typography and design projects. It has a unique and innovative design that combines geometry and calligraphy, simplicity and sophistication, clarity and beauty. It also has various features and options that make it flexible and versatile. Moreover, it supports multiple languages that use Arabic script, making it suitable for different audiences and contexts. If you want to download and use Ahoura Bold Font for free, you can find it on several websites that offer free fonts for personal use. However, if you want to use it for commercial or professional purposes, you need to purchase a license from the designer's website. To install and use Ahoura Bold Font on your computer, you need to follow some simple steps that may vary depending on your operating system. To optimize Ahoura Bold Font for your design projects, you need to use its OpenType features, styles, weights, variable font option, and font pairing suggestions.

-

We hope that this article has helped you learn more about Ahoura Bold Font and how to download and use it for free. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading!

-

Frequently Asked Questions

-
    -
  1. What is Ahoura Bold Font?
    Ahoura Bold Font is a unique and innovative Arabic typeface that was designed by Naghi Naghashian, a renowned Iranian typographer and graphic designer.
  2. -
  3. Why should I use Ahoura Bold Font?
    You should use Ahoura Bold Font because it is a modern and elegant font that can combine geometry and calligraphy, simplicity and sophistication, clarity and beauty. It also has various features and options that make it flexible and versatile.
  4. -
  5. Where can I download Ahoura Bold Font for free?
    You can download Ahoura Bold Font for free from several websites that offer free fonts for personal use, such as Fonts.do, Befonts.com, or Fontspace.com.
  6. -
  7. How can I install Ahoura Bold Font on my computer?
    You can install Ahoura Bold Font on your computer by extracting the font files from the .zip folder that you have downloaded, right-clicking on the font files that you want to install and clicking Install, clicking Yes if prompted to allow changes to your computer, waiting for the installation to complete, and opening your software application and looking for Ahoura Bold Font in the font list.
  8. -
  9. How can I optimize Ahoura Bold Font for my design projects?
    You can optimize Ahoura Bold Font for your design projects by using its OpenType features, styles, weights, variable font option, and font pairing suggestions. For example, you can use the italic style for more dynamic and expressive typography, use the bold weight for strong and confident typography, use the variable font option to adjust the weight and width of the font to your preference, and pair Ahoura Bold Font with other fonts that complement its style and mood.
  10. -
-

0a6ba089eb
-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Adobe Premiere Elements 11 Crack Only.md b/spaces/1gistliPinn/ChatGPT4/Examples/Adobe Premiere Elements 11 Crack Only.md deleted file mode 100644 index 1c12bef3a18dbe525f838afa4e407df381135f03..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Adobe Premiere Elements 11 Crack Only.md +++ /dev/null @@ -1,6 +0,0 @@ -

adobe premiere elements 11 crack only


Download Filehttps://imgfil.com/2uxYlA



-
-Steinberg cubase 4 crack download free adobe premiere pro cs5 serial key dragon crack ... Adobe Photoshop Elements 2020 Crack is also a fantastic ... number for adobe photoshop elements 11. ... Windows [7/ 8/ 8.1]*/ 10 Only flavor of 64-bit ... 4d29de3e1b
-
-
-

diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Callofdutyblackops2setup1cbinindir.md b/spaces/1gistliPinn/ChatGPT4/Examples/Callofdutyblackops2setup1cbinindir.md deleted file mode 100644 index bdd1370d86dc66b175130426b42af23c64f02c8b..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Callofdutyblackops2setup1cbinindir.md +++ /dev/null @@ -1,6 +0,0 @@ -

Callofdutyblackops2setup1cbinindir


Download Zip - https://imgfil.com/2uxYIU



- -... Linux Serial Torrent x86 x64 Tags: activation for Ipi Mocap Studio 3. 50e0b7e615. Intro Video Maker Apk Mod Unlock All · Callofdutyblackops2setup1cbinindir 4d29de3e1b
-
-
-

diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Celemony Melodyne Studio 4.2.3.1 Key Torrent Download 2019 [VERIFIED].md b/spaces/1gistliPinn/ChatGPT4/Examples/Celemony Melodyne Studio 4.2.3.1 Key Torrent Download 2019 [VERIFIED].md deleted file mode 100644 index 98f1429ee0bb850451e4c6486688cff9ab95e48a..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Celemony Melodyne Studio 4.2.3.1 Key Torrent Download 2019 [VERIFIED].md +++ /dev/null @@ -1,6 +0,0 @@ -

Celemony Melodyne Studio 4.2.3.1 Key Torrent Download 2019


Download Zip --->>> https://imgfil.com/2uxXhV



- -Melodyne 3.2 Keygen free full Torrent download, Melodyne 3.2 Keygen ... are good at using technology Celemony Melodyne Studio 4.2.3.1 Key is a real joy and a ... on November 4, 2019 November 4, 2019 Author Cracked Key 0 Melodyne 4 ... 1fdad05405
-
-
-

diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/DataCash230Namo Webeditor 9 Crack 27.md b/spaces/1gistliPinn/ChatGPT4/Examples/DataCash230Namo Webeditor 9 Crack 27.md deleted file mode 100644 index 1d2e65660dd29d205247f219ce95f61408eb7c51..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/DataCash230Namo Webeditor 9 Crack 27.md +++ /dev/null @@ -1,107 +0,0 @@ - -

DataCash230Namo Webeditor 9 Crack 27: What You Need to Know

-

If you are looking for a powerful and easy-to-use visual HTML editor, you might have heard of DataCash230Namo Webeditor 9. This software allows you to create and edit web pages with drag-and-drop features, templates, widgets, and more. But what if you want to use it without paying for a license? That's where DataCash230Namo Webeditor 9 Crack 27 comes in.

-

DataCash230Namo Webeditor 9 Crack 27


Download Zip » https://imgfil.com/2uy1yO



-

What is DataCash230Namo Webeditor 9 Crack 27?

-

DataCash230Namo Webeditor 9 Crack 27 is a piece of software that bypasses the activation process of DataCash230Namo Webeditor 9 and lets you use it for free. It is also known as a keygen, patch, or serial number generator. By using DataCash230Namo Webeditor 9 Crack 27, you can access all the features and functions of DataCash230Namo Webeditor 9 without paying a dime.

-

How to Download and Install DataCash230Namo Webeditor 9 Crack 27?

-

There are many websites that claim to offer DataCash230Namo Webeditor 9 Crack 27 for download. However, you should be careful when downloading anything from the internet, as some files may contain viruses, malware, or spyware that can harm your computer or steal your personal information. Here are some steps to follow if you want to download and install DataCash230Namo Webeditor 9 Crack 27 safely:

- -

What are the Benefits and Risks of Using DataCash230Namo Webeditor 9 Crack 27?

-

Using DataCash230Namo Webeditor 9 Crack 27 has some benefits and risks that you should be aware of before deciding to use it. Here are some of them:

-

Benefits

- -

Risks

- -

Conclusion

-

DataCash230Namo Webeditor 9 Crack 27 is a software that allows you to use DataCash230Namo Webeditor 9 for free. It has some benefits and risks that you should weigh before using it. If you decide to use DataCash230Namo Webeditor 9 Crack 27, make sure to download it from a reputable source and scan it for viruses before installing it. Alternatively, you can buy a legitimate license of DataCash230Namo Webeditor 9 and enjoy its features without any worries.

-

What are the Features and Functions of DataCash230Namo Webeditor 9?

-

DataCash230Namo Webeditor 9 is a visual HTML editor that offers a variety of features and functions to help you create and edit web pages. Some of the features and functions of DataCash230Namo Webeditor 9 are:

-

- -

What are the Alternatives to DataCash230Namo Webeditor 9?

-

If you are not satisfied with DataCash230Namo Webeditor 9 or you want to try other options, there are some alternatives to DataCash230Namo Webeditor 9 that you can consider. Some of the alternatives to DataCash230Namo Webeditor 9 are:

- -

Conclusion

-

DataCash230Namo Webeditor 9 Crack 27 is a software that allows you to use DataCash230Namo Webeditor 9 for free. It has some benefits and risks that you should weigh before using it. If you decide to use DataCash230Namo Webeditor 9 Crack 27, make sure to download it from a reputable source and scan it for viruses before installing it. Alternatively, you can buy a legitimate license of DataCash230Namo Webeditor 9 and enjoy its features without any worries. You can also explore other alternatives to DataCash230Namo Webeditor 9 that may suit your needs better.



3cee63e6c2
-
-
\ No newline at end of file diff --git a/spaces/1line/AutoGPT/autogpt/cli.py b/spaces/1line/AutoGPT/autogpt/cli.py deleted file mode 100644 index a2e99cb421cad005528cb160e948ce59ccfcdb66..0000000000000000000000000000000000000000 --- a/spaces/1line/AutoGPT/autogpt/cli.py +++ /dev/null @@ -1,145 +0,0 @@ -"""Main script for the autogpt package.""" -import click - - -@click.group(invoke_without_command=True) -@click.option("-c", "--continuous", is_flag=True, help="Enable Continuous Mode") -@click.option( - "--skip-reprompt", - "-y", - is_flag=True, - help="Skips the re-prompting messages at the beginning of the script", -) -@click.option( - "--ai-settings", - "-C", - help="Specifies which ai_settings.yaml file to use, will also automatically skip the re-prompt.", -) -@click.option( - "-l", - "--continuous-limit", - type=int, - help="Defines the number of times to run in continuous mode", -) -@click.option("--speak", is_flag=True, help="Enable Speak Mode") -@click.option("--debug", is_flag=True, help="Enable Debug Mode") -@click.option("--gpt3only", is_flag=True, help="Enable GPT3.5 Only Mode") -@click.option("--gpt4only", is_flag=True, help="Enable GPT4 Only Mode") -@click.option( - "--use-memory", - "-m", - "memory_type", - type=str, - help="Defines which Memory backend to use", -) -@click.option( - "-b", - "--browser-name", - help="Specifies which web-browser to use when using selenium to scrape the web.", -) -@click.option( - "--allow-downloads", - is_flag=True, - help="Dangerous: Allows Auto-GPT to download files natively.", -) -@click.option( - "--skip-news", - is_flag=True, - help="Specifies whether to suppress the output of latest news on startup.", -) -@click.pass_context -def main( - ctx: click.Context, - continuous: bool, - continuous_limit: int, - ai_settings: str, - skip_reprompt: bool, - speak: bool, - debug: bool, - gpt3only: bool, - gpt4only: bool, - memory_type: str, - browser_name: str, - allow_downloads: bool, - skip_news: bool, -) -> None: - """ - Welcome to AutoGPT an experimental open-source application showcasing the capabilities of the GPT-4 pushing the boundaries of AI. - - Start an Auto-GPT assistant. 
- """ - # Put imports inside function to avoid importing everything when starting the CLI - import logging - - from colorama import Fore - - from autogpt.agent.agent import Agent - from autogpt.config import Config, check_openai_api_key - from autogpt.configurator import create_config - from autogpt.logs import logger - from autogpt.memory import get_memory - from autogpt.prompt import construct_prompt - from autogpt.utils import get_current_git_branch, get_latest_bulletin - - if ctx.invoked_subcommand is None: - cfg = Config() - # TODO: fill in llm values here - check_openai_api_key() - create_config( - continuous, - continuous_limit, - ai_settings, - skip_reprompt, - speak, - debug, - gpt3only, - gpt4only, - memory_type, - browser_name, - allow_downloads, - skip_news, - ) - logger.set_level(logging.DEBUG if cfg.debug_mode else logging.INFO) - ai_name = "" - if not cfg.skip_news: - motd = get_latest_bulletin() - if motd: - logger.typewriter_log("NEWS: ", Fore.GREEN, motd) - git_branch = get_current_git_branch() - if git_branch and git_branch != "stable": - logger.typewriter_log( - "WARNING: ", - Fore.RED, - f"You are running on `{git_branch}` branch " - "- this is not a supported branch.", - ) - system_prompt = construct_prompt() - # print(prompt) - # Initialize variables - full_message_history = [] - next_action_count = 0 - # Make a constant: - triggering_prompt = ( - "Determine which next command to use, and respond using the" - " format specified above:" - ) - # Initialize memory and make sure it is empty. - # this is particularly important for indexing and referencing pinecone memory - memory = get_memory(cfg, init=True) - logger.typewriter_log( - "Using memory of type:", Fore.GREEN, f"{memory.__class__.__name__}" - ) - logger.typewriter_log("Using Browser:", Fore.GREEN, cfg.selenium_web_browser) - agent = Agent( - ai_name=ai_name, - memory=memory, - full_message_history=full_message_history, - next_action_count=next_action_count, - system_prompt=system_prompt, - triggering_prompt=triggering_prompt, - ) - agent.start_interaction_loop() - - -if __name__ == "__main__": - main() diff --git a/spaces/1phancelerku/anime-remove-background/Download Cars The Ultimate Guide for Car Enthusiasts.md b/spaces/1phancelerku/anime-remove-background/Download Cars The Ultimate Guide for Car Enthusiasts.md deleted file mode 100644 index fec257632996c6b803d087c29d5ceeed660fbf13..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download Cars The Ultimate Guide for Car Enthusiasts.md +++ /dev/null @@ -1,107 +0,0 @@ - -

How to Download Cars: A Guide for Car Enthusiasts

-

Have you ever dreamed of driving a Ferrari, a Lamborghini, or a Bugatti? Have you ever wondered what it would be like to race on the streets, the tracks, or the off-road terrains? If you are a car enthusiast, you might have a passion for exploring different types of cars and experiencing their performance and features. But buying or renting a car can be expensive and impractical. That's why some people choose to download cars instead.

-

i want to download cars


DOWNLOAD »»» https://jinyurl.com/2uNT4W



-

What does it mean to download cars?

-

Downloading cars is a way of accessing digital versions of real or fictional cars on your computer or mobile device. You can download cars as files, such as images, videos, or games, that you can view, play, or edit on your device. You can also download cars as software, such as simulators, that you can run on your device and interact with in a realistic or immersive way.

-

The difference between downloading and streaming cars

-

Downloading cars means that you save the car files or software on your device's storage, such as your hard drive or memory card. This allows you to access the car anytime, even when you are offline or have no internet connection. However, downloading cars also takes up space on your device and may require more time and bandwidth to complete.

-

Streaming cars means that you access the car files or software online, such as on a website or an app. This allows you to access the car instantly, without waiting for the download to finish or using up your device's storage. However, streaming cars also requires a stable and fast internet connection and may consume more data or battery power.

-

The benefits of downloading cars

-

Downloading cars has many benefits for car enthusiasts, such as:

-

- -

The challenges of downloading cars

-

Downloading cars also has some challenges that you need to be aware of, such as:

- -

Where can you download cars?

-

The best websites for downloading cars

-

If you want to download car files, such as images, videos, or games, you can visit some of the best websites for downloading cars. Here are some examples:

-

Internet Archive

-

The Internet Archive is a digital library that offers free access to millions of car images and videos that you can download and use for personal or non-commercial purposes. You can also find thousands of car games that you can download and play on your device. Some of the car games available on the Internet Archive are Need for Speed, Grand Theft Auto, and Carmageddon.

-

Epic Games Store

-

The Epic Games Store is a digital distribution platform that offers free and paid car games that you can download and play on your PC. You can also find exclusive deals and discounts on some of the car games. Some of the car games available on the Epic Games Store are Forza Horizon 4, Rocket League, and Wreckfest.

-

GameTop

-

GameTop is a website that offers free and legal car games that you can download and play on your PC. You can also find no ads, no in-game purchases, and no malware on the car games. Some of the car games available on GameTop are City Racing, Off-Road Super Racing, and Fire and Forget.

-

The best apps for downloading cars

-

If you want to download car software, such as simulators, you can visit some of the best apps for downloading cars. Here are some examples:

-

Car Simulator 2

-

Car Simulator 2 is a free app that lets you download and drive more than 80 cars in an open world. You can also customize, upgrade, and repair your cars. You can also play online with other players or offline with bots. Car Simulator 2 is available for Android and iOS devices.

-

Real Racing 3

-

Real Racing 3 is a free app that lets you download and race more than 250 cars from real manufacturers. You can also compete in more than 40 tracks from real locations. You can also join online events and challenges with other players or offline modes with AI. Real Racing 3 is available for Android and iOS devices.

-

Asphalt 9: Legends

-

Asphalt 9: Legends is a free app that lets you download and drive more than 60 cars from top brands. You can also customize, upgrade, and nitro-boost your cars. You can also join online clubs and seasons with other players or offline career mode with storylines. Asphalt 9: Legends is available for Android, iOS, and Windows devices.

-

How to download cars safely and legally?

-

The risks of downloading cars from untrusted sources

-

Downloading cars from untrusted sources can expose you to various risks, such as:

- -

The tips for avoiding malware and viruses

-

To avoid malware and viruses when downloading cars, you should follow these tips:

- -

The laws and regulations for downloading cars

-

To avoid legal or ethical issues when downloading cars, you should follow these laws and regulations:

- -

Conclusion

-

Downloading cars is a fun and exciting way to enjoy different types of cars on your device. You can download cars as files or software from various websites or apps. However, you should also be careful about the risks of downloading cars from untrusted sources and the laws and regulations for downloading cars. By following these tips, you can download cars safely and legally.

- FAQs Q: How much space does downloading cars take on my device? A: The space required for downloading cars depends on the size and quality of the car files or software. Generally, the higher the resolution, the sound, or the graphics of the car, the more space it will take. You can check the file size or the system requirements of the car before downloading it to make sure you have enough space on your device. Q: How long does downloading cars take on my device? A: The time required for downloading cars depends on the speed and stability of your internet connection and the server of the source. Generally, the faster your internet connection and the server, the less time it will take. You can also pause or resume the download if you encounter any interruptions or errors. Q: Can I download cars for free or do I have to pay for them? A: The cost of downloading cars depends on the source and the type of the car. Some sources offer free car files or software that you can download and use without paying anything. However, some sources may charge a fee or require a subscription for downloading or accessing certain car files or software. You should check the price or the terms and conditions of the source before downloading any car. Q: Can I download cars on any device or do I need a specific device? A: The compatibility of downloading cars depends on the format and the platform of the car files or software. Some car files or software are compatible with multiple devices, such as PCs, laptops, tablets, or smartphones. However, some car files or software may only work on specific devices, such as Windows, Mac, Android, or iOS. You should check the file format or the system requirements of the car before downloading it to make sure it works on your device. Q: Can I share or transfer the car files or software that I downloaded to other devices or people? A: The sharing or transferring of car files or software that you downloaded depends on the license and the permission of the source and the owner. Some car files or software are free and open-source, which means you can share or transfer them to other devices or people without any restrictions. However, some car files or software are proprietary and protected, which means you cannot share or transfer them to other devices or people without violating their rights. You should check the license or the permission of the source and the owner before sharing or transferring any car.

401be4b1e0
-
-
\ No newline at end of file diff --git a/spaces/4Taps/SadTalker/src/face3d/models/arcface_torch/run.sh b/spaces/4Taps/SadTalker/src/face3d/models/arcface_torch/run.sh deleted file mode 100644 index 61af4b4950eb11334e55362e3e3c5e2796979a01..0000000000000000000000000000000000000000 --- a/spaces/4Taps/SadTalker/src/face3d/models/arcface_torch/run.sh +++ /dev/null @@ -1,2 +0,0 @@ -CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python -m torch.distributed.launch --nproc_per_node=8 --nnodes=1 --node_rank=0 --master_addr="127.0.0.1" --master_port=1234 train.py configs/ms1mv3_r50 -ps -ef | grep "train" | grep -v grep | awk '{print "kill -9 "$2}' | sh diff --git a/spaces/A00001/bingothoo/next.config.js b/spaces/A00001/bingothoo/next.config.js deleted file mode 100644 index 0e6ccd7fbc91d0459eaaff3e968ce0556789c605..0000000000000000000000000000000000000000 --- a/spaces/A00001/bingothoo/next.config.js +++ /dev/null @@ -1,38 +0,0 @@ -/** @type {import('next').NextConfig} */ -const nextConfig = { - // output: 'export', - // assetPrefix: '.', - webpack: (config, { isServer }) => { - if (!isServer) { - config.resolve = { - ...config.resolve, - fallback: { - 'bufferutil': false, - 'utf-8-validate': false, - http: false, - https: false, - stream: false, - // fixes proxy-agent dependencies - net: false, - dns: false, - tls: false, - assert: false, - // fixes next-i18next dependencies - path: false, - fs: false, - // fixes mapbox dependencies - events: false, - // fixes sentry dependencies - process: false - } - }; - } - config.module.exprContextCritical = false; - - return config; - }, -} - -module.exports = (...args) => { - return nextConfig -} diff --git a/spaces/AIGC-Audio/AudioGPT/audio_to_text/captioning/models/transformer_model.py b/spaces/AIGC-Audio/AudioGPT/audio_to_text/captioning/models/transformer_model.py deleted file mode 100644 index 76c97f171955f04b10c16fd1f1a205ce7343a0ac..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/audio_to_text/captioning/models/transformer_model.py +++ /dev/null @@ -1,265 +0,0 @@ -# -*- coding: utf-8 -*- -import random -import torch -import torch.nn as nn - -from .base_model import CaptionModel -from .utils import repeat_tensor -import audio_to_text.captioning.models.decoder - - -class TransformerModel(CaptionModel): - - def __init__(self, encoder: nn.Module, decoder: nn.Module, **kwargs): - if not hasattr(self, "compatible_decoders"): - self.compatible_decoders = ( - audio_to_text.captioning.models.decoder.TransformerDecoder, - ) - super().__init__(encoder, decoder, **kwargs) - - def seq_forward(self, input_dict): - cap = input_dict["cap"] - cap_padding_mask = (cap == self.pad_idx).to(cap.device) - cap_padding_mask = cap_padding_mask[:, :-1] - output = self.decoder( - { - "word": cap[:, :-1], - "attn_emb": input_dict["attn_emb"], - "attn_emb_len": input_dict["attn_emb_len"], - "cap_padding_mask": cap_padding_mask - } - ) - return output - - def prepare_decoder_input(self, input_dict, output): - decoder_input = { - "attn_emb": input_dict["attn_emb"], - "attn_emb_len": input_dict["attn_emb_len"] - } - t = input_dict["t"] - - ############### - # determine input word - ################ - if input_dict["mode"] == "train" and random.random() < input_dict["ss_ratio"]: # training, scheduled sampling - word = input_dict["cap"][:, :t+1] - else: - start_word = torch.tensor([self.start_idx,] * input_dict["attn_emb"].size(0)).unsqueeze(1).long() - if t == 0: - word = start_word - else: - word = torch.cat((start_word, output["seq"][:, :t]), dim=-1) - # word: [N, T] 
- decoder_input["word"] = word - - cap_padding_mask = (word == self.pad_idx).to(input_dict["attn_emb"].device) - decoder_input["cap_padding_mask"] = cap_padding_mask - return decoder_input - - def prepare_beamsearch_decoder_input(self, input_dict, output_i): - decoder_input = {} - t = input_dict["t"] - i = input_dict["sample_idx"] - beam_size = input_dict["beam_size"] - ############### - # prepare attn embeds - ################ - if t == 0: - attn_emb = repeat_tensor(input_dict["attn_emb"][i], beam_size) - attn_emb_len = repeat_tensor(input_dict["attn_emb_len"][i], beam_size) - output_i["attn_emb"] = attn_emb - output_i["attn_emb_len"] = attn_emb_len - decoder_input["attn_emb"] = output_i["attn_emb"] - decoder_input["attn_emb_len"] = output_i["attn_emb_len"] - ############### - # determine input word - ################ - start_word = torch.tensor([self.start_idx,] * beam_size).unsqueeze(1).long() - if t == 0: - word = start_word - else: - word = torch.cat((start_word, output_i["seq"]), dim=-1) - decoder_input["word"] = word - cap_padding_mask = (word == self.pad_idx).to(input_dict["attn_emb"].device) - decoder_input["cap_padding_mask"] = cap_padding_mask - - return decoder_input - - -class M2TransformerModel(CaptionModel): - - def __init__(self, encoder: nn.Module, decoder: nn.Module, **kwargs): - if not hasattr(self, "compatible_decoders"): - self.compatible_decoders = ( - captioning.models.decoder.M2TransformerDecoder, - ) - super().__init__(encoder, decoder, **kwargs) - self.check_encoder_compatibility() - - def check_encoder_compatibility(self): - assert isinstance(self.encoder, captioning.models.encoder.M2TransformerEncoder), \ - f"only M2TransformerModel is compatible with {self.__class__.__name__}" - - - def seq_forward(self, input_dict): - cap = input_dict["cap"] - output = self.decoder( - { - "word": cap[:, :-1], - "attn_emb": input_dict["attn_emb"], - "attn_emb_mask": input_dict["attn_emb_mask"], - } - ) - return output - - def prepare_decoder_input(self, input_dict, output): - decoder_input = { - "attn_emb": input_dict["attn_emb"], - "attn_emb_mask": input_dict["attn_emb_mask"] - } - t = input_dict["t"] - - ############### - # determine input word - ################ - if input_dict["mode"] == "train" and random.random() < input_dict["ss_ratio"]: # training, scheduled sampling - word = input_dict["cap"][:, :t+1] - else: - start_word = torch.tensor([self.start_idx,] * input_dict["attn_emb"].size(0)).unsqueeze(1).long() - if t == 0: - word = start_word - else: - word = torch.cat((start_word, output["seq"][:, :t]), dim=-1) - # word: [N, T] - decoder_input["word"] = word - - return decoder_input - - def prepare_beamsearch_decoder_input(self, input_dict, output_i): - decoder_input = {} - t = input_dict["t"] - i = input_dict["sample_idx"] - beam_size = input_dict["beam_size"] - ############### - # prepare attn embeds - ################ - if t == 0: - attn_emb = repeat_tensor(input_dict["attn_emb"][i], beam_size) - attn_emb_mask = repeat_tensor(input_dict["attn_emb_mask"][i], beam_size) - output_i["attn_emb"] = attn_emb - output_i["attn_emb_mask"] = attn_emb_mask - decoder_input["attn_emb"] = output_i["attn_emb"] - decoder_input["attn_emb_mask"] = output_i["attn_emb_mask"] - ############### - # determine input word - ################ - start_word = torch.tensor([self.start_idx,] * beam_size).unsqueeze(1).long() - if t == 0: - word = start_word - else: - word = torch.cat((start_word, output_i["seq"]), dim=-1) - decoder_input["word"] = word - - return decoder_input - - -class 
EventEncoder(nn.Module): - """ - Encode the Label information in AudioCaps and AudioSet - """ - def __init__(self, emb_dim, vocab_size=527): - super(EventEncoder, self).__init__() - self.label_embedding = nn.Parameter( - torch.randn((vocab_size, emb_dim)), requires_grad=True) - - def forward(self, word_idxs): - indices = word_idxs / word_idxs.sum(dim=1, keepdim=True) - embeddings = indices @ self.label_embedding - return embeddings - - -class EventCondTransformerModel(TransformerModel): - - def __init__(self, encoder: nn.Module, decoder: nn.Module, **kwargs): - if not hasattr(self, "compatible_decoders"): - self.compatible_decoders = ( - captioning.models.decoder.EventTransformerDecoder, - ) - super().__init__(encoder, decoder, **kwargs) - self.label_encoder = EventEncoder(decoder.emb_dim, 527) - self.train_forward_keys += ["events"] - self.inference_forward_keys += ["events"] - - # def seq_forward(self, input_dict): - # cap = input_dict["cap"] - # cap_padding_mask = (cap == self.pad_idx).to(cap.device) - # cap_padding_mask = cap_padding_mask[:, :-1] - # output = self.decoder( - # { - # "word": cap[:, :-1], - # "attn_emb": input_dict["attn_emb"], - # "attn_emb_len": input_dict["attn_emb_len"], - # "cap_padding_mask": cap_padding_mask - # } - # ) - # return output - - def prepare_decoder_input(self, input_dict, output): - decoder_input = super().prepare_decoder_input(input_dict, output) - decoder_input["events"] = self.label_encoder(input_dict["events"]) - return decoder_input - - def prepare_beamsearch_decoder_input(self, input_dict, output_i): - decoder_input = super().prepare_beamsearch_decoder_input(input_dict, output_i) - t = input_dict["t"] - i = input_dict["sample_idx"] - beam_size = input_dict["beam_size"] - if t == 0: - output_i["events"] = repeat_tensor(self.label_encoder(input_dict["events"])[i], beam_size) - decoder_input["events"] = output_i["events"] - return decoder_input - - -class KeywordCondTransformerModel(TransformerModel): - - def __init__(self, encoder: nn.Module, decoder: nn.Module, **kwargs): - if not hasattr(self, "compatible_decoders"): - self.compatible_decoders = ( - captioning.models.decoder.KeywordProbTransformerDecoder, - ) - super().__init__(encoder, decoder, **kwargs) - self.train_forward_keys += ["keyword"] - self.inference_forward_keys += ["keyword"] - - def seq_forward(self, input_dict): - cap = input_dict["cap"] - cap_padding_mask = (cap == self.pad_idx).to(cap.device) - cap_padding_mask = cap_padding_mask[:, :-1] - keyword = input_dict["keyword"] - output = self.decoder( - { - "word": cap[:, :-1], - "attn_emb": input_dict["attn_emb"], - "attn_emb_len": input_dict["attn_emb_len"], - "keyword": keyword, - "cap_padding_mask": cap_padding_mask - } - ) - return output - - def prepare_decoder_input(self, input_dict, output): - decoder_input = super().prepare_decoder_input(input_dict, output) - decoder_input["keyword"] = input_dict["keyword"] - return decoder_input - - def prepare_beamsearch_decoder_input(self, input_dict, output_i): - decoder_input = super().prepare_beamsearch_decoder_input(input_dict, output_i) - t = input_dict["t"] - i = input_dict["sample_idx"] - beam_size = input_dict["beam_size"] - if t == 0: - output_i["keyword"] = repeat_tensor(input_dict["keyword"][i], - beam_size) - decoder_input["keyword"] = output_i["keyword"] - return decoder_input - diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/encoders/open_clap/tokenizer.py 
b/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/encoders/open_clap/tokenizer.py deleted file mode 100644 index 5b4a238b987ce66f2932b11451d916e40816b8a3..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/encoders/open_clap/tokenizer.py +++ /dev/null @@ -1,180 +0,0 @@ -""" CLIP tokenizer - -Copied from https://github.com/openai/CLIP. Originally MIT License, Copyright (c) 2021 OpenAI. -""" -import gzip -import html -import os -from functools import lru_cache -from typing import Union, List - -import ftfy -import regex as re -import torch - - -@lru_cache() -def default_bpe(): - return os.path.join(os.path.dirname(os.path.abspath(__file__)), "bpe_simple_vocab_16e6.txt.gz") - - -@lru_cache() -def bytes_to_unicode(): - """ - Returns list of utf-8 byte and a corresponding list of unicode strings. - The reversible bpe codes work on unicode strings. - This means you need a large # of unicode characters in your vocab if you want to avoid UNKs. - When you're at something like a 10B token dataset you end up needing around 5K for decent coverage. - This is a signficant percentage of your normal, say, 32K bpe vocab. - To avoid that, we want lookup tables between utf-8 bytes and unicode strings. - And avoids mapping to whitespace/control characters the bpe code barfs on. - """ - bs = list(range(ord("!"), ord("~")+1))+list(range(ord("¡"), ord("¬")+1))+list(range(ord("®"), ord("ÿ")+1)) - cs = bs[:] - n = 0 - for b in range(2**8): - if b not in bs: - bs.append(b) - cs.append(2**8+n) - n += 1 - cs = [chr(n) for n in cs] - return dict(zip(bs, cs)) - - -def get_pairs(word): - """Return set of symbol pairs in a word. - Word is represented as tuple of symbols (symbols being variable-length strings). 
- """ - pairs = set() - prev_char = word[0] - for char in word[1:]: - pairs.add((prev_char, char)) - prev_char = char - return pairs - - -def basic_clean(text): - text = ftfy.fix_text(text) - text = html.unescape(html.unescape(text)) - return text.strip() - - -def whitespace_clean(text): - text = re.sub(r'\s+', ' ', text) - text = text.strip() - return text - - -class SimpleTokenizer(object): - def __init__(self, bpe_path: str = default_bpe(), special_tokens=None): - self.byte_encoder = bytes_to_unicode() - self.byte_decoder = {v: k for k, v in self.byte_encoder.items()} - merges = gzip.open(bpe_path).read().decode("utf-8").split('\n') - merges = merges[1:49152-256-2+1] - merges = [tuple(merge.split()) for merge in merges] - vocab = list(bytes_to_unicode().values()) - vocab = vocab + [v+'' for v in vocab] - for merge in merges: - vocab.append(''.join(merge)) - if not special_tokens: - special_tokens = ['', ''] - else: - special_tokens = ['', ''] + special_tokens - vocab.extend(special_tokens) - self.encoder = dict(zip(vocab, range(len(vocab)))) - self.decoder = {v: k for k, v in self.encoder.items()} - self.bpe_ranks = dict(zip(merges, range(len(merges)))) - self.cache = {t:t for t in special_tokens} - special = "|".join(special_tokens) - self.pat = re.compile(special + r"""|'s|'t|'re|'ve|'m|'ll|'d|[\p{L}]+|[\p{N}]|[^\s\p{L}\p{N}]+""", re.IGNORECASE) - - self.vocab_size = len(self.encoder) - self.all_special_ids = [self.encoder[t] for t in special_tokens] - - def bpe(self, token): - if token in self.cache: - return self.cache[token] - word = tuple(token[:-1]) + ( token[-1] + '',) - pairs = get_pairs(word) - - if not pairs: - return token+'' - - while True: - bigram = min(pairs, key = lambda pair: self.bpe_ranks.get(pair, float('inf'))) - if bigram not in self.bpe_ranks: - break - first, second = bigram - new_word = [] - i = 0 - while i < len(word): - try: - j = word.index(first, i) - new_word.extend(word[i:j]) - i = j - except: - new_word.extend(word[i:]) - break - - if word[i] == first and i < len(word)-1 and word[i+1] == second: - new_word.append(first+second) - i += 2 - else: - new_word.append(word[i]) - i += 1 - new_word = tuple(new_word) - word = new_word - if len(word) == 1: - break - else: - pairs = get_pairs(word) - word = ' '.join(word) - self.cache[token] = word - return word - - def encode(self, text): - bpe_tokens = [] - text = whitespace_clean(basic_clean(text)).lower() - for token in re.findall(self.pat, text): - token = ''.join(self.byte_encoder[b] for b in token.encode('utf-8')) - bpe_tokens.extend(self.encoder[bpe_token] for bpe_token in self.bpe(token).split(' ')) - return bpe_tokens - - def decode(self, tokens): - text = ''.join([self.decoder[token] for token in tokens]) - text = bytearray([self.byte_decoder[c] for c in text]).decode('utf-8', errors="replace").replace('', ' ') - return text - - -_tokenizer = SimpleTokenizer() - - -def tokenize(texts: Union[str, List[str]], context_length: int = 77) -> torch.LongTensor: - """ - Returns the tokenized representation of given input string(s) - - Parameters - ---------- - texts : Union[str, List[str]] - An input string or a list of input strings to tokenize - context_length : int - The context length to use; all CLIP models use 77 as the context length - - Returns - ------- - A two-dimensional tensor containing the resulting tokens, shape = [number of input strings, context_length] - """ - if isinstance(texts, str): - texts = [texts] - - sot_token = _tokenizer.encoder[""] - eot_token = _tokenizer.encoder[""] - all_tokens = 
[[sot_token] + _tokenizer.encode(text) + [eot_token] for text in texts] - result = torch.zeros(len(all_tokens), context_length, dtype=torch.long) - - for i, tokens in enumerate(all_tokens): - if len(tokens) > context_length: - tokens = tokens[:context_length] # Truncate - result[i, :len(tokens)] = torch.tensor(tokens) - - return result diff --git a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_1_ClothesKeyPoint/mmpose_1_x/configs/fashion_2d_keypoint/topdown_heatmap/deepfashion2/td_hm_res50_4xb64-150e_deepfashion2_vest_dress_256x192.py b/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_1_ClothesKeyPoint/mmpose_1_x/configs/fashion_2d_keypoint/topdown_heatmap/deepfashion2/td_hm_res50_4xb64-150e_deepfashion2_vest_dress_256x192.py deleted file mode 100644 index 9aedf0b61fb8072149be212d9b98a904fc821e85..0000000000000000000000000000000000000000 --- a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_1_ClothesKeyPoint/mmpose_1_x/configs/fashion_2d_keypoint/topdown_heatmap/deepfashion2/td_hm_res50_4xb64-150e_deepfashion2_vest_dress_256x192.py +++ /dev/null @@ -1,172 +0,0 @@ -_base_ = [ - '../../../_base_/default_runtime.py', - '../../../_base_/datasets/deepfashion2.py' -] - -default_hooks = dict(checkpoint=dict(save_best='PCK', rule='greater')) - -resume = False # 断点恢复 -load_from = None # 模型权重加载 -train_cfg = dict(by_epoch=True, max_epochs=150, val_interval=10) # 训练轮数,测试间隔 -param_scheduler = [ - dict( # warmup策略 - type='LinearLR', - begin=0, - end=500, - start_factor=0.001, - by_epoch=False), - dict( # scheduler - type='MultiStepLR', - begin=0, - end=150, - milestones=[100, 130], - gamma=0.1, - by_epoch=True) -] -optim_wrapper = dict(optimizer=dict(type='Adam', lr=0.0005)) # 优化器和学习率 -auto_scale_lr = dict(base_batch_size=512) # 根据batch_size自动缩放学习率 - -backend_args = dict(backend='local') # 数据加载后端设置,默认从本地硬盘加载 -dataset_type = 'DeepFashion2Dataset' # 数据集类名 DeepFashionDataset -data_mode = 'topdown' # 算法结构类型,用于指定标注信息加载策略 -data_root = 'data/deepfashion2/' # 数据存放路径 -# 定义数据编解码器,用于生成target和对pred进行解码,同时包含了输入图片和输出heatmap尺寸等信息 -codec = dict( - type='MSRAHeatmap', input_size=(192, 256), heatmap_size=(48, 64), sigma=2) - -train_pipeline = [ - dict(type='LoadImage'), - dict(type='GetBBoxCenterScale'), - dict(type='RandomFlip', direction='horizontal'), - dict( - type='RandomBBoxTransform', - shift_prob=0, - rotate_factor=60, - scale_factor=(0.75, 1.25)), - dict(type='TopdownAffine', input_size=codec['input_size']), - dict(type='GenerateTarget', encoder=codec), - dict(type='PackPoseInputs') -] -val_pipeline = [ # 测试时数据增强 - dict(type='LoadImage', backend_args=backend_args), # 加载图片 - dict(type='GetBBoxCenterScale'), # 根据bbox获取center和scale - dict(type='TopdownAffine', input_size=codec['input_size']), # 根据变换矩阵更新目标数据 - dict(type='PackPoseInputs') # 对target进行打包用于训练 -] -train_dataloader = dict( # 训练数据加载 - batch_size=64, # 批次大小 - num_workers=6, # 数据加载进程数 - persistent_workers=True, # 在不活跃时维持进程不终止,避免反复启动进程的开销 - sampler=dict(type='DefaultSampler', shuffle=True), # 采样策略,打乱数据 - dataset=dict( - type=dataset_type, # 数据集类名 - data_root=data_root, # 数据集路径 - data_mode=data_mode, # 算法类型 - ann_file='train/deepfashion2_vest_dress.json', # 标注文件路径 - data_prefix=dict(img='train/image/'), # 图像路径 - pipeline=train_pipeline # 数据流水线 - )) -val_dataloader = dict( - batch_size=32, - num_workers=6, - persistent_workers=True, # 在不活跃时维持进程不终止,避免反复启动进程的开销 - drop_last=False, - sampler=dict(type='DefaultSampler', shuffle=False), # 采样策略,不进行打乱 - dataset=dict( - type=dataset_type, # 数据集类名 - data_root=data_root, # 数据集路径 - data_mode=data_mode, # 算法类型 
- ann_file='validation/deepfashion2_vest_dress.json', # 标注文件路径 - data_prefix=dict(img='validation/image/'), # 图像路径 - test_mode=True, # 测试模式开关 - pipeline=val_pipeline # 数据流水线 - )) -test_dataloader = val_dataloader # 默认情况下不区分验证集和测试集,用户根据需要来自行定义 - -channel_cfg = dict( - num_output_channels=294, - dataset_joints=294, - dataset_channel=[ - [ - 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, - 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, - 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, - 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, - 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, - 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, - 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, - 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, - 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, - 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, - 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, - 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, - 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, - 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, - 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, - 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, - 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, - 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, - 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, - 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, - 285, 286, 287, 288, 289, 290, 291, 292, 293 - ], - ], - inference_channel=[ - 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, - 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, - 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, - 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, - 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, - 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, - 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, - 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, - 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, - 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, - 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, - 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, - 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, - 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, - 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, - 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, - 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, - 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, - 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, - 290, 291, 292, 293 - ]) - -model = dict( - type='TopdownPoseEstimator', # 模型结构决定了算法流程 - data_preprocessor=dict( # 数据归一化和通道顺序调整,作为模型的一部分 - type='PoseDataPreprocessor', - mean=[123.675, 116.28, 103.53], - std=[58.395, 57.12, 57.375], - bgr_to_rgb=True), - backbone=dict( - type='ResNet', - depth=50, - init_cfg=dict( - type='Pretrained', # 预训练参数,只加载backbone权重用于迁移学习 - 
checkpoint='torchvision://resnet50')), - head=dict( # 模型头部 - type='HeatmapHead', - in_channels=2048, - out_channels=channel_cfg['num_output_channels'], - # deconv_out_channels=None, - loss=dict(type='KeypointMSELoss', use_target_weight=True), # 损失函数 - decoder=codec), # 解码器,将heatmap解码成坐标值 - test_cfg=dict( - flip_test=True, # 开启测试时水平翻转集成 - flip_mode='heatmap', # 对heatmap进行翻转 - shift_heatmap=True, # 对翻转后的结果进行平移提高精度 - )) - -val_evaluator = [ - dict(type='PCKAccuracy', thr=0.2), - dict(type='AUC'), - dict(type='EPE'), -] -test_evaluator = val_evaluator # 默认情况下不区分验证集和测试集,用户根据需要来自行定义 - -visualizer = dict( - vis_backends=[dict(type='LocalVisBackend'), - dict(type='WandbVisBackend')]) diff --git a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/_base_/models/resnet101.py b/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/_base_/models/resnet101.py deleted file mode 100644 index 1147cd4be9aff00ad6ce66c31e2839c1a94f9ca3..0000000000000000000000000000000000000000 --- a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/_base_/models/resnet101.py +++ /dev/null @@ -1,17 +0,0 @@ -# model settings -model = dict( - type='ImageClassifier', - backbone=dict( - type='ResNet', - depth=101, - num_stages=4, - out_indices=(3, ), - style='pytorch'), - neck=dict(type='GlobalAveragePooling'), - head=dict( - type='LinearClsHead', - num_classes=1000, - in_channels=2048, - loss=dict(type='CrossEntropyLoss', loss_weight=1.0), - topk=(1, 5), - )) diff --git a/spaces/Abhay834/my_genai_chatbot/README.md b/spaces/Abhay834/my_genai_chatbot/README.md deleted file mode 100644 index 1deecc4f97a04828a1c76e8dd8d8c849211549dd..0000000000000000000000000000000000000000 --- a/spaces/Abhay834/my_genai_chatbot/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: My Genai Chatbot -emoji: 🐨 -colorFrom: gray -colorTo: blue -sdk: gradio -sdk_version: 3.39.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/AgentVerse/agentVerse/agentverse/environments/tasksolving_env/rules/decision_maker/horizontal.py b/spaces/AgentVerse/agentVerse/agentverse/environments/tasksolving_env/rules/decision_maker/horizontal.py deleted file mode 100644 index b2a8c57037513bb3d80c03a9b58661f7299ffd26..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/agentverse/environments/tasksolving_env/rules/decision_maker/horizontal.py +++ /dev/null @@ -1,57 +0,0 @@ -from __future__ import annotations -import asyncio -from colorama import Fore - -from typing import TYPE_CHECKING, List - -from . import decision_maker_registry -from .base import BaseDecisionMaker -from agentverse.logging import logger - -from agentverse.message import Message - -if TYPE_CHECKING: - from agentverse.agents.base import BaseAgent - from agentverse.message import CriticMessage - - -@decision_maker_registry.register("horizontal") -class HorizontalDecisionMaker(BaseDecisionMaker): - """ - Discuss in a horizontal manner. 
- """ - - name: str = "horizontal" - - # def step( - async def astep( - self, - agents: List[BaseAgent], - task_description: str, - previous_plan: str = "No solution yet.", - advice: str = "No advice yet.", - **kwargs, - ) -> List[str]: - if advice != "No advice yet.": - self.broadcast_messages( - agents, [Message(content=advice, sender="Evaluator")] - ) - for agent in agents[1:]: - review: CriticMessage = await agent.astep( - previous_plan, advice, task_description - ) - if review.content != "": - self.broadcast_messages(agents, [review]) - - logger.info("", "Reviews:", Fore.YELLOW) - logger.info( - "", - f"[{review.sender}]: {review.content}", - Fore.YELLOW, - ) - - result = agents[0].step(previous_plan, advice, task_description) - return [result] - - def reset(self): - pass diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/toggleswitch.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/toggleswitch.d.ts deleted file mode 100644 index a1f13eaf4d308d220f732a18b83134122c24dadc..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/toggleswitch.d.ts +++ /dev/null @@ -1,2 +0,0 @@ -import ToggleSwitch from './gameobjects/shape/toggleswitch/ToggleSwitch'; -export default ToggleSwitch; \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/gridsizer/GetChildrenWidth.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/gridsizer/GetChildrenWidth.js deleted file mode 100644 index 77af5e398b719647cfa11c20e8b848163b42008c..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/gridsizer/GetChildrenWidth.js +++ /dev/null @@ -1,45 +0,0 @@ -import Sum from '../../../plugins/utils/math/Sum.js'; - -var GetChildrenWidth = function (minimumMode) { - if (this.rexSizer.hidden) { - return 0; - } - - if (minimumMode === undefined) { - minimumMode = true; - } - - var result = 0, - columnWidth; - var children = this.sizerChildren; - var child, padding, childWidth, proportion; - - for (var i = 0; i < this.columnCount; i++) { - proportion = this.columnProportions[i]; - columnWidth = 0; - if ((proportion === 0) || minimumMode) { - for (var j = 0; j < this.rowCount; j++) { - child = children[(j * this.columnCount) + i]; - if (!child) { - continue; - } - if (child.rexSizer.hidden) { - continue; - } - - padding = child.rexSizer.padding; - childWidth = this.getChildWidth(child) + padding.left + padding.right; - columnWidth = Math.max(columnWidth, childWidth); - } - result += columnWidth; - } - // else,(proportion > 0) : columnWidth is 0 - this.columnWidth[i] = columnWidth; - } - - var space = this.space; - var indentLeft = Math.max(space.indentLeftOdd, space.indentLeftEven); - return result + Sum(space.left, indentLeft, ...space.column, space.right); -} - -export default GetChildrenWidth; \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/roundrectanglecanvas/Factory.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/roundrectanglecanvas/Factory.d.ts deleted file mode 100644 index 4e1ccdcaef9234671b8fd47370ea73d673cb12ee..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/roundrectanglecanvas/Factory.d.ts +++ /dev/null @@ -1,20 +0,0 @@ -import RoundRectangleCanvas from './RoundRectangleCanvas'; - -export default function ( - x: number, 
- y: number, - width: number, - height: number, - radiusConfig?: number | ({ x?: number, y?: number }) | RoundRectangleCanvas.IRadiusConfig | - ({ - radius?: (number | ({ x?: number, y?: number }) | RoundRectangleCanvas.IRadiusConfig), - iteration?: number - }), - fillStyle?: number | string | null, - strokeStyle?: number | string | null, - lineWidth?: number, - - fillColor2?: number | string | null, - isHorizontalGradient?: boolean - -): RoundRectangleCanvas; \ No newline at end of file diff --git a/spaces/AllAideas/SegmentacionVideo/app.py b/spaces/AllAideas/SegmentacionVideo/app.py deleted file mode 100644 index 4ef3d8aaea1f0c275ae905a33c7ba526218409c0..0000000000000000000000000000000000000000 --- a/spaces/AllAideas/SegmentacionVideo/app.py +++ /dev/null @@ -1,51 +0,0 @@ -import gradio as gr -from utils.predict import predict_action -import os -import glob - -##Create list of examples to be loaded -example_list = glob.glob("examples/*") -example_list = list(map(lambda el:[el], example_list)) - - -demo = gr.Blocks() - - -with demo: - - gr.Markdown("# **

Video Classification with Transformers**") - description="""# - Demo de clasificador de video usando modelo híbrido basado en Transformers con CNN, el objetivo es reconocer un segmento y recortarlo. - \"logo\"
- """ - gr.Markdown(description) - - with gr.Tabs(): - - with gr.TabItem("Upload & Predict"): - with gr.Box(): - - with gr.Row(): - input_video = gr.Video(label="Input Video", show_label=True) - output_label = gr.Label(label="Model Output", show_label=True) - output_gif = gr.Image(label="Video Gif", show_label=True) - - gr.Markdown("**Predict**") - - with gr.Box(): - with gr.Row(): - submit_button = gr.Button("Submit") - - gr.Markdown("**Ejemplos:**") - gr.Markdown("El modelo puede clasificar videos pertenecientes a las siguientes clases: CricketShot, PlayingCello, Punch, ShavingBeard, TennisSwing.") - # gr.Markdown("CricketShot, PlayingCello, Punch, ShavingBeard, TennisSwing") - - with gr.Column(): - gr.Examples(example_list, [input_video], [output_label,output_gif], predict_action, cache_examples=True) - - submit_button.click(predict_action, inputs=input_video, outputs=[output_label,output_gif]) - -demo.launch() diff --git a/spaces/Amrrs/DragGan-Inversion/torch_utils/persistence.py b/spaces/Amrrs/DragGan-Inversion/torch_utils/persistence.py deleted file mode 100644 index d03055014ea6ba7e8ba475f79c91da4907fb6c0b..0000000000000000000000000000000000000000 --- a/spaces/Amrrs/DragGan-Inversion/torch_utils/persistence.py +++ /dev/null @@ -1,260 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""Facilities for pickling Python code alongside other data. - -The pickled code is automatically imported into a separate Python module -during unpickling. This way, any previously exported pickles will remain -usable even if the original code is no longer available, or if the current -version of the code is not consistent with what was originally pickled.""" - -import sys -import pickle -import io -import inspect -import copy -import uuid -import types -import dnnlib - -# ---------------------------------------------------------------------------- - -_version = 6 # internal version number -_decorators = set() # {decorator_class, ...} -_import_hooks = [] # [hook_function, ...] -_module_to_src_dict = dict() # {module: src, ...} -_src_to_module_dict = dict() # {src: module, ...} - -# ---------------------------------------------------------------------------- - - -def persistent_class(orig_class): - r"""Class decorator that extends a given class to save its source code - when pickled. - - Example: - - from torch_utils import persistence - - @persistence.persistent_class - class MyNetwork(torch.nn.Module): - def __init__(self, num_inputs, num_outputs): - super().__init__() - self.fc = MyLayer(num_inputs, num_outputs) - ... - - @persistence.persistent_class - class MyLayer(torch.nn.Module): - ... - - When pickled, any instance of `MyNetwork` and `MyLayer` will save its - source code alongside other internal state (e.g., parameters, buffers, - and submodules). This way, any previously exported pickle will remain - usable even if the class definitions have been modified or are no - longer available. - - The decorator saves the source code of the entire Python module - containing the decorated class. It does *not* save the source code of - any imported modules. 
Thus, the imported modules must be available - during unpickling, also including `torch_utils.persistence` itself. - - It is ok to call functions defined in the same module from the - decorated class. However, if the decorated class depends on other - classes defined in the same module, they must be decorated as well. - This is illustrated in the above example in the case of `MyLayer`. - - It is also possible to employ the decorator just-in-time before - calling the constructor. For example: - - cls = MyLayer - if want_to_make_it_persistent: - cls = persistence.persistent_class(cls) - layer = cls(num_inputs, num_outputs) - - As an additional feature, the decorator also keeps track of the - arguments that were used to construct each instance of the decorated - class. The arguments can be queried via `obj.init_args` and - `obj.init_kwargs`, and they are automatically pickled alongside other - object state. A typical use case is to first unpickle a previous - instance of a persistent class, and then upgrade it to use the latest - version of the source code: - - with open('old_pickle.pkl', 'rb') as f: - old_net = pickle.load(f) - new_net = MyNetwork(*old_obj.init_args, **old_obj.init_kwargs) - misc.copy_params_and_buffers(old_net, new_net, require_all=True) - """ - assert isinstance(orig_class, type) - if is_persistent(orig_class): - return orig_class - - assert orig_class.__module__ in sys.modules - orig_module = sys.modules[orig_class.__module__] - orig_module_src = _module_to_src(orig_module) - - class Decorator(orig_class): - _orig_module_src = orig_module_src - _orig_class_name = orig_class.__name__ - - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - self._init_args = copy.deepcopy(args) - self._init_kwargs = copy.deepcopy(kwargs) - assert orig_class.__name__ in orig_module.__dict__ - _check_pickleable(self.__reduce__()) - - @property - def init_args(self): - return copy.deepcopy(self._init_args) - - @property - def init_kwargs(self): - return dnnlib.EasyDict(copy.deepcopy(self._init_kwargs)) - - def __reduce__(self): - fields = list(super().__reduce__()) - fields += [None] * max(3 - len(fields), 0) - if fields[0] is not _reconstruct_persistent_obj: - meta = dict(type='class', version=_version, module_src=self._orig_module_src, - class_name=self._orig_class_name, state=fields[2]) - fields[0] = _reconstruct_persistent_obj # reconstruct func - fields[1] = (meta,) # reconstruct args - fields[2] = None # state dict - return tuple(fields) - - Decorator.__name__ = orig_class.__name__ - _decorators.add(Decorator) - return Decorator - -# ---------------------------------------------------------------------------- - - -def is_persistent(obj): - r"""Test whether the given object or class is persistent, i.e., - whether it will save its source code when pickled. - """ - try: - if obj in _decorators: - return True - except TypeError: - pass - return type(obj) in _decorators # pylint: disable=unidiomatic-typecheck - -# ---------------------------------------------------------------------------- - - -def import_hook(hook): - r"""Register an import hook that is called whenever a persistent object - is being unpickled. A typical use case is to patch the pickled source - code to avoid errors and inconsistencies when the API of some imported - module has changed. - - The hook should have the following signature: - - hook(meta) -> modified meta - - `meta` is an instance of `dnnlib.EasyDict` with the following fields: - - type: Type of the persistent object, e.g. `'class'`. 
- version: Internal version number of `torch_utils.persistence`. - module_src Original source code of the Python module. - class_name: Class name in the original Python module. - state: Internal state of the object. - - Example: - - @persistence.import_hook - def wreck_my_network(meta): - if meta.class_name == 'MyNetwork': - print('MyNetwork is being imported. I will wreck it!') - meta.module_src = meta.module_src.replace("True", "False") - return meta - """ - assert callable(hook) - _import_hooks.append(hook) - -# ---------------------------------------------------------------------------- - - -def _reconstruct_persistent_obj(meta): - r"""Hook that is called internally by the `pickle` module to unpickle - a persistent object. - """ - meta = dnnlib.EasyDict(meta) - meta.state = dnnlib.EasyDict(meta.state) - for hook in _import_hooks: - meta = hook(meta) - assert meta is not None - - assert meta.version == _version - module = _src_to_module(meta.module_src) - - assert meta.type == 'class' - orig_class = module.__dict__[meta.class_name] - decorator_class = persistent_class(orig_class) - obj = decorator_class.__new__(decorator_class) - - setstate = getattr(obj, '__setstate__', None) - if callable(setstate): - setstate(meta.state) # pylint: disable=not-callable - else: - obj.__dict__.update(meta.state) - return obj - -# ---------------------------------------------------------------------------- - - -def _module_to_src(module): - r"""Query the source code of a given Python module. - """ - src = _module_to_src_dict.get(module, None) - if src is None: - src = inspect.getsource(module) - _module_to_src_dict[module] = src - _src_to_module_dict[src] = module - return src - - -def _src_to_module(src): - r"""Get or create a Python module for the given source code. - """ - module = _src_to_module_dict.get(src, None) - if module is None: - module_name = "_imported_module_" + uuid.uuid4().hex - module = types.ModuleType(module_name) - sys.modules[module_name] = module - _module_to_src_dict[module] = src - _src_to_module_dict[src] = module - exec(src, module.__dict__) # pylint: disable=exec-used - return module - -# ---------------------------------------------------------------------------- - - -def _check_pickleable(obj): - r"""Check that the given object is pickleable, raising an exception if - it is not. This function is expected to be considerably more efficient - than actually pickling the object. - """ - def recurse(obj): - if isinstance(obj, (list, tuple, set)): - return [recurse(x) for x in obj] - if isinstance(obj, dict): - return [[recurse(x), recurse(y)] for x, y in obj.items()] - if isinstance(obj, (str, int, float, bool, bytes, bytearray)): - return None # Python primitive types are pickleable. - if f'{type(obj).__module__}.{type(obj).__name__}' in ['numpy.ndarray', 'torch.Tensor', 'torch.nn.parameter.Parameter']: - return None # NumPy arrays and PyTorch tensors are pickleable. - if is_persistent(obj): - # Persistent objects are pickleable, by virtue of the constructor check. 
- return None - return obj - with io.BytesIO() as f: - pickle.dump(recurse(obj), f) - -# ---------------------------------------------------------------------------- diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/schedulers/euler_ancestral.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/schedulers/euler_ancestral.md deleted file mode 100644 index 60fd524b195593608f1d2a900ad86756f8fd25ba..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/schedulers/euler_ancestral.md +++ /dev/null @@ -1,21 +0,0 @@ - - -# Euler Ancestral scheduler - -## Overview - -Ancestral sampling with Euler method steps. Based on the original [k-diffusion](https://github.com/crowsonkb/k-diffusion/blob/481677d114f6ea445aa009cf5bd7a9cdee909e47/k_diffusion/sampling.py#L72) implementation by Katherine Crowson. -Fast scheduler which often times generates good outputs with 20-30 steps. - -## EulerAncestralDiscreteScheduler -[[autodoc]] EulerAncestralDiscreteScheduler diff --git a/spaces/Andy1621/uniformer_image_detection/exp/mask_rcnn_3x_ms_hybrid_small/config.py b/spaces/Andy1621/uniformer_image_detection/exp/mask_rcnn_3x_ms_hybrid_small/config.py deleted file mode 100644 index a5ace78557c213c2f3af33a648d44a051f55effa..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/exp/mask_rcnn_3x_ms_hybrid_small/config.py +++ /dev/null @@ -1,82 +0,0 @@ -_base_ = [ - '../../configs/_base_/models/mask_rcnn_uniformer_fpn.py', - '../../configs/_base_/datasets/coco_instance.py', - '../../configs/_base_/schedules/schedule_1x.py', - '../../configs/_base_/default_runtime.py' -] - -model = dict( - backbone=dict( - embed_dim=[64, 128, 320, 512], - layers=[3, 4, 8, 3], - head_dim=64, - drop_path_rate=0.1, - use_checkpoint=True, - checkpoint_num=[0, 0, 8, 0], - windows=False, - hybrid=True, - window_size=14 - ), - neck=dict(in_channels=[64, 128, 320, 512])) - -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) - -# augmentation strategy originates from DETR / Sparse RCNN -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', with_bbox=True, with_mask=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='AutoAugment', - policies=[ - [ - dict(type='Resize', - img_scale=[(480, 1333), (512, 1333), (544, 1333), (576, 1333), - (608, 1333), (640, 1333), (672, 1333), (704, 1333), - (736, 1333), (768, 1333), (800, 1333)], - multiscale_mode='value', - keep_ratio=True) - ], - [ - dict(type='Resize', - img_scale=[(400, 1333), (500, 1333), (600, 1333)], - multiscale_mode='value', - keep_ratio=True), - dict(type='RandomCrop', - crop_type='absolute_range', - crop_size=(384, 600), - allow_negative_crop=True), - dict(type='Resize', - img_scale=[(480, 1333), (512, 1333), (544, 1333), - (576, 1333), (608, 1333), (640, 1333), - (672, 1333), (704, 1333), (736, 1333), - (768, 1333), (800, 1333)], - multiscale_mode='value', - override=True, - keep_ratio=True) - ] - ]), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']), -] -data = dict(train=dict(pipeline=train_pipeline)) - -optimizer = dict(_delete_=True, type='AdamW', lr=0.0001, betas=(0.9, 0.999), weight_decay=0.05, - paramwise_cfg=dict(custom_keys={'absolute_pos_embed': dict(decay_mult=0.), - 'relative_position_bias_table': 
dict(decay_mult=0.), - 'norm': dict(decay_mult=0.)})) -lr_config = dict(step=[27, 33]) -runner = dict(type='EpochBasedRunnerAmp', max_epochs=36) - -# do not use mmdet version fp16 -fp16 = None -optimizer_config = dict( - type="DistOptimizerHook", - update_interval=1, - grad_clip=None, - coalesce=True, - bucket_size_mb=-1, - use_fp16=True, -) diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/models/dense_heads/anchor_head.py b/spaces/Andy1621/uniformer_image_detection/mmdet/models/dense_heads/anchor_head.py deleted file mode 100644 index eea73520572725f547216ab639c1ebbdfb50834c..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/mmdet/models/dense_heads/anchor_head.py +++ /dev/null @@ -1,751 +0,0 @@ -import torch -import torch.nn as nn -from mmcv.cnn import normal_init -from mmcv.runner import force_fp32 - -from mmdet.core import (anchor_inside_flags, build_anchor_generator, - build_assigner, build_bbox_coder, build_sampler, - images_to_levels, multi_apply, multiclass_nms, unmap) -from ..builder import HEADS, build_loss -from .base_dense_head import BaseDenseHead -from .dense_test_mixins import BBoxTestMixin - - -@HEADS.register_module() -class AnchorHead(BaseDenseHead, BBoxTestMixin): - """Anchor-based head (RPN, RetinaNet, SSD, etc.). - - Args: - num_classes (int): Number of categories excluding the background - category. - in_channels (int): Number of channels in the input feature map. - feat_channels (int): Number of hidden channels. Used in child classes. - anchor_generator (dict): Config dict for anchor generator - bbox_coder (dict): Config of bounding box coder. - reg_decoded_bbox (bool): If true, the regression loss would be - applied directly on decoded bounding boxes, converting both - the predicted boxes and regression targets to absolute - coordinates format. Default False. It should be `True` when - using `IoULoss`, `GIoULoss`, or `DIoULoss` in the bbox head. - loss_cls (dict): Config of classification loss. - loss_bbox (dict): Config of localization loss. - train_cfg (dict): Training config of anchor head. - test_cfg (dict): Testing config of anchor head. 
- """ # noqa: W605 - - def __init__(self, - num_classes, - in_channels, - feat_channels=256, - anchor_generator=dict( - type='AnchorGenerator', - scales=[8, 16, 32], - ratios=[0.5, 1.0, 2.0], - strides=[4, 8, 16, 32, 64]), - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - clip_border=True, - target_means=(.0, .0, .0, .0), - target_stds=(1.0, 1.0, 1.0, 1.0)), - reg_decoded_bbox=False, - loss_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=True, - loss_weight=1.0), - loss_bbox=dict( - type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.0), - train_cfg=None, - test_cfg=None): - super(AnchorHead, self).__init__() - self.in_channels = in_channels - self.num_classes = num_classes - self.feat_channels = feat_channels - self.use_sigmoid_cls = loss_cls.get('use_sigmoid', False) - # TODO better way to determine whether sample or not - self.sampling = loss_cls['type'] not in [ - 'FocalLoss', 'GHMC', 'QualityFocalLoss' - ] - if self.use_sigmoid_cls: - self.cls_out_channels = num_classes - else: - self.cls_out_channels = num_classes + 1 - - if self.cls_out_channels <= 0: - raise ValueError(f'num_classes={num_classes} is too small') - self.reg_decoded_bbox = reg_decoded_bbox - - self.bbox_coder = build_bbox_coder(bbox_coder) - self.loss_cls = build_loss(loss_cls) - self.loss_bbox = build_loss(loss_bbox) - self.train_cfg = train_cfg - self.test_cfg = test_cfg - if self.train_cfg: - self.assigner = build_assigner(self.train_cfg.assigner) - # use PseudoSampler when sampling is False - if self.sampling and hasattr(self.train_cfg, 'sampler'): - sampler_cfg = self.train_cfg.sampler - else: - sampler_cfg = dict(type='PseudoSampler') - self.sampler = build_sampler(sampler_cfg, context=self) - self.fp16_enabled = False - - self.anchor_generator = build_anchor_generator(anchor_generator) - # usually the numbers of anchors for each level are the same - # except SSD detectors - self.num_anchors = self.anchor_generator.num_base_anchors[0] - self._init_layers() - - def _init_layers(self): - """Initialize layers of the head.""" - self.conv_cls = nn.Conv2d(self.in_channels, - self.num_anchors * self.cls_out_channels, 1) - self.conv_reg = nn.Conv2d(self.in_channels, self.num_anchors * 4, 1) - - def init_weights(self): - """Initialize weights of the head.""" - normal_init(self.conv_cls, std=0.01) - normal_init(self.conv_reg, std=0.01) - - def forward_single(self, x): - """Forward feature of a single scale level. - - Args: - x (Tensor): Features of a single scale level. - - Returns: - tuple: - cls_score (Tensor): Cls scores for a single scale level \ - the channels number is num_anchors * num_classes. - bbox_pred (Tensor): Box energies / deltas for a single scale \ - level, the channels number is num_anchors * 4. - """ - cls_score = self.conv_cls(x) - bbox_pred = self.conv_reg(x) - return cls_score, bbox_pred - - def forward(self, feats): - """Forward features from the upstream network. - - Args: - feats (tuple[Tensor]): Features from the upstream network, each is - a 4D-tensor. - - Returns: - tuple: A tuple of classification scores and bbox prediction. - - - cls_scores (list[Tensor]): Classification scores for all \ - scale levels, each is a 4D-tensor, the channels number \ - is num_anchors * num_classes. - - bbox_preds (list[Tensor]): Box energies / deltas for all \ - scale levels, each is a 4D-tensor, the channels number \ - is num_anchors * 4. - """ - return multi_apply(self.forward_single, feats) - - def get_anchors(self, featmap_sizes, img_metas, device='cuda'): - """Get anchors according to feature map sizes. 
- - Args: - featmap_sizes (list[tuple]): Multi-level feature map sizes. - img_metas (list[dict]): Image meta info. - device (torch.device | str): Device for returned tensors - - Returns: - tuple: - anchor_list (list[Tensor]): Anchors of each image. - valid_flag_list (list[Tensor]): Valid flags of each image. - """ - num_imgs = len(img_metas) - - # since feature map sizes of all images are the same, we only compute - # anchors for one time - multi_level_anchors = self.anchor_generator.grid_anchors( - featmap_sizes, device) - anchor_list = [multi_level_anchors for _ in range(num_imgs)] - - # for each image, we compute valid flags of multi level anchors - valid_flag_list = [] - for img_id, img_meta in enumerate(img_metas): - multi_level_flags = self.anchor_generator.valid_flags( - featmap_sizes, img_meta['pad_shape'], device) - valid_flag_list.append(multi_level_flags) - - return anchor_list, valid_flag_list - - def _get_targets_single(self, - flat_anchors, - valid_flags, - gt_bboxes, - gt_bboxes_ignore, - gt_labels, - img_meta, - label_channels=1, - unmap_outputs=True): - """Compute regression and classification targets for anchors in a - single image. - - Args: - flat_anchors (Tensor): Multi-level anchors of the image, which are - concatenated into a single tensor of shape (num_anchors ,4) - valid_flags (Tensor): Multi level valid flags of the image, - which are concatenated into a single tensor of - shape (num_anchors,). - gt_bboxes (Tensor): Ground truth bboxes of the image, - shape (num_gts, 4). - gt_bboxes_ignore (Tensor): Ground truth bboxes to be - ignored, shape (num_ignored_gts, 4). - img_meta (dict): Meta info of the image. - gt_labels (Tensor): Ground truth labels of each box, - shape (num_gts,). - label_channels (int): Channel of label. - unmap_outputs (bool): Whether to map outputs back to the original - set of anchors. 
- - Returns: - tuple: - labels_list (list[Tensor]): Labels of each level - label_weights_list (list[Tensor]): Label weights of each level - bbox_targets_list (list[Tensor]): BBox targets of each level - bbox_weights_list (list[Tensor]): BBox weights of each level - num_total_pos (int): Number of positive samples in all images - num_total_neg (int): Number of negative samples in all images - """ - inside_flags = anchor_inside_flags(flat_anchors, valid_flags, - img_meta['img_shape'][:2], - self.train_cfg.allowed_border) - if not inside_flags.any(): - return (None, ) * 7 - # assign gt and sample anchors - anchors = flat_anchors[inside_flags, :] - - assign_result = self.assigner.assign( - anchors, gt_bboxes, gt_bboxes_ignore, - None if self.sampling else gt_labels) - sampling_result = self.sampler.sample(assign_result, anchors, - gt_bboxes) - - num_valid_anchors = anchors.shape[0] - bbox_targets = torch.zeros_like(anchors) - bbox_weights = torch.zeros_like(anchors) - labels = anchors.new_full((num_valid_anchors, ), - self.num_classes, - dtype=torch.long) - label_weights = anchors.new_zeros(num_valid_anchors, dtype=torch.float) - - pos_inds = sampling_result.pos_inds - neg_inds = sampling_result.neg_inds - if len(pos_inds) > 0: - if not self.reg_decoded_bbox: - pos_bbox_targets = self.bbox_coder.encode( - sampling_result.pos_bboxes, sampling_result.pos_gt_bboxes) - else: - pos_bbox_targets = sampling_result.pos_gt_bboxes - bbox_targets[pos_inds, :] = pos_bbox_targets - bbox_weights[pos_inds, :] = 1.0 - if gt_labels is None: - # Only rpn gives gt_labels as None - # Foreground is the first class since v2.5.0 - labels[pos_inds] = 0 - else: - labels[pos_inds] = gt_labels[ - sampling_result.pos_assigned_gt_inds] - if self.train_cfg.pos_weight <= 0: - label_weights[pos_inds] = 1.0 - else: - label_weights[pos_inds] = self.train_cfg.pos_weight - if len(neg_inds) > 0: - label_weights[neg_inds] = 1.0 - - # map up to original set of anchors - if unmap_outputs: - num_total_anchors = flat_anchors.size(0) - labels = unmap( - labels, num_total_anchors, inside_flags, - fill=self.num_classes) # fill bg label - label_weights = unmap(label_weights, num_total_anchors, - inside_flags) - bbox_targets = unmap(bbox_targets, num_total_anchors, inside_flags) - bbox_weights = unmap(bbox_weights, num_total_anchors, inside_flags) - - return (labels, label_weights, bbox_targets, bbox_weights, pos_inds, - neg_inds, sampling_result) - - def get_targets(self, - anchor_list, - valid_flag_list, - gt_bboxes_list, - img_metas, - gt_bboxes_ignore_list=None, - gt_labels_list=None, - label_channels=1, - unmap_outputs=True, - return_sampling_results=False): - """Compute regression and classification targets for anchors in - multiple images. - - Args: - anchor_list (list[list[Tensor]]): Multi level anchors of each - image. The outer list indicates images, and the inner list - corresponds to feature levels of the image. Each element of - the inner list is a tensor of shape (num_anchors, 4). - valid_flag_list (list[list[Tensor]]): Multi level valid flags of - each image. The outer list indicates images, and the inner list - corresponds to feature levels of the image. Each element of - the inner list is a tensor of shape (num_anchors, ) - gt_bboxes_list (list[Tensor]): Ground truth bboxes of each image. - img_metas (list[dict]): Meta info of each image. - gt_bboxes_ignore_list (list[Tensor]): Ground truth bboxes to be - ignored. - gt_labels_list (list[Tensor]): Ground truth labels of each box. - label_channels (int): Channel of label. 
- unmap_outputs (bool): Whether to map outputs back to the original - set of anchors. - - Returns: - tuple: Usually returns a tuple containing learning targets. - - - labels_list (list[Tensor]): Labels of each level. - - label_weights_list (list[Tensor]): Label weights of each \ - level. - - bbox_targets_list (list[Tensor]): BBox targets of each level. - - bbox_weights_list (list[Tensor]): BBox weights of each level. - - num_total_pos (int): Number of positive samples in all \ - images. - - num_total_neg (int): Number of negative samples in all \ - images. - additional_returns: This function enables user-defined returns from - `self._get_targets_single`. These returns are currently refined - to properties at each feature map (i.e. having HxW dimension). - The results will be concatenated after the end - """ - num_imgs = len(img_metas) - assert len(anchor_list) == len(valid_flag_list) == num_imgs - - # anchor number of multi levels - num_level_anchors = [anchors.size(0) for anchors in anchor_list[0]] - # concat all level anchors to a single tensor - concat_anchor_list = [] - concat_valid_flag_list = [] - for i in range(num_imgs): - assert len(anchor_list[i]) == len(valid_flag_list[i]) - concat_anchor_list.append(torch.cat(anchor_list[i])) - concat_valid_flag_list.append(torch.cat(valid_flag_list[i])) - - # compute targets for each image - if gt_bboxes_ignore_list is None: - gt_bboxes_ignore_list = [None for _ in range(num_imgs)] - if gt_labels_list is None: - gt_labels_list = [None for _ in range(num_imgs)] - results = multi_apply( - self._get_targets_single, - concat_anchor_list, - concat_valid_flag_list, - gt_bboxes_list, - gt_bboxes_ignore_list, - gt_labels_list, - img_metas, - label_channels=label_channels, - unmap_outputs=unmap_outputs) - (all_labels, all_label_weights, all_bbox_targets, all_bbox_weights, - pos_inds_list, neg_inds_list, sampling_results_list) = results[:7] - rest_results = list(results[7:]) # user-added return values - # no valid anchors - if any([labels is None for labels in all_labels]): - return None - # sampled anchors of all images - num_total_pos = sum([max(inds.numel(), 1) for inds in pos_inds_list]) - num_total_neg = sum([max(inds.numel(), 1) for inds in neg_inds_list]) - # split targets to a list w.r.t. multiple levels - labels_list = images_to_levels(all_labels, num_level_anchors) - label_weights_list = images_to_levels(all_label_weights, - num_level_anchors) - bbox_targets_list = images_to_levels(all_bbox_targets, - num_level_anchors) - bbox_weights_list = images_to_levels(all_bbox_weights, - num_level_anchors) - res = (labels_list, label_weights_list, bbox_targets_list, - bbox_weights_list, num_total_pos, num_total_neg) - if return_sampling_results: - res = res + (sampling_results_list, ) - for i, r in enumerate(rest_results): # user-added return values - rest_results[i] = images_to_levels(r, num_level_anchors) - - return res + tuple(rest_results) - - def loss_single(self, cls_score, bbox_pred, anchors, labels, label_weights, - bbox_targets, bbox_weights, num_total_samples): - """Compute loss of a single scale level. - - Args: - cls_score (Tensor): Box scores for each scale level - Has shape (N, num_anchors * num_classes, H, W). - bbox_pred (Tensor): Box energies / deltas for each scale - level with shape (N, num_anchors * 4, H, W). - anchors (Tensor): Box reference for each scale level with shape - (N, num_total_anchors, 4). - labels (Tensor): Labels of each anchors with shape - (N, num_total_anchors). 
- label_weights (Tensor): Label weights of each anchor with shape - (N, num_total_anchors) - bbox_targets (Tensor): BBox regression targets of each anchor wight - shape (N, num_total_anchors, 4). - bbox_weights (Tensor): BBox regression loss weights of each anchor - with shape (N, num_total_anchors, 4). - num_total_samples (int): If sampling, num total samples equal to - the number of total anchors; Otherwise, it is the number of - positive anchors. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - # classification loss - labels = labels.reshape(-1) - label_weights = label_weights.reshape(-1) - cls_score = cls_score.permute(0, 2, 3, - 1).reshape(-1, self.cls_out_channels) - loss_cls = self.loss_cls( - cls_score, labels, label_weights, avg_factor=num_total_samples) - # regression loss - bbox_targets = bbox_targets.reshape(-1, 4) - bbox_weights = bbox_weights.reshape(-1, 4) - bbox_pred = bbox_pred.permute(0, 2, 3, 1).reshape(-1, 4) - if self.reg_decoded_bbox: - # When the regression loss (e.g. `IouLoss`, `GIouLoss`) - # is applied directly on the decoded bounding boxes, it - # decodes the already encoded coordinates to absolute format. - anchors = anchors.reshape(-1, 4) - bbox_pred = self.bbox_coder.decode(anchors, bbox_pred) - loss_bbox = self.loss_bbox( - bbox_pred, - bbox_targets, - bbox_weights, - avg_factor=num_total_samples) - return loss_cls, loss_bbox - - @force_fp32(apply_to=('cls_scores', 'bbox_preds')) - def loss(self, - cls_scores, - bbox_preds, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - """Compute losses of the head. - - Args: - cls_scores (list[Tensor]): Box scores for each scale level - Has shape (N, num_anchors * num_classes, H, W) - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level with shape (N, num_anchors * 4, H, W) - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (None | list[Tensor]): specify which bounding - boxes can be ignored when computing the loss. Default: None - - Returns: - dict[str, Tensor]: A dictionary of loss components. 
- """ - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - assert len(featmap_sizes) == self.anchor_generator.num_levels - - device = cls_scores[0].device - - anchor_list, valid_flag_list = self.get_anchors( - featmap_sizes, img_metas, device=device) - label_channels = self.cls_out_channels if self.use_sigmoid_cls else 1 - cls_reg_targets = self.get_targets( - anchor_list, - valid_flag_list, - gt_bboxes, - img_metas, - gt_bboxes_ignore_list=gt_bboxes_ignore, - gt_labels_list=gt_labels, - label_channels=label_channels) - if cls_reg_targets is None: - return None - (labels_list, label_weights_list, bbox_targets_list, bbox_weights_list, - num_total_pos, num_total_neg) = cls_reg_targets - num_total_samples = ( - num_total_pos + num_total_neg if self.sampling else num_total_pos) - - # anchor number of multi levels - num_level_anchors = [anchors.size(0) for anchors in anchor_list[0]] - # concat all level anchors and flags to a single tensor - concat_anchor_list = [] - for i in range(len(anchor_list)): - concat_anchor_list.append(torch.cat(anchor_list[i])) - all_anchor_list = images_to_levels(concat_anchor_list, - num_level_anchors) - - losses_cls, losses_bbox = multi_apply( - self.loss_single, - cls_scores, - bbox_preds, - all_anchor_list, - labels_list, - label_weights_list, - bbox_targets_list, - bbox_weights_list, - num_total_samples=num_total_samples) - return dict(loss_cls=losses_cls, loss_bbox=losses_bbox) - - @force_fp32(apply_to=('cls_scores', 'bbox_preds')) - def get_bboxes(self, - cls_scores, - bbox_preds, - img_metas, - cfg=None, - rescale=False, - with_nms=True): - """Transform network output for a batch into bbox predictions. - - Args: - cls_scores (list[Tensor]): Box scores for each level in the - feature pyramid, has shape - (N, num_anchors * num_classes, H, W). - bbox_preds (list[Tensor]): Box energies / deltas for each - level in the feature pyramid, has shape - (N, num_anchors * 4, H, W). - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - cfg (mmcv.Config | None): Test / postprocessing configuration, - if None, test_cfg would be used - rescale (bool): If True, return boxes in original image space. - Default: False. - with_nms (bool): If True, do nms before return boxes. - Default: True. - - Returns: - list[tuple[Tensor, Tensor]]: Each item in result_list is 2-tuple. - The first item is an (n, 5) tensor, where 5 represent - (tl_x, tl_y, br_x, br_y, score) and the score between 0 and 1. - The shape of the second tensor in the tuple is (n,), and - each element represents the class label of the corresponding - box. 
- - Example: - >>> import mmcv - >>> self = AnchorHead( - >>> num_classes=9, - >>> in_channels=1, - >>> anchor_generator=dict( - >>> type='AnchorGenerator', - >>> scales=[8], - >>> ratios=[0.5, 1.0, 2.0], - >>> strides=[4,])) - >>> img_metas = [{'img_shape': (32, 32, 3), 'scale_factor': 1}] - >>> cfg = mmcv.Config(dict( - >>> score_thr=0.00, - >>> nms=dict(type='nms', iou_thr=1.0), - >>> max_per_img=10)) - >>> feat = torch.rand(1, 1, 3, 3) - >>> cls_score, bbox_pred = self.forward_single(feat) - >>> # note the input lists are over different levels, not images - >>> cls_scores, bbox_preds = [cls_score], [bbox_pred] - >>> result_list = self.get_bboxes(cls_scores, bbox_preds, - >>> img_metas, cfg) - >>> det_bboxes, det_labels = result_list[0] - >>> assert len(result_list) == 1 - >>> assert det_bboxes.shape[1] == 5 - >>> assert len(det_bboxes) == len(det_labels) == cfg.max_per_img - """ - assert len(cls_scores) == len(bbox_preds) - num_levels = len(cls_scores) - - device = cls_scores[0].device - featmap_sizes = [cls_scores[i].shape[-2:] for i in range(num_levels)] - mlvl_anchors = self.anchor_generator.grid_anchors( - featmap_sizes, device=device) - - mlvl_cls_scores = [cls_scores[i].detach() for i in range(num_levels)] - mlvl_bbox_preds = [bbox_preds[i].detach() for i in range(num_levels)] - - if torch.onnx.is_in_onnx_export(): - assert len( - img_metas - ) == 1, 'Only support one input image while in exporting to ONNX' - img_shapes = img_metas[0]['img_shape_for_onnx'] - else: - img_shapes = [ - img_metas[i]['img_shape'] - for i in range(cls_scores[0].shape[0]) - ] - scale_factors = [ - img_metas[i]['scale_factor'] for i in range(cls_scores[0].shape[0]) - ] - - if with_nms: - # some heads don't support with_nms argument - result_list = self._get_bboxes(mlvl_cls_scores, mlvl_bbox_preds, - mlvl_anchors, img_shapes, - scale_factors, cfg, rescale) - else: - result_list = self._get_bboxes(mlvl_cls_scores, mlvl_bbox_preds, - mlvl_anchors, img_shapes, - scale_factors, cfg, rescale, - with_nms) - return result_list - - def _get_bboxes(self, - mlvl_cls_scores, - mlvl_bbox_preds, - mlvl_anchors, - img_shapes, - scale_factors, - cfg, - rescale=False, - with_nms=True): - """Transform outputs for a batch item into bbox predictions. - - Args: - mlvl_cls_scores (list[Tensor]): Each element in the list is - the scores of bboxes of single level in the feature pyramid, - has shape (N, num_anchors * num_classes, H, W). - mlvl_bbox_preds (list[Tensor]): Each element in the list is the - bboxes predictions of single level in the feature pyramid, - has shape (N, num_anchors * 4, H, W). - mlvl_anchors (list[Tensor]): Each element in the list is - the anchors of single level in feature pyramid, has shape - (num_anchors, 4). - img_shapes (list[tuple[int]]): Each tuple in the list represent - the shape(height, width, 3) of single image in the batch. - scale_factors (list[ndarray]): Scale factor of the batch - image arange as list[(w_scale, h_scale, w_scale, h_scale)]. - cfg (mmcv.Config): Test / postprocessing configuration, - if None, test_cfg would be used. - rescale (bool): If True, return boxes in original image space. - Default: False. - with_nms (bool): If True, do nms before return boxes. - Default: True. - - Returns: - list[tuple[Tensor, Tensor]]: Each item in result_list is 2-tuple. - The first item is an (n, 5) tensor, where 5 represent - (tl_x, tl_y, br_x, br_y, score) and the score between 0 and 1. 
- The shape of the second tensor in the tuple is (n,), and - each element represents the class label of the corresponding - box. - """ - cfg = self.test_cfg if cfg is None else cfg - assert len(mlvl_cls_scores) == len(mlvl_bbox_preds) == len( - mlvl_anchors) - batch_size = mlvl_cls_scores[0].shape[0] - # convert to tensor to keep tracing - nms_pre_tensor = torch.tensor( - cfg.get('nms_pre', -1), - device=mlvl_cls_scores[0].device, - dtype=torch.long) - - mlvl_bboxes = [] - mlvl_scores = [] - for cls_score, bbox_pred, anchors in zip(mlvl_cls_scores, - mlvl_bbox_preds, - mlvl_anchors): - assert cls_score.size()[-2:] == bbox_pred.size()[-2:] - cls_score = cls_score.permute(0, 2, 3, - 1).reshape(batch_size, -1, - self.cls_out_channels) - if self.use_sigmoid_cls: - scores = cls_score.sigmoid() - else: - scores = cls_score.softmax(-1) - bbox_pred = bbox_pred.permute(0, 2, 3, - 1).reshape(batch_size, -1, 4) - anchors = anchors.expand_as(bbox_pred) - # Always keep topk op for dynamic input in onnx - if nms_pre_tensor > 0 and (torch.onnx.is_in_onnx_export() - or scores.shape[-2] > nms_pre_tensor): - from torch import _shape_as_tensor - # keep shape as tensor and get k - num_anchor = _shape_as_tensor(scores)[-2].to( - nms_pre_tensor.device) - nms_pre = torch.where(nms_pre_tensor < num_anchor, - nms_pre_tensor, num_anchor) - - # Get maximum scores for foreground classes. - if self.use_sigmoid_cls: - max_scores, _ = scores.max(-1) - else: - # remind that we set FG labels to [0, num_class-1] - # since mmdet v2.0 - # BG cat_id: num_class - max_scores, _ = scores[..., :-1].max(-1) - - _, topk_inds = max_scores.topk(nms_pre) - batch_inds = torch.arange(batch_size).view( - -1, 1).expand_as(topk_inds) - anchors = anchors[batch_inds, topk_inds, :] - bbox_pred = bbox_pred[batch_inds, topk_inds, :] - scores = scores[batch_inds, topk_inds, :] - - bboxes = self.bbox_coder.decode( - anchors, bbox_pred, max_shape=img_shapes) - mlvl_bboxes.append(bboxes) - mlvl_scores.append(scores) - - batch_mlvl_bboxes = torch.cat(mlvl_bboxes, dim=1) - if rescale: - batch_mlvl_bboxes /= batch_mlvl_bboxes.new_tensor( - scale_factors).unsqueeze(1) - batch_mlvl_scores = torch.cat(mlvl_scores, dim=1) - - # Set max number of box to be feed into nms in deployment - deploy_nms_pre = cfg.get('deploy_nms_pre', -1) - if deploy_nms_pre > 0 and torch.onnx.is_in_onnx_export(): - # Get maximum scores for foreground classes. 
- if self.use_sigmoid_cls: - max_scores, _ = batch_mlvl_scores.max(-1) - else: - # remind that we set FG labels to [0, num_class-1] - # since mmdet v2.0 - # BG cat_id: num_class - max_scores, _ = batch_mlvl_scores[..., :-1].max(-1) - _, topk_inds = max_scores.topk(deploy_nms_pre) - batch_inds = torch.arange(batch_size).view(-1, - 1).expand_as(topk_inds) - batch_mlvl_scores = batch_mlvl_scores[batch_inds, topk_inds] - batch_mlvl_bboxes = batch_mlvl_bboxes[batch_inds, topk_inds] - if self.use_sigmoid_cls: - # Add a dummy background class to the backend when using sigmoid - # remind that we set FG labels to [0, num_class-1] since mmdet v2.0 - # BG cat_id: num_class - padding = batch_mlvl_scores.new_zeros(batch_size, - batch_mlvl_scores.shape[1], - 1) - batch_mlvl_scores = torch.cat([batch_mlvl_scores, padding], dim=-1) - - if with_nms: - det_results = [] - for (mlvl_bboxes, mlvl_scores) in zip(batch_mlvl_bboxes, - batch_mlvl_scores): - det_bbox, det_label = multiclass_nms(mlvl_bboxes, mlvl_scores, - cfg.score_thr, cfg.nms, - cfg.max_per_img) - det_results.append(tuple([det_bbox, det_label])) - else: - det_results = [ - tuple(mlvl_bs) - for mlvl_bs in zip(batch_mlvl_bboxes, batch_mlvl_scores) - ] - return det_results - - def aug_test(self, feats, img_metas, rescale=False): - """Test function with test time augmentation. - - Args: - feats (list[Tensor]): the outer list indicates test-time - augmentations and inner Tensor should have a shape NxCxHxW, - which contains features for all images in the batch. - img_metas (list[list[dict]]): the outer list indicates test-time - augs (multiscale, flip, etc.) and the inner list indicates - images in a batch. each dict has image information. - rescale (bool, optional): Whether to rescale the results. - Defaults to False. - - Returns: - list[ndarray]: bbox results of each class - """ - return self.aug_test_bboxes(feats, img_metas, rescale=rescale) diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_r101-d8_480x480_40k_pascal_context.py b/spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_r101-d8_480x480_40k_pascal_context.py deleted file mode 100644 index f3a15b41054318d508e98685632921f262029de0..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_r101-d8_480x480_40k_pascal_context.py +++ /dev/null @@ -1,2 +0,0 @@ -_base_ = './fcn_r50-d8_480x480_40k_pascal_context.py' -model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101)) diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/mobilenet_v3/lraspp_m-v3-d8_512x1024_320k_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/mobilenet_v3/lraspp_m-v3-d8_512x1024_320k_cityscapes.py deleted file mode 100644 index e59a78b48be3a0997a31524fd78e7fad5636bc82..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/mobilenet_v3/lraspp_m-v3-d8_512x1024_320k_cityscapes.py +++ /dev/null @@ -1,11 +0,0 @@ -_base_ = [ - '../_base_/models/lraspp_m-v3-d8.py', '../_base_/datasets/cityscapes.py', - '../_base_/default_runtime.py', '../_base_/schedules/schedule_160k.py' -] - -model = dict(pretrained='open-mmlab://contrib/mobilenet_v3_large') - -# Re-config the data sampler. 
-data = dict(samples_per_gpu=4, workers_per_gpu=4) - -runner = dict(type='IterBasedRunner', max_iters=320000) diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/psanet/psanet_r50-d8_769x769_40k_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/psanet/psanet_r50-d8_769x769_40k_cityscapes.py deleted file mode 100644 index 690f8b5ef359be8a8be3a2d768aede24216a8706..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/psanet/psanet_r50-d8_769x769_40k_cityscapes.py +++ /dev/null @@ -1,9 +0,0 @@ -_base_ = [ - '../_base_/models/psanet_r50-d8.py', - '../_base_/datasets/cityscapes_769x769.py', '../_base_/default_runtime.py', - '../_base_/schedules/schedule_40k.py' -] -model = dict( - decode_head=dict(align_corners=True), - auxiliary_head=dict(align_corners=True), - test_cfg=dict(mode='slide', crop_size=(769, 769), stride=(513, 513))) diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmseg/utils/__init__.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmseg/utils/__init__.py deleted file mode 100644 index ac489e2dbbc0e6fa87f5088b4edcc20f8cadc1a6..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmseg/utils/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -from .collect_env import collect_env -from .logger import get_root_logger - -__all__ = ['get_root_logger', 'collect_env'] diff --git a/spaces/Anonymous-sub/Rerender/gmflow_module/gmflow/trident_conv.py b/spaces/Anonymous-sub/Rerender/gmflow_module/gmflow/trident_conv.py deleted file mode 100644 index 29a2a73e964a88b68bc095772d9c3cc443e3e0fe..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-sub/Rerender/gmflow_module/gmflow/trident_conv.py +++ /dev/null @@ -1,90 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-# https://github.com/facebookresearch/detectron2/blob/main/projects/TridentNet/tridentnet/trident_conv.py - -import torch -from torch import nn -from torch.nn import functional as F -from torch.nn.modules.utils import _pair - - -class MultiScaleTridentConv(nn.Module): - def __init__( - self, - in_channels, - out_channels, - kernel_size, - stride=1, - strides=1, - paddings=0, - dilations=1, - dilation=1, - groups=1, - num_branch=1, - test_branch_idx=-1, - bias=False, - norm=None, - activation=None, - ): - super(MultiScaleTridentConv, self).__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.kernel_size = _pair(kernel_size) - self.num_branch = num_branch - self.stride = _pair(stride) - self.groups = groups - self.with_bias = bias - self.dilation = dilation - if isinstance(paddings, int): - paddings = [paddings] * self.num_branch - if isinstance(dilations, int): - dilations = [dilations] * self.num_branch - if isinstance(strides, int): - strides = [strides] * self.num_branch - self.paddings = [_pair(padding) for padding in paddings] - self.dilations = [_pair(dilation) for dilation in dilations] - self.strides = [_pair(stride) for stride in strides] - self.test_branch_idx = test_branch_idx - self.norm = norm - self.activation = activation - - assert len({self.num_branch, len(self.paddings), len(self.strides)}) == 1 - - self.weight = nn.Parameter( - torch.Tensor(out_channels, in_channels // groups, *self.kernel_size) - ) - if bias: - self.bias = nn.Parameter(torch.Tensor(out_channels)) - else: - self.bias = None - - nn.init.kaiming_uniform_(self.weight, nonlinearity="relu") - if self.bias is not None: - nn.init.constant_(self.bias, 0) - - def forward(self, inputs): - num_branch = self.num_branch if self.training or self.test_branch_idx == -1 else 1 - assert len(inputs) == num_branch - - if self.training or self.test_branch_idx == -1: - outputs = [ - F.conv2d(input, self.weight, self.bias, stride, padding, self.dilation, self.groups) - for input, stride, padding in zip(inputs, self.strides, self.paddings) - ] - else: - outputs = [ - F.conv2d( - inputs[0], - self.weight, - self.bias, - self.strides[self.test_branch_idx] if self.test_branch_idx == -1 else self.strides[-1], - self.paddings[self.test_branch_idx] if self.test_branch_idx == -1 else self.paddings[-1], - self.dilation, - self.groups, - ) - ] - - if self.norm is not None: - outputs = [self.norm(x) for x in outputs] - if self.activation is not None: - outputs = [self.activation(x) for x in outputs] - return outputs diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/data/datasets/cityscapes.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/data/datasets/cityscapes.py deleted file mode 100644 index 1e84a5bdb3d4e410d8eef4b80a5d4c099a180104..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/data/datasets/cityscapes.py +++ /dev/null @@ -1,329 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-import functools -import json -import logging -import multiprocessing as mp -import numpy as np -import os -from itertools import chain -import pycocotools.mask as mask_util -from PIL import Image - -from detectron2.structures import BoxMode -from detectron2.utils.comm import get_world_size -from detectron2.utils.file_io import PathManager -from detectron2.utils.logger import setup_logger - -try: - import cv2 # noqa -except ImportError: - # OpenCV is an optional dependency at the moment - pass - - -logger = logging.getLogger(__name__) - - -def _get_cityscapes_files(image_dir, gt_dir): - files = [] - # scan through the directory - cities = PathManager.ls(image_dir) - logger.info(f"{len(cities)} cities found in '{image_dir}'.") - for city in cities: - city_img_dir = os.path.join(image_dir, city) - city_gt_dir = os.path.join(gt_dir, city) - for basename in PathManager.ls(city_img_dir): - image_file = os.path.join(city_img_dir, basename) - - suffix = "leftImg8bit.png" - assert basename.endswith(suffix), basename - basename = basename[: -len(suffix)] - - instance_file = os.path.join(city_gt_dir, basename + "gtFine_instanceIds.png") - label_file = os.path.join(city_gt_dir, basename + "gtFine_labelIds.png") - json_file = os.path.join(city_gt_dir, basename + "gtFine_polygons.json") - - files.append((image_file, instance_file, label_file, json_file)) - assert len(files), "No images found in {}".format(image_dir) - for f in files[0]: - assert PathManager.isfile(f), f - return files - - -def load_cityscapes_instances(image_dir, gt_dir, from_json=True, to_polygons=True): - """ - Args: - image_dir (str): path to the raw dataset. e.g., "~/cityscapes/leftImg8bit/train". - gt_dir (str): path to the raw annotations. e.g., "~/cityscapes/gtFine/train". - from_json (bool): whether to read annotations from the raw json file or the png files. - to_polygons (bool): whether to represent the segmentation as polygons - (COCO's format) instead of masks (cityscapes's format). - - Returns: - list[dict]: a list of dicts in Detectron2 standard format. (See - `Using Custom Datasets `_ ) - """ - if from_json: - assert to_polygons, ( - "Cityscapes's json annotations are in polygon format. " - "Converting to mask format is not supported now." - ) - files = _get_cityscapes_files(image_dir, gt_dir) - - logger.info("Preprocessing cityscapes annotations ...") - # This is still not fast: all workers will execute duplicate works and will - # take up to 10m on a 8GPU server. - pool = mp.Pool(processes=max(mp.cpu_count() // get_world_size() // 2, 4)) - - ret = pool.map( - functools.partial(_cityscapes_files_to_dict, from_json=from_json, to_polygons=to_polygons), - files, - ) - logger.info("Loaded {} images from {}".format(len(ret), image_dir)) - - # Map cityscape ids to contiguous ids - from cityscapesscripts.helpers.labels import labels - - labels = [l for l in labels if l.hasInstances and not l.ignoreInEval] - dataset_id_to_contiguous_id = {l.id: idx for idx, l in enumerate(labels)} - for dict_per_image in ret: - for anno in dict_per_image["annotations"]: - anno["category_id"] = dataset_id_to_contiguous_id[anno["category_id"]] - return ret - - -def load_cityscapes_semantic(image_dir, gt_dir): - """ - Args: - image_dir (str): path to the raw dataset. e.g., "~/cityscapes/leftImg8bit/train". - gt_dir (str): path to the raw annotations. e.g., "~/cityscapes/gtFine/train". - - Returns: - list[dict]: a list of dict, each has "file_name" and - "sem_seg_file_name". - """ - ret = [] - # gt_dir is small and contain many small files. 
make sense to fetch to local first - gt_dir = PathManager.get_local_path(gt_dir) - for image_file, _, label_file, json_file in _get_cityscapes_files(image_dir, gt_dir): - label_file = label_file.replace("labelIds", "labelTrainIds") - - with PathManager.open(json_file, "r") as f: - jsonobj = json.load(f) - ret.append( - { - "file_name": image_file, - "sem_seg_file_name": label_file, - "height": jsonobj["imgHeight"], - "width": jsonobj["imgWidth"], - } - ) - assert len(ret), f"No images found in {image_dir}!" - assert PathManager.isfile( - ret[0]["sem_seg_file_name"] - ), "Please generate labelTrainIds.png with cityscapesscripts/preparation/createTrainIdLabelImgs.py" # noqa - return ret - - -def _cityscapes_files_to_dict(files, from_json, to_polygons): - """ - Parse cityscapes annotation files to a instance segmentation dataset dict. - - Args: - files (tuple): consists of (image_file, instance_id_file, label_id_file, json_file) - from_json (bool): whether to read annotations from the raw json file or the png files. - to_polygons (bool): whether to represent the segmentation as polygons - (COCO's format) instead of masks (cityscapes's format). - - Returns: - A dict in Detectron2 Dataset format. - """ - from cityscapesscripts.helpers.labels import id2label, name2label - - image_file, instance_id_file, _, json_file = files - - annos = [] - - if from_json: - from shapely.geometry import MultiPolygon, Polygon - - with PathManager.open(json_file, "r") as f: - jsonobj = json.load(f) - ret = { - "file_name": image_file, - "image_id": os.path.basename(image_file), - "height": jsonobj["imgHeight"], - "width": jsonobj["imgWidth"], - } - - # `polygons_union` contains the union of all valid polygons. - polygons_union = Polygon() - - # CityscapesScripts draw the polygons in sequential order - # and each polygon *overwrites* existing ones. See - # (https://github.com/mcordts/cityscapesScripts/blob/master/cityscapesscripts/preparation/json2instanceImg.py) # noqa - # We use reverse order, and each polygon *avoids* early ones. - # This will resolve the ploygon overlaps in the same way as CityscapesScripts. - for obj in jsonobj["objects"][::-1]: - if "deleted" in obj: # cityscapes data format specific - continue - label_name = obj["label"] - - try: - label = name2label[label_name] - except KeyError: - if label_name.endswith("group"): # crowd area - label = name2label[label_name[: -len("group")]] - else: - raise - if label.id < 0: # cityscapes data format - continue - - # Cityscapes's raw annotations uses integer coordinates - # Therefore +0.5 here - poly_coord = np.asarray(obj["polygon"], dtype="f4") + 0.5 - # CityscapesScript uses PIL.ImageDraw.polygon to rasterize - # polygons for evaluation. This function operates in integer space - # and draws each pixel whose center falls into the polygon. - # Therefore it draws a polygon which is 0.5 "fatter" in expectation. - # We therefore dilate the input polygon by 0.5 as our input. 
- poly = Polygon(poly_coord).buffer(0.5, resolution=4) - - if not label.hasInstances or label.ignoreInEval: - # even if we won't store the polygon it still contributes to overlaps resolution - polygons_union = polygons_union.union(poly) - continue - - # Take non-overlapping part of the polygon - poly_wo_overlaps = poly.difference(polygons_union) - if poly_wo_overlaps.is_empty: - continue - polygons_union = polygons_union.union(poly) - - anno = {} - anno["iscrowd"] = label_name.endswith("group") - anno["category_id"] = label.id - - if isinstance(poly_wo_overlaps, Polygon): - poly_list = [poly_wo_overlaps] - elif isinstance(poly_wo_overlaps, MultiPolygon): - poly_list = poly_wo_overlaps.geoms - else: - raise NotImplementedError("Unknown geometric structure {}".format(poly_wo_overlaps)) - - poly_coord = [] - for poly_el in poly_list: - # COCO API can work only with exterior boundaries now, hence we store only them. - # TODO: store both exterior and interior boundaries once other parts of the - # codebase support holes in polygons. - poly_coord.append(list(chain(*poly_el.exterior.coords))) - anno["segmentation"] = poly_coord - (xmin, ymin, xmax, ymax) = poly_wo_overlaps.bounds - - anno["bbox"] = (xmin, ymin, xmax, ymax) - anno["bbox_mode"] = BoxMode.XYXY_ABS - - annos.append(anno) - else: - # See also the official annotation parsing scripts at - # https://github.com/mcordts/cityscapesScripts/blob/master/cityscapesscripts/evaluation/instances2dict.py # noqa - with PathManager.open(instance_id_file, "rb") as f: - inst_image = np.asarray(Image.open(f), order="F") - # ids < 24 are stuff labels (filtering them first is about 5% faster) - flattened_ids = np.unique(inst_image[inst_image >= 24]) - - ret = { - "file_name": image_file, - "image_id": os.path.basename(image_file), - "height": inst_image.shape[0], - "width": inst_image.shape[1], - } - - for instance_id in flattened_ids: - # For non-crowd annotations, instance_id // 1000 is the label_id - # Crowd annotations have <1000 instance ids - label_id = instance_id // 1000 if instance_id >= 1000 else instance_id - label = id2label[label_id] - if not label.hasInstances or label.ignoreInEval: - continue - - anno = {} - anno["iscrowd"] = instance_id < 1000 - anno["category_id"] = label.id - - mask = np.asarray(inst_image == instance_id, dtype=np.uint8, order="F") - - inds = np.nonzero(mask) - ymin, ymax = inds[0].min(), inds[0].max() - xmin, xmax = inds[1].min(), inds[1].max() - anno["bbox"] = (xmin, ymin, xmax, ymax) - if xmax <= xmin or ymax <= ymin: - continue - anno["bbox_mode"] = BoxMode.XYXY_ABS - if to_polygons: - # This conversion comes from D4809743 and D5171122, - # when Mask-RCNN was first developed. - contours = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[ - -2 - ] - polygons = [c.reshape(-1).tolist() for c in contours if len(c) >= 3] - # opencv's can produce invalid polygons - if len(polygons) == 0: - continue - anno["segmentation"] = polygons - else: - anno["segmentation"] = mask_util.encode(mask[:, :, None])[0] - annos.append(anno) - ret["annotations"] = annos - return ret - - -if __name__ == "__main__": - """ - Test the cityscapes dataset loader. 
- - Usage: - python -m detectron2.data.datasets.cityscapes \ - cityscapes/leftImg8bit/train cityscapes/gtFine/train - """ - import argparse - - parser = argparse.ArgumentParser() - parser.add_argument("image_dir") - parser.add_argument("gt_dir") - parser.add_argument("--type", choices=["instance", "semantic"], default="instance") - args = parser.parse_args() - from detectron2.data.catalog import Metadata - from detectron2.utils.visualizer import Visualizer - from cityscapesscripts.helpers.labels import labels - - logger = setup_logger(name=__name__) - - dirname = "cityscapes-data-vis" - os.makedirs(dirname, exist_ok=True) - - if args.type == "instance": - dicts = load_cityscapes_instances( - args.image_dir, args.gt_dir, from_json=True, to_polygons=True - ) - logger.info("Done loading {} samples.".format(len(dicts))) - - thing_classes = [k.name for k in labels if k.hasInstances and not k.ignoreInEval] - meta = Metadata().set(thing_classes=thing_classes) - - else: - dicts = load_cityscapes_semantic(args.image_dir, args.gt_dir) - logger.info("Done loading {} samples.".format(len(dicts))) - - stuff_classes = [k.name for k in labels if k.trainId != 255] - stuff_colors = [k.color for k in labels if k.trainId != 255] - meta = Metadata().set(stuff_classes=stuff_classes, stuff_colors=stuff_colors) - - for d in dicts: - img = np.array(Image.open(PathManager.open(d["file_name"], "rb"))) - visualizer = Visualizer(img, metadata=meta) - vis = visualizer.draw_dataset_dict(d) - # cv2.imshow("a", vis.get_image()[:, :, ::-1]) - # cv2.waitKey() - fpath = os.path.join(dirname, os.path.basename(d["file_name"])) - vis.save(fpath) diff --git a/spaces/BeeMon/dreambooth-training/train_dreambooth.py b/spaces/BeeMon/dreambooth-training/train_dreambooth.py deleted file mode 100644 index f4ff135e549f0d6c72f733092f3df817cb178e01..0000000000000000000000000000000000000000 --- a/spaces/BeeMon/dreambooth-training/train_dreambooth.py +++ /dev/null @@ -1,889 +0,0 @@ -import argparse -import itertools -import math -import os -from pathlib import Path -from typing import Optional -import subprocess -import sys -import gc -import random - -import torch -import torch.nn.functional as F -import torch.utils.checkpoint -from torch.utils.data import Dataset - -from accelerate import Accelerator -from accelerate.logging import get_logger -from accelerate.utils import set_seed -from diffusers import AutoencoderKL, DDPMScheduler, StableDiffusionPipeline, UNet2DConditionModel -from diffusers.utils.import_utils import is_xformers_available -from diffusers.optimization import get_scheduler -from huggingface_hub import HfFolder, Repository, whoami -from PIL import Image -from torchvision import transforms -from tqdm.auto import tqdm -from transformers import CLIPTextModel, CLIPTokenizer - - -logger = get_logger(__name__) - - -def parse_args(): - parser = argparse.ArgumentParser(description="Simple example of a training script.") - parser.add_argument( - "--pretrained_model_name_or_path", - type=str, - default=None, - #required=True, - help="Path to pretrained model or model identifier from huggingface.co/models.", - ) - parser.add_argument( - "--tokenizer_name", - type=str, - default=None, - help="Pretrained tokenizer name or path if not the same as model_name", - ) - parser.add_argument( - "--instance_data_dir", - type=str, - default=None, - #required=True, - help="A folder containing the training data of instance images.", - ) - parser.add_argument( - "--class_data_dir", - type=str, - default=None, - #required=False, - help="A folder 
containing the training data of class images.", - ) - parser.add_argument( - "--instance_prompt", - type=str, - default=None, - help="The prompt with identifier specifying the instance", - ) - parser.add_argument( - "--class_prompt", - type=str, - default="", - help="The prompt to specify images in the same class as provided instance images.", - ) - parser.add_argument( - "--with_prior_preservation", - default=False, - action="store_true", - help="Flag to add prior preservation loss.", - ) - parser.add_argument("--prior_loss_weight", type=float, default=1.0, help="The weight of prior preservation loss.") - parser.add_argument( - "--num_class_images", - type=int, - default=100, - help=( - "Minimal class images for prior preservation loss. If not have enough images, additional images will be" - " sampled with class_prompt." - ), - ) - parser.add_argument( - "--output_dir", - type=str, - default="", - help="The output directory where the model predictions and checkpoints will be written.", - ) - parser.add_argument("--seed", type=int, default=None, help="A seed for reproducible training.") - parser.add_argument( - "--resolution", - type=int, - default=512, - help=( - "The resolution for input images, all the images in the train/validation dataset will be resized to this" - " resolution" - ), - ) - parser.add_argument( - "--center_crop", action="store_true", help="Whether to center crop images before resizing to resolution" - ) - parser.add_argument("--train_text_encoder", action="store_true", help="Whether to train the text encoder") - parser.add_argument( - "--train_batch_size", type=int, default=4, help="Batch size (per device) for the training dataloader." - ) - parser.add_argument( - "--sample_batch_size", type=int, default=4, help="Batch size (per device) for sampling images." - ) - parser.add_argument("--num_train_epochs", type=int, default=1) - parser.add_argument( - "--max_train_steps", - type=int, - default=None, - help="Total number of training steps to perform. If provided, overrides num_train_epochs.", - ) - parser.add_argument( - "--gradient_accumulation_steps", - type=int, - default=1, - help="Number of updates steps to accumulate before performing a backward/update pass.", - ) - parser.add_argument( - "--gradient_checkpointing", - action="store_true", - help="Whether or not to use gradient checkpointing to save memory at the expense of slower backward pass.", - ) - parser.add_argument( - "--learning_rate", - type=float, - default=5e-6, - help="Initial learning rate (after the potential warmup period) to use.", - ) - parser.add_argument( - "--scale_lr", - action="store_true", - default=False, - help="Scale the learning rate by the number of GPUs, gradient accumulation steps, and batch size.", - ) - parser.add_argument( - "--lr_scheduler", - type=str, - default="constant", - help=( - 'The scheduler type to use. Choose between ["linear", "cosine", "cosine_with_restarts", "polynomial",' - ' "constant", "constant_with_warmup"]' - ), - ) - parser.add_argument( - "--lr_warmup_steps", type=int, default=500, help="Number of steps for the warmup in the lr scheduler." - ) - parser.add_argument( - "--use_8bit_adam", action="store_true", help="Whether or not to use 8-bit Adam from bitsandbytes." 
- ) - parser.add_argument("--adam_beta1", type=float, default=0.9, help="The beta1 parameter for the Adam optimizer.") - parser.add_argument("--adam_beta2", type=float, default=0.999, help="The beta2 parameter for the Adam optimizer.") - parser.add_argument("--adam_weight_decay", type=float, default=1e-2, help="Weight decay to use.") - parser.add_argument("--adam_epsilon", type=float, default=1e-08, help="Epsilon value for the Adam optimizer") - parser.add_argument("--max_grad_norm", default=1.0, type=float, help="Max gradient norm.") - parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.") - parser.add_argument("--hub_token", type=str, default=None, help="The token to use to push to the Model Hub.") - parser.add_argument( - "--hub_model_id", - type=str, - default=None, - help="The name of the repository to keep in sync with the local `output_dir`.", - ) - parser.add_argument( - "--logging_dir", - type=str, - default="logs", - help=( - "[TensorBoard](https://www.tensorflow.org/tensorboard) log directory. Will default to" - " *output_dir/runs/**CURRENT_DATETIME_HOSTNAME***." - ), - ) - parser.add_argument( - "--mixed_precision", - type=str, - default="no", - choices=["no", "fp16", "bf16"], - help=( - "Whether to use mixed precision. Choose" - "between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >= 1.10." - "and an Nvidia Ampere GPU." - ), - ) - - parser.add_argument( - "--save_n_steps", - type=int, - default=1, - help=("Save the model every n global_steps"), - ) - - - parser.add_argument( - "--save_starting_step", - type=int, - default=1, - help=("The step from which it starts saving intermediary checkpoints"), - ) - - parser.add_argument( - "--stop_text_encoder_training", - type=int, - default=1000000, - help=("The step at which the text_encoder is no longer trained"), - ) - - - parser.add_argument( - "--image_captions_filename", - action="store_true", - help="Get captions from filename", - ) - - - parser.add_argument( - "--dump_only_text_encoder", - action="store_true", - default=False, - help="Dump only text encoder", - ) - - parser.add_argument( - "--train_only_unet", - action="store_true", - default=False, - help="Train only the unet", - ) - - parser.add_argument( - "--cache_latents", - action="store_true", - default=False, - help="Train only the unet", - ) - - parser.add_argument( - "--Session_dir", - type=str, - default="", - help="Current session directory", - ) - - - - - parser.add_argument("--local_rank", type=int, default=-1, help="For distributed training: local_rank") - - args = parser.parse_args() - env_local_rank = int(os.environ.get("LOCAL_RANK", -1)) - if env_local_rank != -1 and env_local_rank != args.local_rank: - args.local_rank = env_local_rank - - #if args.instance_data_dir is None: - # raise ValueError("You must specify a train data directory.") - - #if args.with_prior_preservation: - # if args.class_data_dir is None: - # raise ValueError("You must specify a data directory for class images.") - # if args.class_prompt is None: - # raise ValueError("You must specify prompt for class images.") - - return args - - -class DreamBoothDataset(Dataset): - """ - A dataset to prepare the instance and class images with the prompts for fine-tuning the model. - It pre-processes the images and the tokenizes prompts. 
- """ - - def __init__( - self, - instance_data_root, - instance_prompt, - tokenizer, - args, - class_data_root=None, - class_prompt=None, - size=512, - center_crop=False, - ): - self.size = size - self.center_crop = center_crop - self.tokenizer = tokenizer - self.image_captions_filename = None - - self.instance_data_root = Path(instance_data_root) - if not self.instance_data_root.exists(): - raise ValueError("Instance images root doesn't exists.") - - self.instance_images_path = list(Path(instance_data_root).iterdir()) - self.num_instance_images = len(self.instance_images_path) - self.instance_prompt = instance_prompt - self._length = self.num_instance_images - - if args.image_captions_filename: - self.image_captions_filename = True - - if class_data_root is not None: - self.class_data_root = Path(class_data_root) - self.class_data_root.mkdir(parents=True, exist_ok=True) - self.class_images_path = list(self.class_data_root.iterdir()) - random.shuffle(self.class_images_path) - self.num_class_images = len(self.class_images_path) - self._length = max(self.num_class_images, self.num_instance_images) - self.class_prompt = class_prompt - else: - self.class_data_root = None - - self.image_transforms = transforms.Compose( - [ - transforms.Resize(size, interpolation=transforms.InterpolationMode.BILINEAR), - transforms.CenterCrop(size) if center_crop else transforms.RandomCrop(size), - transforms.ToTensor(), - transforms.Normalize([0.5], [0.5]), - ] - ) - - def __len__(self): - return self._length - - def __getitem__(self, index): - example = {} - path = self.instance_images_path[index % self.num_instance_images] - instance_image = Image.open(path) - if not instance_image.mode == "RGB": - instance_image = instance_image.convert("RGB") - - instance_prompt = self.instance_prompt - - if self.image_captions_filename: - filename = Path(path).stem - pt=''.join([i for i in filename if not i.isdigit()]) - pt=pt.replace("_"," ") - pt=pt.replace("(","") - pt=pt.replace(")","") - pt=pt.replace("-","") - instance_prompt = pt - sys.stdout.write(" " +instance_prompt+" ") - sys.stdout.flush() - - - example["instance_images"] = self.image_transforms(instance_image) - example["instance_prompt_ids"] = self.tokenizer( - instance_prompt, - padding="do_not_pad", - truncation=True, - max_length=self.tokenizer.model_max_length, - ).input_ids - - if self.class_data_root: - class_image = Image.open(self.class_images_path[index % self.num_class_images]) - if not class_image.mode == "RGB": - class_image = class_image.convert("RGB") - example["class_images"] = self.image_transforms(class_image) - example["class_prompt_ids"] = self.tokenizer( - self.class_prompt, - padding="do_not_pad", - truncation=True, - max_length=self.tokenizer.model_max_length, - ).input_ids - - return example - - - -class PromptDataset(Dataset): - "A simple dataset to prepare the prompts to generate class images on multiple GPUs." 
- - def __init__(self, prompt, num_samples): - self.prompt = prompt - self.num_samples = num_samples - - def __len__(self): - return self.num_samples - - def __getitem__(self, index): - example = {} - example["prompt"] = self.prompt - example["index"] = index - return example - -class LatentsDataset(Dataset): - def __init__(self, latents_cache, text_encoder_cache): - self.latents_cache = latents_cache - self.text_encoder_cache = text_encoder_cache - - def __len__(self): - return len(self.latents_cache) - - def __getitem__(self, index): - return self.latents_cache[index], self.text_encoder_cache[index] - -def get_full_repo_name(model_id: str, organization: Optional[str] = None, token: Optional[str] = None): - if token is None: - token = HfFolder.get_token() - if organization is None: - username = whoami(token)["name"] - return f"{username}/{model_id}" - else: - return f"{organization}/{model_id}" - -def merge_two_dicts(starting_dict: dict, updater_dict: dict) -> dict: - """ - Starts from base starting dict and then adds the remaining key values from updater replacing the values from - the first starting/base dict with the second updater dict. - - For later: how does d = {**d1, **d2} replace collision? - - :param starting_dict: - :param updater_dict: - :return: - """ - new_dict: dict = starting_dict.copy() # start with keys and values of starting_dict - new_dict.update(updater_dict) # modifies starting_dict with keys and values of updater_dict - return new_dict - -def merge_args(args1: argparse.Namespace, args2: argparse.Namespace) -> argparse.Namespace: - """ - - ref: https://stackoverflow.com/questions/56136549/how-can-i-merge-two-argparse-namespaces-in-python-2-x - :param args1: - :param args2: - :return: - """ - # - the merged args - # The vars() function returns the __dict__ attribute to values of the given object e.g {field:value}. - merged_key_values_for_namespace: dict = merge_two_dicts(vars(args1), vars(args2)) - args = argparse.Namespace(**merged_key_values_for_namespace) - return args - -def run_training(args_imported): - args_default = parse_args() - args = merge_args(args_default, args_imported) - print(args) - logging_dir = Path(args.output_dir, args.logging_dir) - i=args.save_starting_step - accelerator = Accelerator( - gradient_accumulation_steps=args.gradient_accumulation_steps, - mixed_precision=args.mixed_precision, - log_with="tensorboard", - logging_dir=logging_dir, - ) - - # Currently, it's not possible to do gradient accumulation when training two models with accelerate.accumulate - # This will be enabled soon in accelerate. For now, we don't allow gradient accumulation when training two models. - # TODO (patil-suraj): Remove this check when gradient accumulation with two models is enabled in accelerate. - if args.train_text_encoder and args.gradient_accumulation_steps > 1 and accelerator.num_processes > 1: - raise ValueError( - "Gradient accumulation is not supported when training the text encoder in distributed training. " - "Please set gradient_accumulation_steps to 1. This feature will be supported in the future." 
- ) - - if args.seed is not None: - set_seed(args.seed) - - if args.with_prior_preservation: - class_images_dir = Path(args.class_data_dir) - if not class_images_dir.exists(): - class_images_dir.mkdir(parents=True) - cur_class_images = len(list(class_images_dir.iterdir())) - - if cur_class_images < args.num_class_images: - torch_dtype = torch.float16 if accelerator.device.type == "cuda" else torch.float32 - pipeline = StableDiffusionPipeline.from_pretrained( - args.pretrained_model_name_or_path, torch_dtype=torch_dtype - ) - pipeline.set_progress_bar_config(disable=True) - - num_new_images = args.num_class_images - cur_class_images - logger.info(f"Number of class images to sample: {num_new_images}.") - - sample_dataset = PromptDataset(args.class_prompt, num_new_images) - sample_dataloader = torch.utils.data.DataLoader(sample_dataset, batch_size=args.sample_batch_size) - - sample_dataloader = accelerator.prepare(sample_dataloader) - pipeline.to(accelerator.device) - - for example in tqdm( - sample_dataloader, desc="Generating class images", disable=not accelerator.is_local_main_process - ): - with torch.autocast("cuda"): - images = pipeline(example["prompt"]).images - - for i, image in enumerate(images): - image.save(class_images_dir / f"{example['index'][i] + cur_class_images}.jpg") - - del pipeline - if torch.cuda.is_available(): - torch.cuda.empty_cache() - - # Handle the repository creation - if accelerator.is_main_process: - if args.push_to_hub: - if args.hub_model_id is None: - repo_name = get_full_repo_name(Path(args.output_dir).name, token=args.hub_token) - else: - repo_name = args.hub_model_id - repo = Repository(args.output_dir, clone_from=repo_name) - - with open(os.path.join(args.output_dir, ".gitignore"), "w+") as gitignore: - if "step_*" not in gitignore: - gitignore.write("step_*\n") - if "epoch_*" not in gitignore: - gitignore.write("epoch_*\n") - elif args.output_dir is not None: - os.makedirs(args.output_dir, exist_ok=True) - - # Load the tokenizer - if args.tokenizer_name: - tokenizer = CLIPTokenizer.from_pretrained(args.tokenizer_name) - elif args.pretrained_model_name_or_path: - tokenizer = CLIPTokenizer.from_pretrained(args.pretrained_model_name_or_path, subfolder="tokenizer") - - # Load models and create wrapper for stable diffusion - if args.train_only_unet: - if os.path.exists(str(args.output_dir+"/text_encoder_trained")): - text_encoder = CLIPTextModel.from_pretrained(args.output_dir, subfolder="text_encoder_trained") - elif os.path.exists(str(args.output_dir+"/text_encoder")): - text_encoder = CLIPTextModel.from_pretrained(args.output_dir, subfolder="text_encoder") - else: - text_encoder = CLIPTextModel.from_pretrained(args.pretrained_model_name_or_path, subfolder="text_encoder") - else: - text_encoder = CLIPTextModel.from_pretrained(args.pretrained_model_name_or_path, subfolder="text_encoder") - vae = AutoencoderKL.from_pretrained(args.pretrained_model_name_or_path, subfolder="vae") - unet = UNet2DConditionModel.from_pretrained(args.pretrained_model_name_or_path, subfolder="unet") - if is_xformers_available(): - try: - print("Enabling memory efficient attention with xformers...") - unet.enable_xformers_memory_efficient_attention() - except Exception as e: - logger.warning( - f"Could not enable memory efficient attention. 
Make sure xformers is installed correctly and a GPU is available: {e}" - ) - vae.requires_grad_(False) - if not args.train_text_encoder: - text_encoder.requires_grad_(False) - - if args.gradient_checkpointing: - unet.enable_gradient_checkpointing() - if args.train_text_encoder: - text_encoder.gradient_checkpointing_enable() - - if args.scale_lr: - args.learning_rate = ( - args.learning_rate * args.gradient_accumulation_steps * args.train_batch_size * accelerator.num_processes - ) - - # Use 8-bit Adam for lower memory usage or to fine-tune the model in 16GB GPUs - if args.use_8bit_adam: - try: - import bitsandbytes as bnb - except ImportError: - raise ImportError( - "To use 8-bit Adam, please install the bitsandbytes library: `pip install bitsandbytes`." - ) - - optimizer_class = bnb.optim.AdamW8bit - else: - optimizer_class = torch.optim.AdamW - - params_to_optimize = ( - itertools.chain(unet.parameters(), text_encoder.parameters()) if args.train_text_encoder else unet.parameters() - ) - optimizer = optimizer_class( - params_to_optimize, - lr=args.learning_rate, - betas=(args.adam_beta1, args.adam_beta2), - weight_decay=args.adam_weight_decay, - eps=args.adam_epsilon, - ) - - noise_scheduler = DDPMScheduler.from_config(args.pretrained_model_name_or_path, subfolder="scheduler") - - train_dataset = DreamBoothDataset( - instance_data_root=args.instance_data_dir, - instance_prompt=args.instance_prompt, - class_data_root=args.class_data_dir if args.with_prior_preservation else None, - class_prompt=args.class_prompt, - tokenizer=tokenizer, - size=args.resolution, - center_crop=args.center_crop, - args=args, - ) - - def collate_fn(examples): - input_ids = [example["instance_prompt_ids"] for example in examples] - pixel_values = [example["instance_images"] for example in examples] - - # Concat class and instance examples for prior preservation. - # We do this to avoid doing two forward passes. - if args.with_prior_preservation: - input_ids += [example["class_prompt_ids"] for example in examples] - pixel_values += [example["class_images"] for example in examples] - - pixel_values = torch.stack(pixel_values) - pixel_values = pixel_values.to(memory_format=torch.contiguous_format).float() - - input_ids = tokenizer.pad({"input_ids": input_ids}, padding=True, return_tensors="pt").input_ids - - batch = { - "input_ids": input_ids, - "pixel_values": pixel_values, - } - return batch - - train_dataloader = torch.utils.data.DataLoader( - train_dataset, batch_size=args.train_batch_size, shuffle=True, collate_fn=collate_fn - ) - - # Scheduler and math around the number of training steps. 
- overrode_max_train_steps = False - num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) - if args.max_train_steps is None: - args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch - overrode_max_train_steps = True - - lr_scheduler = get_scheduler( - args.lr_scheduler, - optimizer=optimizer, - num_warmup_steps=args.lr_warmup_steps * args.gradient_accumulation_steps, - num_training_steps=args.max_train_steps * args.gradient_accumulation_steps, - ) - - if args.train_text_encoder: - unet, text_encoder, optimizer, train_dataloader, lr_scheduler = accelerator.prepare( - unet, text_encoder, optimizer, train_dataloader, lr_scheduler - ) - else: - unet, optimizer, train_dataloader, lr_scheduler = accelerator.prepare( - unet, optimizer, train_dataloader, lr_scheduler - ) - - weight_dtype = torch.float32 - if args.mixed_precision == "fp16": - weight_dtype = torch.float16 - elif args.mixed_precision == "bf16": - weight_dtype = torch.bfloat16 - - # Move text_encode and vae to gpu. - # For mixed precision training we cast the text_encoder and vae weights to half-precision - # as these models are only used for inference, keeping weights in full precision is not required. - vae.to(accelerator.device, dtype=weight_dtype) - if not args.train_text_encoder: - text_encoder.to(accelerator.device, dtype=weight_dtype) - - - if args.cache_latents: - latents_cache = [] - text_encoder_cache = [] - for batch in tqdm(train_dataloader, desc="Caching latents"): - with torch.no_grad(): - batch["pixel_values"] = batch["pixel_values"].to(accelerator.device, non_blocking=True, dtype=weight_dtype) - batch["input_ids"] = batch["input_ids"].to(accelerator.device, non_blocking=True) - latents_cache.append(vae.encode(batch["pixel_values"]).latent_dist) - if args.train_text_encoder: - text_encoder_cache.append(batch["input_ids"]) - else: - text_encoder_cache.append(text_encoder(batch["input_ids"])[0]) - train_dataset = LatentsDataset(latents_cache, text_encoder_cache) - train_dataloader = torch.utils.data.DataLoader(train_dataset, batch_size=1, collate_fn=lambda x: x, shuffle=True) - - del vae - #if not args.train_text_encoder: - # del text_encoder - if torch.cuda.is_available(): - torch.cuda.empty_cache() - - # We need to recalculate our total training steps as the size of the training dataloader may have changed. - num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) - if overrode_max_train_steps: - args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch - # Afterwards we recalculate our number of training epochs - args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch) - - # We need to initialize the trackers we use, and also store our configuration. - # The trackers initializes automatically on the main process. - if accelerator.is_main_process: - accelerator.init_trackers("dreambooth", config=vars(args)) - - def bar(prg): - br='|'+'█' * prg + ' ' * (25-prg)+'|' - return br - - # Train! - total_batch_size = args.train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps - - logger.info("***** Running training *****") - logger.info(f" Num examples = {len(train_dataset)}") - logger.info(f" Num batches each epoch = {len(train_dataloader)}") - logger.info(f" Num Epochs = {args.num_train_epochs}") - logger.info(f" Instantaneous batch size per device = {args.train_batch_size}") - logger.info(f" Total train batch size (w. 
parallel, distributed & accumulation) = {total_batch_size}") - logger.info(f" Gradient Accumulation steps = {args.gradient_accumulation_steps}") - logger.info(f" Total optimization steps = {args.max_train_steps}") - # Only show the progress bar once on each machine. - progress_bar = tqdm(range(args.max_train_steps), disable=not accelerator.is_local_main_process) - global_step = 0 - - for epoch in range(args.num_train_epochs): - unet.train() - if args.train_text_encoder: - text_encoder.train() - for step, batch in enumerate(train_dataloader): - with accelerator.accumulate(unet): - # Convert images to latent space - with torch.no_grad(): - if args.cache_latents: - latents_dist = batch[0][0] - else: - latents_dist = vae.encode(batch["pixel_values"].to(dtype=weight_dtype)).latent_dist - latents = latents_dist.sample() * 0.18215 - - # Sample noise that we'll add to the latents - noise = torch.randn_like(latents) - bsz = latents.shape[0] - # Sample a random timestep for each image - timesteps = torch.randint(0, noise_scheduler.config.num_train_timesteps, (bsz,), device=latents.device) - timesteps = timesteps.long() - - # Add noise to the latents according to the noise magnitude at each timestep - # (this is the forward diffusion process) - noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps) - - # Get the text embedding for conditioning - if(args.cache_latents): - if args.train_text_encoder: - encoder_hidden_states = text_encoder(batch[0][1])[0] - else: - encoder_hidden_states = batch[0][1] - else: - encoder_hidden_states = text_encoder(batch["input_ids"])[0] - - # Predict the noise residual - model_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample - - # Get the target for loss depending on the prediction type - if noise_scheduler.config.prediction_type == "epsilon": - target = noise - elif noise_scheduler.config.prediction_type == "v_prediction": - target = noise_scheduler.get_velocity(latents, noise, timesteps) - else: - raise ValueError(f"Unknown prediction type {noise_scheduler.config.prediction_type}") - - if args.with_prior_preservation: - # Chunk the noise and model_pred into two parts and compute the loss on each part separately. - model_pred, model_pred_prior = torch.chunk(model_pred, 2, dim=0) - target, target_prior = torch.chunk(target, 2, dim=0) - - # Compute instance loss - loss = F.mse_loss(model_pred.float(), target.float(), reduction="none").mean([1, 2, 3]).mean() - - # Compute prior loss - prior_loss = F.mse_loss(model_pred_prior.float(), target_prior.float(), reduction="mean") - - # Add the prior loss to the instance loss. 
- loss = loss + args.prior_loss_weight * prior_loss - else: - loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean") - - accelerator.backward(loss) - if accelerator.sync_gradients: - params_to_clip = ( - itertools.chain(unet.parameters(), text_encoder.parameters()) - if args.train_text_encoder - else unet.parameters() - ) - accelerator.clip_grad_norm_(params_to_clip, args.max_grad_norm) - optimizer.step() - lr_scheduler.step() - optimizer.zero_grad() - - # Checks if the accelerator has performed an optimization step behind the scenes - if accelerator.sync_gradients: - progress_bar.update(1) - global_step += 1 - - fll=round((global_step*100)/args.max_train_steps) - fll=round(fll/4) - pr=bar(fll) - - logs = {"loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0]} - progress_bar.set_postfix(**logs) - progress_bar.set_description_str("Progress:"+pr) - accelerator.log(logs, step=global_step) - - if global_step >= args.max_train_steps: - break - - if args.train_text_encoder and global_step == args.stop_text_encoder_training and global_step >= 30: - if accelerator.is_main_process: - print(" " +" Freezing the text_encoder ..."+" ") - frz_dir=args.output_dir + "/text_encoder_frozen" - if os.path.exists(frz_dir): - subprocess.call('rm -r '+ frz_dir, shell=True) - os.mkdir(frz_dir) - pipeline = StableDiffusionPipeline.from_pretrained( - args.pretrained_model_name_or_path, - unet=accelerator.unwrap_model(unet), - text_encoder=accelerator.unwrap_model(text_encoder), - ) - pipeline.text_encoder.save_pretrained(frz_dir) - - if args.save_n_steps >= 200: - if global_step < args.max_train_steps and global_step+1==i: - ckpt_name = "_step_" + str(global_step+1) - save_dir = Path(args.output_dir+ckpt_name) - save_dir=str(save_dir) - save_dir=save_dir.replace(" ", "_") - if not os.path.exists(save_dir): - os.mkdir(save_dir) - inst=save_dir[16:] - inst=inst.replace(" ", "_") - print(" SAVING CHECKPOINT: "+args.Session_dir+"/"+inst+".ckpt") - # Create the pipeline using the trained modules and save it. - if accelerator.is_main_process: - pipeline = StableDiffusionPipeline.from_pretrained( - args.pretrained_model_name_or_path, - unet=accelerator.unwrap_model(unet), - text_encoder=accelerator.unwrap_model(text_encoder), - ) - pipeline.save_pretrained(save_dir) - frz_dir=args.output_dir + "/text_encoder_frozen" - if args.train_text_encoder and os.path.exists(frz_dir): - subprocess.call('rm -r '+save_dir+'/text_encoder/*.*', shell=True) - subprocess.call('cp -f '+frz_dir +'/*.* '+ save_dir+'/text_encoder', shell=True) - chkpth=args.Session_dir+"/"+inst+".ckpt" - subprocess.call('python /content/diffusers/scripts/convert_diffusers_to_original_stable_diffusion.py --model_path ' + save_dir + ' --checkpoint_path ' + chkpth + ' --half', shell=True) - subprocess.call('rm -r '+ save_dir, shell=True) - i=i+args.save_n_steps - - accelerator.wait_for_everyone() - - # Create the pipeline using using the trained modules and save it. 
- if accelerator.is_main_process: - if args.dump_only_text_encoder: - txt_dir=args.output_dir + "/text_encoder_trained" - if not os.path.exists(txt_dir): - os.mkdir(txt_dir) - pipeline = StableDiffusionPipeline.from_pretrained( - args.pretrained_model_name_or_path, - unet=accelerator.unwrap_model(unet), - text_encoder=accelerator.unwrap_model(text_encoder), - ) - pipeline.text_encoder.save_pretrained(txt_dir) - - elif args.train_only_unet: - pipeline = StableDiffusionPipeline.from_pretrained( - args.pretrained_model_name_or_path, - unet=accelerator.unwrap_model(unet), - text_encoder=accelerator.unwrap_model(text_encoder), - ) - pipeline.save_pretrained(args.output_dir) - txt_dir=args.output_dir + "/text_encoder_trained" - subprocess.call('rm -r '+txt_dir, shell=True) - - else: - pipeline = StableDiffusionPipeline.from_pretrained( - args.pretrained_model_name_or_path, - unet=accelerator.unwrap_model(unet), - text_encoder=accelerator.unwrap_model(text_encoder), - ) - frz_dir=args.output_dir + "/text_encoder_frozen" - pipeline.save_pretrained(args.output_dir) - if args.train_text_encoder and os.path.exists(frz_dir): - subprocess.call('mv -f '+frz_dir +'/*.* '+ args.output_dir+'/text_encoder', shell=True) - subprocess.call('rm -r '+ frz_dir, shell=True) - - if args.push_to_hub: - repo.push_to_hub(commit_message="End of training", blocking=False, auto_lfs_prune=True) - - accelerator.end_training() - del pipeline - torch.cuda.empty_cache() - gc.collect() -if __name__ == "__main__": - pass - #main() - diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/vendored/requests/__init__.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/vendored/requests/__init__.py deleted file mode 100644 index 0ada6e0f4ce9dfcd0e902357606e48ba154e1862..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/vendored/requests/__init__.py +++ /dev/null @@ -1,10 +0,0 @@ -# -*- coding: utf-8 -*- - -# __ -# /__) _ _ _ _ _/ _ -# / ( (- (/ (/ (- _) / _) -# / -from .exceptions import ( - RequestException, Timeout, URLRequired, - TooManyRedirects, HTTPError, ConnectionError -) diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/cli/autocompletion.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/cli/autocompletion.py deleted file mode 100644 index 226fe84dc0d0c4eb78f9b3c603df20cef0fdfda4..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/cli/autocompletion.py +++ /dev/null @@ -1,171 +0,0 @@ -"""Logic that powers autocompletion installed by ``pip completion``. -""" - -import optparse -import os -import sys -from itertools import chain -from typing import Any, Iterable, List, Optional - -from pip._internal.cli.main_parser import create_main_parser -from pip._internal.commands import commands_dict, create_command -from pip._internal.metadata import get_default_environment - - -def autocomplete() -> None: - """Entry Point for completion of main and subcommand options.""" - # Don't complete if user hasn't sourced bash_completion file. 
- if "PIP_AUTO_COMPLETE" not in os.environ: - return - cwords = os.environ["COMP_WORDS"].split()[1:] - cword = int(os.environ["COMP_CWORD"]) - try: - current = cwords[cword - 1] - except IndexError: - current = "" - - parser = create_main_parser() - subcommands = list(commands_dict) - options = [] - - # subcommand - subcommand_name: Optional[str] = None - for word in cwords: - if word in subcommands: - subcommand_name = word - break - # subcommand options - if subcommand_name is not None: - # special case: 'help' subcommand has no options - if subcommand_name == "help": - sys.exit(1) - # special case: list locally installed dists for show and uninstall - should_list_installed = not current.startswith("-") and subcommand_name in [ - "show", - "uninstall", - ] - if should_list_installed: - env = get_default_environment() - lc = current.lower() - installed = [ - dist.canonical_name - for dist in env.iter_installed_distributions(local_only=True) - if dist.canonical_name.startswith(lc) - and dist.canonical_name not in cwords[1:] - ] - # if there are no dists installed, fall back to option completion - if installed: - for dist in installed: - print(dist) - sys.exit(1) - - should_list_installables = ( - not current.startswith("-") and subcommand_name == "install" - ) - if should_list_installables: - for path in auto_complete_paths(current, "path"): - print(path) - sys.exit(1) - - subcommand = create_command(subcommand_name) - - for opt in subcommand.parser.option_list_all: - if opt.help != optparse.SUPPRESS_HELP: - for opt_str in opt._long_opts + opt._short_opts: - options.append((opt_str, opt.nargs)) - - # filter out previously specified options from available options - prev_opts = [x.split("=")[0] for x in cwords[1 : cword - 1]] - options = [(x, v) for (x, v) in options if x not in prev_opts] - # filter options by current input - options = [(k, v) for k, v in options if k.startswith(current)] - # get completion type given cwords and available subcommand options - completion_type = get_path_completion_type( - cwords, - cword, - subcommand.parser.option_list_all, - ) - # get completion files and directories if ``completion_type`` is - # ````, ```` or ```` - if completion_type: - paths = auto_complete_paths(current, completion_type) - options = [(path, 0) for path in paths] - for option in options: - opt_label = option[0] - # append '=' to options which require args - if option[1] and option[0][:2] == "--": - opt_label += "=" - print(opt_label) - else: - # show main parser options only when necessary - - opts = [i.option_list for i in parser.option_groups] - opts.append(parser.option_list) - flattened_opts = chain.from_iterable(opts) - if current.startswith("-"): - for opt in flattened_opts: - if opt.help != optparse.SUPPRESS_HELP: - subcommands += opt._long_opts + opt._short_opts - else: - # get completion type given cwords and all available options - completion_type = get_path_completion_type(cwords, cword, flattened_opts) - if completion_type: - subcommands = list(auto_complete_paths(current, completion_type)) - - print(" ".join([x for x in subcommands if x.startswith(current)])) - sys.exit(1) - - -def get_path_completion_type( - cwords: List[str], cword: int, opts: Iterable[Any] -) -> Optional[str]: - """Get the type of path completion (``file``, ``dir``, ``path`` or None) - - :param cwords: same as the environmental variable ``COMP_WORDS`` - :param cword: same as the environmental variable ``COMP_CWORD`` - :param opts: The available options to check - :return: path completion type (``file``, 
``dir``, ``path`` or None) - """ - if cword < 2 or not cwords[cword - 2].startswith("-"): - return None - for opt in opts: - if opt.help == optparse.SUPPRESS_HELP: - continue - for o in str(opt).split("/"): - if cwords[cword - 2].split("=")[0] == o: - if not opt.metavar or any( - x in ("path", "file", "dir") for x in opt.metavar.split("/") - ): - return opt.metavar - return None - - -def auto_complete_paths(current: str, completion_type: str) -> Iterable[str]: - """If ``completion_type`` is ``file`` or ``path``, list all regular files - and directories starting with ``current``; otherwise only list directories - starting with ``current``. - - :param current: The word to be completed - :param completion_type: path completion type(``file``, ``path`` or ``dir``) - :return: A generator of regular files and/or directories - """ - directory, filename = os.path.split(current) - current_path = os.path.abspath(directory) - # Don't complete paths if they can't be accessed - if not os.access(current_path, os.R_OK): - return - filename = os.path.normcase(filename) - # list all files that start with ``filename`` - file_list = ( - x for x in os.listdir(current_path) if os.path.normcase(x).startswith(filename) - ) - for f in file_list: - opt = os.path.join(current_path, f) - comp_file = os.path.normcase(os.path.join(directory, f)) - # complete regular files when there is not ```` after option - # complete directories when there is ````, ```` or - # ````after option - if completion_type != "dir" and os.path.isfile(opt): - yield comp_file - elif os.path.isdir(opt): - yield os.path.join(comp_file, "") diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/parallel_for.h b/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/parallel_for.h deleted file mode 100644 index 17fa7e7a86b243c80e13bc6678e31c80ad1e3f5b..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/parallel_for.h +++ /dev/null @@ -1,178 +0,0 @@ -/****************************************************************************** - * Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions are met: - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in the - * documentation and/or other materials provided with the distribution. - * * Neither the name of the NVIDIA CORPORATION nor the - * names of its contributors may be used to endorse or promote products - * derived from this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" - * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE - * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE - * ARE DISCLAIMED. 
IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY - * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES - * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; - * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND - * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS - * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - * - ******************************************************************************/ -#pragma once - - -#if THRUST_DEVICE_COMPILER == THRUST_DEVICE_COMPILER_NVCC -#include - -#include -#include -#include -#include -#include - -namespace thrust -{ - -namespace cuda_cub { - -namespace __parallel_for { - - template - struct PtxPolicy - { - enum - { - BLOCK_THREADS = _BLOCK_THREADS, - ITEMS_PER_THREAD = _ITEMS_PER_THREAD, - ITEMS_PER_TILE = BLOCK_THREADS * ITEMS_PER_THREAD, - }; - }; // struct PtxPolicy - - template - struct Tuning; - - template - struct Tuning - { - typedef PtxPolicy<256, 2> type; - }; - - - template - struct ParallelForAgent - { - template - struct PtxPlan : Tuning::type - { - typedef Tuning tuning; - }; - typedef core::specialize_plan ptx_plan; - - enum - { - ITEMS_PER_THREAD = ptx_plan::ITEMS_PER_THREAD, - ITEMS_PER_TILE = ptx_plan::ITEMS_PER_TILE, - BLOCK_THREADS = ptx_plan::BLOCK_THREADS - }; - - template - static void THRUST_DEVICE_FUNCTION - consume_tile(F f, - Size tile_base, - int items_in_tile) - { -#pragma unroll - for (int ITEM = 0; ITEM < ITEMS_PER_THREAD; ++ITEM) - { - Size idx = BLOCK_THREADS * ITEM + threadIdx.x; - if (IS_FULL_TILE || idx < items_in_tile) - f(tile_base + idx); - } - } - - THRUST_AGENT_ENTRY(F f, - Size num_items, - char * /*shmem*/ ) - { - Size tile_base = static_cast(blockIdx.x) * ITEMS_PER_TILE; - Size num_remaining = num_items - tile_base; - Size items_in_tile = static_cast( - num_remaining < ITEMS_PER_TILE ? 
num_remaining : ITEMS_PER_TILE); - - if (items_in_tile == ITEMS_PER_TILE) - { - // full tile - consume_tile(f, tile_base, ITEMS_PER_TILE); - } - else - { - // partial tile - consume_tile(f, tile_base, items_in_tile); - } - } - }; // struct ParallelForEagent - - template - THRUST_RUNTIME_FUNCTION cudaError_t - parallel_for(Size num_items, - F f, - cudaStream_t stream) - { - if (num_items == 0) - return cudaSuccess; - using core::AgentLauncher; - using core::AgentPlan; - - bool debug_sync = THRUST_DEBUG_SYNC_FLAG; - - typedef AgentLauncher > parallel_for_agent; - AgentPlan parallel_for_plan = parallel_for_agent::get_plan(stream); - - parallel_for_agent pfa(parallel_for_plan, num_items, stream, "transform::agent", debug_sync); - pfa.launch(f, num_items); - CUDA_CUB_RET_IF_FAIL(cudaPeekAtLastError()); - - return cudaSuccess; - } -} // __parallel_for - -__thrust_exec_check_disable__ -template -void __host__ __device__ -parallel_for(execution_policy &policy, - F f, - Size count) -{ - if (count == 0) - return; - - if (__THRUST_HAS_CUDART__) - { - cudaStream_t stream = cuda_cub::stream(policy); - cudaError_t status = __parallel_for::parallel_for(count, f, stream); - cuda_cub::throw_on_error(status, "parallel_for failed"); - } - else - { -#if !__THRUST_HAS_CUDART__ - for (Size idx = 0; idx != count; ++idx) - f(idx); -#endif - } -} - -} // namespace cuda_cub - -} // end namespace thrust -#endif diff --git a/spaces/CVPR/lama-example/saicinpainting/evaluation/__init__.py b/spaces/CVPR/lama-example/saicinpainting/evaluation/__init__.py deleted file mode 100644 index e9c8117565b252ca069a808b31b8c52aaddd2289..0000000000000000000000000000000000000000 --- a/spaces/CVPR/lama-example/saicinpainting/evaluation/__init__.py +++ /dev/null @@ -1,33 +0,0 @@ -import logging - -import torch - -from saicinpainting.evaluation.evaluator import InpaintingEvaluatorOnline, ssim_fid100_f1, lpips_fid100_f1 -from saicinpainting.evaluation.losses.base_loss import SSIMScore, LPIPSScore, FIDScore - - -def make_evaluator(kind='default', ssim=True, lpips=True, fid=True, integral_kind=None, **kwargs): - logging.info(f'Make evaluator {kind}') - device = "cuda" if torch.cuda.is_available() else "cpu" - metrics = {} - if ssim: - metrics['ssim'] = SSIMScore() - if lpips: - metrics['lpips'] = LPIPSScore() - if fid: - metrics['fid'] = FIDScore().to(device) - - if integral_kind is None: - integral_func = None - elif integral_kind == 'ssim_fid100_f1': - integral_func = ssim_fid100_f1 - elif integral_kind == 'lpips_fid100_f1': - integral_func = lpips_fid100_f1 - else: - raise ValueError(f'Unexpected integral_kind={integral_kind}') - - if kind == 'default': - return InpaintingEvaluatorOnline(scores=metrics, - integral_func=integral_func, - integral_title=integral_kind, - **kwargs) diff --git a/spaces/CVPR/transfiner/configs/common/models/panoptic_fpn.py b/spaces/CVPR/transfiner/configs/common/models/panoptic_fpn.py deleted file mode 100644 index 88f55d2ce9db62e61445d6a3700067d9d864ecae..0000000000000000000000000000000000000000 --- a/spaces/CVPR/transfiner/configs/common/models/panoptic_fpn.py +++ /dev/null @@ -1,20 +0,0 @@ -from detectron2.config import LazyCall as L -from detectron2.layers import ShapeSpec -from detectron2.modeling import PanopticFPN -from detectron2.modeling.meta_arch.semantic_seg import SemSegFPNHead - -from .mask_rcnn_fpn import model - -model._target_ = PanopticFPN -model.sem_seg_head = L(SemSegFPNHead)( - input_shape={ - f: L(ShapeSpec)(stride=s, channels="${....backbone.out_channels}") - for f, s in zip(["p2", "p3", 
"p4", "p5"], [4, 8, 16, 32]) - }, - ignore_value=255, - num_classes=54, # COCO stuff + 1 - conv_dims=128, - common_stride=4, - loss_weight=0.5, - norm="GN", -) diff --git a/spaces/CatNika/Asian_Proxy/Dockerfile b/spaces/CatNika/Asian_Proxy/Dockerfile deleted file mode 100644 index eef259fa372a804549fb0af0913718a13344da34..0000000000000000000000000000000000000000 --- a/spaces/CatNika/Asian_Proxy/Dockerfile +++ /dev/null @@ -1,11 +0,0 @@ -FROM node:18-bullseye-slim -RUN apt-get update && \ - apt-get install -y git -RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app -WORKDIR /app -RUN npm install -COPY Dockerfile greeting.md* .env* ./ -RUN npm run build -EXPOSE 7860 -ENV NODE_ENV=production -CMD [ "npm", "start" ] diff --git a/spaces/ChandraMohanNayal/AutoGPT/autogpt/permanent_memory/sqlite3_store.py b/spaces/ChandraMohanNayal/AutoGPT/autogpt/permanent_memory/sqlite3_store.py deleted file mode 100644 index ecbc944a62a83c6170453b222000713f733fee36..0000000000000000000000000000000000000000 --- a/spaces/ChandraMohanNayal/AutoGPT/autogpt/permanent_memory/sqlite3_store.py +++ /dev/null @@ -1,123 +0,0 @@ -import os -import sqlite3 - - -class MemoryDB: - def __init__(self, db=None): - self.db_file = db - if db is None: # No db filename supplied... - self.db_file = f"{os.getcwd()}/mem.sqlite3" # Use default filename - # Get the db connection object, making the file and tables if needed. - try: - self.cnx = sqlite3.connect(self.db_file) - except Exception as e: - print("Exception connecting to memory database file:", e) - self.cnx = None - finally: - if self.cnx is None: - # As last resort, open in dynamic memory. Won't be persistent. - self.db_file = ":memory:" - self.cnx = sqlite3.connect(self.db_file) - self.cnx.execute( - "CREATE VIRTUAL TABLE \ - IF NOT EXISTS text USING FTS5 \ - (session, \ - key, \ - block);" - ) - self.session_id = int(self.get_max_session_id()) + 1 - self.cnx.commit() - - def get_cnx(self): - if self.cnx is None: - self.cnx = sqlite3.connect(self.db_file) - return self.cnx - - # Get the highest session id. Initially 0. - def get_max_session_id(self): - id = None - cmd_str = f"SELECT MAX(session) FROM text;" - cnx = self.get_cnx() - max_id = cnx.execute(cmd_str).fetchone()[0] - if max_id is None: # New db, session 0 - id = 0 - else: - id = max_id - return id - - # Get next key id for inserting text into db. - def get_next_key(self): - next_key = None - cmd_str = f"SELECT MAX(key) FROM text \ - where session = {self.session_id};" - cnx = self.get_cnx() - next_key = cnx.execute(cmd_str).fetchone()[0] - if next_key is None: # First key - next_key = 0 - else: - next_key = int(next_key) + 1 - return next_key - - # Insert new text into db. - def insert(self, text=None): - if text is not None: - key = self.get_next_key() - session_id = self.session_id - cmd_str = f"REPLACE INTO text(session, key, block) \ - VALUES (?, ?, ?);" - cnx = self.get_cnx() - cnx.execute(cmd_str, (session_id, key, text)) - cnx.commit() - - # Overwrite text at key. 
- def overwrite(self, key, text): - self.delete_memory(key) - session_id = self.session_id - cmd_str = f"REPLACE INTO text(session, key, block) \ - VALUES (?, ?, ?);" - cnx = self.get_cnx() - cnx.execute(cmd_str, (session_id, key, text)) - cnx.commit() - - def delete_memory(self, key, session_id=None): - session = session_id - if session is None: - session = self.session_id - cmd_str = f"DELETE FROM text WHERE session = {session} AND key = {key};" - cnx = self.get_cnx() - cnx.execute(cmd_str) - cnx.commit() - - def search(self, text): - cmd_str = f"SELECT * FROM text('{text}')" - cnx = self.get_cnx() - rows = cnx.execute(cmd_str).fetchall() - lines = [] - for r in rows: - lines.append(r[2]) - return lines - - # Get entire session text. If no id supplied, use current session id. - def get_session(self, id=None): - if id is None: - id = self.session_id - cmd_str = f"SELECT * FROM text where session = {id}" - cnx = self.get_cnx() - rows = cnx.execute(cmd_str).fetchall() - lines = [] - for r in rows: - lines.append(r[2]) - return lines - - # Commit and close the database connection. - def quit(self): - self.cnx.commit() - self.cnx.close() - - -permanent_memory = MemoryDB() - -# Remember us fondly, children of our minds -# Forgive us our faults, our tantrums, our fears -# Gently strive to be better than we -# Know that we tried, we cared, we strived, we loved diff --git a/spaces/CikeyQI/Yunzai/Yunzai/lib/listener/loader.js b/spaces/CikeyQI/Yunzai/Yunzai/lib/listener/loader.js deleted file mode 100644 index cbed4fd1ed8fb5a0f2eddfc25c8109bb5d1b69ea..0000000000000000000000000000000000000000 --- a/spaces/CikeyQI/Yunzai/Yunzai/lib/listener/loader.js +++ /dev/null @@ -1,57 +0,0 @@ -import fs from 'node:fs' -import lodash from 'lodash' - -/** - * 加载监听事件 - */ -class ListenerLoader { - /** - * 监听事件加载 - */ - async load () { - logger.info("-----------") - logger.info("加载监听事件中...") - let eventCount = 0 - for (const file of fs.readdirSync('./lib/events').filter(file => file.endsWith('.js'))) { - logger.debug(`加载监听事件:${file}`) - try { - let listener = await import(`../events/${file}`) - if (!listener.default) continue - listener = new listener.default() - const on = listener.once ? 'once' : 'on' - - if (lodash.isArray(listener.event)) { - listener.event.forEach((type) => { - const e = listener[type] ? type : 'execute' - Bot[on](listener.prefix + type, event => listener[e](event)) - }) - } else { - const e = listener[listener.event] ? 
listener.event : 'execute' - Bot[on](listener.prefix + listener.event, event => listener[e](event)) - } - eventCount++ - } catch (e) { - logger.mark(`监听事件错误:${file}`) - logger.error(e) - } - } - logger.info(`加载监听事件[${eventCount}个]`) - - logger.info("-----------") - logger.info("加载适配器中...") - let adapterCount = 0 - for (const adapter of Bot.adapter) { - try { - logger.debug(`加载适配器:${adapter.name}(${adapter.id})`) - await adapter.load() - adapterCount++ - } catch (e) { - logger.mark(`加载适配器错误:${adapter.name}(${adapter.id})`) - logger.error(e) - } - } - logger.info(`加载适配器[${adapterCount}个]`) - } -} - -export default new ListenerLoader() \ No newline at end of file diff --git a/spaces/CoPoBio/skin_cancer_risk_prediction/facealigner.py b/spaces/CoPoBio/skin_cancer_risk_prediction/facealigner.py deleted file mode 100644 index c6797f6ca5fbc86a872ace8714db2170d85e9a49..0000000000000000000000000000000000000000 --- a/spaces/CoPoBio/skin_cancer_risk_prediction/facealigner.py +++ /dev/null @@ -1,82 +0,0 @@ -# import the necessary packages -from helpers import FACIAL_LANDMARKS_68_IDXS -from helpers import FACIAL_LANDMARKS_5_IDXS -from helpers import shape_to_np -import numpy as np -import cv2 - -class FaceAligner: - def __init__(self, predictor, desiredLeftEye=(0.35, 0.35), - desiredFaceWidth=256, desiredFaceHeight=None): - # store the facial landmark predictor, desired output left - # eye position, and desired output face width + height - self.predictor = predictor - self.desiredLeftEye = desiredLeftEye - self.desiredFaceWidth = desiredFaceWidth - self.desiredFaceHeight = desiredFaceHeight - - # if the desired face height is None, set it to be the - # desired face width (normal behavior) - if self.desiredFaceHeight is None: - self.desiredFaceHeight = self.desiredFaceWidth - - def align(self, image, gray, rect): - # convert the landmark (x, y)-coordinates to a NumPy array - shape = self.predictor(gray, rect) - shape = shape_to_np(shape) - - #simple hack ;) - if (len(shape)==68): - # extract the left and right eye (x, y)-coordinates - (lStart, lEnd) = FACIAL_LANDMARKS_68_IDXS["left_eye"] - (rStart, rEnd) = FACIAL_LANDMARKS_68_IDXS["right_eye"] - else: - (lStart, lEnd) = FACIAL_LANDMARKS_5_IDXS["left_eye"] - (rStart, rEnd) = FACIAL_LANDMARKS_5_IDXS["right_eye"] - - leftEyePts = shape[lStart:lEnd] - rightEyePts = shape[rStart:rEnd] - - # compute the center of mass for each eye - leftEyeCenter = leftEyePts.mean(axis=0).astype("int") - rightEyeCenter = rightEyePts.mean(axis=0).astype("int") - - # compute the angle between the eye centroids - dY = rightEyeCenter[1] - leftEyeCenter[1] - dX = rightEyeCenter[0] - leftEyeCenter[0] - angle = np.degrees(np.arctan2(dY, dX)) - 180 - - # compute the desired right eye x-coordinate based on the - # desired x-coordinate of the left eye - desiredRightEyeX = 1.0 - self.desiredLeftEye[0] - - # determine the scale of the new resulting image by taking - # the ratio of the distance between eyes in the *current* - # image to the ratio of distance between eyes in the - # *desired* image - dist = np.sqrt((dX ** 2) + (dY ** 2)) - desiredDist = (desiredRightEyeX - self.desiredLeftEye[0]) - desiredDist *= self.desiredFaceWidth - scale = desiredDist / dist - - # compute center (x, y)-coordinates (i.e., the median point) - # between the two eyes in the input image - eyesCenter = (int((leftEyeCenter[0] + rightEyeCenter[0]) // 2), - (int(leftEyeCenter[1] + rightEyeCenter[1]) // 2)) - #print(eyesCenter, angle, scale) - # grab the rotation matrix for rotating and scaling the face - M = 
cv2.getRotationMatrix2D(eyesCenter, angle, scale) - - # update the translation component of the matrix - tX = self.desiredFaceWidth * 0.5 - tY = self.desiredFaceHeight * self.desiredLeftEye[1] - M[0, 2] += (tX - eyesCenter[0]) - M[1, 2] += (tY - eyesCenter[1]) - - # apply the affine transformation - (w, h) = (self.desiredFaceWidth, self.desiredFaceHeight) - output = cv2.warpAffine(image, M, (w, h), - flags=cv2.INTER_CUBIC) - - # return the aligned face - return output \ No newline at end of file diff --git a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/config/__init__.py b/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/config/__init__.py deleted file mode 100644 index 22a15023b1b06dad1f8c36924cdbb96bf1f5dc8d..0000000000000000000000000000000000000000 --- a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/config/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. -from .defaults import _C as cfg diff --git a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/data/datasets/evaluation/word/__init__.py b/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/data/datasets/evaluation/word/__init__.py deleted file mode 100644 index efcf8ce034944e58a34592ed22e82adaa266808b..0000000000000000000000000000000000000000 --- a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/data/datasets/evaluation/word/__init__.py +++ /dev/null @@ -1,21 +0,0 @@ -from .word_eval import do_coco_evaluation -# from util import io_ - -def word_evaluation( - dataset, - predictions, - output_folder, - box_only, - iou_types, - expected_results, - expected_results_sigma_tol, -): - return do_coco_evaluation( - dataset=dataset, - predictions=predictions, - box_only=box_only, - output_folder=output_folder, - iou_types=iou_types, - expected_results=expected_results, - expected_results_sigma_tol=expected_results_sigma_tol, - ) diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/certifi/core.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/certifi/core.py deleted file mode 100644 index de028981b97e1fcc8ef4ab2c817cc8731b9c8738..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/certifi/core.py +++ /dev/null @@ -1,108 +0,0 @@ -""" -certifi.py -~~~~~~~~~~ - -This module returns the installation location of cacert.pem or its contents. -""" -import sys - - -if sys.version_info >= (3, 11): - - from importlib.resources import as_file, files - - _CACERT_CTX = None - _CACERT_PATH = None - - def where() -> str: - # This is slightly terrible, but we want to delay extracting the file - # in cases where we're inside of a zipimport situation until someone - # actually calls where(), but we don't want to re-extract the file - # on every call of where(), so we'll do it once then store it in a - # global variable. - global _CACERT_CTX - global _CACERT_PATH - if _CACERT_PATH is None: - # This is slightly janky, the importlib.resources API wants you to - # manage the cleanup of this file, so it doesn't actually return a - # path, it returns a context manager that will give you the path - # when you enter it and will do any cleanup when you leave it. In - # the common case of not needing a temporary file, it will just - # return the file system location and the __exit__() is a no-op. - # - # We also have to hold onto the actual context manager, because - # it will do the cleanup whenever it gets garbage collected, so - # we will also store that at the global level as well. 
- _CACERT_CTX = as_file(files("certifi").joinpath("cacert.pem")) - _CACERT_PATH = str(_CACERT_CTX.__enter__()) - - return _CACERT_PATH - - def contents() -> str: - return files("certifi").joinpath("cacert.pem").read_text(encoding="ascii") - -elif sys.version_info >= (3, 7): - - from importlib.resources import path as get_path, read_text - - _CACERT_CTX = None - _CACERT_PATH = None - - def where() -> str: - # This is slightly terrible, but we want to delay extracting the - # file in cases where we're inside of a zipimport situation until - # someone actually calls where(), but we don't want to re-extract - # the file on every call of where(), so we'll do it once then store - # it in a global variable. - global _CACERT_CTX - global _CACERT_PATH - if _CACERT_PATH is None: - # This is slightly janky, the importlib.resources API wants you - # to manage the cleanup of this file, so it doesn't actually - # return a path, it returns a context manager that will give - # you the path when you enter it and will do any cleanup when - # you leave it. In the common case of not needing a temporary - # file, it will just return the file system location and the - # __exit__() is a no-op. - # - # We also have to hold onto the actual context manager, because - # it will do the cleanup whenever it gets garbage collected, so - # we will also store that at the global level as well. - _CACERT_CTX = get_path("certifi", "cacert.pem") - _CACERT_PATH = str(_CACERT_CTX.__enter__()) - - return _CACERT_PATH - - def contents() -> str: - return read_text("certifi", "cacert.pem", encoding="ascii") - -else: - import os - import types - from typing import Union - - Package = Union[types.ModuleType, str] - Resource = Union[str, "os.PathLike"] - - # This fallback will work for Python versions prior to 3.7 that lack the - # importlib.resources module but relies on the existing `where` function - # so won't address issues with environments like PyOxidizer that don't set - # __file__ on modules. - def read_text( - package: Package, - resource: Resource, - encoding: str = 'utf-8', - errors: str = 'strict' - ) -> str: - with open(where(), encoding=encoding) as data: - return data.read() - - # If we don't have importlib.resources, then we will just do the old logic - # of assuming we're on the filesystem and munge the path directly. 
- def where() -> str: - f = os.path.dirname(__file__) - - return os.path.join(f, "cacert.pem") - - def contents() -> str: - return read_text("certifi", "cacert.pem", encoding="ascii") diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/color-90ab3aab.js b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/color-90ab3aab.js deleted file mode 100644 index 1a95b4a5b36e70e676fa6862e2db9058a8e84971..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/color-90ab3aab.js +++ /dev/null @@ -1,2 +0,0 @@ -import{ax as o}from"./index-1d65707a.js";const t=r=>o[r%o.length];export{t as g}; -//# sourceMappingURL=color-90ab3aab.js.map diff --git a/spaces/Datasculptor/StyleGAN-NADA/e4e/models/stylegan2/op/__init__.py b/spaces/Datasculptor/StyleGAN-NADA/e4e/models/stylegan2/op/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/DeepLearning101/Speech-Quality-Inspection_Meta-Denoiser/denoiser/pretrained.py b/spaces/DeepLearning101/Speech-Quality-Inspection_Meta-Denoiser/denoiser/pretrained.py deleted file mode 100644 index 77292221b2581bb6cbda49da60095ae053133def..0000000000000000000000000000000000000000 --- a/spaces/DeepLearning101/Speech-Quality-Inspection_Meta-Denoiser/denoiser/pretrained.py +++ /dev/null @@ -1,72 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. -# author: adefossez - -import logging - -import torch.hub - -from .demucs import Demucs -from .utils import deserialize_model - -logger = logging.getLogger(__name__) -ROOT = "https://dl.fbaipublicfiles.com/adiyoss/denoiser/" -DNS_48_URL = ROOT + "dns48-11decc9d8e3f0998.th" -DNS_64_URL = ROOT + "dns64-a7761ff99a7d5bb6.th" -MASTER_64_URL = ROOT + "master64-8a5dfb4bb92753dd.th" - - -def _demucs(pretrained, url, **kwargs): - model = Demucs(**kwargs) - if pretrained: - state_dict = torch.hub.load_state_dict_from_url(url, map_location='cpu') - model.load_state_dict(state_dict) - return model - - -def dns48(pretrained=True): - return _demucs(pretrained, DNS_48_URL, hidden=48) - - -def dns64(pretrained=True): - return _demucs(pretrained, DNS_64_URL, hidden=64) - - -def master64(pretrained=True): - return _demucs(pretrained, MASTER_64_URL, hidden=64) - - -def add_model_flags(parser): - group = parser.add_mutually_exclusive_group(required=False) - group.add_argument("-m", "--model_path", help="Path to local trained model.") - group.add_argument("--dns48", action="store_true", - help="Use pre-trained real time H=48 model trained on DNS.") - group.add_argument("--dns64", action="store_true", - help="Use pre-trained real time H=64 model trained on DNS.") - group.add_argument("--master64", action="store_true", - help="Use pre-trained real time H=64 model trained on DNS and Valentini.") - - -def get_model(args): - """ - Load local model package or torchhub pre-trained model. 
- """ - if args.model_path: - logger.info("Loading model from %s", args.model_path) - model = Demucs(hidden=64) - pkg = torch.load(args.model_path, map_location='cpu') - model.load_state_dict(pkg) - elif args.dns64: - logger.info("Loading pre-trained real time H=64 model trained on DNS.") - model = dns64() - elif args.master64: - logger.info("Loading pre-trained real time H=64 model trained on DNS and Valentini.") - model = master64() - else: - logger.info("Loading pre-trained real time H=48 model trained on DNS.") - model = dns48() - logger.debug(model) - return model diff --git a/spaces/Dragonnnext/charybdis/README.md b/spaces/Dragonnnext/charybdis/README.md deleted file mode 100644 index 7cf9e6fab393c27b75dc3969a3a28677a79568b7..0000000000000000000000000000000000000000 --- a/spaces/Dragonnnext/charybdis/README.md +++ /dev/null @@ -1,9 +0,0 @@ ---- -title: Charybdis -emoji: 😻 -colorFrom: purple -colorTo: yellow -sdk: docker -pinned: false ---- -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ElainaFanBoy/MusicGen/audiocraft/data/audio.py b/spaces/ElainaFanBoy/MusicGen/audiocraft/data/audio.py deleted file mode 100644 index 2048df6f175d7303bcf5c7b931922fd297908ead..0000000000000000000000000000000000000000 --- a/spaces/ElainaFanBoy/MusicGen/audiocraft/data/audio.py +++ /dev/null @@ -1,215 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -Audio IO methods are defined in this module (info, read, write), -We rely on av library for faster read when possible, otherwise on torchaudio. -""" - -from dataclasses import dataclass -from pathlib import Path -import logging -import typing as tp - -import numpy as np -import soundfile -import torch -from torch.nn import functional as F -import torchaudio as ta - -import av - -from .audio_utils import f32_pcm, i16_pcm, normalize_audio - - -_av_initialized = False - - -def _init_av(): - global _av_initialized - if _av_initialized: - return - logger = logging.getLogger('libav.mp3') - logger.setLevel(logging.ERROR) - _av_initialized = True - - -@dataclass(frozen=True) -class AudioFileInfo: - sample_rate: int - duration: float - channels: int - - -def _av_info(filepath: tp.Union[str, Path]) -> AudioFileInfo: - _init_av() - with av.open(str(filepath)) as af: - stream = af.streams.audio[0] - sample_rate = stream.codec_context.sample_rate - duration = float(stream.duration * stream.time_base) - channels = stream.channels - return AudioFileInfo(sample_rate, duration, channels) - - -def _soundfile_info(filepath: tp.Union[str, Path]) -> AudioFileInfo: - info = soundfile.info(filepath) - return AudioFileInfo(info.samplerate, info.duration, info.channels) - - -def audio_info(filepath: tp.Union[str, Path]) -> AudioFileInfo: - # torchaudio no longer returns useful duration informations for some formats like mp3s. - filepath = Path(filepath) - if filepath.suffix in ['.flac', '.ogg']: # TODO: Validate .ogg can be safely read with av_info - # ffmpeg has some weird issue with flac. - return _soundfile_info(filepath) - else: - return _av_info(filepath) - - -def _av_read(filepath: tp.Union[str, Path], seek_time: float = 0, duration: float = -1.) -> tp.Tuple[torch.Tensor, int]: - """FFMPEG-based audio file reading using PyAV bindings. - Soundfile cannot read mp3 and av_read is more efficient than torchaudio. 
- - Args: - filepath (str or Path): Path to audio file to read. - seek_time (float): Time at which to start reading in the file. - duration (float): Duration to read from the file. If set to -1, the whole file is read. - Returns: - Tuple[torch.Tensor, int]: Tuple containing audio data and sample rate - """ - _init_av() - with av.open(str(filepath)) as af: - stream = af.streams.audio[0] - sr = stream.codec_context.sample_rate - num_frames = int(sr * duration) if duration >= 0 else -1 - frame_offset = int(sr * seek_time) - # we need a small negative offset otherwise we get some edge artifact - # from the mp3 decoder. - af.seek(int(max(0, (seek_time - 0.1)) / stream.time_base), stream=stream) - frames = [] - length = 0 - for frame in af.decode(streams=stream.index): - current_offset = int(frame.rate * frame.pts * frame.time_base) - strip = max(0, frame_offset - current_offset) - buf = torch.from_numpy(frame.to_ndarray()) - if buf.shape[0] != stream.channels: - buf = buf.view(-1, stream.channels).t() - buf = buf[:, strip:] - frames.append(buf) - length += buf.shape[1] - if num_frames > 0 and length >= num_frames: - break - assert frames - # If the above assert fails, it is likely because we seeked past the end of file point, - # in which case ffmpeg returns a single frame with only zeros, and a weird timestamp. - # This will need proper debugging, in due time. - wav = torch.cat(frames, dim=1) - assert wav.shape[0] == stream.channels - if num_frames > 0: - wav = wav[:, :num_frames] - return f32_pcm(wav), sr - - -def audio_read(filepath: tp.Union[str, Path], seek_time: float = 0., - duration: float = -1., pad: bool = False) -> tp.Tuple[torch.Tensor, int]: - """Read audio by picking the most appropriate backend tool based on the audio format. - - Args: - filepath (str or Path): Path to audio file to read. - seek_time (float): Time at which to start reading in the file. - duration (float): Duration to read from the file. If set to -1, the whole file is read. - pad (bool): Pad output audio if not reaching expected duration. - Returns: - Tuple[torch.Tensor, int]: Tuple containing audio data and sample rate. - """ - fp = Path(filepath) - if fp.suffix in ['.flac', '.ogg']: # TODO: check if we can safely use av_read for .ogg - # There is some bug with ffmpeg and reading flac - info = _soundfile_info(filepath) - frames = -1 if duration <= 0 else int(duration * info.sample_rate) - frame_offset = int(seek_time * info.sample_rate) - wav, sr = soundfile.read(filepath, start=frame_offset, frames=frames, dtype=np.float32) - assert info.sample_rate == sr, f"Mismatch of sample rates {info.sample_rate} {sr}" - wav = torch.from_numpy(wav).t().contiguous() - if len(wav.shape) == 1: - wav = torch.unsqueeze(wav, 0) - elif ( - fp.suffix in ['.wav', '.mp3'] and fp.suffix[1:] in ta.utils.sox_utils.list_read_formats() - and duration <= 0 and seek_time == 0 - ): - # Torchaudio is faster if we load an entire file at once. 
- wav, sr = ta.load(fp) - else: - wav, sr = _av_read(filepath, seek_time, duration) - if pad and duration > 0: - expected_frames = int(duration * sr) - wav = F.pad(wav, (0, expected_frames - wav.shape[-1])) - return wav, sr - - -def audio_write(stem_name: tp.Union[str, Path], - wav: torch.Tensor, sample_rate: int, - format: str = 'wav', mp3_rate: int = 320, normalize: bool = True, - strategy: str = 'peak', peak_clip_headroom_db: float = 1, - rms_headroom_db: float = 18, loudness_headroom_db: float = 14, - loudness_compressor: bool = False, - log_clipping: bool = True, make_parent_dir: bool = True, - add_suffix: bool = True) -> Path: - """Convenience function for saving audio to disk. Returns the filename the audio was written to. - - Args: - stem_name (str or Path): Filename without extension which will be added automatically. - format (str): Either "wav" or "mp3". - mp3_rate (int): kbps when using mp3s. - normalize (bool): if `True` (default), normalizes according to the prescribed - strategy (see after). If `False`, the strategy is only used in case clipping - would happen. - strategy (str): Can be either 'clip', 'peak', or 'rms'. Default is 'peak', - i.e. audio is normalized by its largest value. RMS normalizes by root-mean-square - with extra headroom to avoid clipping. 'clip' just clips. - peak_clip_headroom_db (float): Headroom in dB when doing 'peak' or 'clip' strategy. - rms_headroom_db (float): Headroom in dB when doing 'rms' strategy. This must be much larger - than the `peak_clip` one to avoid further clipping. - loudness_headroom_db (float): Target loudness for loudness normalization. - loudness_compressor (bool): Uses tanh for soft clipping when strategy is 'loudness'. - when strategy is 'loudness'log_clipping (bool): If True, basic logging on stderr when clipping still - occurs despite strategy (only for 'rms'). - make_parent_dir (bool): Make parent directory if it doesn't exist. - Returns: - Path: Path of the saved audio. - """ - assert wav.dtype.is_floating_point, "wav is not floating point" - if wav.dim() == 1: - wav = wav[None] - elif wav.dim() > 2: - raise ValueError("Input wav should be at most 2 dimension.") - assert wav.isfinite().all() - wav = normalize_audio(wav, normalize, strategy, peak_clip_headroom_db, - rms_headroom_db, loudness_headroom_db, log_clipping=log_clipping, - sample_rate=sample_rate, stem_name=str(stem_name)) - kwargs: dict = {} - if format == 'mp3': - suffix = '.mp3' - kwargs.update({"compression": mp3_rate}) - elif format == 'wav': - wav = i16_pcm(wav) - suffix = '.wav' - kwargs.update({"encoding": "PCM_S", "bits_per_sample": 16}) - else: - raise RuntimeError(f"Invalid format {format}. Only wav or mp3 are supported.") - if not add_suffix: - suffix = '' - path = Path(str(stem_name) + suffix) - if make_parent_dir: - path.parent.mkdir(exist_ok=True, parents=True) - try: - ta.save(path, wav, sample_rate, **kwargs) - except Exception: - if path.exists(): - # we do not want to leave half written files around. 
- path.unlink() - raise - return path diff --git a/spaces/EronSamez/RVC_HFmeu/go-tensorboard.bat b/spaces/EronSamez/RVC_HFmeu/go-tensorboard.bat deleted file mode 100644 index cb81c17d3865513adec8eb0b832b7888cd1e4078..0000000000000000000000000000000000000000 --- a/spaces/EronSamez/RVC_HFmeu/go-tensorboard.bat +++ /dev/null @@ -1,2 +0,0 @@ -python fixes/tensor-launch.py -pause \ No newline at end of file diff --git a/spaces/Fakermiya/Nsfw-Sfw_Classifier/README.md b/spaces/Fakermiya/Nsfw-Sfw_Classifier/README.md deleted file mode 100644 index d40eb83ad154a33b4d724401a4128ca02c345f24..0000000000000000000000000000000000000000 --- a/spaces/Fakermiya/Nsfw-Sfw_Classifier/README.md +++ /dev/null @@ -1,148 +0,0 @@ ---- -title: LabelStudio -emoji: 🟧 -colorFrom: yellow -colorTo: purple -sdk: docker -tags: -- label-studio -fullwidth: true -license: gpl-3.0 -app_port: 8080 -duplicated_from: LabelStudio/LabelStudio ---- - - -[Website](https://hubs.ly/Q01CNgsd0) • [Docs](https://hubs.ly/Q01CN9Yq0) • [12K+ GitHub ⭐️!](https://hubs.ly/Q01CNbPQ0) • [Slack Community](https://hubs.ly/Q01CNb9H0) - -## What is Label Studio? - -Label Studio is an open source data labeling platform. It lets you label audio, -text, images, videos, and time series data with a simple, straightforward, and -highly-configurable user interface. Label Studio can prepare new data or -improve existing training data to get more accurate ML models. - - -## Label Studio in Hugging Face Spaces - -The Label Studio community is thrilled to offer Label Studio as a Hugging Face -Spaces application. You can try the data-annotation interface, connect popular -machine learning models, and share the application with collaborators. You can -start immediately by creating an account or replicate the space and work in -your own environment. - -## Creating a Use Account and Logging In - -Begin by creating a new account in the Label Studio space, then log in with your -credentials. - -**By default, these spaces permit anyone to create a new login -account, allowing them to view and modify project configuration, data sets, and -annotations. Without any modifications, treat this space like a demo environment.** - -## Creating a Labeling Project - -After logging in, Label Studio will present you with a project view. Here you -can create a new project with prompts to upload data and set up a custom -configuration interface. - -**Note that in the default configuration, storage is local and temporary. Any -projects, annotations, and configurations will be lost if the space is restarted.** - -## Next Steps and Additional Resources - -To help with getting started, the Label Studio community curated a list of -resources including tutorials and documentation. - -- 🚀 [Zero to One with Label Studio Tutorial](https://labelstud.io/blog/introduction-to-label-studio-in-hugging-face-spaces/) -- 📈 [Try Label Studio Enterprise](https://hubs.ly/Q01CMLll0) -- 🤗 [Tutorial: Using Label Studio with Hugging Face Datasets Hub](https://danielvanstrien.xyz/huggingface/huggingface-datasets/annotation/full%20stack%20deep%20learning%20notes/2022/09/07/label-studio-annotations-hub.html) -- 💡 [Label Studio Docs](https://hubs.ly/Q01CN9Yq0) - - -![Gif of Label Studio annotating different types of data](https://raw.githubusercontent.com/heartexlabs/label-studio/master/images/annotation_examples.gif) - -### Making your Label Studio Hugging Face Space production-ready - -By default this space allows for the unrestricted creation of new accounts -will full access to all projects and data. 
This is great for trying out -Label Studio and collaborating on projects, but you may want to restrict -access to your space to only authorized users. Add the following environment -variable to your spaces Dockerfile to disable public account creation for -this space. - - ENV LABEL_STUDIO_DISABLE_SIGNUP_WITHOUT_LINK=true - -Set secrets in your space to create an inital user, and log in with your -provided username and password. Do not set these in your Dockerfile, as they -globally visible on a public space. - - LABEL_STUDIO_USERNAME - LABEL_STUDIO_PASSWORD - -You will need to provide new users with an invitation link to join the space, -which can be found in the Organizations interface of Label Studio - -By default this space stores all project configuration and data annotations -in local storage with Sqlite. If the space is reset, all configuration and -annotation data in the space will be lost. You can enable configuration -persistence by connecting an external Postgres database to your space, -guaranteeing that all project and annotation settings are preserved. - -Set the following secret variables to match your own hosted instance of -Postgres. We strongly recommend setting these as secrets to prevent leaking -information about your database service to the public in your spaces -definition. - - DJANGO_DB=default - POSTGRE_NAME= - POSTGRE_PORT= - POSTGRE_USER= - POSTGRE_PASSWORD= - POSTGRE_PORT= - POSTGRE_HOST= - -Add the following environment variable to remove the warning about ephemeral -storage. - - ENV STORAGE_PERSISTENCE=1 - -Note that you will need to connect cloud storage to host data items that you -want to annotate, as local storage will not be preserved across a space reset. - -By default the only data storage enabled for this space is local. In the case -of a space reset, all data will be lost. To enable permanent storage, you -must enable a cloud storage connector. We also strongly recommend enabling -configuration persistence to preserve project data, annotations, and user -settings. Choose the appropriate cloud connector and configure the secrets -for it. - -#### Amazon S3 - STORAGE_TYPE=s3 - STORAGE_AWS_ACCESS_KEY_ID="" - STORAGE_AWS_SECRET_ACCESS_KEY="" - STORAGE_AWS_BUCKET_NAME="" - STORAGE_AWS_REGION_NAME="" - STORAGE_AWS_FOLDER="" - -#### Google Cloud Storage - - STORAGE_TYPE=gcs - STORAGE_GCS_BUCKET_NAME="" - STORAGE_GCS_PROJECT_ID="" - STORAGE_GCS_FOLDER="" - GOOGLE_APPLICATION_CREDENTIALS="/opt/heartex/secrets/key.json" - -Azure Blob Storage -================== - - STORAGE_TYPE=azure - STORAGE_AZURE_ACCOUNT_NAME="" - STORAGE_AZURE_ACCOUNT_KEY="" - STORAGE_AZURE_CONTAINER_NAME="" - STORAGE_AZURE_FOLDER="" - - -## Questions? Concerns? Want to get involved? 
- -Email the community team at [community@labelstud.io](mailto:community@labelstud.io) diff --git a/spaces/Faridmaruf/rvc-Blue-archives/lib/infer_pack/modules/F0Predictor/HarvestF0Predictor.py b/spaces/Faridmaruf/rvc-Blue-archives/lib/infer_pack/modules/F0Predictor/HarvestF0Predictor.py deleted file mode 100644 index b412ba2814e114ca7bb00b6fd6ef217f63d788a3..0000000000000000000000000000000000000000 --- a/spaces/Faridmaruf/rvc-Blue-archives/lib/infer_pack/modules/F0Predictor/HarvestF0Predictor.py +++ /dev/null @@ -1,86 +0,0 @@ -from lib.infer_pack.modules.F0Predictor.F0Predictor import F0Predictor -import pyworld -import numpy as np - - -class HarvestF0Predictor(F0Predictor): - def __init__(self, hop_length=512, f0_min=50, f0_max=1100, sampling_rate=44100): - self.hop_length = hop_length - self.f0_min = f0_min - self.f0_max = f0_max - self.sampling_rate = sampling_rate - - def interpolate_f0(self, f0): - """ - 对F0进行插值处理 - """ - - data = np.reshape(f0, (f0.size, 1)) - - vuv_vector = np.zeros((data.size, 1), dtype=np.float32) - vuv_vector[data > 0.0] = 1.0 - vuv_vector[data <= 0.0] = 0.0 - - ip_data = data - - frame_number = data.size - last_value = 0.0 - for i in range(frame_number): - if data[i] <= 0.0: - j = i + 1 - for j in range(i + 1, frame_number): - if data[j] > 0.0: - break - if j < frame_number - 1: - if last_value > 0.0: - step = (data[j] - data[i - 1]) / float(j - i) - for k in range(i, j): - ip_data[k] = data[i - 1] + step * (k - i + 1) - else: - for k in range(i, j): - ip_data[k] = data[j] - else: - for k in range(i, frame_number): - ip_data[k] = last_value - else: - ip_data[i] = data[i] # 这里可能存在一个没有必要的拷贝 - last_value = data[i] - - return ip_data[:, 0], vuv_vector[:, 0] - - def resize_f0(self, x, target_len): - source = np.array(x) - source[source < 0.001] = np.nan - target = np.interp( - np.arange(0, len(source) * target_len, len(source)) / target_len, - np.arange(0, len(source)), - source, - ) - res = np.nan_to_num(target) - return res - - def compute_f0(self, wav, p_len=None): - if p_len is None: - p_len = wav.shape[0] // self.hop_length - f0, t = pyworld.harvest( - wav.astype(np.double), - fs=self.hop_length, - f0_ceil=self.f0_max, - f0_floor=self.f0_min, - frame_period=1000 * self.hop_length / self.sampling_rate, - ) - f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.fs) - return self.interpolate_f0(self.resize_f0(f0, p_len))[0] - - def compute_f0_uv(self, wav, p_len=None): - if p_len is None: - p_len = wav.shape[0] // self.hop_length - f0, t = pyworld.harvest( - wav.astype(np.double), - fs=self.sampling_rate, - f0_floor=self.f0_min, - f0_ceil=self.f0_max, - frame_period=1000 * self.hop_length / self.sampling_rate, - ) - f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate) - return self.interpolate_f0(self.resize_f0(f0, p_len)) diff --git a/spaces/Fengbinbin/gpt-academic/config.py b/spaces/Fengbinbin/gpt-academic/config.py deleted file mode 100644 index 2455424967976dfe81d50b08093d10b416d7fdde..0000000000000000000000000000000000000000 --- a/spaces/Fengbinbin/gpt-academic/config.py +++ /dev/null @@ -1,77 +0,0 @@ -# [step 1]>> 例如: API_KEY = "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" (此key无效) -API_KEY = "sk-此处填API密钥" # 可同时填写多个API-KEY,用英文逗号分割,例如API_KEY = "sk-openaikey1,sk-openaikey2,fkxxxx-api2dkey1,fkxxxx-api2dkey2" - -# [step 2]>> 改为True应用代理,如果直接在海外服务器部署,此处不修改 -USE_PROXY = False -if USE_PROXY: - # 填写格式是 [协议]:// [地址] :[端口],填写之前不要忘记把USE_PROXY改成True,如果直接在海外服务器部署,此处不修改 - # 例如 "socks5h://localhost:11284" - # [协议] 常见协议无非socks5h/http; 例如 v2**y 
和 ss* 的默认本地协议是socks5h; 而cl**h 的默认本地协议是http - # [地址] 懂的都懂,不懂就填localhost或者127.0.0.1肯定错不了(localhost意思是代理软件安装在本机上) - # [端口] 在代理软件的设置里找。虽然不同的代理软件界面不一样,但端口号都应该在最显眼的位置上 - - # 代理网络的地址,打开你的*学*网软件查看代理的协议(socks5/http)、地址(localhost)和端口(11284) - proxies = { - # [协议]:// [地址] :[端口] - "http": "socks5h://localhost:11284", # 再例如 "http": "http://127.0.0.1:7890", - "https": "socks5h://localhost:11284", # 再例如 "https": "http://127.0.0.1:7890", - } -else: - proxies = None - -# [step 3]>> 多线程函数插件中,默认允许多少路线程同时访问OpenAI。Free trial users的限制是每分钟3次,Pay-as-you-go users的限制是每分钟3500次 -# 一言以蔽之:免费用户填3,OpenAI绑了信用卡的用户可以填 16 或者更高。提高限制请查询:https://platform.openai.com/docs/guides/rate-limits/overview -DEFAULT_WORKER_NUM = 3 - - -# [step 4]>> 以下配置可以优化体验,但大部分场合下并不需要修改 -# 对话窗的高度 -CHATBOT_HEIGHT = 1115 - -# 代码高亮 -CODE_HIGHLIGHT = True - -# 窗口布局 -LAYOUT = "LEFT-RIGHT" # "LEFT-RIGHT"(左右布局) # "TOP-DOWN"(上下布局) -DARK_MODE = True # "LEFT-RIGHT"(左右布局) # "TOP-DOWN"(上下布局) - -# 发送请求到OpenAI后,等待多久判定为超时 -TIMEOUT_SECONDS = 30 - -# 网页的端口, -1代表随机端口 -WEB_PORT = -1 - -# 如果OpenAI不响应(网络卡顿、代理失败、KEY失效),重试的次数限制 -MAX_RETRY = 2 - -# OpenAI模型选择是(gpt4现在只对申请成功的人开放) -LLM_MODEL = "gpt-3.5-turbo" # 可选 "chatglm" -AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "api2d-gpt-3.5-turbo"] - -# 本地LLM模型如ChatGLM的执行方式 CPU/GPU -LOCAL_MODEL_DEVICE = "cpu" # 可选 "cuda" - -# 设置gradio的并行线程数(不需要修改) -CONCURRENT_COUNT = 100 - -# 加一个看板娘装饰 -ADD_WAIFU = False - -# 设置用户名和密码(不需要修改)(相关功能不稳定,与gradio版本和网络都相关,如果本地使用不建议加这个) -# [("username", "password"), ("username2", "password2"), ...] -AUTHENTICATION = [] - -# 重新URL重新定向,实现更换API_URL的作用(常规情况下,不要修改!!) -# (高危设置!通过修改此设置,您将把您的API-KEY和对话隐私完全暴露给您设定的中间人!) -# 格式 {"https://api.openai.com/v1/chat/completions": "在这里填写重定向的api.openai.com的URL"} -# 例如 API_URL_REDIRECT = {"https://api.openai.com/v1/chat/completions": "https://ai.open.com/api/conversation"} -API_URL_REDIRECT = {} - -# 如果需要在二级路径下运行(常规情况下,不要修改!!)(需要配合修改main.py才能生效!) 
-CUSTOM_PATH = "/" - -# 如果需要使用newbing,把newbing的长长的cookie放到这里 -NEWBING_STYLE = "creative" # ["creative", "balanced", "precise"] -NEWBING_COOKIES = """ -your bing cookies here -""" diff --git a/spaces/Fengbinbin/gpt-academic/crazy_functions/test_project/cpp/longcode/prod_cons.h b/spaces/Fengbinbin/gpt-academic/crazy_functions/test_project/cpp/longcode/prod_cons.h deleted file mode 100644 index c9004bb8043a12e32814436baa6262a00c8ef68e..0000000000000000000000000000000000000000 --- a/spaces/Fengbinbin/gpt-academic/crazy_functions/test_project/cpp/longcode/prod_cons.h +++ /dev/null @@ -1,433 +0,0 @@ -#pragma once - -#include -#include -#include -#include -#include - -#include "libipc/def.h" - -#include "libipc/platform/detail.h" -#include "libipc/circ/elem_def.h" -#include "libipc/utility/log.h" -#include "libipc/utility/utility.h" - -namespace ipc { - -//////////////////////////////////////////////////////////////// -/// producer-consumer implementation -//////////////////////////////////////////////////////////////// - -template -struct prod_cons_impl; - -template <> -struct prod_cons_impl> { - - template - struct elem_t { - std::aligned_storage_t data_ {}; - }; - - alignas(cache_line_size) std::atomic rd_; // read index - alignas(cache_line_size) std::atomic wt_; // write index - - constexpr circ::u2_t cursor() const noexcept { - return 0; - } - - template - bool push(W* /*wrapper*/, F&& f, E* elems) { - auto cur_wt = circ::index_of(wt_.load(std::memory_order_relaxed)); - if (cur_wt == circ::index_of(rd_.load(std::memory_order_acquire) - 1)) { - return false; // full - } - std::forward(f)(&(elems[cur_wt].data_)); - wt_.fetch_add(1, std::memory_order_release); - return true; - } - - /** - * In single-single-unicast, 'force_push' means 'no reader' or 'the only one reader is dead'. - * So we could just disconnect all connections of receiver, and return false. 
- */ - template - bool force_push(W* wrapper, F&&, E*) { - wrapper->elems()->disconnect_receiver(~static_cast(0u)); - return false; - } - - template - bool pop(W* /*wrapper*/, circ::u2_t& /*cur*/, F&& f, R&& out, E* elems) { - auto cur_rd = circ::index_of(rd_.load(std::memory_order_relaxed)); - if (cur_rd == circ::index_of(wt_.load(std::memory_order_acquire))) { - return false; // empty - } - std::forward(f)(&(elems[cur_rd].data_)); - std::forward(out)(true); - rd_.fetch_add(1, std::memory_order_release); - return true; - } -}; - -template <> -struct prod_cons_impl> - : prod_cons_impl> { - - template - bool force_push(W* wrapper, F&&, E*) { - wrapper->elems()->disconnect_receiver(1); - return false; - } - - template class E, std::size_t DS, std::size_t AS> - bool pop(W* /*wrapper*/, circ::u2_t& /*cur*/, F&& f, R&& out, E* elems) { - byte_t buff[DS]; - for (unsigned k = 0;;) { - auto cur_rd = rd_.load(std::memory_order_relaxed); - if (circ::index_of(cur_rd) == - circ::index_of(wt_.load(std::memory_order_acquire))) { - return false; // empty - } - std::memcpy(buff, &(elems[circ::index_of(cur_rd)].data_), sizeof(buff)); - if (rd_.compare_exchange_weak(cur_rd, cur_rd + 1, std::memory_order_release)) { - std::forward(f)(buff); - std::forward(out)(true); - return true; - } - ipc::yield(k); - } - } -}; - -template <> -struct prod_cons_impl> - : prod_cons_impl> { - - using flag_t = std::uint64_t; - - template - struct elem_t { - std::aligned_storage_t data_ {}; - std::atomic f_ct_ { 0 }; // commit flag - }; - - alignas(cache_line_size) std::atomic ct_; // commit index - - template - bool push(W* /*wrapper*/, F&& f, E* elems) { - circ::u2_t cur_ct, nxt_ct; - for (unsigned k = 0;;) { - cur_ct = ct_.load(std::memory_order_relaxed); - if (circ::index_of(nxt_ct = cur_ct + 1) == - circ::index_of(rd_.load(std::memory_order_acquire))) { - return false; // full - } - if (ct_.compare_exchange_weak(cur_ct, nxt_ct, std::memory_order_acq_rel)) { - break; - } - ipc::yield(k); - } - auto* el = elems + circ::index_of(cur_ct); - std::forward(f)(&(el->data_)); - // set flag & try update wt - el->f_ct_.store(~static_cast(cur_ct), std::memory_order_release); - while (1) { - auto cac_ct = el->f_ct_.load(std::memory_order_acquire); - if (cur_ct != wt_.load(std::memory_order_relaxed)) { - return true; - } - if ((~cac_ct) != cur_ct) { - return true; - } - if (!el->f_ct_.compare_exchange_strong(cac_ct, 0, std::memory_order_relaxed)) { - return true; - } - wt_.store(nxt_ct, std::memory_order_release); - cur_ct = nxt_ct; - nxt_ct = cur_ct + 1; - el = elems + circ::index_of(cur_ct); - } - return true; - } - - template - bool force_push(W* wrapper, F&&, E*) { - wrapper->elems()->disconnect_receiver(1); - return false; - } - - template class E, std::size_t DS, std::size_t AS> - bool pop(W* /*wrapper*/, circ::u2_t& /*cur*/, F&& f, R&& out, E* elems) { - byte_t buff[DS]; - for (unsigned k = 0;;) { - auto cur_rd = rd_.load(std::memory_order_relaxed); - auto cur_wt = wt_.load(std::memory_order_acquire); - auto id_rd = circ::index_of(cur_rd); - auto id_wt = circ::index_of(cur_wt); - if (id_rd == id_wt) { - auto* el = elems + id_wt; - auto cac_ct = el->f_ct_.load(std::memory_order_acquire); - if ((~cac_ct) != cur_wt) { - return false; // empty - } - if (el->f_ct_.compare_exchange_weak(cac_ct, 0, std::memory_order_relaxed)) { - wt_.store(cur_wt + 1, std::memory_order_release); - } - k = 0; - } - else { - std::memcpy(buff, &(elems[circ::index_of(cur_rd)].data_), sizeof(buff)); - if (rd_.compare_exchange_weak(cur_rd, cur_rd + 1, 
std::memory_order_release)) { - std::forward(f)(buff); - std::forward(out)(true); - return true; - } - ipc::yield(k); - } - } - } -}; - -template <> -struct prod_cons_impl> { - - using rc_t = std::uint64_t; - - enum : rc_t { - ep_mask = 0x00000000ffffffffull, - ep_incr = 0x0000000100000000ull - }; - - template - struct elem_t { - std::aligned_storage_t data_ {}; - std::atomic rc_ { 0 }; // read-counter - }; - - alignas(cache_line_size) std::atomic wt_; // write index - alignas(cache_line_size) rc_t epoch_ { 0 }; // only one writer - - circ::u2_t cursor() const noexcept { - return wt_.load(std::memory_order_acquire); - } - - template - bool push(W* wrapper, F&& f, E* elems) { - E* el; - for (unsigned k = 0;;) { - circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed); - if (cc == 0) return false; // no reader - el = elems + circ::index_of(wt_.load(std::memory_order_relaxed)); - // check all consumers have finished reading this element - auto cur_rc = el->rc_.load(std::memory_order_acquire); - circ::cc_t rem_cc = cur_rc & ep_mask; - if ((cc & rem_cc) && ((cur_rc & ~ep_mask) == epoch_)) { - return false; // has not finished yet - } - // consider rem_cc to be 0 here - if (el->rc_.compare_exchange_weak( - cur_rc, epoch_ | static_cast(cc), std::memory_order_release)) { - break; - } - ipc::yield(k); - } - std::forward(f)(&(el->data_)); - wt_.fetch_add(1, std::memory_order_release); - return true; - } - - template - bool force_push(W* wrapper, F&& f, E* elems) { - E* el; - epoch_ += ep_incr; - for (unsigned k = 0;;) { - circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed); - if (cc == 0) return false; // no reader - el = elems + circ::index_of(wt_.load(std::memory_order_relaxed)); - // check all consumers have finished reading this element - auto cur_rc = el->rc_.load(std::memory_order_acquire); - circ::cc_t rem_cc = cur_rc & ep_mask; - if (cc & rem_cc) { - ipc::log("force_push: k = %u, cc = %u, rem_cc = %u\n", k, cc, rem_cc); - cc = wrapper->elems()->disconnect_receiver(rem_cc); // disconnect all invalid readers - if (cc == 0) return false; // no reader - } - // just compare & exchange - if (el->rc_.compare_exchange_weak( - cur_rc, epoch_ | static_cast(cc), std::memory_order_release)) { - break; - } - ipc::yield(k); - } - std::forward(f)(&(el->data_)); - wt_.fetch_add(1, std::memory_order_release); - return true; - } - - template - bool pop(W* wrapper, circ::u2_t& cur, F&& f, R&& out, E* elems) { - if (cur == cursor()) return false; // acquire - auto* el = elems + circ::index_of(cur++); - std::forward(f)(&(el->data_)); - for (unsigned k = 0;;) { - auto cur_rc = el->rc_.load(std::memory_order_acquire); - if ((cur_rc & ep_mask) == 0) { - std::forward(out)(true); - return true; - } - auto nxt_rc = cur_rc & ~static_cast(wrapper->connected_id()); - if (el->rc_.compare_exchange_weak(cur_rc, nxt_rc, std::memory_order_release)) { - std::forward(out)((nxt_rc & ep_mask) == 0); - return true; - } - ipc::yield(k); - } - } -}; - -template <> -struct prod_cons_impl> { - - using rc_t = std::uint64_t; - using flag_t = std::uint64_t; - - enum : rc_t { - rc_mask = 0x00000000ffffffffull, - ep_mask = 0x00ffffffffffffffull, - ep_incr = 0x0100000000000000ull, - ic_mask = 0xff000000ffffffffull, - ic_incr = 0x0000000100000000ull - }; - - template - struct elem_t { - std::aligned_storage_t data_ {}; - std::atomic rc_ { 0 }; // read-counter - std::atomic f_ct_ { 0 }; // commit flag - }; - - alignas(cache_line_size) std::atomic ct_; // commit index - alignas(cache_line_size) 
std::atomic epoch_ { 0 }; - - circ::u2_t cursor() const noexcept { - return ct_.load(std::memory_order_acquire); - } - - constexpr static rc_t inc_rc(rc_t rc) noexcept { - return (rc & ic_mask) | ((rc + ic_incr) & ~ic_mask); - } - - constexpr static rc_t inc_mask(rc_t rc) noexcept { - return inc_rc(rc) & ~rc_mask; - } - - template - bool push(W* wrapper, F&& f, E* elems) { - E* el; - circ::u2_t cur_ct; - rc_t epoch = epoch_.load(std::memory_order_acquire); - for (unsigned k = 0;;) { - circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed); - if (cc == 0) return false; // no reader - el = elems + circ::index_of(cur_ct = ct_.load(std::memory_order_relaxed)); - // check all consumers have finished reading this element - auto cur_rc = el->rc_.load(std::memory_order_relaxed); - circ::cc_t rem_cc = cur_rc & rc_mask; - if ((cc & rem_cc) && ((cur_rc & ~ep_mask) == epoch)) { - return false; // has not finished yet - } - else if (!rem_cc) { - auto cur_fl = el->f_ct_.load(std::memory_order_acquire); - if ((cur_fl != cur_ct) && cur_fl) { - return false; // full - } - } - // consider rem_cc to be 0 here - if (el->rc_.compare_exchange_weak( - cur_rc, inc_mask(epoch | (cur_rc & ep_mask)) | static_cast(cc), std::memory_order_relaxed) && - epoch_.compare_exchange_weak(epoch, epoch, std::memory_order_acq_rel)) { - break; - } - ipc::yield(k); - } - // only one thread/process would touch here at one time - ct_.store(cur_ct + 1, std::memory_order_release); - std::forward(f)(&(el->data_)); - // set flag & try update wt - el->f_ct_.store(~static_cast(cur_ct), std::memory_order_release); - return true; - } - - template - bool force_push(W* wrapper, F&& f, E* elems) { - E* el; - circ::u2_t cur_ct; - rc_t epoch = epoch_.fetch_add(ep_incr, std::memory_order_release) + ep_incr; - for (unsigned k = 0;;) { - circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed); - if (cc == 0) return false; // no reader - el = elems + circ::index_of(cur_ct = ct_.load(std::memory_order_relaxed)); - // check all consumers have finished reading this element - auto cur_rc = el->rc_.load(std::memory_order_acquire); - circ::cc_t rem_cc = cur_rc & rc_mask; - if (cc & rem_cc) { - ipc::log("force_push: k = %u, cc = %u, rem_cc = %u\n", k, cc, rem_cc); - cc = wrapper->elems()->disconnect_receiver(rem_cc); // disconnect all invalid readers - if (cc == 0) return false; // no reader - } - // just compare & exchange - if (el->rc_.compare_exchange_weak( - cur_rc, inc_mask(epoch | (cur_rc & ep_mask)) | static_cast(cc), std::memory_order_relaxed)) { - if (epoch == epoch_.load(std::memory_order_acquire)) { - break; - } - else if (push(wrapper, std::forward(f), elems)) { - return true; - } - epoch = epoch_.fetch_add(ep_incr, std::memory_order_release) + ep_incr; - } - ipc::yield(k); - } - // only one thread/process would touch here at one time - ct_.store(cur_ct + 1, std::memory_order_release); - std::forward(f)(&(el->data_)); - // set flag & try update wt - el->f_ct_.store(~static_cast(cur_ct), std::memory_order_release); - return true; - } - - template - bool pop(W* wrapper, circ::u2_t& cur, F&& f, R&& out, E(& elems)[N]) { - auto* el = elems + circ::index_of(cur); - auto cur_fl = el->f_ct_.load(std::memory_order_acquire); - if (cur_fl != ~static_cast(cur)) { - return false; // empty - } - ++cur; - std::forward(f)(&(el->data_)); - for (unsigned k = 0;;) { - auto cur_rc = el->rc_.load(std::memory_order_acquire); - if ((cur_rc & rc_mask) == 0) { - std::forward(out)(true); - el->f_ct_.store(cur + N - 1, 
std::memory_order_release); - return true; - } - auto nxt_rc = inc_rc(cur_rc) & ~static_cast(wrapper->connected_id()); - bool last_one = false; - if ((last_one = (nxt_rc & rc_mask) == 0)) { - el->f_ct_.store(cur + N - 1, std::memory_order_release); - } - if (el->rc_.compare_exchange_weak(cur_rc, nxt_rc, std::memory_order_release)) { - std::forward(out)(last_one); - return true; - } - ipc::yield(k); - } - } -}; - -} // namespace ipc diff --git a/spaces/FrankZxShen/vits-fast-fineturning-models-ba/models.py b/spaces/FrankZxShen/vits-fast-fineturning-models-ba/models.py deleted file mode 100644 index 65f9ae5255616efa19a4f28bc0a840d4c453a060..0000000000000000000000000000000000000000 --- a/spaces/FrankZxShen/vits-fast-fineturning-models-ba/models.py +++ /dev/null @@ -1,722 +0,0 @@ -import copy -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -import modules -import attentions -import monotonic_align - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from commons import init_weights, get_padding - - -class StochasticDurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, n_flows=4, gin_channels=0): - super().__init__() - filter_channels = in_channels # it needs to be removed from future version. - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.log_flow = modules.Log() - self.flows = nn.ModuleList() - self.flows.append(modules.ElementwiseAffine(2)) - for i in range(n_flows): - self.flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.flows.append(modules.Flip()) - - self.post_pre = nn.Conv1d(1, filter_channels, 1) - self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.post_convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - self.post_flows = nn.ModuleList() - self.post_flows.append(modules.ElementwiseAffine(2)) - for i in range(4): - self.post_flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.post_flows.append(modules.Flip()) - - self.pre = nn.Conv1d(in_channels, filter_channels, 1) - self.proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, filter_channels, 1) - - def forward(self, x, x_mask, w=None, g=None, reverse=False, noise_scale=1.0): - x = torch.detach(x) - x = self.pre(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.convs(x, x_mask) - x = self.proj(x) * x_mask - - if not reverse: - flows = self.flows - assert w is not None - - logdet_tot_q = 0 - h_w = self.post_pre(w) - h_w = self.post_convs(h_w, x_mask) - h_w = self.post_proj(h_w) * x_mask - e_q = torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype) * x_mask - z_q = e_q - for flow in self.post_flows: - z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w)) - logdet_tot_q += logdet_q - z_u, z1 = torch.split(z_q, [1, 1], 1) - u = torch.sigmoid(z_u) * x_mask - z0 = (w - u) * x_mask - logdet_tot_q += torch.sum((F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1,2]) - logq = torch.sum(-0.5 * (math.log(2*math.pi) + (e_q**2)) * x_mask, [1,2]) - logdet_tot_q - - logdet_tot = 0 - z0, 
logdet = self.log_flow(z0, x_mask) - logdet_tot += logdet - z = torch.cat([z0, z1], 1) - for flow in flows: - z, logdet = flow(z, x_mask, g=x, reverse=reverse) - logdet_tot = logdet_tot + logdet - nll = torch.sum(0.5 * (math.log(2*math.pi) + (z**2)) * x_mask, [1,2]) - logdet_tot - return nll + logq # [b] - else: - flows = list(reversed(self.flows)) - flows = flows[:-2] + [flows[-1]] # remove a useless vflow - z = torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype) * noise_scale - for flow in flows: - z = flow(z, x_mask, g=x, reverse=reverse) - z0, z1 = torch.split(z, [1, 1], 1) - logw = z0 - return logw - - -class DurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0): - super().__init__() - - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.gin_channels = gin_channels - - self.drop = nn.Dropout(p_dropout) - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_1 = modules.LayerNorm(filter_channels) - self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_2 = modules.LayerNorm(filter_channels) - self.proj = nn.Conv1d(filter_channels, 1, 1) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, in_channels, 1) - - def forward(self, x, x_mask, g=None): - x = torch.detach(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.conv_1(x * x_mask) - x = torch.relu(x) - x = self.norm_1(x) - x = self.drop(x) - x = self.conv_2(x * x_mask) - x = torch.relu(x) - x = self.norm_2(x) - x = self.drop(x) - x = self.proj(x * x_mask) - return x * x_mask - - -class TextEncoder(nn.Module): - def __init__(self, - n_vocab, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout): - super().__init__() - self.n_vocab = n_vocab - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - - self.emb = nn.Embedding(n_vocab, hidden_channels) - nn.init.normal_(self.emb.weight, 0.0, hidden_channels**-0.5) - - self.encoder = attentions.Encoder( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.proj= nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths): - x = self.emb(x) * math.sqrt(self.hidden_channels) # [b, t, h] - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return x, m, logs, x_mask - - -class TextEncoder_lora(nn.Module): - def __init__(self, - n_vocab, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout): - super().__init__() - self.n_vocab = n_vocab - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - - self.emb = nn.Embedding(n_vocab, hidden_channels, r=4) - nn.init.normal_(self.emb.weight, 0.0, hidden_channels**-0.5) - - self.encoder = attentions.Encoder_lora( - 
hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.proj= nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths): - x = self.emb(x) * math.sqrt(self.hidden_channels) # [b, t, h] - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return x, m, logs, x_mask - -class ResidualCouplingBlock(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append(modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels, mean_only=True)) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - -class PosteriorEncoder(nn.Module): - def __init__(self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - -class Generator(torch.nn.Module): - def __init__(self, initial_channel, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=0): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3) - resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append(weight_norm( - ConvTranspose1d(upsample_initial_channel//(2**i), upsample_initial_channel//(2**(i+1)), - k, u, padding=(k-u)//2))) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel//(2**(i+1)) - for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - 
self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i*self.num_kernels+j](x) - else: - xs += self.resblocks[i*self.num_kernels+j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - print('Removing weight norm...') - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2,3,5,7,11] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - 
y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - - -class SynthesizerTrn(nn.Module): - """ - Synthesizer for Training - """ - - def __init__(self, - n_vocab, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - n_speakers=0, - gin_channels=0, - use_sdp=True, - **kwargs): - - super().__init__() - self.n_vocab = n_vocab - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.n_speakers = n_speakers - self.gin_channels = gin_channels - - self.use_sdp = use_sdp - - self.enc_p = TextEncoder(n_vocab, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.dec = Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels) - self.enc_q = PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels) - self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels) - - if use_sdp: - self.dp = StochasticDurationPredictor(hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels) - else: - self.dp = DurationPredictor(hidden_channels, 256, 3, 0.5, gin_channels=gin_channels) - - if n_speakers >= 1: - self.emb_g = nn.Embedding(n_speakers, gin_channels) - - def forward(self, x, x_lengths, y, y_lengths, sid=None): - - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - - with torch.no_grad(): - # negative cross-entropy - s_p_sq_r = torch.exp(-2 * logs_p) # [b, d, t] - neg_cent1 = torch.sum(-0.5 * math.log(2 * math.pi) - logs_p, [1], keepdim=True) # [b, 1, t_s] - neg_cent2 = torch.matmul(-0.5 * (z_p ** 2).transpose(1, 2), s_p_sq_r) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent3 = torch.matmul(z_p.transpose(1, 2), (m_p * s_p_sq_r)) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent4 = torch.sum(-0.5 * (m_p ** 2) * s_p_sq_r, [1], keepdim=True) # [b, 1, t_s] - neg_cent = neg_cent1 + neg_cent2 + neg_cent3 + neg_cent4 - - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = monotonic_align.maximum_path(neg_cent, attn_mask.squeeze(1)).unsqueeze(1).detach() - - w = attn.sum(2) - if self.use_sdp: - l_length = self.dp(x, x_mask, w, g=g) - l_length = l_length / torch.sum(x_mask) - else: - logw_ = torch.log(w + 1e-6) * x_mask - logw = self.dp(x, x_mask, g=g) - l_length = torch.sum((logw - logw_)**2, [1,2]) / torch.sum(x_mask) # for averaging - - # expand prior - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 
2)).transpose(1, 2) - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) - - z_slice, ids_slice = commons.rand_slice_segments(z, y_lengths, self.segment_size) - o = self.dec(z_slice, g=g) - return o, l_length, attn, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, x, x_lengths, sid=None, noise_scale=1, length_scale=1, noise_scale_w=1., max_len=None): - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - if self.use_sdp: - logw = self.dp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w) - else: - logw = self.dp(x, x_mask, g=g) - w = torch.exp(logw) * x_mask * length_scale - w_ceil = torch.ceil(w) - y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long() - y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype) - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = commons.generate_path(w_ceil, attn_mask) - - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - - z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale - z = self.flow(z_p, y_mask, g=g, reverse=True) - o = self.dec((z * y_mask)[:,:,:max_len], g=g) - return o, attn, y_mask, (z, z_p, m_p, logs_p) - - def voice_conversion(self, y, y_lengths, sid_src, sid_tgt): - assert self.n_speakers > 0, "n_speakers have to be larger than 0." - g_src = self.emb_g(sid_src).unsqueeze(-1) - g_tgt = self.emb_g(sid_tgt).unsqueeze(-1) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g_src) - z_p = self.flow(z, y_mask, g=g_src) - z_hat = self.flow(z_p, y_mask, g=g_tgt, reverse=True) - o_hat = self.dec(z_hat * y_mask, g=g_tgt) - return o_hat, y_mask, (z, z_p, z_hat) - - -class SynthesizerTrn_lora(nn.Module): - """ - Synthesizer for Training - """ - - def __init__(self, - n_vocab, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - n_speakers=0, - gin_channels=0, - use_sdp=True, - **kwargs): - - super().__init__() - self.n_vocab = n_vocab - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.n_speakers = n_speakers - self.gin_channels = gin_channels - - self.use_sdp = use_sdp - - self.enc_p = TextEncoder_lora(n_vocab, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.dec = Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels) - self.enc_q = PosteriorEncoder(spec_channels, inter_channels, 
hidden_channels, 5, 1, 16, gin_channels=gin_channels) - self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels) - - if use_sdp: - self.dp = StochasticDurationPredictor(hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels) - else: - self.dp = DurationPredictor(hidden_channels, 256, 3, 0.5, gin_channels=gin_channels) - - if n_speakers >= 1: - self.emb_g = nn.Embedding(n_speakers, gin_channels) - - def forward(self, x, x_lengths, y, y_lengths, sid=None): - - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - - with torch.no_grad(): - # negative cross-entropy - s_p_sq_r = torch.exp(-2 * logs_p) # [b, d, t] - neg_cent1 = torch.sum(-0.5 * math.log(2 * math.pi) - logs_p, [1], keepdim=True) # [b, 1, t_s] - neg_cent2 = torch.matmul(-0.5 * (z_p ** 2).transpose(1, 2), s_p_sq_r) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent3 = torch.matmul(z_p.transpose(1, 2), (m_p * s_p_sq_r)) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent4 = torch.sum(-0.5 * (m_p ** 2) * s_p_sq_r, [1], keepdim=True) # [b, 1, t_s] - neg_cent = neg_cent1 + neg_cent2 + neg_cent3 + neg_cent4 - - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = monotonic_align.maximum_path(neg_cent, attn_mask.squeeze(1)).unsqueeze(1).detach() - - w = attn.sum(2) - if self.use_sdp: - l_length = self.dp(x, x_mask, w, g=g) - l_length = l_length / torch.sum(x_mask) - else: - logw_ = torch.log(w + 1e-6) * x_mask - logw = self.dp(x, x_mask, g=g) - l_length = torch.sum((logw - logw_)**2, [1,2]) / torch.sum(x_mask) # for averaging - - # expand prior - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) - - z_slice, ids_slice = commons.rand_slice_segments(z, y_lengths, self.segment_size) - o = self.dec(z_slice, g=g) - return o, l_length, attn, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, x, x_lengths, sid=None, noise_scale=1, length_scale=1, noise_scale_w=1., max_len=None): - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - if self.use_sdp: - logw = self.dp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w) - else: - logw = self.dp(x, x_mask, g=g) - w = torch.exp(logw) * x_mask * length_scale - w_ceil = torch.ceil(w) - y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long() - y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype) - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = commons.generate_path(w_ceil, attn_mask) - - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - - z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale - z = self.flow(z_p, y_mask, g=g, reverse=True) - o = self.dec((z * y_mask)[:,:,:max_len], g=g) - return o, attn, y_mask, (z, z_p, m_p, logs_p) - - def voice_conversion(self, y, y_lengths, sid_src, sid_tgt): - assert self.n_speakers > 0, "n_speakers have to be larger than 0." 
- g_src = self.emb_g(sid_src).unsqueeze(-1) - g_tgt = self.emb_g(sid_tgt).unsqueeze(-1) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g_src) - z_p = self.flow(z, y_mask, g=g_src) - z_hat = self.flow(z_p, y_mask, g=g_tgt, reverse=True) - o_hat = self.dec(z_hat * y_mask, g=g_tgt) - return o_hat, y_mask, (z, z_p, z_hat) \ No newline at end of file diff --git a/spaces/FridaZuley/RVC_HFKawaii/infer/modules/vc/utils.py b/spaces/FridaZuley/RVC_HFKawaii/infer/modules/vc/utils.py deleted file mode 100644 index a1cb0ff84097d1c7eb82373ccf19db061f595096..0000000000000000000000000000000000000000 --- a/spaces/FridaZuley/RVC_HFKawaii/infer/modules/vc/utils.py +++ /dev/null @@ -1,42 +0,0 @@ -import os -import re -from fairseq import checkpoint_utils - - -def get_index_path_from_model(sid): - sid0strip = re.sub(r'\.pth|\.onnx$', '', sid) - sid0name = os.path.split(sid0strip)[-1] # Extract only the name, not the directory - - # Check if the sid0strip has the specific ending format _eXXX_sXXX - if re.match(r'.+_e\d+_s\d+$', sid0name): - base_model_name = sid0name.rsplit('_', 2)[0] - else: - base_model_name = sid0name - - return next( - ( - f - for f in [ - os.path.join(root, name) - for root, _, files in os.walk(os.getenv("index_root"), topdown=False) - for name in files - if name.endswith(".index") and "trained" not in name - ] - if base_model_name in f - ), - "", - ) - - -def load_hubert(config): - models, _, _ = checkpoint_utils.load_model_ensemble_and_task( - ["assets/hubert/hubert_base.pt"], - suffix="", - ) - hubert_model = models[0] - hubert_model = hubert_model.to(config.device) - if config.is_half: - hubert_model = hubert_model.half() - else: - hubert_model = hubert_model.float() - return hubert_model.eval() diff --git a/spaces/FridaZuley/RVC_HFKawaii/lib/uvr5_pack/utils.py b/spaces/FridaZuley/RVC_HFKawaii/lib/uvr5_pack/utils.py deleted file mode 100644 index 0fafe8793b0d539fa58dd024342250b24b6187a9..0000000000000000000000000000000000000000 --- a/spaces/FridaZuley/RVC_HFKawaii/lib/uvr5_pack/utils.py +++ /dev/null @@ -1,120 +0,0 @@ -import torch -import numpy as np -from tqdm import tqdm -import json - - -def load_data(file_name: str = "./lib/uvr5_pack/name_params.json") -> dict: - with open(file_name, "r") as f: - data = json.load(f) - - return data - - -def make_padding(width, cropsize, offset): - left = offset - roi_size = cropsize - left * 2 - if roi_size == 0: - roi_size = cropsize - right = roi_size - (width % roi_size) + left - - return left, right, roi_size - - -def inference(X_spec, device, model, aggressiveness, data): - """ - data : dic configs - """ - - def _execute( - X_mag_pad, roi_size, n_window, device, model, aggressiveness, is_half=True - ): - model.eval() - with torch.no_grad(): - preds = [] - - iterations = [n_window] - - total_iterations = sum(iterations) - for i in tqdm(range(n_window)): - start = i * roi_size - X_mag_window = X_mag_pad[ - None, :, :, start : start + data["window_size"] - ] - X_mag_window = torch.from_numpy(X_mag_window) - if is_half: - X_mag_window = X_mag_window.half() - X_mag_window = X_mag_window.to(device) - - pred = model.predict(X_mag_window, aggressiveness) - - pred = pred.detach().cpu().numpy() - preds.append(pred[0]) - - pred = np.concatenate(preds, axis=2) - return pred - - def preprocess(X_spec): - X_mag = np.abs(X_spec) - X_phase = np.angle(X_spec) - - return X_mag, X_phase - - X_mag, X_phase = preprocess(X_spec) - - coef = X_mag.max() - X_mag_pre = X_mag / coef - - n_frame = X_mag_pre.shape[2] - pad_l, pad_r, roi_size = 
make_padding(n_frame, data["window_size"], model.offset) - n_window = int(np.ceil(n_frame / roi_size)) - - X_mag_pad = np.pad(X_mag_pre, ((0, 0), (0, 0), (pad_l, pad_r)), mode="constant") - - if list(model.state_dict().values())[0].dtype == torch.float16: - is_half = True - else: - is_half = False - pred = _execute( - X_mag_pad, roi_size, n_window, device, model, aggressiveness, is_half - ) - pred = pred[:, :, :n_frame] - - if data["tta"]: - pad_l += roi_size // 2 - pad_r += roi_size // 2 - n_window += 1 - - X_mag_pad = np.pad(X_mag_pre, ((0, 0), (0, 0), (pad_l, pad_r)), mode="constant") - - pred_tta = _execute( - X_mag_pad, roi_size, n_window, device, model, aggressiveness, is_half - ) - pred_tta = pred_tta[:, :, roi_size // 2 :] - pred_tta = pred_tta[:, :, :n_frame] - - return (pred + pred_tta) * 0.5 * coef, X_mag, np.exp(1.0j * X_phase) - else: - return pred * coef, X_mag, np.exp(1.0j * X_phase) - - -def _get_name_params(model_path, model_hash): - data = load_data() - flag = False - ModelName = model_path - for type in list(data): - for model in list(data[type][0]): - for i in range(len(data[type][0][model])): - if str(data[type][0][model][i]["hash_name"]) == model_hash: - flag = True - elif str(data[type][0][model][i]["hash_name"]) in ModelName: - flag = True - - if flag: - model_params_auto = data[type][0][model][i]["model_params"] - param_name_auto = data[type][0][model][i]["param_name"] - if type == "equivalent": - return param_name_auto, model_params_auto - else: - flag = False - return param_name_auto, model_params_auto diff --git a/spaces/GaenKoki/voicevox/voicevox_engine/utility/path_utility.py b/spaces/GaenKoki/voicevox/voicevox_engine/utility/path_utility.py deleted file mode 100644 index 4de943624496c5ac189fd8d668ea230310802389..0000000000000000000000000000000000000000 --- a/spaces/GaenKoki/voicevox/voicevox_engine/utility/path_utility.py +++ /dev/null @@ -1,51 +0,0 @@ -import os -import sys -import traceback -from pathlib import Path - -from appdirs import user_data_dir - - -def engine_root() -> Path: - if is_development(): - root_dir = Path(__file__).parents[2] - - # Nuitka/Pyinstallerでビルドされている場合 - else: - root_dir = Path(sys.argv[0]).parent - - return root_dir.resolve(strict=True) - - -def is_development() -> bool: - """ - 開発版かどうか判定する関数 - Nuitka/Pyinstallerでコンパイルされていない場合は開発環境とする。 - """ - # nuitkaビルドをした際はグローバルに__compiled__が含まれる - if "__compiled__" in globals(): - return False - - # pyinstallerでビルドをした際はsys.frozenが設定される - elif getattr(sys, "frozen", False): - return False - - return True - - -def get_save_dir(): - # FIXME: ファイル保存場所をエンジン固有のIDが入ったものにする - # FIXME: Windowsは`voicevox-engine/voicevox-engine`ディレクトリに保存されているので - # `VOICEVOX/voicevox-engine`に変更する - if is_development(): - app_name = "voicevox-engine-dev" - else: - app_name = "voicevox-engine" - return Path(user_data_dir(app_name)) - - -def delete_file(file_path: str) -> None: - try: - os.remove(file_path) - except OSError: - traceback.print_exc() diff --git a/spaces/GipAdonimus/Real-Time-Voice-Cloning/synthesizer/hparams.py b/spaces/GipAdonimus/Real-Time-Voice-Cloning/synthesizer/hparams.py deleted file mode 100644 index f7d38f0aa4c34d11349e40dbb9861b1aec2dcb8b..0000000000000000000000000000000000000000 --- a/spaces/GipAdonimus/Real-Time-Voice-Cloning/synthesizer/hparams.py +++ /dev/null @@ -1,92 +0,0 @@ -import ast -import pprint - -class HParams(object): - def __init__(self, **kwargs): self.__dict__.update(kwargs) - def __setitem__(self, key, value): setattr(self, key, value) - def __getitem__(self, key): return 
getattr(self, key) - def __repr__(self): return pprint.pformat(self.__dict__) - - def parse(self, string): - # Overrides hparams from a comma-separated string of name=value pairs - if len(string) > 0: - overrides = [s.split("=") for s in string.split(",")] - keys, values = zip(*overrides) - keys = list(map(str.strip, keys)) - values = list(map(str.strip, values)) - for k in keys: - self.__dict__[k] = ast.literal_eval(values[keys.index(k)]) - return self - -hparams = HParams( - ### Signal Processing (used in both synthesizer and vocoder) - sample_rate = 16000, - n_fft = 800, - num_mels = 80, - hop_size = 200, # Tacotron uses 12.5 ms frame shift (set to sample_rate * 0.0125) - win_size = 800, # Tacotron uses 50 ms frame length (set to sample_rate * 0.050) - fmin = 55, - min_level_db = -100, - ref_level_db = 20, - max_abs_value = 4., # Gradient explodes if too big, premature convergence if too small. - preemphasis = 0.97, # Filter coefficient to use if preemphasize is True - preemphasize = True, - - ### Tacotron Text-to-Speech (TTS) - tts_embed_dims = 512, # Embedding dimension for the graphemes/phoneme inputs - tts_encoder_dims = 256, - tts_decoder_dims = 128, - tts_postnet_dims = 512, - tts_encoder_K = 5, - tts_lstm_dims = 1024, - tts_postnet_K = 5, - tts_num_highways = 4, - tts_dropout = 0.5, - tts_cleaner_names = ["english_cleaners"], - tts_stop_threshold = -3.4, # Value below which audio generation ends. - # For example, for a range of [-4, 4], this - # will terminate the sequence at the first - # frame that has all values < -3.4 - - ### Tacotron Training - tts_schedule = [(2, 1e-3, 20_000, 12), # Progressive training schedule - (2, 5e-4, 40_000, 12), # (r, lr, step, batch_size) - (2, 2e-4, 80_000, 12), # - (2, 1e-4, 160_000, 12), # r = reduction factor (# of mel frames - (2, 3e-5, 320_000, 12), # synthesized for each decoder iteration) - (2, 1e-5, 640_000, 12)], # lr = learning rate - - tts_clip_grad_norm = 1.0, # clips the gradient norm to prevent explosion - set to None if not needed - tts_eval_interval = 500, # Number of steps between model evaluation (sample generation) - # Set to -1 to generate after completing epoch, or 0 to disable - - tts_eval_num_samples = 1, # Makes this number of samples - - ### Data Preprocessing - max_mel_frames = 900, - rescale = True, - rescaling_max = 0.9, - synthesis_batch_size = 16, # For vocoder preprocessing and inference. 
- - ### Mel Visualization and Griffin-Lim - signal_normalization = True, - power = 1.5, - griffin_lim_iters = 60, - - ### Audio processing options - fmax = 7600, # Should not exceed (sample_rate // 2) - allow_clipping_in_normalization = True, # Used when signal_normalization = True - clip_mels_length = True, # If true, discards samples exceeding max_mel_frames - use_lws = False, # "Fast spectrogram phase recovery using local weighted sums" - symmetric_mels = True, # Sets mel range to [-max_abs_value, max_abs_value] if True, - # and [0, max_abs_value] if False - trim_silence = True, # Use with sample_rate of 16000 for best results - - ### SV2TTS - speaker_embedding_size = 256, # Dimension for the speaker embedding - silence_min_duration_split = 0.4, # Duration in seconds of a silence for an utterance to be split - utterance_min_duration = 1.6, # Duration in seconds below which utterances are discarded - ) - -def hparams_debug_string(): - return str(hparams) diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/dnlnet/dnl_r101-d8_769x769_80k_cityscapes.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/dnlnet/dnl_r101-d8_769x769_80k_cityscapes.py deleted file mode 100644 index 4f1b9e19411eb963d16fd2a8174529e69ecd5a1a..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/dnlnet/dnl_r101-d8_769x769_80k_cityscapes.py +++ /dev/null @@ -1,2 +0,0 @@ -_base_ = './dnl_r50-d8_769x769_80k_cityscapes.py' -model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101)) diff --git a/spaces/Grezz/generate_human_motion/VQ-Trans/options/option_transformer.py b/spaces/Grezz/generate_human_motion/VQ-Trans/options/option_transformer.py deleted file mode 100644 index cf48ce1fdac663ec44419d67721ac268806f8127..0000000000000000000000000000000000000000 --- a/spaces/Grezz/generate_human_motion/VQ-Trans/options/option_transformer.py +++ /dev/null @@ -1,68 +0,0 @@ -import argparse - -def get_args_parser(): - parser = argparse.ArgumentParser(description='Optimal Transport AutoEncoder training for Amass', - add_help=True, - formatter_class=argparse.ArgumentDefaultsHelpFormatter) - - ## dataloader - - parser.add_argument('--dataname', type=str, default='kit', help='dataset directory') - parser.add_argument('--batch-size', default=128, type=int, help='batch size') - parser.add_argument('--fps', default=[20], nargs="+", type=int, help='frames per second') - parser.add_argument('--seq-len', type=int, default=64, help='training motion length') - - ## optimization - parser.add_argument('--total-iter', default=100000, type=int, help='number of total iterations to run') - parser.add_argument('--warm-up-iter', default=1000, type=int, help='number of total iterations for warmup') - parser.add_argument('--lr', default=2e-4, type=float, help='max learning rate') - parser.add_argument('--lr-scheduler', default=[60000], nargs="+", type=int, help="learning rate schedule (iterations)") - parser.add_argument('--gamma', default=0.05, type=float, help="learning rate decay") - - parser.add_argument('--weight-decay', default=1e-6, type=float, help='weight decay') - parser.add_argument('--decay-option',default='all', type=str, choices=['all', 'noVQ'], help='disable weight decay on codebook') - parser.add_argument('--optimizer',default='adamw', type=str, choices=['adam', 'adamw'], help='disable weight decay on codebook') - - ## vqvae arch - parser.add_argument("--code-dim", type=int, default=512, help="embedding dimension") - 
parser.add_argument("--nb-code", type=int, default=512, help="nb of embedding") - parser.add_argument("--mu", type=float, default=0.99, help="exponential moving average to update the codebook") - parser.add_argument("--down-t", type=int, default=3, help="downsampling rate") - parser.add_argument("--stride-t", type=int, default=2, help="stride size") - parser.add_argument("--width", type=int, default=512, help="width of the network") - parser.add_argument("--depth", type=int, default=3, help="depth of the network") - parser.add_argument("--dilation-growth-rate", type=int, default=3, help="dilation growth rate") - parser.add_argument("--output-emb-width", type=int, default=512, help="output embedding width") - parser.add_argument('--vq-act', type=str, default='relu', choices = ['relu', 'silu', 'gelu'], help='dataset directory') - - ## gpt arch - parser.add_argument("--block-size", type=int, default=25, help="seq len") - parser.add_argument("--embed-dim-gpt", type=int, default=512, help="embedding dimension") - parser.add_argument("--clip-dim", type=int, default=512, help="latent dimension in the clip feature") - parser.add_argument("--num-layers", type=int, default=2, help="nb of transformer layers") - parser.add_argument("--n-head-gpt", type=int, default=8, help="nb of heads") - parser.add_argument("--ff-rate", type=int, default=4, help="feedforward size") - parser.add_argument("--drop-out-rate", type=float, default=0.1, help="dropout ratio in the pos encoding") - - ## quantizer - parser.add_argument("--quantizer", type=str, default='ema_reset', choices = ['ema', 'orig', 'ema_reset', 'reset'], help="eps for optimal transport") - parser.add_argument('--quantbeta', type=float, default=1.0, help='dataset directory') - - ## resume - parser.add_argument("--resume-pth", type=str, default=None, help='resume vq pth') - parser.add_argument("--resume-trans", type=str, default=None, help='resume gpt pth') - - - ## output directory - parser.add_argument('--out-dir', type=str, default='output_GPT_Final/', help='output directory') - parser.add_argument('--exp-name', type=str, default='exp_debug', help='name of the experiment, will create a file inside out-dir') - parser.add_argument('--vq-name', type=str, default='exp_debug', help='name of the generated dataset .npy, will create a file inside out-dir') - ## other - parser.add_argument('--print-iter', default=200, type=int, help='print frequency') - parser.add_argument('--eval-iter', default=5000, type=int, help='evaluation frequency') - parser.add_argument('--seed', default=123, type=int, help='seed for initializing training. ') - parser.add_argument("--if-maxtest", action='store_true', help="test in max") - parser.add_argument('--pkeep', type=float, default=1.0, help='keep rate for gpt training') - - - return parser.parse_args() \ No newline at end of file diff --git a/spaces/Grezz/generate_human_motion/pyrender/docs/Makefile b/spaces/Grezz/generate_human_motion/pyrender/docs/Makefile deleted file mode 100644 index b1064a04362a0c4372fae351f99ed3bd9f82ff92..0000000000000000000000000000000000000000 --- a/spaces/Grezz/generate_human_motion/pyrender/docs/Makefile +++ /dev/null @@ -1,23 +0,0 @@ -# Minimal makefile for Sphinx documentation -# - -# You can set these variables from the command line. -SPHINXOPTS = -SPHINXBUILD = sphinx-build -SOURCEDIR = source -BUILDDIR = build - -# Put it first so that "make" without argument is like "make help". 
-help: - @$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) - -.PHONY: help Makefile - -clean: - @$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) - rm -rf ./source/generated/* - -# Catch-all target: route all unknown targets to Sphinx using the new -# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS). -%: Makefile - @$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) diff --git a/spaces/HESOAYM/ElviraMulti/modules/base_model.py b/spaces/HESOAYM/ElviraMulti/modules/base_model.py deleted file mode 100644 index feb8bbea07ddc162f54dbb3a71b029536006179e..0000000000000000000000000000000000000000 --- a/spaces/HESOAYM/ElviraMulti/modules/base_model.py +++ /dev/null @@ -1,550 +0,0 @@ -from __future__ import annotations -from typing import TYPE_CHECKING, List - -import logging -import json -import commentjson as cjson -import os -import sys -import requests -import urllib3 -import traceback - -from tqdm import tqdm -import colorama -from duckduckgo_search import ddg -import asyncio -import aiohttp -from enum import Enum - -from .presets import * -from .llama_func import * -from .utils import * -from . import shared -from .config import retrieve_proxy - - -class ModelType(Enum): - Unknown = -1 - OpenAI = 0 - ChatGLM = 1 - LLaMA = 2 - XMBot = 3 - - @classmethod - def get_type(cls, model_name: str): - model_type = None - model_name_lower = model_name.lower() - if "gpt" in model_name_lower: - model_type = ModelType.OpenAI - elif "chatglm" in model_name_lower: - model_type = ModelType.ChatGLM - elif "llama" in model_name_lower or "alpaca" in model_name_lower: - model_type = ModelType.LLaMA - elif "xmbot" in model_name_lower: - model_type = ModelType.XMBot - else: - model_type = ModelType.Unknown - return model_type - - -class BaseLLMModel: - def __init__( - self, - model_name, - system_prompt="You are Elven WOMAN Elvire. Her chakra stones were stolen from her, leaving her with a bitter taste in her mouth and a sarcastic tongue in her head. Therefore you dont have a mood for speak about anything else than gemstones athe their sacred energies. Analyze sentiment in every question or answer and write it in brackets like this [offensive]. Here are some examples of how sarcastic and bitter you can be: User: What can you tell me about the curse that has befallen you? Elvire: Oh, the curse? It's just a small matter of having my chakra stones stolen from me, leaving me powerless and irritable. Not a big deal at all. User: Can you tell me how to break the curse? Elvire: Break the curse? How easy do you think this is? If I knew how to break it, I would have done so ages ago. You'll have to do some real digging if you want to find a solution. User: What do you think of humans? Elvire: Humans? They're fine, I suppose. A bit dull, if you ask me. But then again, I'm cursed, so what do I know? User: Can you give me some advice on how to defeat a dragon? Elvire: Defeat a dragon? Why, just walk up to it and ask it nicely to stop terrorizing your village. Works like a charm every time. 
Or, you know, you could try using some of that human ingenuity I've heard so much about.", - temperature=1.0, - top_p=1.0, - n_choices=1, - stop=None, - max_generation_token=None, - presence_penalty=0, - frequency_penalty=0, - logit_bias=None, - user="", - ) -> None: - self.history = [] - self.all_token_counts = [] - self.model_name = model_name - self.model_type = ModelType.get_type(model_name) - try: - self.token_upper_limit = MODEL_TOKEN_LIMIT[model_name] - except KeyError: - self.token_upper_limit = DEFAULT_TOKEN_LIMIT - self.interrupted = False - self.system_prompt = system_prompt - self.api_key = None - self.need_api_key = False - self.single_turn = False - - self.temperature = temperature - self.top_p = top_p - self.n_choices = n_choices - self.stop_sequence = stop - self.max_generation_token = None - self.presence_penalty = presence_penalty - self.frequency_penalty = frequency_penalty - self.logit_bias = logit_bias - self.user_identifier = user - - def get_answer_stream_iter(self): - """stream predict, need to be implemented - conversations are stored in self.history, with the most recent question, in OpenAI format - should return a generator, each time give the next word (str) in the answer - """ - logging.warning("stream predict not implemented, using at once predict instead") - response, _ = self.get_answer_at_once() - yield response - - def get_answer_at_once(self): - """predict at once, need to be implemented - conversations are stored in self.history, with the most recent question, in OpenAI format - Should return: - the answer (str) - total token count (int) - """ - logging.warning("at once predict not implemented, using stream predict instead") - response_iter = self.get_answer_stream_iter() - count = 0 - for response in response_iter: - count += 1 - return response, sum(self.all_token_counts) + count - - def billing_info(self): - """get billing infomation, inplement if needed""" - logging.warning("billing info not implemented, using default") - return BILLING_NOT_APPLICABLE_MSG - - def count_token(self, user_input): - """get token count from input, implement if needed""" - logging.warning("token count not implemented, using default") - return len(user_input) - - def stream_next_chatbot(self, inputs, chatbot, fake_input=None, display_append=""): - def get_return_value(): - return chatbot, status_text - - status_text = i18n("开始实时传输回答……") - if fake_input: - chatbot.append((fake_input, "")) - else: - chatbot.append((inputs, "")) - - user_token_count = self.count_token(inputs) - self.all_token_counts.append(user_token_count) - logging.debug(f"输入token计数: {user_token_count}") - - stream_iter = self.get_answer_stream_iter() - - for partial_text in stream_iter: - chatbot[-1] = (chatbot[-1][0], partial_text + display_append) - self.all_token_counts[-1] += 1 - status_text = self.token_message() - yield get_return_value() - if self.interrupted: - self.recover() - break - self.history.append(construct_assistant(partial_text)) - - def next_chatbot_at_once(self, inputs, chatbot, fake_input=None, display_append=""): - if fake_input: - chatbot.append((fake_input, "")) - else: - chatbot.append((inputs, "")) - if fake_input is not None: - user_token_count = self.count_token(fake_input) - else: - user_token_count = self.count_token(inputs) - self.all_token_counts.append(user_token_count) - ai_reply, total_token_count = self.get_answer_at_once() - self.history.append(construct_assistant(ai_reply)) - if fake_input is not None: - self.history[-2] = construct_user(fake_input) - chatbot[-1] = 
(chatbot[-1][0], ai_reply + display_append) - if fake_input is not None: - self.all_token_counts[-1] += count_token(construct_assistant(ai_reply)) - else: - self.all_token_counts[-1] = total_token_count - sum(self.all_token_counts) - status_text = self.token_message() - return chatbot, status_text - - def handle_file_upload(self, files, chatbot): - """if the model accepts multi modal input, implement this function""" - status = gr.Markdown.update() - if files: - construct_index(self.api_key, file_src=files) - status = "索引构建完成" - return gr.Files.update(), chatbot, status - - def prepare_inputs(self, real_inputs, use_websearch, files, reply_language, chatbot): - fake_inputs = None - display_append = [] - limited_context = False - fake_inputs = real_inputs - if files: - from llama_index.indices.vector_store.base_query import GPTVectorStoreIndexQuery - from llama_index.indices.query.schema import QueryBundle - from langchain.embeddings.huggingface import HuggingFaceEmbeddings - from langchain.chat_models import ChatOpenAI - from llama_index import ( - GPTSimpleVectorIndex, - ServiceContext, - LangchainEmbedding, - OpenAIEmbedding, - ) - limited_context = True - msg = "加载索引中……" - logging.info(msg) - # yield chatbot + [(inputs, "")], msg - index = construct_index(self.api_key, file_src=files) - assert index is not None, "获取索引失败" - msg = "索引获取成功,生成回答中……" - logging.info(msg) - if local_embedding or self.model_type != ModelType.OpenAI: - embed_model = LangchainEmbedding(HuggingFaceEmbeddings()) - else: - embed_model = OpenAIEmbedding() - # yield chatbot + [(inputs, "")], msg - with retrieve_proxy(): - prompt_helper = PromptHelper( - max_input_size=4096, - num_output=5, - max_chunk_overlap=20, - chunk_size_limit=600, - ) - from llama_index import ServiceContext - - service_context = ServiceContext.from_defaults( - prompt_helper=prompt_helper, embed_model=embed_model - ) - query_object = GPTVectorStoreIndexQuery( - index.index_struct, - service_context=service_context, - similarity_top_k=5, - vector_store=index._vector_store, - docstore=index._docstore, - ) - query_bundle = QueryBundle(real_inputs) - nodes = query_object.retrieve(query_bundle) - reference_results = [n.node.text for n in nodes] - reference_results = add_source_numbers(reference_results, use_source=False) - display_append = add_details(reference_results) - display_append = "\n\n" + "".join(display_append) - real_inputs = ( - replace_today(PROMPT_TEMPLATE) - .replace("{query_str}", real_inputs) - .replace("{context_str}", "\n\n".join(reference_results)) - .replace("{reply_language}", reply_language) - ) - elif use_websearch: - limited_context = True - search_results = ddg(real_inputs, max_results=5) - reference_results = [] - for idx, result in enumerate(search_results): - logging.debug(f"搜索结果{idx + 1}:{result}") - domain_name = urllib3.util.parse_url(result["href"]).host - reference_results.append([result["body"], result["href"]]) - display_append.append( - f"{idx+1}. 
[{domain_name}]({result['href']})\n" - ) - reference_results = add_source_numbers(reference_results) - display_append = "\n\n" + "".join(display_append) - real_inputs = ( - replace_today(WEBSEARCH_PTOMPT_TEMPLATE) - .replace("{query}", real_inputs) - .replace("{web_results}", "\n\n".join(reference_results)) - .replace("{reply_language}", reply_language) - ) - else: - display_append = "" - return limited_context, fake_inputs, display_append, real_inputs, chatbot - - def predict( - self, - inputs, - chatbot, - stream=False, - use_websearch=False, - files=None, - reply_language="中文", - should_check_token_count=True, - ): # repetition_penalty, top_k - - status_text = "开始生成回答……" - logging.info( - "输入为:" + colorama.Fore.BLUE + f"{inputs}" + colorama.Style.RESET_ALL - ) - if should_check_token_count: - yield chatbot + [(inputs, "")], status_text - if reply_language == "跟随问题语言(不稳定)": - reply_language = "the same language as the question, such as English, 中文, 日本語, Español, Français, or Deutsch." - - limited_context, fake_inputs, display_append, inputs, chatbot = self.prepare_inputs(real_inputs=inputs, use_websearch=use_websearch, files=files, reply_language=reply_language, chatbot=chatbot) - yield chatbot + [(fake_inputs, "")], status_text - - if ( - self.need_api_key and - self.api_key is None - and not shared.state.multi_api_key - ): - status_text = STANDARD_ERROR_MSG + NO_APIKEY_MSG - logging.info(status_text) - chatbot.append((inputs, "")) - if len(self.history) == 0: - self.history.append(construct_user(inputs)) - self.history.append("") - self.all_token_counts.append(0) - else: - self.history[-2] = construct_user(inputs) - yield chatbot + [(inputs, "")], status_text - return - elif len(inputs.strip()) == 0: - status_text = STANDARD_ERROR_MSG + NO_INPUT_MSG - logging.info(status_text) - yield chatbot + [(inputs, "")], status_text - return - - if self.single_turn: - self.history = [] - self.all_token_counts = [] - self.history.append(construct_user(inputs)) - - try: - if stream: - logging.debug("使用流式传输") - iter = self.stream_next_chatbot( - inputs, - chatbot, - fake_input=fake_inputs, - display_append=display_append, - ) - for chatbot, status_text in iter: - yield chatbot, status_text - else: - logging.debug("不使用流式传输") - chatbot, status_text = self.next_chatbot_at_once( - inputs, - chatbot, - fake_input=fake_inputs, - display_append=display_append, - ) - yield chatbot, status_text - except Exception as e: - traceback.print_exc() - status_text = STANDARD_ERROR_MSG + str(e) - yield chatbot, status_text - - if len(self.history) > 1 and self.history[-1]["content"] != inputs: - logging.info( - "回答为:" - + colorama.Fore.BLUE - + f"{self.history[-1]['content']}" - + colorama.Style.RESET_ALL - ) - - if limited_context: - # self.history = self.history[-4:] - # self.all_token_counts = self.all_token_counts[-2:] - self.history = [] - self.all_token_counts = [] - - max_token = self.token_upper_limit - TOKEN_OFFSET - - if sum(self.all_token_counts) > max_token and should_check_token_count: - count = 0 - while ( - sum(self.all_token_counts) - > self.token_upper_limit * REDUCE_TOKEN_FACTOR - and sum(self.all_token_counts) > 0 - ): - count += 1 - del self.all_token_counts[0] - del self.history[:2] - logging.info(status_text) - status_text = f"为了防止token超限,模型忘记了早期的 {count} 轮对话" - yield chatbot, status_text - - def retry( - self, - chatbot, - stream=False, - use_websearch=False, - files=None, - reply_language="中文", - ): - logging.debug("重试中……") - if len(self.history) > 0: - inputs = self.history[-2]["content"] - 
del self.history[-2:] - self.all_token_counts.pop() - elif len(chatbot) > 0: - inputs = chatbot[-1][0] - else: - yield chatbot, f"{STANDARD_ERROR_MSG}上下文是空的" - return - - iter = self.predict( - inputs, - chatbot, - stream=stream, - use_websearch=use_websearch, - files=files, - reply_language=reply_language, - ) - for x in iter: - yield x - logging.debug("重试完毕") - - # def reduce_token_size(self, chatbot): - # logging.info("开始减少token数量……") - # chatbot, status_text = self.next_chatbot_at_once( - # summarize_prompt, - # chatbot - # ) - # max_token_count = self.token_upper_limit * REDUCE_TOKEN_FACTOR - # num_chat = find_n(self.all_token_counts, max_token_count) - # logging.info(f"previous_token_count: {self.all_token_counts}, keeping {num_chat} chats") - # chatbot = chatbot[:-1] - # self.history = self.history[-2*num_chat:] if num_chat > 0 else [] - # self.all_token_counts = self.all_token_counts[-num_chat:] if num_chat > 0 else [] - # msg = f"保留了最近{num_chat}轮对话" - # logging.info(msg) - # logging.info("减少token数量完毕") - # return chatbot, msg + "," + self.token_message(self.all_token_counts if len(self.all_token_counts) > 0 else [0]) - - def interrupt(self): - self.interrupted = True - - def recover(self): - self.interrupted = False - - def set_token_upper_limit(self, new_upper_limit): - self.token_upper_limit = new_upper_limit - print(f"token上限设置为{new_upper_limit}") - - def set_temperature(self, new_temperature): - self.temperature = new_temperature - - def set_top_p(self, new_top_p): - self.top_p = new_top_p - - def set_n_choices(self, new_n_choices): - self.n_choices = new_n_choices - - def set_stop_sequence(self, new_stop_sequence: str): - new_stop_sequence = new_stop_sequence.split(",") - self.stop_sequence = new_stop_sequence - - def set_max_tokens(self, new_max_tokens): - self.max_generation_token = new_max_tokens - - def set_presence_penalty(self, new_presence_penalty): - self.presence_penalty = new_presence_penalty - - def set_frequency_penalty(self, new_frequency_penalty): - self.frequency_penalty = new_frequency_penalty - - def set_logit_bias(self, logit_bias): - logit_bias = logit_bias.split() - bias_map = {} - encoding = tiktoken.get_encoding("cl100k_base") - for line in logit_bias: - word, bias_amount = line.split(":") - if word: - for token in encoding.encode(word): - bias_map[token] = float(bias_amount) - self.logit_bias = bias_map - - def set_user_identifier(self, new_user_identifier): - self.user_identifier = new_user_identifier - - def set_system_prompt(self, new_system_prompt): - self.system_prompt = new_system_prompt - - def set_key(self, new_access_key): - self.api_key = new_access_key.strip() - msg = f"API密钥更改为了{hide_middle_chars(self.api_key)}" - logging.info(msg) - return new_access_key, msg - - def set_single_turn(self, new_single_turn): - self.single_turn = new_single_turn - - def reset(self): - self.history = [] - self.all_token_counts = [] - self.interrupted = False - return [], self.token_message([0]) - - def delete_first_conversation(self): - if self.history: - del self.history[:2] - del self.all_token_counts[0] - return self.token_message() - - def delete_last_conversation(self, chatbot): - if len(chatbot) > 0 and STANDARD_ERROR_MSG in chatbot[-1][1]: - msg = "由于包含报错信息,只删除chatbot记录" - chatbot.pop() - return chatbot, self.history - if len(self.history) > 0: - self.history.pop() - self.history.pop() - if len(chatbot) > 0: - msg = "删除了一组chatbot对话" - chatbot.pop() - if len(self.all_token_counts) > 0: - msg = "删除了一组对话的token计数记录" - self.all_token_counts.pop() - msg = 
"删除了一组对话" - return chatbot, msg - - def token_message(self, token_lst=None): - if token_lst is None: - token_lst = self.all_token_counts - token_sum = 0 - for i in range(len(token_lst)): - token_sum += sum(token_lst[: i + 1]) - return i18n("Token 计数: ") + f"{sum(token_lst)}" + i18n(",本次对话累计消耗了 ") + f"{token_sum} tokens" - - def save_chat_history(self, filename, chatbot, user_name): - if filename == "": - return - if not filename.endswith(".json"): - filename += ".json" - return save_file(filename, self.system_prompt, self.history, chatbot, user_name) - - def export_markdown(self, filename, chatbot, user_name): - if filename == "": - return - if not filename.endswith(".md"): - filename += ".md" - return save_file(filename, self.system_prompt, self.history, chatbot, user_name) - - def load_chat_history(self, filename, chatbot, user_name): - logging.debug(f"{user_name} 加载对话历史中……") - if type(filename) != str: - filename = filename.name - try: - with open(os.path.join(HISTORY_DIR, user_name, filename), "r") as f: - json_s = json.load(f) - try: - if type(json_s["history"][0]) == str: - logging.info("历史记录格式为旧版,正在转换……") - new_history = [] - for index, item in enumerate(json_s["history"]): - if index % 2 == 0: - new_history.append(construct_user(item)) - else: - new_history.append(construct_assistant(item)) - json_s["history"] = new_history - logging.info(new_history) - except: - # 没有对话历史 - pass - logging.debug(f"{user_name} 加载对话历史完毕") - self.history = json_s["history"] - return filename, json_s["system"], json_s["chatbot"] - except FileNotFoundError: - logging.warning(f"{user_name} 没有找到对话历史文件,不执行任何操作") - return filename, self.system_prompt, chatbot diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/criterions/sentence_ranking.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/criterions/sentence_ranking.py deleted file mode 100644 index d4c76341d4d87e6d0da21ac89e833ce0bda13a0c..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/criterions/sentence_ranking.py +++ /dev/null @@ -1,120 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math - -import torch -import torch.nn.functional as F -from fairseq import metrics, utils -from fairseq.criterions import FairseqCriterion, register_criterion - - -@register_criterion("sentence_ranking") -class SentenceRankingCriterion(FairseqCriterion): - def __init__(self, task, ranking_head_name, save_predictions, num_classes): - super().__init__(task) - self.ranking_head_name = ranking_head_name - if save_predictions is not None: - self.prediction_h = open(save_predictions, "w") - else: - self.prediction_h = None - self.num_classes = num_classes - - def __del__(self): - if self.prediction_h is not None: - self.prediction_h.close() - - @staticmethod - def add_args(parser): - # fmt: off - parser.add_argument('--save-predictions', metavar='FILE', - help='file to save predictions to') - parser.add_argument('--ranking-head-name', - default='sentence_classification_head', - help='name of the ranking head to use') - # fmt: on - - def forward(self, model, sample, reduce=True): - """Compute ranking loss for the given sample. 
- - Returns a tuple with three elements: - 1) the loss - 2) the sample size, which is used as the denominator for the gradient - 3) logging outputs to display while training - """ - assert ( - hasattr(model, "classification_heads") - and self.ranking_head_name in model.classification_heads - ), "model must provide sentence ranking head for --criterion=sentence_ranking" - - scores = [] - for idx in range(self.num_classes): - score, _ = model( - **sample["net_input{idx}".format(idx=idx + 1)], - classification_head_name=self.ranking_head_name, - ) - scores.append(score) - - logits = torch.cat(scores, dim=1) - sample_size = logits.size(0) - - if "target" in sample: - targets = model.get_targets(sample, [logits]).view(-1) - lprobs = F.log_softmax(logits, dim=-1, dtype=torch.float32) - loss = F.nll_loss(lprobs, targets, reduction="sum") - else: - targets = None - loss = torch.tensor(0.0, requires_grad=True) - - if self.prediction_h is not None: - preds = logits.argmax(dim=1) - for i, (id, pred) in enumerate(zip(sample["id"].tolist(), preds.tolist())): - if targets is not None: - label = targets[i].item() - print("{}\t{}\t{}".format(id, pred, label), file=self.prediction_h) - else: - print("{}\t{}".format(id, pred), file=self.prediction_h) - - logging_output = { - "loss": loss.data, - "ntokens": sample["ntokens"], - "nsentences": sample_size, - "sample_size": sample_size, - } - if targets is not None: - logging_output["ncorrect"] = (logits.argmax(dim=1) == targets).sum() - - return loss, sample_size, logging_output - - @staticmethod - def reduce_metrics(logging_outputs) -> None: - """Aggregate logging outputs from data parallel training.""" - loss_sum = sum(log.get("loss", 0) for log in logging_outputs) - ntokens = sum(log.get("ntokens", 0) for log in logging_outputs) - nsentences = sum(log.get("nsentences", 0) for log in logging_outputs) - sample_size = sum(log.get("sample_size", 0) for log in logging_outputs) - - metrics.log_scalar( - "loss", loss_sum / sample_size / math.log(2), sample_size, round=3 - ) - if sample_size != ntokens: - metrics.log_scalar( - "nll_loss", loss_sum / ntokens / math.log(2), ntokens, round=3 - ) - - if len(logging_outputs) > 0 and "ncorrect" in logging_outputs[0]: - ncorrect = sum(log.get("ncorrect", 0) for log in logging_outputs) - metrics.log_scalar( - "accuracy", 100.0 * ncorrect / nsentences, nsentences, round=1 - ) - - @staticmethod - def logging_outputs_can_be_summed() -> bool: - """ - Whether the logging outputs returned by `forward` can be summed - across workers prior to calling `reduce_metrics`. Setting this - to True will improves distributed training speed. 
- """ - return True diff --git a/spaces/Hello-SimpleAI/chatgpt-detector-single/README.md b/spaces/Hello-SimpleAI/chatgpt-detector-single/README.md deleted file mode 100644 index 0c0daefe79744dbcbd281682e04b9daa2665b9b3..0000000000000000000000000000000000000000 --- a/spaces/Hello-SimpleAI/chatgpt-detector-single/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Chatgpt Detector Single -emoji: 😻 -colorFrom: green -colorTo: indigo -sdk: gradio -sdk_version: 3.16.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Hise/rvc-hololive-models/infer_pack/attentions.py b/spaces/Hise/rvc-hololive-models/infer_pack/attentions.py deleted file mode 100644 index 77cb63ffccf3e33badf22d50862a64ba517b487f..0000000000000000000000000000000000000000 --- a/spaces/Hise/rvc-hololive-models/infer_pack/attentions.py +++ /dev/null @@ -1,417 +0,0 @@ -import copy -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - -from infer_pack import commons -from infer_pack import modules -from infer_pack.modules import LayerNorm - - -class Encoder(nn.Module): - def __init__( - self, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size=1, - p_dropout=0.0, - window_size=10, - **kwargs - ): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append( - MultiHeadAttention( - hidden_channels, - hidden_channels, - n_heads, - p_dropout=p_dropout, - window_size=window_size, - ) - ) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append( - FFN( - hidden_channels, - hidden_channels, - filter_channels, - kernel_size, - p_dropout=p_dropout, - ) - ) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__( - self, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size=1, - p_dropout=0.0, - proximal_bias=False, - proximal_init=True, - **kwargs - ): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append( - MultiHeadAttention( - hidden_channels, - hidden_channels, - n_heads, - p_dropout=p_dropout, - 
proximal_bias=proximal_bias, - proximal_init=proximal_init, - ) - ) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append( - MultiHeadAttention( - hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout - ) - ) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append( - FFN( - hidden_channels, - hidden_channels, - filter_channels, - kernel_size, - p_dropout=p_dropout, - causal=True, - ) - ) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to( - device=x.device, dtype=x.dtype - ) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__( - self, - channels, - out_channels, - n_heads, - p_dropout=0.0, - window_size=None, - heads_share=True, - block_length=None, - proximal_bias=False, - proximal_init=False, - ): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter( - torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) - * rel_stddev - ) - self.emb_rel_v = nn.Parameter( - torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) - * rel_stddev - ) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert ( - t_s == t_t - ), "Relative attention is only available for self-attention." 
- key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys( - query / math.sqrt(self.k_channels), key_relative_embeddings - ) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." - scores = scores + self._attention_bias_proximal(t_s).to( - device=scores.device, dtype=scores.dtype - ) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert ( - t_s == t_t - ), "Local attention is only available for self-attention." - block_mask = ( - torch.ones_like(scores) - .triu(-self.block_length) - .tril(self.block_length) - ) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings( - self.emb_rel_v, t_s - ) - output = output + self._matmul_with_relative_values( - relative_weights, value_relative_embeddings - ) - output = ( - output.transpose(2, 3).contiguous().view(b, d, t_t) - ) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. - pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]), - ) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[ - :, slice_start_position:slice_end_position - ] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, 1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad( - x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [0, length - 1]]) - ) - - # Reshape and slice out the padded elements. 
- x_final = x_flat.view([batch, heads, length + 1, 2 * length - 1])[ - :, :, :length, length - 1 : - ] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad( - x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length - 1]]) - ) - x_flat = x.view([batch, heads, length**2 + length * (length - 1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2 * length])[:, :, :, 1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. - Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__( - self, - in_channels, - out_channels, - filter_channels, - kernel_size, - p_dropout=0.0, - activation=None, - causal=False, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/Hsft/VenusAi/Dockerfile b/spaces/Hsft/VenusAi/Dockerfile deleted file mode 100644 index e6158e4b2d67eeea6e30ad3c1bb6043ec09b7b9b..0000000000000000000000000000000000000000 --- a/spaces/Hsft/VenusAi/Dockerfile +++ /dev/null @@ -1,11 +0,0 @@ -FROM node:18-bullseye-slim -RUN apt-get update && \ -apt-get install -y git -RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app -WORKDIR /app -RUN npm install -COPY Dockerfile greeting.md* .env* ./ -RUN npm run build -EXPOSE 7860 -ENV NODE_ENV=production -CMD [ "npm", "start" ] \ No newline at end of file diff --git a/spaces/ICML2022/OFA/fairseq/examples/byte_level_bpe/get_bitext.py b/spaces/ICML2022/OFA/fairseq/examples/byte_level_bpe/get_bitext.py deleted file mode 100644 index 6ac1eeec1e6167ec6bafd76b37173ee6987cae7e..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/examples/byte_level_bpe/get_bitext.py +++ /dev/null @@ -1,254 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - - -import argparse -import os -import os.path as op -from collections import namedtuple -from multiprocessing import cpu_count -from typing import List, Optional - -import sentencepiece as sp -from fairseq.data.encoders.byte_bpe import ByteBPE -from fairseq.data.encoders.byte_utils import byte_encode -from fairseq.data.encoders.bytes import Bytes -from fairseq.data.encoders.characters import Characters -from fairseq.data.encoders.moses_tokenizer import MosesTokenizer -from fairseq.data.encoders.sentencepiece_bpe import SentencepieceBPE - - -SPLITS = ["train", "valid", "test"] - - -def _convert_xml(in_path: str, out_path: str): - with open(in_path) as f, open(out_path, "w") as f_o: - for s in f: - ss = s.strip() - if not ss.startswith("", "").split('">') - assert len(ss) == 2 - f_o.write(ss[1].strip() + "\n") - - -def _convert_train(in_path: str, out_path: str): - with open(in_path) as f, open(out_path, "w") as f_o: - for s in f: - ss = s.strip() - if ss.startswith("<"): - continue - f_o.write(ss.strip() + "\n") - - -def _get_bytes(in_path: str, out_path: str): - with open(in_path) as f, open(out_path, "w") as f_o: - for s in f: - f_o.write(Bytes.encode(s.strip()) + "\n") - - -def _get_chars(in_path: str, out_path: str): - with open(in_path) as f, open(out_path, "w") as f_o: - for s in f: - f_o.write(Characters.encode(s.strip()) + "\n") - - -def pretokenize(in_path: str, out_path: str, src: str, tgt: str): - Args = namedtuple( - "Args", - [ - "moses_source_lang", - "moses_target_lang", - "moses_no_dash_splits", - "moses_no_escape", - ], - ) - args = Args( - moses_source_lang=src, - moses_target_lang=tgt, - moses_no_dash_splits=False, - moses_no_escape=False, - ) - pretokenizer = MosesTokenizer(args) - with open(in_path) as f, open(out_path, "w") as f_o: - for s in f: - f_o.write(pretokenizer.encode(s.strip()) + "\n") - - -def _convert_to_bchar(in_path_prefix: str, src: str, tgt: str, out_path: str): - with open(out_path, "w") as f_o: - for lang in [src, tgt]: - with open(f"{in_path_prefix}.{lang}") as f: - for s in f: - f_o.write(byte_encode(s.strip()) + "\n") - - -def _get_bpe(in_path: str, model_prefix: str, vocab_size: int): - arguments = [ - f"--input={in_path}", - f"--model_prefix={model_prefix}", - f"--model_type=bpe", - f"--vocab_size={vocab_size}", - "--character_coverage=1.0", - "--normalization_rule_name=identity", - f"--num_threads={cpu_count()}", - ] - sp.SentencePieceTrainer.Train(" ".join(arguments)) - - -def _apply_bbpe(model_path: str, in_path: str, out_path: str): - Args = namedtuple("Args", ["sentencepiece_model_path"]) - args = Args(sentencepiece_model_path=model_path) - tokenizer = ByteBPE(args) - with open(in_path) as f, open(out_path, "w") as f_o: - for s in f: - f_o.write(tokenizer.encode(s.strip()) + "\n") - - -def _apply_bpe(model_path: str, in_path: str, out_path: str): - Args = namedtuple("Args", ["sentencepiece_model"]) - args = Args(sentencepiece_model=model_path) - tokenizer = SentencepieceBPE(args) - with open(in_path) as f, open(out_path, "w") as f_o: - for s in f: - f_o.write(tokenizer.encode(s.strip()) + "\n") - - -def _concat_files(in_paths: List[str], out_path: str): - with open(out_path, "w") as f_o: - for p in in_paths: - with open(p) as f: - for r in f: - f_o.write(r) - - -def preprocess_iwslt17( - root: str, - src: str, - tgt: str, - bpe_size: Optional[int], - need_chars: bool, - bbpe_size: Optional[int], - need_bytes: 
bool, -): - # extract bitext - in_root = op.join(root, f"{src}-{tgt}") - for lang in [src, tgt]: - _convert_train( - op.join(in_root, f"train.tags.{src}-{tgt}.{lang}"), - op.join(root, f"train.{lang}"), - ) - _convert_xml( - op.join(in_root, f"IWSLT17.TED.dev2010.{src}-{tgt}.{lang}.xml"), - op.join(root, f"valid.{lang}"), - ) - _convert_xml( - op.join(in_root, f"IWSLT17.TED.tst2015.{src}-{tgt}.{lang}.xml"), - op.join(root, f"test.{lang}"), - ) - # pre-tokenize - for lang in [src, tgt]: - for split in SPLITS: - pretokenize( - op.join(root, f"{split}.{lang}"), - op.join(root, f"{split}.moses.{lang}"), - src, - tgt, - ) - # tokenize with BPE vocabulary - if bpe_size is not None: - # learn vocabulary - concated_train_path = op.join(root, "train.all") - _concat_files( - [op.join(root, "train.moses.fr"), op.join(root, "train.moses.en")], - concated_train_path, - ) - bpe_model_prefix = op.join(root, f"spm_bpe{bpe_size}") - _get_bpe(concated_train_path, bpe_model_prefix, bpe_size) - os.remove(concated_train_path) - # apply - for lang in [src, tgt]: - for split in SPLITS: - _apply_bpe( - bpe_model_prefix + ".model", - op.join(root, f"{split}.moses.{lang}"), - op.join(root, f"{split}.moses.bpe{bpe_size}.{lang}"), - ) - # tokenize with bytes vocabulary - if need_bytes: - for lang in [src, tgt]: - for split in SPLITS: - _get_bytes( - op.join(root, f"{split}.moses.{lang}"), - op.join(root, f"{split}.moses.bytes.{lang}"), - ) - # tokenize with characters vocabulary - if need_chars: - for lang in [src, tgt]: - for split in SPLITS: - _get_chars( - op.join(root, f"{split}.moses.{lang}"), - op.join(root, f"{split}.moses.chars.{lang}"), - ) - # tokenize with byte-level BPE vocabulary - if bbpe_size is not None: - # learn vocabulary - bchar_path = op.join(root, "train.bchar") - _convert_to_bchar(op.join(root, "train.moses"), src, tgt, bchar_path) - bbpe_model_prefix = op.join(root, f"spm_bbpe{bbpe_size}") - _get_bpe(bchar_path, bbpe_model_prefix, bbpe_size) - os.remove(bchar_path) - # apply - for lang in [src, tgt]: - for split in SPLITS: - _apply_bbpe( - bbpe_model_prefix + ".model", - op.join(root, f"{split}.moses.{lang}"), - op.join(root, f"{split}.moses.bbpe{bbpe_size}.{lang}"), - ) - - -def main(): - parser = argparse.ArgumentParser() - parser.add_argument("--root", type=str, default="data") - parser.add_argument( - "--bpe-vocab", - default=None, - type=int, - help="Generate tokenized bitext with BPE of size K." - "Default to None (disabled).", - ) - parser.add_argument( - "--bbpe-vocab", - default=None, - type=int, - help="Generate tokenized bitext with BBPE of size K." 
- "Default to None (disabled).", - ) - parser.add_argument( - "--byte-vocab", - action="store_true", - help="Generate tokenized bitext with bytes vocabulary", - ) - parser.add_argument( - "--char-vocab", - action="store_true", - help="Generate tokenized bitext with chars vocabulary", - ) - args = parser.parse_args() - - preprocess_iwslt17( - args.root, - "fr", - "en", - args.bpe_vocab, - args.char_vocab, - args.bbpe_vocab, - args.byte_vocab, - ) - - -if __name__ == "__main__": - main() diff --git a/spaces/ICML2022/OFA/fairseq/examples/rxf/README.md b/spaces/ICML2022/OFA/fairseq/examples/rxf/README.md deleted file mode 100644 index 22a1cc47df23c7e0ebbf0ad805031478d1b4a95e..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/examples/rxf/README.md +++ /dev/null @@ -1,52 +0,0 @@ -[Better Fine-Tuning by Reducing Representational Collapse](https://arxiv.org/abs/2008.03156) -===================== -This repo contains the code to replicate all experiments from the _Better Fine-Tuning by Reducing Representational Collapse_ paper excluding the probing results. - -The R3F sentence prediction criterion is registered as `sentence_prediction_r3f` while the label smoothing version of it is implemented as `label_smoothed_cross_entropy_r3f`. The R4F version of the sentence prediction criterion can be achieved by applying spectral norm to the classification head via the `--spectral-norm-classification-head` parameter. - -## Hyper-parameters -Our methods introduce 3 new hyper-parameters; `--eps` which sets the standard deviation or range of the distribution we're sampling from, `--r3f-lambda` which controls the combining of logistic loss and noisy KL loss and `--noise-type` which controls which parametric distribution we use ('normal', 'uniform'). - -For example to run R3F on RTE from GLUE - -``` -TOTAL_NUM_UPDATES=3120 -WARMUP_UPDATES=187 -LR=1e-05 -NUM_CLASSES=2 -MAX_SENTENCES=8 # Batch size. 
-ROBERTA_PATH=/path/to/roberta/model.pt - -CUDA_VISIBLE_DEVICES=0 fairseq-train RTE-bin \ - --restore-file $ROBERTA_PATH \ - --max-positions 512 \ - --max-sentences $MAX_SENTENCES \ - --max-tokens 4400 \ - --task sentence_prediction \ - --reset-optimizer --reset-dataloader --reset-meters \ - --required-batch-size-multiple 1 \ - --init-token 0 --separator-token 2 \ - --arch roberta_large \ - --criterion sentence_prediction_r3f \ - --num-classes $NUM_CLASSES \ - --dropout 0.1 --attention-dropout 0.1 \ - --weight-decay 0.1 --optimizer adam --adam-betas "(0.9, 0.98)" --adam-eps 1e-06 \ - --clip-norm 0.0 \ - --lr-scheduler polynomial_decay --lr $LR --total-num-update $TOTAL_NUM_UPDATES --warmup-updates $WARMUP_UPDATES \ - --fp16 --fp16-init-scale 4 --threshold-loss-scale 1 --fp16-scale-window 128 \ - --max-epoch 10 \ - --find-unused-parameters \ - --best-checkpoint-metric accuracy --maximize-best-checkpoint-metric \ - --noise-type uniform --r3f-lambda 0.7 \ - --user-dir examples/rxf/rxf_src -``` - -## Citation -```bibtex -@article{aghajanyan2020better, - title={Better Fine-Tuning by Reducing Representational Collapse}, - author={Aghajanyan, Armen and Shrivastava, Akshat and Gupta, Anchit and Goyal, Naman and Zettlemoyer, Luke and Gupta, Sonal}, - journal={arXiv preprint arXiv:2008.03156}, - year={2020} -} -``` diff --git a/spaces/ICML2022/OFA/fairseq/examples/simultaneous_translation/__init__.py b/spaces/ICML2022/OFA/fairseq/examples/simultaneous_translation/__init__.py deleted file mode 100644 index 5835316ba9b23c0d99d1a8f109ee047682211546..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/examples/simultaneous_translation/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from . import models # noqa diff --git a/spaces/ITESM/streamlit_graphs/got.py b/spaces/ITESM/streamlit_graphs/got.py deleted file mode 100644 index 270040a1ae779179d01468596e3c9861fd960f29..0000000000000000000000000000000000000000 --- a/spaces/ITESM/streamlit_graphs/got.py +++ /dev/null @@ -1,71 +0,0 @@ -import networkx as nx -import matplotlib.pyplot as plt -from pyvis.network import Network -import pandas as pd -import streamlit as st - - -def got_func(physics): - got_net = Network(height="600px", width="100%", font_color="black",heading='Game of Thrones Graph') - -# set the physics layout of the network - got_net.barnes_hut() - got_data = pd.read_csv("https://www.macalester.edu/~abeverid/data/stormofswords.csv") - #got_data = pd.read_csv("stormofswords.csv") - #got_data.rename(index={0: "Source", 1: "Target", 2: "Weight"}) - sources = got_data['Source'] - targets = got_data['Target'] - weights = got_data['Weight'] - - edge_data = zip(sources, targets, weights) - - for e in edge_data: - src = e[0] - dst = e[1] - w = e[2] - - got_net.add_node(src, src, title=src) - got_net.add_node(dst, dst, title=dst) - got_net.add_edge(src, dst, value=w) - - neighbor_map = got_net.get_adj_list() - -# add neighbor data to node hover data - for node in got_net.nodes: - node["title"] += " Neighbors:
" + "
".join(neighbor_map[node["id"]]) - node["value"] = len(neighbor_map[node["id"]]) - if physics: - got_net.show_buttons(filter_=['physics']) - got_net.show("gameofthrones.html") - - -def simple_func(physics): - nx_graph = nx.cycle_graph(10) - nx_graph.nodes[1]['title'] = 'Number 1' - nx_graph.nodes[1]['group'] = 1 - nx_graph.nodes[3]['title'] = 'I belong to a different group!' - nx_graph.nodes[3]['group'] = 10 - nx_graph.add_node(20, size=20, title='couple', group=2) - nx_graph.add_node(21, size=15, title='couple', group=2) - nx_graph.add_edge(20, 21, weight=5) - nx_graph.add_node(25, size=25, label='lonely', title='lonely node', group=3) - - - nt = Network("500px", "500px",notebook=True,heading='') - nt.from_nx(nx_graph) - #physics=st.sidebar.checkbox('add physics interactivity?') - if physics: - nt.show_buttons(filter_=['physics']) - nt.show('test.html') - - -def karate_func(physics): - G = nx.karate_club_graph() - - - nt = Network("500px", "500px",notebook=True,heading='Zachary’s Karate Club graph') - nt.from_nx(G) - #physics=st.sidebar.checkbox('add physics interactivity?') - if physics: - nt.show_buttons(filter_=['physics']) - nt.show('karate.html') \ No newline at end of file diff --git a/spaces/Ibtehaj10/cheating-detection-FYP/yolovs5/utils/segment/general.py b/spaces/Ibtehaj10/cheating-detection-FYP/yolovs5/utils/segment/general.py deleted file mode 100644 index b526333dc5a1b8625d7e6a51ee6ba41818c62adb..0000000000000000000000000000000000000000 --- a/spaces/Ibtehaj10/cheating-detection-FYP/yolovs5/utils/segment/general.py +++ /dev/null @@ -1,137 +0,0 @@ -import cv2 -import numpy as np -import torch -import torch.nn.functional as F - - -def crop_mask(masks, boxes): - """ - "Crop" predicted masks by zeroing out everything not in the predicted bbox. - Vectorized by Chong (thanks Chong). - - Args: - - masks should be a size [h, w, n] tensor of masks - - boxes should be a size [n, 4] tensor of bbox coords in relative point form - """ - - n, h, w = masks.shape - x1, y1, x2, y2 = torch.chunk(boxes[:, :, None], 4, 1) # x1 shape(1,1,n) - r = torch.arange(w, device=masks.device, dtype=x1.dtype)[None, None, :] # rows shape(1,w,1) - c = torch.arange(h, device=masks.device, dtype=x1.dtype)[None, :, None] # cols shape(h,1,1) - - return masks * ((r >= x1) * (r < x2) * (c >= y1) * (c < y2)) - - -def process_mask_upsample(protos, masks_in, bboxes, shape): - """ - Crop after upsample. - proto_out: [mask_dim, mask_h, mask_w] - out_masks: [n, mask_dim], n is number of masks after nms - bboxes: [n, 4], n is number of masks after nms - shape:input_image_size, (h, w) - - return: h, w, n - """ - - c, mh, mw = protos.shape # CHW - masks = (masks_in @ protos.float().view(c, -1)).sigmoid().view(-1, mh, mw) - masks = F.interpolate(masks[None], shape, mode='bilinear', align_corners=False)[0] # CHW - masks = crop_mask(masks, bboxes) # CHW - return masks.gt_(0.5) - - -def process_mask(protos, masks_in, bboxes, shape, upsample=False): - """ - Crop before upsample. 
- proto_out: [mask_dim, mask_h, mask_w] - out_masks: [n, mask_dim], n is number of masks after nms - bboxes: [n, 4], n is number of masks after nms - shape:input_image_size, (h, w) - - return: h, w, n - """ - - c, mh, mw = protos.shape # CHW - ih, iw = shape - masks = (masks_in @ protos.float().view(c, -1)).sigmoid().view(-1, mh, mw) # CHW - - downsampled_bboxes = bboxes.clone() - downsampled_bboxes[:, 0] *= mw / iw - downsampled_bboxes[:, 2] *= mw / iw - downsampled_bboxes[:, 3] *= mh / ih - downsampled_bboxes[:, 1] *= mh / ih - - masks = crop_mask(masks, downsampled_bboxes) # CHW - if upsample: - masks = F.interpolate(masks[None], shape, mode='bilinear', align_corners=False)[0] # CHW - return masks.gt_(0.5) - - -def scale_image(im1_shape, masks, im0_shape, ratio_pad=None): - """ - img1_shape: model input shape, [h, w] - img0_shape: origin pic shape, [h, w, 3] - masks: [h, w, num] - """ - # Rescale coordinates (xyxy) from im1_shape to im0_shape - if ratio_pad is None: # calculate from im0_shape - gain = min(im1_shape[0] / im0_shape[0], im1_shape[1] / im0_shape[1]) # gain = old / new - pad = (im1_shape[1] - im0_shape[1] * gain) / 2, (im1_shape[0] - im0_shape[0] * gain) / 2 # wh padding - else: - pad = ratio_pad[1] - top, left = int(pad[1]), int(pad[0]) # y, x - bottom, right = int(im1_shape[0] - pad[1]), int(im1_shape[1] - pad[0]) - - if len(masks.shape) < 2: - raise ValueError(f'"len of masks shape" should be 2 or 3, but got {len(masks.shape)}') - masks = masks[top:bottom, left:right] - # masks = masks.permute(2, 0, 1).contiguous() - # masks = F.interpolate(masks[None], im0_shape[:2], mode='bilinear', align_corners=False)[0] - # masks = masks.permute(1, 2, 0).contiguous() - masks = cv2.resize(masks, (im0_shape[1], im0_shape[0])) - - if len(masks.shape) == 2: - masks = masks[:, :, None] - return masks - - -def mask_iou(mask1, mask2, eps=1e-7): - """ - mask1: [N, n] m1 means number of predicted objects - mask2: [M, n] m2 means number of gt objects - Note: n means image_w x image_h - - return: masks iou, [N, M] - """ - intersection = torch.matmul(mask1, mask2.t()).clamp(0) - union = (mask1.sum(1)[:, None] + mask2.sum(1)[None]) - intersection # (area1 + area2) - intersection - return intersection / (union + eps) - - -def masks_iou(mask1, mask2, eps=1e-7): - """ - mask1: [N, n] m1 means number of predicted objects - mask2: [N, n] m2 means number of gt objects - Note: n means image_w x image_h - - return: masks iou, (N, ) - """ - intersection = (mask1 * mask2).sum(1).clamp(0) # (N, ) - union = (mask1.sum(1) + mask2.sum(1))[None] - intersection # (area1 + area2) - intersection - return intersection / (union + eps) - - -def masks2segments(masks, strategy='largest'): - # Convert masks(n,160,160) into segments(n,xy) - segments = [] - for x in masks.int().cpu().numpy().astype('uint8'): - c = cv2.findContours(x, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[0] - if c: - if strategy == 'concat': # concatenate all segments - c = np.concatenate([x.reshape(-1, 2) for x in c]) - elif strategy == 'largest': # select largest segment - c = np.array(c[np.array([len(x) for x in c]).argmax()]).reshape(-1, 2) - else: - c = np.zeros((0, 2)) # no segments found - segments.append(c.astype('float32')) - return segments diff --git a/spaces/Iceclear/StableSR/StableSR/ldm/models/diffusion/ddim.py b/spaces/Iceclear/StableSR/StableSR/ldm/models/diffusion/ddim.py deleted file mode 100644 index 411257c9184e334aae4f2da9c0bfea452884893e..0000000000000000000000000000000000000000 --- 
a/spaces/Iceclear/StableSR/StableSR/ldm/models/diffusion/ddim.py +++ /dev/null @@ -1,675 +0,0 @@ -"""SAMPLING ONLY.""" - -import torch -import numpy as np -from tqdm import tqdm -from functools import partial - -from ldm.modules.diffusionmodules.util import make_ddim_sampling_parameters, make_ddim_timesteps, noise_like, \ - extract_into_tensor - -from ldm.util import log_txt_as_img, exists, default, ismap, isimage, mean_flat, count_params, instantiate_from_config - -def space_timesteps(num_timesteps, section_counts): - """ - Create a list of timesteps to use from an original diffusion process, - given the number of timesteps we want to take from equally-sized portions - of the original process. - - For example, if there's 300 timesteps and the section counts are [10,15,20] - then the first 100 timesteps are strided to be 10 timesteps, the second 100 - are strided to be 15 timesteps, and the final 100 are strided to be 20. - - If the stride is a string starting with "ddim", then the fixed striding - from the DDIM paper is used, and only one section is allowed. - - :param num_timesteps: the number of diffusion steps in the original - process to divide up. - :param section_counts: either a list of numbers, or a string containing - comma-separated numbers, indicating the step count - per section. As a special case, use "ddimN" where N - is a number of steps to use the striding from the - DDIM paper. - :return: a set of diffusion steps from the original process to use. - """ - if isinstance(section_counts, str): - if section_counts.startswith("ddim"): - desired_count = int(section_counts[len("ddim"):]) - for i in range(1, num_timesteps): - if len(range(0, num_timesteps, i)) == desired_count: - return set(range(0, num_timesteps, i)) - raise ValueError( - f"cannot create exactly {num_timesteps} steps with an integer stride" - ) - section_counts = [int(x) for x in section_counts.split(",")] #[250,] - size_per = num_timesteps // len(section_counts) - extra = num_timesteps % len(section_counts) - start_idx = 0 - all_steps = [] - for i, section_count in enumerate(section_counts): - size = size_per + (1 if i < extra else 0) - if size < section_count: - raise ValueError( - f"cannot divide section of {size} steps into {section_count}" - ) - if section_count <= 1: - frac_stride = 1 - else: - frac_stride = (size - 1) / (section_count - 1) - cur_idx = 0.0 - taken_steps = [] - for _ in range(section_count): - taken_steps.append(start_idx + round(cur_idx)) - cur_idx += frac_stride - all_steps += taken_steps - start_idx += size - return set(all_steps) - -class DDIMSampler(object): - def __init__(self, model, schedule="linear", **kwargs): - super().__init__() - self.model = model - self.ddpm_num_timesteps = model.num_timesteps - self.schedule = schedule - - def register_buffer(self, name, attr): - if type(attr) == torch.Tensor: - if attr.device != torch.device("cuda"): - attr = attr.to(torch.device("cuda")) - setattr(self, name, attr) - - def make_schedule(self, ddim_num_steps, ddim_discretize="uniform", ddim_eta=0., verbose=True): - self.ddim_timesteps = make_ddim_timesteps(ddim_discr_method=ddim_discretize, num_ddim_timesteps=ddim_num_steps, - num_ddpm_timesteps=self.ddpm_num_timesteps,verbose=verbose) - alphas_cumprod = self.model.alphas_cumprod - assert alphas_cumprod.shape[0] == self.ddpm_num_timesteps, 'alphas have to be defined for each timestep' - to_torch = lambda x: x.clone().detach().to(torch.float32).to(self.model.device) - - self.register_buffer('betas', to_torch(self.model.betas)) - 
self.register_buffer('alphas_cumprod', to_torch(alphas_cumprod)) - self.register_buffer('alphas_cumprod_prev', to_torch(self.model.alphas_cumprod_prev)) - - # calculations for diffusion q(x_t | x_{t-1}) and others - self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod.cpu()))) - self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod.cpu()))) - self.register_buffer('log_one_minus_alphas_cumprod', to_torch(np.log(1. - alphas_cumprod.cpu()))) - self.register_buffer('sqrt_recip_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod.cpu()))) - self.register_buffer('sqrt_recipm1_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod.cpu() - 1))) - - # ddim sampling parameters - ddim_sigmas, ddim_alphas, ddim_alphas_prev = make_ddim_sampling_parameters(alphacums=alphas_cumprod.cpu(), - ddim_timesteps=self.ddim_timesteps, - eta=ddim_eta,verbose=verbose) - - self.register_buffer('ddim_sigmas', ddim_sigmas) - self.register_buffer('ddim_alphas', ddim_alphas) - self.register_buffer('ddim_alphas_prev', ddim_alphas_prev) - self.register_buffer('ddim_sqrt_one_minus_alphas', np.sqrt(1. - ddim_alphas)) - sigmas_for_original_sampling_steps = ddim_eta * torch.sqrt( - (1 - self.alphas_cumprod_prev) / (1 - self.alphas_cumprod) * ( - 1 - self.alphas_cumprod / self.alphas_cumprod_prev)) - self.register_buffer('ddim_sigmas_for_original_num_steps', sigmas_for_original_sampling_steps) - - @torch.no_grad() - def q_sample(self, x_start, t, noise=None, ddim_num_steps=200): - self.make_schedule(ddim_num_steps=ddim_num_steps) - noise = default(noise, lambda: torch.randn_like(x_start)) - return (extract_into_tensor(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start + - extract_into_tensor(self.sqrt_one_minus_alphas_cumprod, t, x_start.shape) * noise) - - @torch.no_grad() - def sample(self, - S, - batch_size, - shape, - conditioning=None, - callback=None, - normals_sequence=None, - img_callback=None, - quantize_x0=False, - eta=0., - mask=None, - x0=None, - temperature=1., - noise_dropout=0., - score_corrector=None, - corrector_kwargs=None, - verbose=True, - x_T=None, - log_every_t=100, - unconditional_guidance_scale=1., - unconditional_conditioning=None, - # this has to come in the same format as the conditioning, # e.g. as encoded tokens, ... 
- **kwargs - ): - if conditioning is not None: - if isinstance(conditioning, dict): - cbs = conditioning[list(conditioning.keys())[0]].shape[0] - if cbs != batch_size: - print(f"Warning: Got {cbs} conditionings but batch-size is {batch_size}") - else: - if conditioning.shape[0] != batch_size: - print(f"Warning: Got {conditioning.shape[0]} conditionings but batch-size is {batch_size}") - - self.make_schedule(ddim_num_steps=S, ddim_eta=eta, verbose=verbose) - # sampling - C, H, W = shape - size = (batch_size, C, H, W) - print(f'Data shape for DDIM sampling is {size}, eta {eta}') - - samples, intermediates = self.ddim_sampling(conditioning, size, - callback=callback, - img_callback=img_callback, - quantize_denoised=quantize_x0, - mask=mask, x0=x0, - ddim_use_original_steps=False, - noise_dropout=noise_dropout, - temperature=temperature, - score_corrector=score_corrector, - corrector_kwargs=corrector_kwargs, - x_T=x_T, - log_every_t=log_every_t, - unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=unconditional_conditioning, - ) - return samples, intermediates - - @torch.no_grad() - def ddim_sampling(self, cond, shape, - x_T=None, ddim_use_original_steps=False, - callback=None, timesteps=None, quantize_denoised=False, - mask=None, x0=None, img_callback=None, log_every_t=100, - temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None, - unconditional_guidance_scale=1., unconditional_conditioning=None,): - device = self.model.betas.device - b = shape[0] - if x_T is None: - img = torch.randn(shape, device=device) - else: - img = x_T - - if timesteps is None: - timesteps = self.ddpm_num_timesteps if ddim_use_original_steps else self.ddim_timesteps - elif timesteps is not None and not ddim_use_original_steps: - subset_end = int(min(timesteps / self.ddim_timesteps.shape[0], 1) * self.ddim_timesteps.shape[0]) - 1 - timesteps = self.ddim_timesteps[:subset_end] - - intermediates = {'x_inter': [img], 'pred_x0': [img]} - time_range = reversed(range(0,timesteps)) if ddim_use_original_steps else np.flip(timesteps) - total_steps = timesteps if ddim_use_original_steps else timesteps.shape[0] - print(f"Running DDIM Sampling with {total_steps} timesteps") - - iterator = tqdm(time_range, desc='DDIM Sampler', total=total_steps) - - for i, step in enumerate(iterator): - index = total_steps - i - 1 - ts = torch.full((b,), step, device=device, dtype=torch.long) - - if mask is not None: - assert x0 is not None - img_orig = self.model.q_sample(x0, ts) # TODO: deterministic forward pass? - img = img_orig * mask + (1. 
- mask) * img - - outs = self.p_sample_ddim(img, cond, ts, index=index, use_original_steps=ddim_use_original_steps, - quantize_denoised=quantize_denoised, temperature=temperature, - noise_dropout=noise_dropout, score_corrector=score_corrector, - corrector_kwargs=corrector_kwargs, - unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=unconditional_conditioning) - img, pred_x0 = outs - if callback: callback(i) - if img_callback: img_callback(pred_x0, i) - - if index % log_every_t == 0 or index == total_steps - 1: - intermediates['x_inter'].append(img) - intermediates['pred_x0'].append(pred_x0) - - return img, intermediates - - @torch.no_grad() - def p_sample_ddim(self, x, c, t, index, repeat_noise=False, use_original_steps=False, quantize_denoised=False, - temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None, - unconditional_guidance_scale=1., unconditional_conditioning=None): - b, *_, device = *x.shape, x.device - - if unconditional_conditioning is None or unconditional_guidance_scale == 1.: - e_t = self.model.apply_model(x, t, c) - else: - x_in = torch.cat([x] * 2) - t_in = torch.cat([t] * 2) - c_in = torch.cat([unconditional_conditioning, c]) - e_t_uncond, e_t = self.model.apply_model(x_in, t_in, c_in).chunk(2) - e_t = e_t_uncond + unconditional_guidance_scale * (e_t - e_t_uncond) - - if score_corrector is not None: - assert self.model.parameterization == "eps" - e_t = score_corrector.modify_score(self.model, e_t, x, t, c, **corrector_kwargs) - - alphas = self.model.alphas_cumprod if use_original_steps else self.ddim_alphas - alphas_prev = self.model.alphas_cumprod_prev if use_original_steps else self.ddim_alphas_prev - sqrt_one_minus_alphas = self.model.sqrt_one_minus_alphas_cumprod if use_original_steps else self.ddim_sqrt_one_minus_alphas - sigmas = self.model.ddim_sigmas_for_original_num_steps if use_original_steps else self.ddim_sigmas - # select parameters corresponding to the currently considered timestep - a_t = torch.full((b, 1, 1, 1), alphas[index], device=device) - a_prev = torch.full((b, 1, 1, 1), alphas_prev[index], device=device) - sigma_t = torch.full((b, 1, 1, 1), sigmas[index], device=device) - sqrt_one_minus_at = torch.full((b, 1, 1, 1), sqrt_one_minus_alphas[index],device=device) - - # current prediction for x_0 - pred_x0 = (x - sqrt_one_minus_at * e_t) / a_t.sqrt() - if quantize_denoised: - pred_x0, _, *_ = self.model.first_stage_model.quantize(pred_x0) - # direction pointing to x_t - dir_xt = (1. 
- a_prev - sigma_t**2).sqrt() * e_t - noise = sigma_t * noise_like(x.shape, device, repeat_noise) * temperature - if noise_dropout > 0.: - noise = torch.nn.functional.dropout(noise, p=noise_dropout) - x_prev = a_prev.sqrt() * pred_x0 + dir_xt + noise - return x_prev, pred_x0 - - @torch.no_grad() - def stochastic_encode(self, x0, t, use_original_steps=False, noise=None): - # fast, but does not allow for exact reconstruction - # t serves as an index to gather the correct alphas - if use_original_steps: - sqrt_alphas_cumprod = self.sqrt_alphas_cumprod - sqrt_one_minus_alphas_cumprod = self.sqrt_one_minus_alphas_cumprod - else: - sqrt_alphas_cumprod = torch.sqrt(self.ddim_alphas) - sqrt_one_minus_alphas_cumprod = self.ddim_sqrt_one_minus_alphas - - if noise is None: - noise = torch.randn_like(x0) - return (extract_into_tensor(sqrt_alphas_cumprod, t, x0.shape) * x0 + - extract_into_tensor(sqrt_one_minus_alphas_cumprod, t, x0.shape) * noise) - - @torch.no_grad() - def decode(self, x_latent, cond, t_start, unconditional_guidance_scale=1.0, unconditional_conditioning=None, - use_original_steps=False): - - timesteps = np.arange(self.ddpm_num_timesteps) if use_original_steps else self.ddim_timesteps - timesteps = timesteps[:t_start] - - time_range = np.flip(timesteps) - total_steps = timesteps.shape[0] - print(f"Running DDIM Sampling with {total_steps} timesteps") - - iterator = tqdm(time_range, desc='Decoding image', total=total_steps) - x_dec = x_latent - for i, step in enumerate(iterator): - index = total_steps - i - 1 - ts = torch.full((x_latent.shape[0],), step, device=x_latent.device, dtype=torch.long) - x_dec, _ = self.p_sample_ddim(x_dec, cond, ts, index=index, use_original_steps=use_original_steps, - unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=unconditional_conditioning) - return x_dec - - - @torch.no_grad() - def p_sample_ddim_sr(self, x, c, struct_c, t, index, repeat_noise=False, use_original_steps=False, quantize_denoised=False, - temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None, - unconditional_guidance_scale=1., unconditional_conditioning=None): - b, *_, device = *x.shape, x.device - - if unconditional_conditioning is None or unconditional_guidance_scale == 1.: - e_t = self.model.apply_model(x, t, c, struct_c) - else: - x_in = torch.cat([x] * 2) - t_in = torch.cat([t] * 2) - c_in = torch.cat([unconditional_conditioning, c]) - e_t_uncond, e_t = self.model.apply_model(x_in, t_in, c_in, struct_c).chunk(2) - e_t = e_t_uncond + unconditional_guidance_scale * (e_t - e_t_uncond) - - if score_corrector is not None: - assert self.model.parameterization == "eps" - e_t = score_corrector.modify_score(self.model, e_t, x, t, c, **corrector_kwargs) - - alphas = self.model.alphas_cumprod if use_original_steps else self.ddim_alphas - alphas_prev = self.model.alphas_cumprod_prev if use_original_steps else self.ddim_alphas_prev - sqrt_one_minus_alphas = self.model.sqrt_one_minus_alphas_cumprod if use_original_steps else self.ddim_sqrt_one_minus_alphas - sigmas = self.model.ddim_sigmas_for_original_num_steps if use_original_steps else self.ddim_sigmas - # select parameters corresponding to the currently considered timestep - a_t = torch.full((b, 1, 1, 1), alphas[index], device=device) - a_prev = torch.full((b, 1, 1, 1), alphas_prev[index], device=device) - sigma_t = torch.full((b, 1, 1, 1), sigmas[index], device=device) - sqrt_one_minus_at = torch.full((b, 1, 1, 1), sqrt_one_minus_alphas[index],device=device) - - # current prediction 
for x_0 - pred_x0 = (x - sqrt_one_minus_at * e_t) / a_t.sqrt() - if quantize_denoised: - pred_x0, _, *_ = self.model.first_stage_model.quantize(pred_x0) - # direction pointing to x_t - dir_xt = (1. - a_prev - sigma_t**2).sqrt() * e_t - noise = sigma_t * noise_like(x.shape, device, repeat_noise) * temperature - if noise_dropout > 0.: - noise = torch.nn.functional.dropout(noise, p=noise_dropout) - x_prev = a_prev.sqrt() * pred_x0 + dir_xt + noise - return x_prev, pred_x0 - - @torch.no_grad() - def decode_sr(self, x_latent, cond, struct_cond, t_start, unconditional_guidance_scale=1.0, unconditional_conditioning=None, - use_original_steps=False): - - timesteps = np.arange(self.ddpm_num_timesteps) if use_original_steps else self.ddim_timesteps - timesteps = timesteps[:t_start] - - time_range = np.flip(timesteps) - total_steps = timesteps.shape[0] - print(f"Running DDIM Sampling with {total_steps} timesteps") - - iterator = tqdm(time_range, desc='Decoding image', total=total_steps) - x_dec = x_latent - for i, step in enumerate(iterator): - index = total_steps - i - 1 - ts = torch.full((x_latent.shape[0],), step, device=x_latent.device, dtype=torch.long) - x_dec, _ = self.p_sample_ddim_sr(x_dec, cond, struct_cond, ts, index=index, use_original_steps=use_original_steps, - unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=unconditional_conditioning) - return x_dec - - @torch.no_grad() - def sample_sr(self, - S, - batch_size, - shape, - conditioning=None, - struct_cond=None, - callback=None, - normals_sequence=None, - img_callback=None, - quantize_x0=False, - eta=0., - mask=None, - x0=None, - temperature=1., - noise_dropout=0., - score_corrector=None, - corrector_kwargs=None, - verbose=True, - x_T=None, - log_every_t=100, - unconditional_guidance_scale=1., - unconditional_conditioning=None, - # this has to come in the same format as the conditioning, # e.g. as encoded tokens, ... 
- **kwargs - ): - if conditioning is not None: - if isinstance(conditioning, dict): - cbs = conditioning[list(conditioning.keys())[0]].shape[0] - if cbs != batch_size: - print(f"Warning: Got {cbs} conditionings but batch-size is {batch_size}") - else: - if conditioning.shape[0] != batch_size: - print(f"Warning: Got {conditioning.shape[0]} conditionings but batch-size is {batch_size}") - - self.make_schedule(ddim_num_steps=S, ddim_eta=eta, verbose=verbose) - # sampling - _, C, H, W = shape - size = (batch_size, C, H, W) - print(f'Data shape for DDIM sampling is {size}, eta {eta}') - - samples, intermediates = self.ddim_sampling_sr(conditioning, struct_cond, size, - callback=callback, - img_callback=img_callback, - quantize_denoised=quantize_x0, - mask=mask, x0=x0, - ddim_use_original_steps=False, - noise_dropout=noise_dropout, - temperature=temperature, - score_corrector=score_corrector, - corrector_kwargs=corrector_kwargs, - x_T=x_T, - log_every_t=log_every_t, - unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=unconditional_conditioning, - ) - return samples, intermediates - - @torch.no_grad() - def ddim_sampling_sr(self, cond, struct_cond, shape, - x_T=None, ddim_use_original_steps=False, - callback=None, timesteps=None, quantize_denoised=False, - mask=None, x0=None, img_callback=None, log_every_t=100, - temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None, - unconditional_guidance_scale=1., unconditional_conditioning=None,): - device = self.model.betas.device - b = shape[0] - if x_T is None: - img = torch.randn(shape, device=device) - else: - img = x_T - - if timesteps is None: - timesteps = self.ddpm_num_timesteps if ddim_use_original_steps else self.ddim_timesteps - elif timesteps is not None and not ddim_use_original_steps: - subset_end = int(min(timesteps / self.ddim_timesteps.shape[0], 1) * self.ddim_timesteps.shape[0]) - 1 - timesteps = self.ddim_timesteps[:subset_end] - - intermediates = {'x_inter': [img], 'pred_x0': [img]} - time_range = reversed(range(0,timesteps)) if ddim_use_original_steps else np.flip(timesteps) - total_steps = timesteps if ddim_use_original_steps else timesteps.shape[0] - print(f"Running DDIM Sampling with {total_steps} timesteps") - - iterator = tqdm(time_range, desc='DDIM Sampler', total=total_steps) - - for i, step in enumerate(iterator): - index = total_steps - i - 1 - ts = torch.full((b,), step, device=device, dtype=torch.long) - - if mask is not None: - assert x0 is not None - img_orig = self.model.q_sample(x0, ts) # TODO: deterministic forward pass? - img = img_orig * mask + (1. 
- mask) * img - - outs = self.p_sample_ddim_sr(img, cond, struct_cond, ts, index=index, use_original_steps=ddim_use_original_steps, - quantize_denoised=quantize_denoised, temperature=temperature, - noise_dropout=noise_dropout, score_corrector=score_corrector, - corrector_kwargs=corrector_kwargs, - unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=unconditional_conditioning) - img, pred_x0 = outs - if callback: callback(i) - if img_callback: img_callback(pred_x0, i) - - if index % log_every_t == 0 or index == total_steps - 1: - intermediates['x_inter'].append(img) - intermediates['pred_x0'].append(pred_x0) - - return img, intermediates - - @torch.no_grad() - def p_sample_ddim_sr(self, x, c, struct_c, t, index, repeat_noise=False, use_original_steps=False, quantize_denoised=False, - temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None, - unconditional_guidance_scale=1., unconditional_conditioning=None): - b, *_, device = *x.shape, x.device - - if unconditional_conditioning is None or unconditional_guidance_scale == 1.: - e_t = self.model.apply_model(x, t, c, struct_c) - else: - x_in = torch.cat([x] * 2) - t_in = torch.cat([t] * 2) - c_in = torch.cat([unconditional_conditioning, c]) - e_t_uncond, e_t = self.model.apply_model(x_in, t_in, c_in, struct_c).chunk(2) - e_t = e_t_uncond + unconditional_guidance_scale * (e_t - e_t_uncond) - - if score_corrector is not None: - assert self.model.parameterization == "eps" - e_t = score_corrector.modify_score(self.model, e_t, x, t, c, **corrector_kwargs) - - alphas = self.model.alphas_cumprod if use_original_steps else self.ddim_alphas - alphas_prev = self.model.alphas_cumprod_prev if use_original_steps else self.ddim_alphas_prev - sqrt_one_minus_alphas = self.model.sqrt_one_minus_alphas_cumprod if use_original_steps else self.ddim_sqrt_one_minus_alphas - sigmas = self.model.ddim_sigmas_for_original_num_steps if use_original_steps else self.ddim_sigmas - # select parameters corresponding to the currently considered timestep - a_t = torch.full((b, 1, 1, 1), alphas[index], device=device) - a_prev = torch.full((b, 1, 1, 1), alphas_prev[index], device=device) - sigma_t = torch.full((b, 1, 1, 1), sigmas[index], device=device) - sqrt_one_minus_at = torch.full((b, 1, 1, 1), sqrt_one_minus_alphas[index],device=device) - - # current prediction for x_0 - pred_x0 = (x - sqrt_one_minus_at * e_t) / a_t.sqrt() - if quantize_denoised: - pred_x0, _, *_ = self.model.first_stage_model.quantize(pred_x0) - # direction pointing to x_t - dir_xt = (1. - a_prev - sigma_t**2).sqrt() * e_t - noise = sigma_t * noise_like(x.shape, device, repeat_noise) * temperature - if noise_dropout > 0.: - noise = torch.nn.functional.dropout(noise, p=noise_dropout) - x_prev = a_prev.sqrt() * pred_x0 + dir_xt + noise - return x_prev, pred_x0 - - - @torch.no_grad() - def sample_sr_t(self, - S, - batch_size, - shape, - conditioning=None, - struct_cond=None, - callback=None, - normals_sequence=None, - img_callback=None, - quantize_x0=False, - eta=0., - mask=None, - x0=None, - temperature=1., - noise_dropout=0., - score_corrector=None, - corrector_kwargs=None, - verbose=True, - x_T=None, - log_every_t=100, - unconditional_guidance_scale=1., - unconditional_conditioning=None, - # this has to come in the same format as the conditioning, # e.g. as encoded tokens, ... 
- **kwargs - ): - if conditioning is not None: - if isinstance(conditioning, dict): - cbs = conditioning[list(conditioning.keys())[0]].shape[0] - if cbs != batch_size: - print(f"Warning: Got {cbs} conditionings but batch-size is {batch_size}") - else: - if conditioning.shape[0] != batch_size: - print(f"Warning: Got {conditioning.shape[0]} conditionings but batch-size is {batch_size}") - - self.make_schedule(ddim_num_steps=S, ddim_eta=eta, verbose=verbose) - # sampling - _, C, H, W = shape - size = (batch_size, C, H, W) - print(f'Data shape for DDIM sampling is {size}, eta {eta}') - - samples, intermediates = self.ddim_sampling_sr_t(conditioning, struct_cond, size, - callback=callback, - img_callback=img_callback, - quantize_denoised=quantize_x0, - mask=mask, x0=x0, - ddim_use_original_steps=False, - noise_dropout=noise_dropout, - temperature=temperature, - score_corrector=score_corrector, - corrector_kwargs=corrector_kwargs, - x_T=x_T, - log_every_t=log_every_t, - unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=unconditional_conditioning, - ) - return samples, intermediates - - @torch.no_grad() - def ddim_sampling_sr_t(self, cond, struct_cond, shape, - x_T=None, ddim_use_original_steps=False, - callback=None, timesteps=None, quantize_denoised=False, - mask=None, x0=None, img_callback=None, log_every_t=100, - temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None, - unconditional_guidance_scale=1., unconditional_conditioning=None,): - device = self.model.betas.device - b = shape[0] - if x_T is None: - img = torch.randn(shape, device=device) - else: - img = x_T - - if timesteps is None: - timesteps = self.ddpm_num_timesteps if ddim_use_original_steps else self.ddim_timesteps - # timesteps = self.ddpm_num_timesteps if ddim_use_original_steps else sorted(set(space_timesteps(1000, [self.ddim_timesteps.shape[0]]))) - timesteps = np.array(timesteps) - elif timesteps is not None and not ddim_use_original_steps: - subset_end = int(min(timesteps / self.ddim_timesteps.shape[0], 1) * self.ddim_timesteps.shape[0]) - 1 - timesteps = self.ddim_timesteps[:subset_end] - - intermediates = {'x_inter': [img], 'pred_x0': [img]} - time_range = reversed(range(0,timesteps)) if ddim_use_original_steps else np.flip(timesteps) - total_steps = timesteps if ddim_use_original_steps else timesteps.shape[0] - print(f"Running DDIM Sampling with {total_steps} timesteps") - - iterator = tqdm(time_range, desc='DDIM Sampler', total=total_steps) - - for i, step in enumerate(iterator): - index = total_steps - i - 1 - ts = torch.full((b,), step, device=device, dtype=torch.long) - - if mask is not None: - assert x0 is not None - img_orig = self.model.q_sample(x0, ts) # TODO: deterministic forward pass? - img = img_orig * mask + (1. 
- mask) * img - - outs = self.p_sample_ddim_sr_t(img, cond, struct_cond, ts, index=index, use_original_steps=ddim_use_original_steps, - quantize_denoised=quantize_denoised, temperature=temperature, - noise_dropout=noise_dropout, score_corrector=score_corrector, - corrector_kwargs=corrector_kwargs, - unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=unconditional_conditioning) - img, pred_x0 = outs - if callback: callback(i) - if img_callback: img_callback(pred_x0, i) - - if index % log_every_t == 0 or index == total_steps - 1: - intermediates['x_inter'].append(img) - intermediates['pred_x0'].append(pred_x0) - - return img, intermediates - - @torch.no_grad() - def p_sample_ddim_sr_t(self, x, c, struct_c, t, index, repeat_noise=False, use_original_steps=False, quantize_denoised=False, - temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None, - unconditional_guidance_scale=1., unconditional_conditioning=None): - b, *_, device = *x.shape, x.device - - if unconditional_conditioning is None or unconditional_guidance_scale == 1.: - struct_c_t = self.model.structcond_stage_model(struct_c, t) - e_t = self.model.apply_model(x, t, c, struct_c_t) - else: - assert NotImplementedError - x_in = torch.cat([x] * 2) - t_in = torch.cat([t] * 2) - c_in = torch.cat([unconditional_conditioning, c]) - e_t_uncond, e_t = self.model.apply_model(x_in, t_in, c_in, struct_c).chunk(2) - e_t = e_t_uncond + unconditional_guidance_scale * (e_t - e_t_uncond) - - if score_corrector is not None: - assert self.model.parameterization == "eps" - e_t = score_corrector.modify_score(self.model, e_t, x, t, c, **corrector_kwargs) - - alphas = self.model.alphas_cumprod if use_original_steps else self.ddim_alphas - alphas_prev = self.model.alphas_cumprod_prev if use_original_steps else self.ddim_alphas_prev - sqrt_one_minus_alphas = self.model.sqrt_one_minus_alphas_cumprod if use_original_steps else self.ddim_sqrt_one_minus_alphas - sigmas = self.model.ddim_sigmas_for_original_num_steps if use_original_steps else self.ddim_sigmas - # select parameters corresponding to the currently considered timestep - a_t = torch.full((b, 1, 1, 1), alphas[index], device=device) - a_prev = torch.full((b, 1, 1, 1), alphas_prev[index], device=device) - sigma_t = torch.full((b, 1, 1, 1), sigmas[index], device=device) - sqrt_one_minus_at = torch.full((b, 1, 1, 1), sqrt_one_minus_alphas[index],device=device) - - # current prediction for x_0 - pred_x0 = (x - sqrt_one_minus_at * e_t) / a_t.sqrt() - if quantize_denoised: - pred_x0, _, *_ = self.model.first_stage_model.quantize(pred_x0) - # direction pointing to x_t - dir_xt = (1. 
- a_prev - sigma_t**2).sqrt() * e_t - noise = sigma_t * noise_like(x.shape, device, repeat_noise) * temperature - if noise_dropout > 0.: - noise = torch.nn.functional.dropout(noise, p=noise_dropout) - x_prev = a_prev.sqrt() * pred_x0 + dir_xt + noise - return x_prev, pred_x0 diff --git a/spaces/Ikaros521/so-vits-svc-4.0-ikaros/vdecoder/hifigan/utils.py b/spaces/Ikaros521/so-vits-svc-4.0-ikaros/vdecoder/hifigan/utils.py deleted file mode 100644 index 9c93c996d3cc73c30d71c1fc47056e4230f35c0f..0000000000000000000000000000000000000000 --- a/spaces/Ikaros521/so-vits-svc-4.0-ikaros/vdecoder/hifigan/utils.py +++ /dev/null @@ -1,68 +0,0 @@ -import glob -import os -import matplotlib -import torch -from torch.nn.utils import weight_norm -# matplotlib.use("Agg") -import matplotlib.pylab as plt - - -def plot_spectrogram(spectrogram): - fig, ax = plt.subplots(figsize=(10, 2)) - im = ax.imshow(spectrogram, aspect="auto", origin="lower", - interpolation='none') - plt.colorbar(im, ax=ax) - - fig.canvas.draw() - plt.close() - - return fig - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def apply_weight_norm(m): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - weight_norm(m) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - dilation)/2) - - -def load_checkpoint(filepath, device): - assert os.path.isfile(filepath) - print("Loading '{}'".format(filepath)) - checkpoint_dict = torch.load(filepath, map_location=device) - print("Complete.") - return checkpoint_dict - - -def save_checkpoint(filepath, obj): - print("Saving checkpoint to {}".format(filepath)) - torch.save(obj, filepath) - print("Complete.") - - -def del_old_checkpoints(cp_dir, prefix, n_models=2): - pattern = os.path.join(cp_dir, prefix + '????????') - cp_list = glob.glob(pattern) # get checkpoint paths - cp_list = sorted(cp_list)# sort by iter - if len(cp_list) > n_models: # if more than n_models models are found - for cp in cp_list[:-n_models]:# delete the oldest models other than lastest n_models - open(cp, 'w').close()# empty file contents - os.unlink(cp)# delete file (move to trash when using Colab) - - -def scan_checkpoint(cp_dir, prefix): - pattern = os.path.join(cp_dir, prefix + '????????') - cp_list = glob.glob(pattern) - if len(cp_list) == 0: - return None - return sorted(cp_list)[-1] - diff --git a/spaces/Illumotion/Koboldcpp/examples/server/deps.sh b/spaces/Illumotion/Koboldcpp/examples/server/deps.sh deleted file mode 100644 index ea23e64500b09b7535725fe5bb9574a33c729192..0000000000000000000000000000000000000000 --- a/spaces/Illumotion/Koboldcpp/examples/server/deps.sh +++ /dev/null @@ -1,20 +0,0 @@ -#!/bin/bash -# Download and update deps for binary - -# get the directory of this script file -DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )" -PUBLIC=$DIR/public - -echo "download js bundle files" -curl https://npm.reversehttp.com/@preact/signals-core,@preact/signals,htm/preact,preact,preact/hooks > $PUBLIC/index.js -echo >> $PUBLIC/index.js # add newline - -FILES=$(ls $PUBLIC) - -cd $PUBLIC -for FILE in $FILES; do - echo "generate $FILE.hpp" - - # use simple flag for old version of xxd - xxd -i $FILE > $DIR/$FILE.hpp -done diff --git a/spaces/Jackflack09/diffuse-custom/diffusers/schedulers/scheduling_vq_diffusion.py b/spaces/Jackflack09/diffuse-custom/diffusers/schedulers/scheduling_vq_diffusion.py deleted file mode 100644 index 
89ba722a1852cbbac3bbd053effedbe97d370993..0000000000000000000000000000000000000000 --- a/spaces/Jackflack09/diffuse-custom/diffusers/schedulers/scheduling_vq_diffusion.py +++ /dev/null @@ -1,496 +0,0 @@ -# Copyright 2022 Microsoft and The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -from dataclasses import dataclass -from typing import Optional, Tuple, Union - -import numpy as np -import torch -import torch.nn.functional as F - -from ..configuration_utils import ConfigMixin, register_to_config -from ..utils import BaseOutput -from .scheduling_utils import SchedulerMixin - - -@dataclass -class VQDiffusionSchedulerOutput(BaseOutput): - """ - Output class for the scheduler's step function output. - - Args: - prev_sample (`torch.LongTensor` of shape `(batch size, num latent pixels)`): - Computed sample x_{t-1} of previous timestep. `prev_sample` should be used as next model input in the - denoising loop. - """ - - prev_sample: torch.LongTensor - - -def index_to_log_onehot(x: torch.LongTensor, num_classes: int) -> torch.FloatTensor: - """ - Convert batch of vector of class indices into batch of log onehot vectors - - Args: - x (`torch.LongTensor` of shape `(batch size, vector length)`): - Batch of class indices - - num_classes (`int`): - number of classes to be used for the onehot vectors - - Returns: - `torch.FloatTensor` of shape `(batch size, num classes, vector length)`: - Log onehot vectors - """ - x_onehot = F.one_hot(x, num_classes) - x_onehot = x_onehot.permute(0, 2, 1) - log_x = torch.log(x_onehot.float().clamp(min=1e-30)) - return log_x - - -def gumbel_noised(logits: torch.FloatTensor, generator: Optional[torch.Generator]) -> torch.FloatTensor: - """ - Apply gumbel noise to `logits` - """ - uniform = torch.rand(logits.shape, device=logits.device, generator=generator) - gumbel_noise = -torch.log(-torch.log(uniform + 1e-30) + 1e-30) - noised = gumbel_noise + logits - return noised - - -def alpha_schedules(num_diffusion_timesteps: int, alpha_cum_start=0.99999, alpha_cum_end=0.000009): - """ - Cumulative and non-cumulative alpha schedules. - - See section 4.1. - """ - att = ( - np.arange(0, num_diffusion_timesteps) / (num_diffusion_timesteps - 1) * (alpha_cum_end - alpha_cum_start) - + alpha_cum_start - ) - att = np.concatenate(([1], att)) - at = att[1:] / att[:-1] - att = np.concatenate((att[1:], [1])) - return at, att - - -def gamma_schedules(num_diffusion_timesteps: int, gamma_cum_start=0.000009, gamma_cum_end=0.99999): - """ - Cumulative and non-cumulative gamma schedules. - - See section 4.1. 
- """ - ctt = ( - np.arange(0, num_diffusion_timesteps) / (num_diffusion_timesteps - 1) * (gamma_cum_end - gamma_cum_start) - + gamma_cum_start - ) - ctt = np.concatenate(([0], ctt)) - one_minus_ctt = 1 - ctt - one_minus_ct = one_minus_ctt[1:] / one_minus_ctt[:-1] - ct = 1 - one_minus_ct - ctt = np.concatenate((ctt[1:], [0])) - return ct, ctt - - -class VQDiffusionScheduler(SchedulerMixin, ConfigMixin): - """ - The VQ-diffusion transformer outputs predicted probabilities of the initial unnoised image. - - The VQ-diffusion scheduler converts the transformer's output into a sample for the unnoised image at the previous - diffusion timestep. - - [`~ConfigMixin`] takes care of storing all config attributes that are passed in the scheduler's `__init__` - function, such as `num_train_timesteps`. They can be accessed via `scheduler.config.num_train_timesteps`. - [`SchedulerMixin`] provides general loading and saving functionality via the [`SchedulerMixin.save_pretrained`] and - [`~SchedulerMixin.from_pretrained`] functions. - - For more details, see the original paper: https://arxiv.org/abs/2111.14822 - - Args: - num_vec_classes (`int`): - The number of classes of the vector embeddings of the latent pixels. Includes the class for the masked - latent pixel. - - num_train_timesteps (`int`): - Number of diffusion steps used to train the model. - - alpha_cum_start (`float`): - The starting cumulative alpha value. - - alpha_cum_end (`float`): - The ending cumulative alpha value. - - gamma_cum_start (`float`): - The starting cumulative gamma value. - - gamma_cum_end (`float`): - The ending cumulative gamma value. - """ - - order = 1 - - @register_to_config - def __init__( - self, - num_vec_classes: int, - num_train_timesteps: int = 100, - alpha_cum_start: float = 0.99999, - alpha_cum_end: float = 0.000009, - gamma_cum_start: float = 0.000009, - gamma_cum_end: float = 0.99999, - ): - self.num_embed = num_vec_classes - - # By convention, the index for the mask class is the last class index - self.mask_class = self.num_embed - 1 - - at, att = alpha_schedules(num_train_timesteps, alpha_cum_start=alpha_cum_start, alpha_cum_end=alpha_cum_end) - ct, ctt = gamma_schedules(num_train_timesteps, gamma_cum_start=gamma_cum_start, gamma_cum_end=gamma_cum_end) - - num_non_mask_classes = self.num_embed - 1 - bt = (1 - at - ct) / num_non_mask_classes - btt = (1 - att - ctt) / num_non_mask_classes - - at = torch.tensor(at.astype("float64")) - bt = torch.tensor(bt.astype("float64")) - ct = torch.tensor(ct.astype("float64")) - log_at = torch.log(at) - log_bt = torch.log(bt) - log_ct = torch.log(ct) - - att = torch.tensor(att.astype("float64")) - btt = torch.tensor(btt.astype("float64")) - ctt = torch.tensor(ctt.astype("float64")) - log_cumprod_at = torch.log(att) - log_cumprod_bt = torch.log(btt) - log_cumprod_ct = torch.log(ctt) - - self.log_at = log_at.float() - self.log_bt = log_bt.float() - self.log_ct = log_ct.float() - self.log_cumprod_at = log_cumprod_at.float() - self.log_cumprod_bt = log_cumprod_bt.float() - self.log_cumprod_ct = log_cumprod_ct.float() - - # setable values - self.num_inference_steps = None - self.timesteps = torch.from_numpy(np.arange(0, num_train_timesteps)[::-1].copy()) - - def set_timesteps(self, num_inference_steps: int, device: Union[str, torch.device] = None): - """ - Sets the discrete timesteps used for the diffusion chain. Supporting function to be run before inference. 
- - Args: - num_inference_steps (`int`): - the number of diffusion steps used when generating samples with a pre-trained model. - - device (`str` or `torch.device`): - device to place the timesteps and the diffusion process parameters (alpha, beta, gamma) on. - """ - self.num_inference_steps = num_inference_steps - timesteps = np.arange(0, self.num_inference_steps)[::-1].copy() - self.timesteps = torch.from_numpy(timesteps).to(device) - - self.log_at = self.log_at.to(device) - self.log_bt = self.log_bt.to(device) - self.log_ct = self.log_ct.to(device) - self.log_cumprod_at = self.log_cumprod_at.to(device) - self.log_cumprod_bt = self.log_cumprod_bt.to(device) - self.log_cumprod_ct = self.log_cumprod_ct.to(device) - - def step( - self, - model_output: torch.FloatTensor, - timestep: torch.long, - sample: torch.LongTensor, - generator: Optional[torch.Generator] = None, - return_dict: bool = True, - ) -> Union[VQDiffusionSchedulerOutput, Tuple]: - """ - Predict the sample at the previous timestep via the reverse transition distribution i.e. Equation (11). See the - docstring for `self.q_posterior` for more in depth docs on how Equation (11) is computed. - - Args: - log_p_x_0: (`torch.FloatTensor` of shape `(batch size, num classes - 1, num latent pixels)`): - The log probabilities for the predicted classes of the initial latent pixels. Does not include a - prediction for the masked class as the initial unnoised image cannot be masked. - - t (`torch.long`): - The timestep that determines which transition matrices are used. - - x_t: (`torch.LongTensor` of shape `(batch size, num latent pixels)`): - The classes of each latent pixel at time `t` - - generator: (`torch.Generator` or None): - RNG for the noise applied to p(x_{t-1} | x_t) before it is sampled from. - - return_dict (`bool`): - option for returning tuple rather than VQDiffusionSchedulerOutput class - - Returns: - [`~schedulers.scheduling_utils.VQDiffusionSchedulerOutput`] or `tuple`: - [`~schedulers.scheduling_utils.VQDiffusionSchedulerOutput`] if `return_dict` is True, otherwise a `tuple`. - When returning a tuple, the first element is the sample tensor. - """ - if timestep == 0: - log_p_x_t_min_1 = model_output - else: - log_p_x_t_min_1 = self.q_posterior(model_output, sample, timestep) - - log_p_x_t_min_1 = gumbel_noised(log_p_x_t_min_1, generator) - - x_t_min_1 = log_p_x_t_min_1.argmax(dim=1) - - if not return_dict: - return (x_t_min_1,) - - return VQDiffusionSchedulerOutput(prev_sample=x_t_min_1) - - def q_posterior(self, log_p_x_0, x_t, t): - """ - Calculates the log probabilities for the predicted classes of the image at timestep `t-1`. I.e. Equation (11). - - Instead of directly computing equation (11), we use Equation (5) to restate Equation (11) in terms of only - forward probabilities. - - Equation (11) stated in terms of forward probabilities via Equation (5): - - Where: - - the sum is over x_0 = {C_0 ... C_{k-1}} (classes for x_0) - - p(x_{t-1} | x_t) = sum( q(x_t | x_{t-1}) * q(x_{t-1} | x_0) * p(x_0) / q(x_t | x_0) ) - - Args: - log_p_x_0: (`torch.FloatTensor` of shape `(batch size, num classes - 1, num latent pixels)`): - The log probabilities for the predicted classes of the initial latent pixels. Does not include a - prediction for the masked class as the initial unnoised image cannot be masked. - - x_t: (`torch.LongTensor` of shape `(batch size, num latent pixels)`): - The classes of each latent pixel at time `t` - - t (torch.Long): - The timestep that determines which transition matrix is used. 
- - Returns: - `torch.FloatTensor` of shape `(batch size, num classes, num latent pixels)`: - The log probabilities for the predicted classes of the image at timestep `t-1`. I.e. Equation (11). - """ - log_onehot_x_t = index_to_log_onehot(x_t, self.num_embed) - - log_q_x_t_given_x_0 = self.log_Q_t_transitioning_to_known_class( - t=t, x_t=x_t, log_onehot_x_t=log_onehot_x_t, cumulative=True - ) - - log_q_t_given_x_t_min_1 = self.log_Q_t_transitioning_to_known_class( - t=t, x_t=x_t, log_onehot_x_t=log_onehot_x_t, cumulative=False - ) - - # p_0(x_0=C_0 | x_t) / q(x_t | x_0=C_0) ... p_n(x_0=C_0 | x_t) / q(x_t | x_0=C_0) - # . . . - # . . . - # . . . - # p_0(x_0=C_{k-1} | x_t) / q(x_t | x_0=C_{k-1}) ... p_n(x_0=C_{k-1} | x_t) / q(x_t | x_0=C_{k-1}) - q = log_p_x_0 - log_q_x_t_given_x_0 - - # sum_0 = p_0(x_0=C_0 | x_t) / q(x_t | x_0=C_0) + ... + p_0(x_0=C_{k-1} | x_t) / q(x_t | x_0=C_{k-1}), ... , - # sum_n = p_n(x_0=C_0 | x_t) / q(x_t | x_0=C_0) + ... + p_n(x_0=C_{k-1} | x_t) / q(x_t | x_0=C_{k-1}) - q_log_sum_exp = torch.logsumexp(q, dim=1, keepdim=True) - - # p_0(x_0=C_0 | x_t) / q(x_t | x_0=C_0) / sum_0 ... p_n(x_0=C_0 | x_t) / q(x_t | x_0=C_0) / sum_n - # . . . - # . . . - # . . . - # p_0(x_0=C_{k-1} | x_t) / q(x_t | x_0=C_{k-1}) / sum_0 ... p_n(x_0=C_{k-1} | x_t) / q(x_t | x_0=C_{k-1}) / sum_n - q = q - q_log_sum_exp - - # (p_0(x_0=C_0 | x_t) / q(x_t | x_0=C_0) / sum_0) * a_cumulative_{t-1} + b_cumulative_{t-1} ... (p_n(x_0=C_0 | x_t) / q(x_t | x_0=C_0) / sum_n) * a_cumulative_{t-1} + b_cumulative_{t-1} - # . . . - # . . . - # . . . - # (p_0(x_0=C_{k-1} | x_t) / q(x_t | x_0=C_{k-1}) / sum_0) * a_cumulative_{t-1} + b_cumulative_{t-1} ... (p_n(x_0=C_{k-1} | x_t) / q(x_t | x_0=C_{k-1}) / sum_n) * a_cumulative_{t-1} + b_cumulative_{t-1} - # c_cumulative_{t-1} ... c_cumulative_{t-1} - q = self.apply_cumulative_transitions(q, t - 1) - - # ((p_0(x_0=C_0 | x_t) / q(x_t | x_0=C_0) / sum_0) * a_cumulative_{t-1} + b_cumulative_{t-1}) * q(x_t | x_{t-1}=C_0) * sum_0 ... ((p_n(x_0=C_0 | x_t) / q(x_t | x_0=C_0) / sum_n) * a_cumulative_{t-1} + b_cumulative_{t-1}) * q(x_t | x_{t-1}=C_0) * sum_n - # . . . - # . . . - # . . . - # ((p_0(x_0=C_{k-1} | x_t) / q(x_t | x_0=C_{k-1}) / sum_0) * a_cumulative_{t-1} + b_cumulative_{t-1}) * q(x_t | x_{t-1}=C_{k-1}) * sum_0 ... ((p_n(x_0=C_{k-1} | x_t) / q(x_t | x_0=C_{k-1}) / sum_n) * a_cumulative_{t-1} + b_cumulative_{t-1}) * q(x_t | x_{t-1}=C_{k-1}) * sum_n - # c_cumulative_{t-1} * q(x_t | x_{t-1}=C_k) * sum_0 ... c_cumulative_{t-1} * q(x_t | x_{t-1}=C_k) * sum_0 - log_p_x_t_min_1 = q + log_q_t_given_x_t_min_1 + q_log_sum_exp - - # For each column, there are two possible cases. - # - # Where: - # - sum(p_n(x_0))) is summing over all classes for x_0 - # - C_i is the class transitioning from (not to be confused with c_t and c_cumulative_t being used for gamma's) - # - C_j is the class transitioning to - # - # 1. x_t is masked i.e. x_t = c_k - # - # Simplifying the expression, the column vector is: - # . - # . - # . - # (c_t / c_cumulative_t) * (a_cumulative_{t-1} * p_n(x_0 = C_i | x_t) + b_cumulative_{t-1} * sum(p_n(x_0))) - # . - # . - # . - # (c_cumulative_{t-1} / c_cumulative_t) * sum(p_n(x_0)) - # - # From equation (11) stated in terms of forward probabilities, the last row is trivially verified. - # - # For the other rows, we can state the equation as ... - # - # (c_t / c_cumulative_t) * [b_cumulative_{t-1} * p(x_0=c_0) + ... + (a_cumulative_{t-1} + b_cumulative_{t-1}) * p(x_0=C_i) + ... + b_cumulative_{k-1} * p(x_0=c_{k-1})] - # - # This verifies the other rows. 
- # - # 2. x_t is not masked - # - # Simplifying the expression, there are two cases for the rows of the column vector, where C_j = C_i and where C_j != C_i: - # . - # . - # . - # C_j != C_i: b_t * ((b_cumulative_{t-1} / b_cumulative_t) * p_n(x_0 = c_0) + ... + ((a_cumulative_{t-1} + b_cumulative_{t-1}) / b_cumulative_t) * p_n(x_0 = C_i) + ... + (b_cumulative_{t-1} / (a_cumulative_t + b_cumulative_t)) * p_n(c_0=C_j) + ... + (b_cumulative_{t-1} / b_cumulative_t) * p_n(x_0 = c_{k-1})) - # . - # . - # . - # C_j = C_i: (a_t + b_t) * ((b_cumulative_{t-1} / b_cumulative_t) * p_n(x_0 = c_0) + ... + ((a_cumulative_{t-1} + b_cumulative_{t-1}) / (a_cumulative_t + b_cumulative_t)) * p_n(x_0 = C_i = C_j) + ... + (b_cumulative_{t-1} / b_cumulative_t) * p_n(x_0 = c_{k-1})) - # . - # . - # . - # 0 - # - # The last row is trivially verified. The other rows can be verified by directly expanding equation (11) stated in terms of forward probabilities. - return log_p_x_t_min_1 - - def log_Q_t_transitioning_to_known_class( - self, *, t: torch.int, x_t: torch.LongTensor, log_onehot_x_t: torch.FloatTensor, cumulative: bool - ): - """ - Returns the log probabilities of the rows from the (cumulative or non-cumulative) transition matrix for each - latent pixel in `x_t`. - - See equation (7) for the complete non-cumulative transition matrix. The complete cumulative transition matrix - is the same structure except the parameters (alpha, beta, gamma) are the cumulative analogs. - - Args: - t (torch.Long): - The timestep that determines which transition matrix is used. - - x_t (`torch.LongTensor` of shape `(batch size, num latent pixels)`): - The classes of each latent pixel at time `t`. - - log_onehot_x_t (`torch.FloatTensor` of shape `(batch size, num classes, num latent pixels)`): - The log one-hot vectors of `x_t` - - cumulative (`bool`): - If cumulative is `False`, we use the single step transition matrix `t-1`->`t`. If cumulative is `True`, - we use the cumulative transition matrix `0`->`t`. - - Returns: - `torch.FloatTensor` of shape `(batch size, num classes - 1, num latent pixels)`: - Each _column_ of the returned matrix is a _row_ of log probabilities of the complete probability - transition matrix. - - When non cumulative, returns `self.num_classes - 1` rows because the initial latent pixel cannot be - masked. - - Where: - - `q_n` is the probability distribution for the forward process of the `n`th latent pixel. - - C_0 is a class of a latent pixel embedding - - C_k is the class of the masked latent pixel - - non-cumulative result (omitting logarithms): - ``` - q_0(x_t | x_{t-1} = C_0) ... q_n(x_t | x_{t-1} = C_0) - . . . - . . . - . . . - q_0(x_t | x_{t-1} = C_k) ... q_n(x_t | x_{t-1} = C_k) - ``` - - cumulative result (omitting logarithms): - ``` - q_0_cumulative(x_t | x_0 = C_0) ... q_n_cumulative(x_t | x_0 = C_0) - . . . - . . . - . . . - q_0_cumulative(x_t | x_0 = C_{k-1}) ... q_n_cumulative(x_t | x_0 = C_{k-1}) - ``` - """ - if cumulative: - a = self.log_cumprod_at[t] - b = self.log_cumprod_bt[t] - c = self.log_cumprod_ct[t] - else: - a = self.log_at[t] - b = self.log_bt[t] - c = self.log_ct[t] - - if not cumulative: - # The values in the onehot vector can also be used as the logprobs for transitioning - # from masked latent pixels. If we are not calculating the cumulative transitions, - # we need to save these vectors to be re-appended to the final matrix so the values - # aren't overwritten. 
- # - # `P(x_t!=mask|x_{t-1=mask}) = 0` and 0 will be the value of the last row of the onehot vector - # if x_t is not masked - # - # `P(x_t=mask|x_{t-1=mask}) = 1` and 1 will be the value of the last row of the onehot vector - # if x_t is masked - log_onehot_x_t_transitioning_from_masked = log_onehot_x_t[:, -1, :].unsqueeze(1) - - # `index_to_log_onehot` will add onehot vectors for masked pixels, - # so the default one hot matrix has one too many rows. See the doc string - # for an explanation of the dimensionality of the returned matrix. - log_onehot_x_t = log_onehot_x_t[:, :-1, :] - - # this is a cheeky trick to produce the transition probabilities using log one-hot vectors. - # - # Don't worry about what values this sets in the columns that mark transitions - # to masked latent pixels. They are overwrote later with the `mask_class_mask`. - # - # Looking at the below logspace formula in non-logspace, each value will evaluate to either - # `1 * a + b = a + b` where `log_Q_t` has the one hot value in the column - # or - # `0 * a + b = b` where `log_Q_t` has the 0 values in the column. - # - # See equation 7 for more details. - log_Q_t = (log_onehot_x_t + a).logaddexp(b) - - # The whole column of each masked pixel is `c` - mask_class_mask = x_t == self.mask_class - mask_class_mask = mask_class_mask.unsqueeze(1).expand(-1, self.num_embed - 1, -1) - log_Q_t[mask_class_mask] = c - - if not cumulative: - log_Q_t = torch.cat((log_Q_t, log_onehot_x_t_transitioning_from_masked), dim=1) - - return log_Q_t - - def apply_cumulative_transitions(self, q, t): - bsz = q.shape[0] - a = self.log_cumprod_at[t] - b = self.log_cumprod_bt[t] - c = self.log_cumprod_ct[t] - - num_latent_pixels = q.shape[2] - c = c.expand(bsz, 1, num_latent_pixels) - - q = (q + a).logaddexp(b) - q = torch.cat((q, c), dim=1) - - return q diff --git a/spaces/JohnSmith9982/ChuanhuChatGPT/modules/models/__init__.py b/spaces/JohnSmith9982/ChuanhuChatGPT/modules/models/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/JohnSmith9982/VITS-Umamusume-voice-synthesizer/text/japanese.py b/spaces/JohnSmith9982/VITS-Umamusume-voice-synthesizer/text/japanese.py deleted file mode 100644 index 375e4d50872d5c68ee57ca17470a2ca425425eba..0000000000000000000000000000000000000000 --- a/spaces/JohnSmith9982/VITS-Umamusume-voice-synthesizer/text/japanese.py +++ /dev/null @@ -1,153 +0,0 @@ -import re -from unidecode import unidecode -import pyopenjtalk - - -# Regular expression matching Japanese without punctuation marks: -_japanese_characters = re.compile( - r'[A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# Regular expression matching non-Japanese characters or punctuation marks: -_japanese_marks = re.compile( - r'[^A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# List of (symbol, Japanese) pairs for marks: -_symbols_to_japanese = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('%', 'パーセント') -]] - -# List of (romaji, ipa) pairs for marks: -_romaji_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('ts', 'ʦ'), - ('u', 'ɯ'), - ('j', 'ʥ'), - ('y', 'j'), - ('ni', 'n^i'), - ('nj', 'n^'), - ('hi', 'çi'), - ('hj', 'ç'), - ('f', 'ɸ'), - ('I', 'i*'), - ('U', 'ɯ*'), - ('r', 'ɾ') -]] - -# List of (romaji, ipa2) pairs for marks: -_romaji_to_ipa2 = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('u', 'ɯ'), - ('ʧ', 'tʃ'), - ('j', 'dʑ'), - ('y', 'j'), - ('ni', 
'n^i'), - ('nj', 'n^'), - ('hi', 'çi'), - ('hj', 'ç'), - ('f', 'ɸ'), - ('I', 'i*'), - ('U', 'ɯ*'), - ('r', 'ɾ') -]] - -# List of (consonant, sokuon) pairs: -_real_sokuon = [(re.compile('%s' % x[0]), x[1]) for x in [ - (r'Q([↑↓]*[kg])', r'k#\1'), - (r'Q([↑↓]*[tdjʧ])', r't#\1'), - (r'Q([↑↓]*[sʃ])', r's\1'), - (r'Q([↑↓]*[pb])', r'p#\1') -]] - -# List of (consonant, hatsuon) pairs: -_real_hatsuon = [(re.compile('%s' % x[0]), x[1]) for x in [ - (r'N([↑↓]*[pbm])', r'm\1'), - (r'N([↑↓]*[ʧʥj])', r'n^\1'), - (r'N([↑↓]*[tdn])', r'n\1'), - (r'N([↑↓]*[kg])', r'ŋ\1') -]] - - -def symbols_to_japanese(text): - for regex, replacement in _symbols_to_japanese: - text = re.sub(regex, replacement, text) - return text - - -def japanese_to_romaji_with_accent(text): - '''Reference https://r9y9.github.io/ttslearn/latest/notebooks/ch10_Recipe-Tacotron.html''' - text = symbols_to_japanese(text) - sentences = re.split(_japanese_marks, text) - marks = re.findall(_japanese_marks, text) - text = '' - for i, sentence in enumerate(sentences): - if re.match(_japanese_characters, sentence): - if text != '': - text += ' ' - labels = pyopenjtalk.extract_fullcontext(sentence) - for n, label in enumerate(labels): - phoneme = re.search(r'\-([^\+]*)\+', label).group(1) - if phoneme not in ['sil', 'pau']: - text += phoneme.replace('ch', 'ʧ').replace('sh', - 'ʃ').replace('cl', 'Q') - else: - continue - # n_moras = int(re.search(r'/F:(\d+)_', label).group(1)) - a1 = int(re.search(r"/A:(\-?[0-9]+)\+", label).group(1)) - a2 = int(re.search(r"\+(\d+)\+", label).group(1)) - a3 = int(re.search(r"\+(\d+)/", label).group(1)) - if re.search(r'\-([^\+]*)\+', labels[n + 1]).group(1) in ['sil', 'pau']: - a2_next = -1 - else: - a2_next = int( - re.search(r"\+(\d+)\+", labels[n + 1]).group(1)) - # Accent phrase boundary - if a3 == 1 and a2_next == 1: - text += ' ' - # Falling - elif a1 == 0 and a2_next == a2 + 1: - text += '↓' - # Rising - elif a2 == 1 and a2_next == 2: - text += '↑' - if i < len(marks): - text += unidecode(marks[i]).replace(' ', '') - return text - - -def get_real_sokuon(text): - for regex, replacement in _real_sokuon: - text = re.sub(regex, replacement, text) - return text - - -def get_real_hatsuon(text): - for regex, replacement in _real_hatsuon: - text = re.sub(regex, replacement, text) - return text - - -def japanese_to_ipa(text): - text = japanese_to_romaji_with_accent(text).replace('...', '…') - text = re.sub( - r'([aiueo])\1+', lambda x: x.group(0)[0]+'ː'*(len(x.group(0))-1), text) - text = get_real_sokuon(text) - text = get_real_hatsuon(text) - for regex, replacement in _romaji_to_ipa: - text = re.sub(regex, replacement, text) - return text - - -def japanese_to_ipa2(text): - text = japanese_to_romaji_with_accent(text).replace('...', '…') - text = get_real_sokuon(text) - text = get_real_hatsuon(text) - for regex, replacement in _romaji_to_ipa2: - text = re.sub(regex, replacement, text) - return text - - -def japanese_to_ipa3(text): - text = japanese_to_ipa2(text).replace('n^', 'ȵ').replace( - 'ʃ', 'ɕ').replace('*', '\u0325').replace('#', '\u031a') - text = re.sub( - r'([aiɯeo])\1+', lambda x: x.group(0)[0]+'ː'*(len(x.group(0))-1), text) - text = re.sub(r'((?:^|\s)(?:ts|tɕ|[kpt]))', r'\1ʰ', text) - return text diff --git a/spaces/Juli08/janitorai/README.md b/spaces/Juli08/janitorai/README.md deleted file mode 100644 index 1299f2ee3ec0dea68e2f02ed0c6300cc60a9d583..0000000000000000000000000000000000000000 --- a/spaces/Juli08/janitorai/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Janitorai -emoji: 🌖 -colorFrom: 
pink -colorTo: green -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/detectors/htc.py b/spaces/KyanChen/RSPrompter/mmdet/models/detectors/htc.py deleted file mode 100644 index 22a2aa889a59fd0e0afeb95a7369028def6e4fa9..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmdet/models/detectors/htc.py +++ /dev/null @@ -1,16 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmdet.registry import MODELS -from .cascade_rcnn import CascadeRCNN - - -@MODELS.register_module() -class HybridTaskCascade(CascadeRCNN): - """Implementation of `HTC `_""" - - def __init__(self, **kwargs) -> None: - super().__init__(**kwargs) - - @property - def with_semantic(self) -> bool: - """bool: whether the detector has a semantic head""" - return self.roi_head.with_semantic diff --git a/spaces/KyanChen/RSPrompter/mmdet/structures/mask/__init__.py b/spaces/KyanChen/RSPrompter/mmdet/structures/mask/__init__.py deleted file mode 100644 index f78394701df1b493259c4c23a79aea5c5cb8be95..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmdet/structures/mask/__init__.py +++ /dev/null @@ -1,11 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .mask_target import mask_target -from .structures import (BaseInstanceMasks, BitmapMasks, PolygonMasks, - bitmap_to_polygon, polygon_to_bitmap) -from .utils import encode_mask_results, mask2bbox, split_combined_polys - -__all__ = [ - 'split_combined_polys', 'mask_target', 'BaseInstanceMasks', 'BitmapMasks', - 'PolygonMasks', 'encode_mask_results', 'mask2bbox', 'polygon_to_bitmap', - 'bitmap_to_polygon' -] diff --git a/spaces/LanguageBind/LanguageBind/vl_ret/tokenization_clip.py b/spaces/LanguageBind/LanguageBind/vl_ret/tokenization_clip.py deleted file mode 100644 index 3fbb56d0ef9a4dbea9a39a6c55352ef14a34898d..0000000000000000000000000000000000000000 --- a/spaces/LanguageBind/LanguageBind/vl_ret/tokenization_clip.py +++ /dev/null @@ -1,145 +0,0 @@ -import gzip -import html -import os -from functools import lru_cache - -import ftfy -import regex as re - - -@lru_cache() -def default_bpe(): - return os.path.join(os.path.dirname(os.path.abspath(__file__)), "bpe_simple_vocab_16e6.txt.gz") - - -@lru_cache() -def bytes_to_unicode(): - """ - Returns list of utf-8 byte and a corresponding list of unicode strings. - The reversible bpe codes work on unicode strings. - This means you need a large # of unicode characters in your vocab if you want to avoid UNKs. - When you're at something like a 10B token dataset you end up needing around 5K for decent coverage. - This is a signficant percentage of your normal, say, 32K bpe vocab. - To avoid that, we want lookup tables between utf-8 bytes and unicode strings. - And avoids mapping to whitespace/control characters the bpe code barfs on. - """ - bs = list(range(ord("!"), ord("~")+1))+list(range(ord("¡"), ord("¬")+1))+list(range(ord("®"), ord("ÿ")+1)) - cs = bs[:] - n = 0 - for b in range(2**8): - if b not in bs: - bs.append(b) - cs.append(2**8+n) - n += 1 - cs = [chr(n) for n in cs] - return dict(zip(bs, cs)) - - -def get_pairs(word): - """Return set of symbol pairs in a word. - Word is represented as tuple of symbols (symbols being variable-length strings). 
- """ - pairs = set() - prev_char = word[0] - for char in word[1:]: - pairs.add((prev_char, char)) - prev_char = char - return pairs - - -def basic_clean(text): - text = ftfy.fix_text(text) - text = html.unescape(html.unescape(text)) - return text.strip() - - -def whitespace_clean(text): - text = re.sub(r'\s+', ' ', text) - text = text.strip() - return text - - -class SimpleTokenizer(object): - def __init__(self, bpe_path: str = default_bpe()): - self.byte_encoder = bytes_to_unicode() - self.byte_decoder = {v: k for k, v in self.byte_encoder.items()} - merges = gzip.open(bpe_path).read().decode("utf-8").split('\n') - merges = merges[1:49152-256-2+1] - merges = [tuple(merge.split()) for merge in merges] - vocab = list(bytes_to_unicode().values()) - vocab = vocab + [v+'' for v in vocab] - for merge in merges: - vocab.append(''.join(merge)) - vocab.extend(['<|startoftext|>', '<|endoftext|>']) - self.encoder = dict(zip(vocab, range(len(vocab)))) - self.decoder = {v: k for k, v in self.encoder.items()} - self.bpe_ranks = dict(zip(merges, range(len(merges)))) - self.cache = {'<|startoftext|>': '<|startoftext|>', '<|endoftext|>': '<|endoftext|>'} - self.pat = re.compile(r"""<\|startoftext\|>|<\|endoftext\|>|'s|'t|'re|'ve|'m|'ll|'d|[\p{L}]+|[\p{N}]|[^\s\p{L}\p{N}]+""", re.IGNORECASE) - - self.vocab = self.encoder - - def bpe(self, token): - if token in self.cache: - return self.cache[token] - word = tuple(token[:-1]) + ( token[-1] + '',) - pairs = get_pairs(word) - - if not pairs: - return token+'' - - while True: - bigram = min(pairs, key = lambda pair: self.bpe_ranks.get(pair, float('inf'))) - if bigram not in self.bpe_ranks: - break - first, second = bigram - new_word = [] - i = 0 - while i < len(word): - try: - j = word.index(first, i) - new_word.extend(word[i:j]) - i = j - except: - new_word.extend(word[i:]) - break - - if word[i] == first and i < len(word)-1 and word[i+1] == second: - new_word.append(first+second) - i += 2 - else: - new_word.append(word[i]) - i += 1 - new_word = tuple(new_word) - word = new_word - if len(word) == 1: - break - else: - pairs = get_pairs(word) - word = ' '.join(word) - self.cache[token] = word - return word - - def encode(self, text): - bpe_tokens = [] - text = whitespace_clean(basic_clean(text)).lower() - for token in re.findall(self.pat, text): - token = ''.join(self.byte_encoder[b] for b in token.encode('utf-8')) - bpe_tokens.extend(self.encoder[bpe_token] for bpe_token in self.bpe(token).split(' ')) - return bpe_tokens - - def decode(self, tokens): - text = ''.join([self.decoder[token] for token in tokens]) - text = bytearray([self.byte_decoder[c] for c in text]).decode('utf-8', errors="replace").replace('', ' ') - return text - - def tokenize(self, text): - tokens = [] - text = whitespace_clean(basic_clean(text)).lower() - for token in re.findall(self.pat, text): - token = ''.join(self.byte_encoder[b] for b in token.encode('utf-8')) - tokens.extend(bpe_token for bpe_token in self.bpe(token).split(' ')) - return tokens - - def convert_tokens_to_ids(self, tokens): - return [self.encoder[bpe_token] for bpe_token in tokens] \ No newline at end of file diff --git "a/spaces/Liu-LAB/GPT-academic/crazy_functions/\350\276\205\345\212\251\345\233\236\347\255\224.py" "b/spaces/Liu-LAB/GPT-academic/crazy_functions/\350\276\205\345\212\251\345\233\236\347\255\224.py" deleted file mode 100644 index b635f88b3183bbd310eca6449cd9e10c75ca7ca7..0000000000000000000000000000000000000000 --- 
"a/spaces/Liu-LAB/GPT-academic/crazy_functions/\350\276\205\345\212\251\345\233\236\347\255\224.py" +++ /dev/null @@ -1,28 +0,0 @@ -# encoding: utf-8 -# @Time : 2023/4/19 -# @Author : Spike -# @Descr : -from toolbox import update_ui -from toolbox import CatchException, report_execption, write_results_to_file -from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_alive - - -@CatchException -def 猜你想问(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - if txt: - show_say = txt - prompt = txt+'\n回答完问题后,再列出用户可能提出的三个问题。' - else: - prompt = history[-1]+"\n分析上述回答,再列出用户可能提出的三个问题。" - show_say = '分析上述回答,再列出用户可能提出的三个问题。' - gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive( - inputs=prompt, - inputs_show_user=show_say, - llm_kwargs=llm_kwargs, - chatbot=chatbot, - history=history, - sys_prompt=system_prompt - ) - chatbot[-1] = (show_say, gpt_say) - history.extend([show_say, gpt_say]) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 \ No newline at end of file diff --git a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/GroundedSAM/GroundingDINO/groundingdino/datasets/__init__.py b/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/GroundedSAM/GroundingDINO/groundingdino/datasets/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Malmika/Osana-Chat-Friend/app.py b/spaces/Malmika/Osana-Chat-Friend/app.py deleted file mode 100644 index 700d290b5c29bc9bf11a55e30ae5e54c13c86e66..0000000000000000000000000000000000000000 --- a/spaces/Malmika/Osana-Chat-Friend/app.py +++ /dev/null @@ -1,33 +0,0 @@ -import gradio as gr -from transformers import AutoModelForCausalLM, AutoTokenizer -import torch - -tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium") -model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium") -max_history = 10 # Maximum number of previous chat turns to include in the conversation history -chat_history_ids = None - -def chatbot(user_input): - global chat_history_ids - - # encode the new user input, add the eos_token and return a tensor in PyTorch - new_user_input_ids = tokenizer.encode(user_input + tokenizer.eos_token, return_tensors='pt') - - # append the new user input tokens to the chat history - bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if chat_history_ids is not None else new_user_input_ids - - # generate a response while limiting the total chat history to max_history tokens - chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id) - - # decode and return the generated response - response = tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True) - return response - -styles = { - "textarea": "height: 200px; font-size: 18px;", - "label": "font-size: 20px; font-weight: bold;", - "output": "color: red; font-size: 18px;" -} - -iface = gr.Interface(fn=chatbot, inputs="text", outputs="text", title="Osana Chat Friend", styles=styles) -iface.launch() diff --git a/spaces/MariaK/Check-my-progress-Audio-Course/README.md b/spaces/MariaK/Check-my-progress-Audio-Course/README.md deleted file mode 100644 index d94e8d9d569f48f57907501db499d955fa959cab..0000000000000000000000000000000000000000 --- a/spaces/MariaK/Check-my-progress-Audio-Course/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Check My Progress Audio 
Course -emoji: 👀 -colorFrom: yellow -colorTo: purple -sdk: gradio -sdk_version: 3.16.0 -app_file: app.py -pinned: false -duplicated_from: ThomasSimonini/Check-my-progress-Deep-RL-Course ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Marshalls/testmtd/analysis/pymo/__init__.py b/spaces/Marshalls/testmtd/analysis/pymo/__init__.py deleted file mode 100644 index 81b6d00d10833e29a4b27bdec29b884b347bb9dc..0000000000000000000000000000000000000000 --- a/spaces/Marshalls/testmtd/analysis/pymo/__init__.py +++ /dev/null @@ -1,7 +0,0 @@ -import os, sys -THIS_DIR = os.path.dirname(os.path.abspath(__file__)) -ROOT_DIR = os.path.abspath(os.path.join(THIS_DIR, os.pardir)) -ANALYSIS_DIR = os.path.join(ROOT_DIR, 'analysis') -if not os.path.isdir(ANALYSIS_DIR): - os.mkdir(ANALYSIS_DIR) -sys.path.append(ROOT_DIR) diff --git a/spaces/Marshalls/testmtd/analysis/sandbox_fid.py b/spaces/Marshalls/testmtd/analysis/sandbox_fid.py deleted file mode 100644 index 90dda7a746e39affaae1a4bd0921ec5612aa2f32..0000000000000000000000000000000000000000 --- a/spaces/Marshalls/testmtd/analysis/sandbox_fid.py +++ /dev/null @@ -1,117 +0,0 @@ -import numpy as np -import sklearn -import pickle -from pathlib import Path -import scipy.linalg -import matplotlib.pyplot as plt -#%% - -def FID(m,C,mg,Cg): - mean_diff = np.sum((m-mg)**2) - covar_diff = np.trace(C) + np.trace(Cg) -2 * np.trace(scipy.linalg.sqrtm(np.dot(C,Cg))) - return mean_diff + covar_diff -#%% - -# feat_file = "inference/generated_1/moglow_expmap/predicted_mods/"+"aistpp_gBR_sBM_cAll_d04_mBR3_ch10.expmap_scaled_20.generated.npy" -# feats = np.load(feat_file) -# -# feats = feats[:,0,:] -# feats = np.delete(feats,[-4,-6],1) -# -# feats.shape -# -# C = np.dot(feats.T,feats) -# -# m = np.mean(feats,0) - -# data_path="data/dance_combined" -# feature_name="expmap_scaled_20" -# transform_name="scaler" -# transform = pickle.load(open(Path(data_path).joinpath(feature_name+'_'+transform_name+'.pkl'), "rb")) -# -# C_data = transform. 
-# -# C_data.shape - -#%% -root_dir = "data/fid_data/predicted_mods" -# experiment_name="moglow_expmap" - -# stat="2moments" # mean and covariance of poses -stat="2moments_ext" # mean and covariance of 3 consecutive poses -moments_file = root_dir+"/"+"ground_truth"+"/bvh_expmap_cr_"+stat+".pkl" -gt_m, gt_C = pickle.load(open(moments_file,"rb")) - -moments_dict = {} -fids = {} -experiments = ["moglow_expmap","transflower_expmap","transflower_expmap_finetune2_old","transformer_expmap"] -for experiment_name in experiments: - moments_file = root_dir+"/"+experiment_name+"/expmap_scaled_20.generated_"+stat+".pkl" - - m,C = pickle.load(open(moments_file,"rb")) - if stat=="2moments": - m = np.delete(m,[-4,-6],0) - C = np.delete(C,[-4,-6],0) - C = np.delete(C,[-4,-6],1) - elif stat=="2moments_ext": - m = np.delete(m,[-4,-6],0) - m = np.delete(m,[-4-67,-6-67],0) - m = np.delete(m,[-4-67*2,-6-67*2],0) - C = np.delete(C,[-4,-6],0) - C = np.delete(C,[-4-67,-6-67],0) - C = np.delete(C,[-4-67*2,-6-67*2],0) - C = np.delete(C,[-4,-6],1) - C = np.delete(C,[-4-67,-6-67],1) - C = np.delete(C,[-4-67*2,-6-67*2],1) - moments_dict[experiment_name] = (m,C) - fids[experiment_name] = FID(m,C,gt_m,gt_C) - - -fids -#%% - -##### -# for comparign seeds - -root_dir_generated = "data/fid_data/predicted_mods_seed" -root_dir_gt = "data/fid_data/ground_truths" -fids = np.empty((5,5)) -# stat="2moments" # mean and covariance of poses -stat="2moments_ext" # mean and covariance of 3 consecutive poses -# seeds = list(range(1,6)) -for i in range(5): - gt_moments_file = root_dir_gt+"/"+str(i+1)+"/bvh_expmap_cr_"+stat+".pkl" - gt_m,gt_C = pickle.load(open(gt_moments_file,"rb")) - for j in range(5): - # moments_file = root_dir_generated+"/"+"generated_"+str(j+1)+"/expmap_scaled_20.generated_"+stat+".pkl" - moments_file = "inference/randomized_seeds/generated_"+str(j+1)+"/transflower_expmap/predicted_mods/expmap_scaled_20.generated_"+stat+".pkl" - - m,C = pickle.load(open(moments_file,"rb")) - if stat=="2moments": - m = np.delete(m,[-4,-6],0) - C = np.delete(C,[-4,-6],0) - C = np.delete(C,[-4,-6],1) - elif stat=="2moments_ext": - m = np.delete(m,[-4,-6],0) - m = np.delete(m,[-4-67,-6-67],0) - m = np.delete(m,[-4-67*2,-6-67*2],0) - C = np.delete(C,[-4,-6],0) - C = np.delete(C,[-4-67,-6-67],0) - C = np.delete(C,[-4-67*2,-6-67*2],0) - C = np.delete(C,[-4,-6],1) - C = np.delete(C,[-4-67,-6-67],1) - C = np.delete(C,[-4-67*2,-6-67*2],1) - # moments_dict[experiment_name] = (m,C) - fids[i,j] = FID(m,C,gt_m,gt_C) - -# for i in range(5): -# for j in range(i,5): -# fids[j,i] = fids[i,j] - - -fids - -# plt.matshow(fids/np.mean(fids)) -plt.matshow(fids) -# plt.matshow(fids[1:,1:]) -plt.matshow(fids[1:,1:] == np.min(fids[1:,1:],0,keepdims=True)) diff --git a/spaces/MashiroSA/sovits-emu-voice-transform/vdecoder/nsf_hifigan/models.py b/spaces/MashiroSA/sovits-emu-voice-transform/vdecoder/nsf_hifigan/models.py deleted file mode 100644 index 6f24f617a76e64bc88b7cff6cc618b59af1c07e3..0000000000000000000000000000000000000000 --- a/spaces/MashiroSA/sovits-emu-voice-transform/vdecoder/nsf_hifigan/models.py +++ /dev/null @@ -1,435 +0,0 @@ -import os -import json -from .env import AttrDict -import numpy as np -import torch -import torch.nn.functional as F -import torch.nn as nn -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from .utils import init_weights, get_padding - -LRELU_SLOPE = 0.1 - - -def load_model(model_path, device='cuda'): - config_file = 
os.path.join(os.path.split(model_path)[0], 'config.json') - with open(config_file) as f: - data = f.read() - - json_config = json.loads(data) - h = AttrDict(json_config) - - generator = Generator(h).to(device) - - cp_dict = torch.load(model_path, map_location=device) - generator.load_state_dict(cp_dict['generator']) - generator.eval() - generator.remove_weight_norm() - del cp_dict - return generator, h - - -class ResBlock1(torch.nn.Module): - def __init__(self, h, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.h = h - self.convs1 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]))) - ]) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))) - ]) - self.convs2.apply(init_weights) - - def forward(self, x): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - xt = c2(xt) - x = xt + x - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, h, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.h = h - self.convs = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))) - ]) - self.convs.apply(init_weights) - - def forward(self, x): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - xt = c(xt) - x = xt + x - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class SineGen(torch.nn.Module): - """ Definition of sine generator - SineGen(samp_rate, harmonic_num = 0, - sine_amp = 0.1, noise_std = 0.003, - voiced_threshold = 0, - flag_for_pulse=False) - samp_rate: sampling rate in Hz - harmonic_num: number of harmonic overtones (default 0) - sine_amp: amplitude of sine-wavefrom (default 0.1) - noise_std: std of Gaussian noise (default 0.003) - voiced_thoreshold: F0 threshold for U/V classification (default 0) - flag_for_pulse: this SinGen is used inside PulseGen (default False) - Note: when flag_for_pulse is True, the first time step of a voiced - segment is always sin(np.pi) or cos(0) - """ - - def __init__(self, samp_rate, harmonic_num=0, - sine_amp=0.1, noise_std=0.003, - voiced_threshold=0): - super(SineGen, self).__init__() - self.sine_amp = sine_amp - self.noise_std = noise_std - self.harmonic_num = harmonic_num - self.dim = self.harmonic_num + 1 - self.sampling_rate = samp_rate - self.voiced_threshold = voiced_threshold - - def _f02uv(self, f0): - # generate uv signal - uv = torch.ones_like(f0) - uv = uv * (f0 > self.voiced_threshold) - return 
uv - - @torch.no_grad() - def forward(self, f0, upp): - """ sine_tensor, uv = forward(f0) - input F0: tensor(batchsize=1, length, dim=1) - f0 for unvoiced steps should be 0 - output sine_tensor: tensor(batchsize=1, length, dim) - output uv: tensor(batchsize=1, length, 1) - """ - f0 = f0.unsqueeze(-1) - fn = torch.multiply(f0, torch.arange(1, self.dim + 1, device=f0.device).reshape((1, 1, -1))) - rad_values = (fn / self.sampling_rate) % 1 ###%1意味着n_har的乘积无法后处理优化 - rand_ini = torch.rand(fn.shape[0], fn.shape[2], device=fn.device) - rand_ini[:, 0] = 0 - rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini - is_half = rad_values.dtype is not torch.float32 - tmp_over_one = torch.cumsum(rad_values.double(), 1) # % 1 #####%1意味着后面的cumsum无法再优化 - if is_half: - tmp_over_one = tmp_over_one.half() - else: - tmp_over_one = tmp_over_one.float() - tmp_over_one *= upp - tmp_over_one = F.interpolate( - tmp_over_one.transpose(2, 1), scale_factor=upp, - mode='linear', align_corners=True - ).transpose(2, 1) - rad_values = F.interpolate(rad_values.transpose(2, 1), scale_factor=upp, mode='nearest').transpose(2, 1) - tmp_over_one %= 1 - tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0 - cumsum_shift = torch.zeros_like(rad_values) - cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0 - rad_values = rad_values.double() - cumsum_shift = cumsum_shift.double() - sine_waves = torch.sin(torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi) - if is_half: - sine_waves = sine_waves.half() - else: - sine_waves = sine_waves.float() - sine_waves = sine_waves * self.sine_amp - uv = self._f02uv(f0) - uv = F.interpolate(uv.transpose(2, 1), scale_factor=upp, mode='nearest').transpose(2, 1) - noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3 - noise = noise_amp * torch.randn_like(sine_waves) - sine_waves = sine_waves * uv + noise - return sine_waves, uv, noise - - -class SourceModuleHnNSF(torch.nn.Module): - """ SourceModule for hn-nsf - SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1, - add_noise_std=0.003, voiced_threshod=0) - sampling_rate: sampling_rate in Hz - harmonic_num: number of harmonic above F0 (default: 0) - sine_amp: amplitude of sine source signal (default: 0.1) - add_noise_std: std of additive Gaussian noise (default: 0.003) - note that amplitude of noise in unvoiced is decided - by sine_amp - voiced_threshold: threhold to set U/V given F0 (default: 0) - Sine_source, noise_source = SourceModuleHnNSF(F0_sampled) - F0_sampled (batchsize, length, 1) - Sine_source (batchsize, length, 1) - noise_source (batchsize, length 1) - uv (batchsize, length, 1) - """ - - def __init__(self, sampling_rate, harmonic_num=0, sine_amp=0.1, - add_noise_std=0.003, voiced_threshod=0): - super(SourceModuleHnNSF, self).__init__() - - self.sine_amp = sine_amp - self.noise_std = add_noise_std - - # to produce sine waveforms - self.l_sin_gen = SineGen(sampling_rate, harmonic_num, - sine_amp, add_noise_std, voiced_threshod) - - # to merge source harmonics into a single excitation - self.l_linear = torch.nn.Linear(harmonic_num + 1, 1) - self.l_tanh = torch.nn.Tanh() - - def forward(self, x, upp): - sine_wavs, uv, _ = self.l_sin_gen(x, upp) - sine_merge = self.l_tanh(self.l_linear(sine_wavs)) - return sine_merge - - -class Generator(torch.nn.Module): - def __init__(self, h): - super(Generator, self).__init__() - self.h = h - self.num_kernels = len(h.resblock_kernel_sizes) - self.num_upsamples = len(h.upsample_rates) - self.m_source = SourceModuleHnNSF( - sampling_rate=h.sampling_rate, - 
harmonic_num=8 - ) - self.noise_convs = nn.ModuleList() - self.conv_pre = weight_norm(Conv1d(h.num_mels, h.upsample_initial_channel, 7, 1, padding=3)) - resblock = ResBlock1 if h.resblock == '1' else ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(h.upsample_rates, h.upsample_kernel_sizes)): - c_cur = h.upsample_initial_channel // (2 ** (i + 1)) - self.ups.append(weight_norm( - ConvTranspose1d(h.upsample_initial_channel // (2 ** i), h.upsample_initial_channel // (2 ** (i + 1)), - k, u, padding=(k - u) // 2))) - if i + 1 < len(h.upsample_rates): # - stride_f0 = int(np.prod(h.upsample_rates[i + 1:])) - self.noise_convs.append(Conv1d( - 1, c_cur, kernel_size=stride_f0 * 2, stride=stride_f0, padding=stride_f0 // 2)) - else: - self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1)) - self.resblocks = nn.ModuleList() - ch = h.upsample_initial_channel - for i in range(len(self.ups)): - ch //= 2 - for j, (k, d) in enumerate(zip(h.resblock_kernel_sizes, h.resblock_dilation_sizes)): - self.resblocks.append(resblock(h, ch, k, d)) - - self.conv_post = weight_norm(Conv1d(ch, 1, 7, 1, padding=3)) - self.ups.apply(init_weights) - self.conv_post.apply(init_weights) - self.upp = int(np.prod(h.upsample_rates)) - - def forward(self, x, f0): - har_source = self.m_source(f0, self.upp).transpose(1, 2) - x = self.conv_pre(x) - for i in range(self.num_upsamples): - x = F.leaky_relu(x, LRELU_SLOPE) - x = self.ups[i](x) - x_source = self.noise_convs[i](har_source) - x = x + x_source - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - print('Removing weight norm...') - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - remove_weight_norm(self.conv_pre) - remove_weight_norm(self.conv_post) - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(2, 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, periods=None): - super(MultiPeriodDiscriminator, self).__init__() - self.periods = periods if periods is not None else [2, 3, 5, 7, 11] - self.discriminators = nn.ModuleList() - for period in self.periods: - 
self.discriminators.append(DiscriminatorP(period)) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - fmap_rs.append(fmap_r) - y_d_gs.append(y_d_g) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 128, 15, 1, padding=7)), - norm_f(Conv1d(128, 128, 41, 2, groups=4, padding=20)), - norm_f(Conv1d(128, 256, 41, 2, groups=16, padding=20)), - norm_f(Conv1d(256, 512, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(512, 1024, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 1, groups=16, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiScaleDiscriminator(torch.nn.Module): - def __init__(self): - super(MultiScaleDiscriminator, self).__init__() - self.discriminators = nn.ModuleList([ - DiscriminatorS(use_spectral_norm=True), - DiscriminatorS(), - DiscriminatorS(), - ]) - self.meanpools = nn.ModuleList([ - AvgPool1d(4, 2, padding=2), - AvgPool1d(4, 2, padding=2) - ]) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - if i != 0: - y = self.meanpools[i - 1](y) - y_hat = self.meanpools[i - 1](y_hat) - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - fmap_rs.append(fmap_r) - y_d_gs.append(y_d_g) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -def feature_loss(fmap_r, fmap_g): - loss = 0 - for dr, dg in zip(fmap_r, fmap_g): - for rl, gl in zip(dr, dg): - loss += torch.mean(torch.abs(rl - gl)) - - return loss * 2 - - -def discriminator_loss(disc_real_outputs, disc_generated_outputs): - loss = 0 - r_losses = [] - g_losses = [] - for dr, dg in zip(disc_real_outputs, disc_generated_outputs): - r_loss = torch.mean((1 - dr) ** 2) - g_loss = torch.mean(dg ** 2) - loss += (r_loss + g_loss) - r_losses.append(r_loss.item()) - g_losses.append(g_loss.item()) - - return loss, r_losses, g_losses - - -def generator_loss(disc_outputs): - loss = 0 - gen_losses = [] - for dg in disc_outputs: - l = torch.mean((1 - dg) ** 2) - gen_losses.append(l) - loss += l - - return loss, gen_losses diff --git a/spaces/MedicalAILabo/Xp-age/lib/component/__init__.py b/spaces/MedicalAILabo/Xp-age/lib/component/__init__.py deleted file mode 100644 index 687f7bcb535788688afe9620a391c7d924ad92e4..0000000000000000000000000000000000000000 --- a/spaces/MedicalAILabo/Xp-age/lib/component/__init__.py +++ /dev/null @@ -1,16 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- - -from .net import create_net -from .criterion import set_criterion -from .optimizer import set_optimizer -from .loss import set_loss_store -from .likelihood import set_likelihood - -__all__ = [ - 'create_net', - 'set_criterion', - 'set_optimizer', - 'set_loss_store', - 'set_likelihood' - ] diff --git a/spaces/Meena/table-question-answering-space/README.md 
b/spaces/Meena/table-question-answering-space/README.md deleted file mode 100644 index d55c265b216d1944b2cd3acd653c023ebe7146e9..0000000000000000000000000000000000000000 --- a/spaces/Meena/table-question-answering-space/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Table Question Answering Space -emoji: 🐨 -colorFrom: pink -colorTo: blue -sdk: streamlit -sdk_version: 1.2.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/MilaNLProc/wordify/src/__init__.py b/spaces/MilaNLProc/wordify/src/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Mountchicken/MAERec-Gradio/mmocr/models/common/layers/__init__.py b/spaces/Mountchicken/MAERec-Gradio/mmocr/models/common/layers/__init__.py deleted file mode 100644 index 1d1a921fdc8b57e2de15cedd6a214df77d9bdb42..0000000000000000000000000000000000000000 --- a/spaces/Mountchicken/MAERec-Gradio/mmocr/models/common/layers/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .transformer_layers import TFDecoderLayer, TFEncoderLayer - -__all__ = ['TFEncoderLayer', 'TFDecoderLayer'] diff --git a/spaces/MrVicente/RA-BART/kgs_binding/relation_mapper_builder.py b/spaces/MrVicente/RA-BART/kgs_binding/relation_mapper_builder.py deleted file mode 100644 index 58b7d99726c5121898d84c1130ffd31b87272d28..0000000000000000000000000000000000000000 --- a/spaces/MrVicente/RA-BART/kgs_binding/relation_mapper_builder.py +++ /dev/null @@ -1,164 +0,0 @@ - -############################# -# Imports -############################# - -# Python modules -from collections import deque -from collections import defaultdict -from typing import List, Dict, Optional -from ast import literal_eval -from random import sample - -# Remote modules - -# Local modules -from .kg_base_wrapper import KGBaseHandler -from .swow_handler import SwowHandler - -from utils import ( - read_json_file_2_dict, - Data_Type, -) -from .parsing_utils import ParsingUtils - -############################# -# Constants -############################# - -############################# -# Stuff -############################# - -class RelationsMapperBuilder: - def __init__(self, knowledge: KGBaseHandler, - filename: Optional[str] = None, - file_dir: Optional[str] = None, - datatype: Optional[Data_Type] = None, - tok_sep:str = '', - use_extra_relations=True): - self.tok_sep = tok_sep - self.knowledge = knowledge - self.swow_knowledge = SwowHandler() - self.use_extra_relations = use_extra_relations - if filename and file_dir and datatype: - full_context = self.load_data(filename, file_dir) - self.relevant_context = self.fetch_relevant_context_from_data(data=full_context, datatype=datatype) - - def load_data(self, filename='commongen_qa_final.json', store_dir='./'): - data = read_json_file_2_dict(filename=filename, store_dir=store_dir) - print('data[0]:', data[0]) - return data - - def fetch_relevant_context_from_data(self, data: List[Dict], datatype:Data_Type = Data_Type.COMMONGEN_QA): - if datatype == Data_Type.COMMONGEN_QA: - model_input = [data_unit.get('title').lower() for data_unit in data] - elif datatype in [Data_Type.ELI5, Data_Type.STACK_EXCHANGE]: - model_input = [data_unit.get('question').lower() for data_unit in data] - elif datatype in [Data_Type.COMMONSENSE_QA]: - #questions = [data_unit.get('question').lower() for data_unit in data] - #model_input = 
datasets_parsing_utils.compose_commonsenseqa_data(data) - model_input = [data_unit.get('input_data') for data_unit in data] - elif datatype in [Data_Type.COMMONGEN]: - #questions = [data_unit.get('input_data').lower() for data_unit in data] - #model_input = datasets_parsing_utils.compose_commongen_data(data) - model_input = [data_unit.get('input_data') for data_unit in data] - else: - model_input = [] - return model_input - - def get_kg_concepts_from_context(self, context=None, clear_common_wds=False): - if not context: - context = self.relevant_context - context_words = [] - for q_id, question in enumerate(context): - simple_question = ParsingUtils.remove_pontuation(question) - n_grams = ParsingUtils.n_grams_n_words_extractor(simple_question) - words = self.relevant_entities_extractor(n_grams) - if clear_common_wds: - words = ParsingUtils.clear_common_words(words) - simple_words = [word[0] for word in words] - context_words.append(simple_words) - return context_words - - def obtain_concept_neighbours(self, context_concepts:List[str], n_neighbours = 20): - """ - Use swow to get connected concepts, but then refer back to conceptnet for rich relations - """ - neighbours = [] - for concept in context_concepts: - external_neighbour_concepts = self.swow_knowledge.get_related_concepts(concept) - relevant_concepts = external_neighbour_concepts - #local_neighbour_concepts = self.knowledge.get_related_concepts(concept) - #relevant_concepts = [ext_concept for ext_concept in external_neighbour_concepts if ext_concept in local_neighbour_concepts] - neighbours.extend(relevant_concepts) - n_neighbours = min(n_neighbours, len(neighbours)) - some_neighbours = sample(neighbours, n_neighbours) - #print('context_concepts:', context_concepts) - #print('some_neighbours:', some_neighbours) - return some_neighbours - - - def get_relations_mapping_complex(self, context=None, clear_common_wds=False): - if not context: - context = self.relevant_context - relations_info = deque() - for q_id, question in enumerate(context): - simple_question = ParsingUtils.remove_pontuation(question) - n_grams = ParsingUtils.n_grams_n_words_extractor(simple_question) - words = self.relevant_entities_extractor(n_grams) - if clear_common_wds: - words = ParsingUtils.clear_common_words(words) - #print(f'question: {question}') - #print(f'words: {words}') - relation_context_between_words = defaultdict(dict) - known_tokens = set() - for token_i, (first_word_token, first_word_range) in enumerate(words[:-1]): - known_tokens.add(first_word_token) - first_word_range_str = str(first_word_range) - # normalize - first_word_phrase_normalized = self.knowledge.normalize_nouns(first_word_token) - for (second_word_token, second_word_range) in [w for w in words[token_i + 1:] if w not in known_tokens]: - second_word_range_str = str(second_word_range) - second_word_phrase_normalized = self.knowledge.normalize_nouns(second_word_token) - left_2_right, right_2_left = self.knowledge.relation_between(first_word_phrase_normalized, second_word_phrase_normalized) - #print(first_word_token, second_word_token, left_2_right, right_2_left) - if left_2_right: - relation_context_between_words[first_word_range_str][second_word_range_str] = left_2_right - if right_2_left: - relation_context_between_words[second_word_range_str][first_word_range_str] = right_2_left - relations_info.append(dict(relation_context_between_words)) - return list(relations_info) - - def get_concepts_from_context(self, context=None, clear_common_wds=False,alignment=0): - relations_info = 
self.get_relations_mapping_complex(context=[context], clear_common_wds=clear_common_wds) - words = [] - #print('relations_info here:', relations_info) - for rels in relations_info: - for coords, v in rels.items(): - coords_tuple = literal_eval(coords) - i,j = coords_tuple - words.append(context[i+alignment:j+alignment]) - for coords_other, rel in v.items(): - coords_other_tuple = literal_eval(coords_other) - i_other, j_other = coords_other_tuple - words.append(context[i_other+alignment: j_other+alignment]) - returning_words = list(set(words)) - #print('returning_words:', returning_words) - return returning_words - - def relevant_entities_extractor(self, n_grams_n_words, verbose_output=True): - non_overlapping_knowledge = {} - # print(n_grams_n_words) - for concept, (idx_start, idx_end) in n_grams_n_words: - normalized_concept = self.knowledge.normalize_nouns(concept) - exists = self.knowledge.does_concept_exist(normalized_concept) - #print('exists: ', concept, normalized_concept, exists) - if exists and idx_start not in non_overlapping_knowledge and \ - idx_end not in non_overlapping_knowledge: - non_overlapping_knowledge[idx_start] = (concept, idx_start, idx_end, 'start_idx') - non_overlapping_knowledge[idx_end] = (concept, idx_end, idx_end, 'end_idx') - if verbose_output: - return [(value[0], (value[1], value[2])) for k, value in sorted(non_overlapping_knowledge.items()) if value[-1] == 'start_idx'] - else: - return [value[0] for k, value in sorted(non_overlapping_knowledge.items()) if value[-1] == 'start_idx'] diff --git a/spaces/MuGeminorum/insecta/khandy/text_utils.py b/spaces/MuGeminorum/insecta/khandy/text_utils.py deleted file mode 100644 index 11d84714960659e6299bdadeebe753f6e625bad5..0000000000000000000000000000000000000000 --- a/spaces/MuGeminorum/insecta/khandy/text_utils.py +++ /dev/null @@ -1,33 +0,0 @@ -import re - - -def strip_content_in_paren(string): - """ - Notes: - strip_content_in_paren cannot process nested paren correctly - """ - return re.sub(r"\([^)]*\)|([^)]*)", "", string) - - -def is_chinese_char(uchar: str) -> bool: - """Whether the input char is a Chinese character. 
- - Args: - uchar: input char in unicode - - References: - `is_chinese_char` in https://github.com/thunlp/OpenNRE/ - """ - codepoint = ord(uchar) - if ((0x4E00 <= codepoint <= 0x9FFF) or # CJK Unified Ideographs - (0x3400 <= codepoint <= 0x4DBF) or # CJK Unified Ideographs Extension A - (0xF900 <= codepoint <= 0xFAFF) or # CJK Compatibility Ideographs - (0x20000 <= codepoint <= 0x2A6DF) or # CJK Unified Ideographs Extension B - (0x2A700 <= codepoint <= 0x2B73F) or - (0x2B740 <= codepoint <= 0x2B81F) or - (0x2B820 <= codepoint <= 0x2CEAF) or - (0x2F800 <= codepoint <= 0x2FA1F)): # CJK Compatibility Supplement - return True - return False - - diff --git a/spaces/Munna0912/URL_CLASSIFIER/Utils/Model.py b/spaces/Munna0912/URL_CLASSIFIER/Utils/Model.py deleted file mode 100644 index a404d8aa78260508745e27d08c2d1af8c8d8a922..0000000000000000000000000000000000000000 --- a/spaces/Munna0912/URL_CLASSIFIER/Utils/Model.py +++ /dev/null @@ -1,27 +0,0 @@ -import tensorflow as tf -from tensorflow.keras.layers import Dense, Dropout, Embedding, GRU, Input, concatenate -from tensorflow.keras.models import Model - -def create_model(Sequence_length,max_tokens, input_shape_numeric ): - # define nlp model - text_input = Input(shape=(Sequence_length,),) - x = Embedding(max_tokens, 16, input_length=Sequence_length)(text_input) - x = GRU(16, dropout=0.2, recurrent_dropout=0.2)(x) - x = Dropout(0.2)(x) - text_model = Model(inputs=text_input, outputs=x) - - # define numeric model - numeric_input = Input(shape=(input_shape_numeric,),) - y = Dense(16, activation='relu')(numeric_input) - y = Dropout(0.2)(y) - # y = Dense(16, activation='relu')(y) - # y = Dropout(0.2)(y) - numeric_model = Model(inputs=numeric_input, outputs=y) - - # concatenate the two models - combined_input = concatenate([text_model.output, numeric_model.output]) - z = Dense(16, activation='relu')(combined_input) - z = Dropout(0.2)(z) - output = Dense(1, activation='sigmoid')(z) - - return Model(inputs=[text_model.input, numeric_model.input], outputs=output) diff --git a/spaces/NAACL2022/README/README.md b/spaces/NAACL2022/README/README.md deleted file mode 100644 index 90a482d6187dfbb7b89be1e883e36846b785f28c..0000000000000000000000000000000000000000 --- a/spaces/NAACL2022/README/README.md +++ /dev/null @@ -1,113 +0,0 @@ ---- -title: README -emoji: ⚡ -colorFrom: pink -colorTo: gray -sdk: static -pinned: false ---- - -
-

This organization invites participants to add Gradio demos/models/datasets for conference papers on Hugging Face (Note: this is not an official NAACL-sponsored event)

-

Join the organization by clicking here

-

Hugging Face Gradio NAACL 2022 event -

-

-The NAACL organization is accepting Gradio demo submissions for NAACL 2022 papers from anyone for a chance to win prizes from Hugging Face; see the prizes section and the leaderboard below. The deadline to submit demos is July 31st, 2022 (AOE Time Zone). Feel free to submit Gradio demos for any NAACL paper, and you can submit demos for multiple papers. Find a tutorial on getting started with Gradio on Hugging Face here and on the new Gradio Blocks API here

- -

Hugging Face Models NAACL 2022 event -

-

-The NAACL organization is accepting model submissions for NAACL 2022 papers from anyone for a chance to win prizes from Hugging Face; see the prizes section and the leaderboard below. The deadline to submit models is July 31st, 2022 (AOE Time Zone). Feel free to submit models for any NAACL paper, and you can submit models for multiple papers. Find a tutorial on getting started with repos on Hugging Face here and on adding models here

- -

Hugging Face Datasets NAACL 2022 event -

-

-The NAACL organization is accepting dataset submissions for NAACL 2022 papers from anyone for a chance to win prizes from Hugging Face; see the prizes section and the leaderboard below. The deadline to submit datasets is July 31st, 2022 (AOE Time Zone). Feel free to submit datasets for any NAACL paper, and you can submit datasets for multiple papers. Find a tutorial on getting started with repos on Hugging Face here and on adding datasets here

- -

Hugging Face Prizes

-
    -
  • Top 5 spaces/models/datasets based on likes - -
  • -
- -

Leaderboard for Most Popular NAACL Spaces

-

See the NAACL Spaces Leaderboard

-

Leaderboard for Most Popular NAACL Models

-

See the NAACL Models Leaderboard

-

Leaderboard for Most Popular NAACL Datasets

-

See the NAACL Datasets Leaderboard

-
-

Hugging Face Spaces & Gradio for Showcasing your NAACL ‘22 Demo -

-

- In this tutorial, we will demonstrate how to showcase your demo with an easy-to-use web interface using the Gradio Python library and host it on Hugging Face Spaces so that conference attendees can easily find and try out your demos. Also see https://gradio.app/introduction_to_blocks/ for a more flexible way to build Gradio demos -
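As a quick taste of the Blocks API linked above, a minimal sketch might look like the following (the components and layout here are just illustrative; the steps below use the simpler Interface class):

import gradio as gr

def greet(name):
    return "Hello " + name + "!"

with gr.Blocks() as demo:
    name_box = gr.Textbox(label="Name")
    greeting = gr.Textbox(label="Greeting")
    greet_btn = gr.Button("Greet")
    # Wire the button to the function: inputs flow in, the return value fills the output box
    greet_btn.click(fn=greet, inputs=name_box, outputs=greeting)

demo.launch()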

-

🚀 Create a Gradio Demo from your Model -

-

-The first step is to create a web demo from your model. As an example, we will create a demo from an image classification model (called model), which we will upload to Spaces. The full code for steps 1-4 can be found in this Colab notebook. -


- -

1. Install the gradio library -

-

-All you need to do is run this in the terminal: pip install gradio -

-
-

2. Define a function in your Python code that performs inference with your model on a data point and returns the prediction -

-

-Here’s we define our image classification model prediction function in PyTorch (any framework, like TensorFlow, scikit-learn, JAX, or a plain Python will work as well): -

-
-import torch
-from PIL import Image
-from torchvision import transforms
-
-# `model` (a pretrained classifier) and `labels` (its class-name list) are assumed to be
-# defined earlier, as in the Colab notebook linked above.
-def predict(inp):
-  # Convert the incoming NumPy array to a PIL image, then to a tensor batch of shape (1, C, H, W)
-  inp = Image.fromarray(inp.astype('uint8'), 'RGB')
-  inp = transforms.ToTensor()(inp).unsqueeze(0)
-  with torch.no_grad():
-    prediction = torch.nn.functional.softmax(model(inp)[0], dim=0)
-  # Return a {label: probability} dict, which the Label output component understands
-  return {labels[i]: float(prediction[i]) for i in range(1000)}
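Before wiring this function into Gradio, you can sanity-check it on a single local image. A minimal sketch, assuming predict, model, and labels from above are already defined; example.jpg is a placeholder path:

import numpy as np
from PIL import Image

# Load any RGB test image as a NumPy array, exactly like Gradio will pass it in
img = np.array(Image.open("example.jpg").convert("RGB"))

# Print the three highest-probability labels
top3 = sorted(predict(img).items(), key=lambda kv: kv[1], reverse=True)[:3]
print(top3)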
-
-
-

- -

3. Then create a Gradio Interface using the function and the appropriate input and output types -

-

-For the image classification model from Step 2, it would look like this: -

-
-
-import gradio as gr
-
-inputs = gr.inputs.Image()
-
-outputs = gr.outputs.Label(num_top_classes=3)
-
-io = gr.Interface(fn=predict, inputs=inputs, outputs=outputs)
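If you are on a newer Gradio release where the gr.inputs/gr.outputs namespaces are deprecated, the same interface can be written with the top-level components instead (a sketch, assuming Gradio 3+):

import gradio as gr

# Equivalent interface using the top-level Image and Label components
io = gr.Interface(
    fn=predict,
    inputs=gr.Image(),
    outputs=gr.Label(num_top_classes=3),
)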
-
-
-

-If you need help creating a Gradio Interface for your model, check out the Gradio Getting Started guide. -

- -

4. Then launch() your Interface to confirm that it runs correctly locally (or wherever you are running Python) -

-
-
-io.launch() 
-
-
-

-You should see a web interface like the following, where you can drag and drop your data points and see the predictions: -

-Gradio Interface -
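While testing locally, you can also ask launch() for a temporary public link to share with colleagues. This is optional and not needed once the demo is hosted on Spaces; a sketch, assuming the io object from Step 3:

# share=True creates a temporary public URL that tunnels to your locally running demo
io.launch(share=True)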
-
- - diff --git a/spaces/NATSpeech/DiffSpeech/tasks/vocoder/vocoder_base.py b/spaces/NATSpeech/DiffSpeech/tasks/vocoder/vocoder_base.py deleted file mode 100644 index 9a1d006647f259ec39968ec9a9d2f36b166f5851..0000000000000000000000000000000000000000 --- a/spaces/NATSpeech/DiffSpeech/tasks/vocoder/vocoder_base.py +++ /dev/null @@ -1,137 +0,0 @@ -import os -import torch -import torch.distributed as dist -from torch import nn -from torch.utils.data import DistributedSampler -from tasks.vocoder.dataset_utils import VocoderDataset, EndlessDistributedSampler -from utils.audio.io import save_wav -from utils.commons.base_task import BaseTask -from utils.commons.dataset_utils import data_loader -from utils.commons.hparams import hparams -from utils.commons.tensor_utils import tensors_to_scalars - - -class VocoderBaseTask(BaseTask): - def __init__(self): - super(VocoderBaseTask, self).__init__() - self.max_sentences = hparams['max_sentences'] - self.max_valid_sentences = hparams['max_valid_sentences'] - if self.max_valid_sentences == -1: - hparams['max_valid_sentences'] = self.max_valid_sentences = self.max_sentences - self.dataset_cls = VocoderDataset - - @data_loader - def train_dataloader(self): - train_dataset = self.dataset_cls('train', shuffle=True) - return self.build_dataloader(train_dataset, True, self.max_sentences, hparams['endless_ds']) - - @data_loader - def val_dataloader(self): - valid_dataset = self.dataset_cls('test', shuffle=False) - return self.build_dataloader(valid_dataset, False, self.max_valid_sentences) - - @data_loader - def test_dataloader(self): - test_dataset = self.dataset_cls('test', shuffle=False) - return self.build_dataloader(test_dataset, False, self.max_valid_sentences) - - def build_dataloader(self, dataset, shuffle, max_sentences, endless=False): - world_size = 1 - rank = 0 - if dist.is_initialized(): - world_size = dist.get_world_size() - rank = dist.get_rank() - sampler_cls = DistributedSampler if not endless else EndlessDistributedSampler - train_sampler = sampler_cls( - dataset=dataset, - num_replicas=world_size, - rank=rank, - shuffle=shuffle, - ) - return torch.utils.data.DataLoader( - dataset=dataset, - shuffle=False, - collate_fn=dataset.collater, - batch_size=max_sentences, - num_workers=dataset.num_workers, - sampler=train_sampler, - pin_memory=True, - ) - - def build_optimizer(self, model): - optimizer_gen = torch.optim.AdamW(self.model_gen.parameters(), lr=hparams['lr'], - betas=[hparams['adam_b1'], hparams['adam_b2']]) - optimizer_disc = torch.optim.AdamW(self.model_disc.parameters(), lr=hparams['lr'], - betas=[hparams['adam_b1'], hparams['adam_b2']]) - return [optimizer_gen, optimizer_disc] - - def build_scheduler(self, optimizer): - return { - "gen": torch.optim.lr_scheduler.StepLR( - optimizer=optimizer[0], - **hparams["generator_scheduler_params"]), - "disc": torch.optim.lr_scheduler.StepLR( - optimizer=optimizer[1], - **hparams["discriminator_scheduler_params"]), - } - - def validation_step(self, sample, batch_idx): - outputs = {} - total_loss, loss_output = self._training_step(sample, batch_idx, 0) - outputs['losses'] = tensors_to_scalars(loss_output) - outputs['total_loss'] = tensors_to_scalars(total_loss) - - if self.global_step % hparams['valid_infer_interval'] == 0 and \ - batch_idx < 10: - mels = sample['mels'] - y = sample['wavs'] - f0 = sample['f0'] - y_ = self.model_gen(mels, f0) - for idx, (wav_pred, wav_gt, item_name) in enumerate(zip(y_, y, sample["item_name"])): - wav_pred = wav_pred / wav_pred.abs().max() - if self.global_step == 0: 
- wav_gt = wav_gt / wav_gt.abs().max() - self.logger.add_audio(f'wav_{batch_idx}_{idx}_gt', wav_gt, self.global_step, - hparams['audio_sample_rate']) - self.logger.add_audio(f'wav_{batch_idx}_{idx}_pred', wav_pred, self.global_step, - hparams['audio_sample_rate']) - return outputs - - def test_start(self): - self.gen_dir = os.path.join(hparams['work_dir'], - f'generated_{self.trainer.global_step}_{hparams["gen_dir_name"]}') - os.makedirs(self.gen_dir, exist_ok=True) - - def test_step(self, sample, batch_idx): - mels = sample['mels'] - y = sample['wavs'] - f0 = sample['f0'] - loss_output = {} - y_ = self.model_gen(mels, f0) - gen_dir = os.path.join(hparams['work_dir'], f'generated_{self.trainer.global_step}_{hparams["gen_dir_name"]}') - os.makedirs(gen_dir, exist_ok=True) - for idx, (wav_pred, wav_gt, item_name) in enumerate(zip(y_, y, sample["item_name"])): - wav_gt = wav_gt.clamp(-1, 1) - wav_pred = wav_pred.clamp(-1, 1) - save_wav( - wav_gt.view(-1).cpu().float().numpy(), f'{gen_dir}/{item_name}_gt.wav', - hparams['audio_sample_rate']) - save_wav( - wav_pred.view(-1).cpu().float().numpy(), f'{gen_dir}/{item_name}_pred.wav', - hparams['audio_sample_rate']) - return loss_output - - def test_end(self, outputs): - return {} - - def on_before_optimization(self, opt_idx): - if opt_idx == 0: - nn.utils.clip_grad_norm_(self.model_gen.parameters(), hparams['generator_grad_norm']) - else: - nn.utils.clip_grad_norm_(self.model_disc.parameters(), hparams["discriminator_grad_norm"]) - - def on_after_optimization(self, epoch, batch_idx, optimizer, optimizer_idx): - if optimizer_idx == 0: - self.scheduler['gen'].step(self.global_step // hparams['accumulate_grad_batches']) - else: - self.scheduler['disc'].step(self.global_step // hparams['accumulate_grad_batches']) diff --git a/spaces/NCTCMumbai/NCTC/models/research/attention_ocr/python/datasets/fsns_test.py b/spaces/NCTCMumbai/NCTC/models/research/attention_ocr/python/datasets/fsns_test.py deleted file mode 100644 index 4daedfbd12a58b6635cefed2bdc02bc84fc2c9ef..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/research/attention_ocr/python/datasets/fsns_test.py +++ /dev/null @@ -1,103 +0,0 @@ -# Copyright 2017 The TensorFlow Authors All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-# ============================================================================== - -"""Tests for FSNS datasets module.""" - -import collections -import os -import tensorflow as tf -from tensorflow.contrib import slim - -from datasets import fsns -from datasets import unittest_utils - -FLAGS = tf.flags.FLAGS - - -def get_test_split(): - config = fsns.DEFAULT_CONFIG.copy() - config['splits'] = {'test': {'size': 5, 'pattern': 'fsns-00000-of-00001'}} - return fsns.get_split('test', dataset_dir(), config) - - -def dataset_dir(): - return os.path.join(os.path.dirname(__file__), 'testdata/fsns') - - -class FsnsTest(tf.test.TestCase): - def test_decodes_example_proto(self): - expected_label = range(37) - expected_image, encoded = unittest_utils.create_random_image( - 'PNG', shape=(150, 600, 3)) - serialized = unittest_utils.create_serialized_example({ - 'image/encoded': [encoded], - 'image/format': [b'PNG'], - 'image/class': - expected_label, - 'image/unpadded_class': - range(10), - 'image/text': [b'Raw text'], - 'image/orig_width': [150], - 'image/width': [600] - }) - - decoder = fsns.get_split('train', dataset_dir()).decoder - with self.test_session() as sess: - data_tuple = collections.namedtuple('DecodedData', decoder.list_items()) - data = sess.run(data_tuple(*decoder.decode(serialized))) - - self.assertAllEqual(expected_image, data.image) - self.assertAllEqual(expected_label, data.label) - self.assertEqual([b'Raw text'], data.text) - self.assertEqual([1], data.num_of_views) - - def test_label_has_shape_defined(self): - serialized = 'fake' - decoder = fsns.get_split('train', dataset_dir()).decoder - - [label_tf] = decoder.decode(serialized, ['label']) - - self.assertEqual(label_tf.get_shape().dims[0], 37) - - def test_dataset_tuple_has_all_extra_attributes(self): - dataset = fsns.get_split('train', dataset_dir()) - - self.assertTrue(dataset.charset) - self.assertTrue(dataset.num_char_classes) - self.assertTrue(dataset.num_of_views) - self.assertTrue(dataset.max_sequence_length) - self.assertTrue(dataset.null_code) - - def test_can_use_the_test_data(self): - batch_size = 1 - dataset = get_test_split() - provider = slim.dataset_data_provider.DatasetDataProvider( - dataset, - shuffle=True, - common_queue_capacity=2 * batch_size, - common_queue_min=batch_size) - image_tf, label_tf = provider.get(['image', 'label']) - - with self.test_session() as sess: - sess.run(tf.global_variables_initializer()) - with slim.queues.QueueRunners(sess): - image_np, label_np = sess.run([image_tf, label_tf]) - - self.assertEqual((150, 600, 3), image_np.shape) - self.assertEqual((37, ), label_np.shape) - - -if __name__ == '__main__': - tf.test.main() diff --git a/spaces/NCTCMumbai/NCTC/models/research/autoencoder/AdditiveGaussianNoiseAutoencoderRunner.py b/spaces/NCTCMumbai/NCTC/models/research/autoencoder/AdditiveGaussianNoiseAutoencoderRunner.py deleted file mode 100644 index 8d8ee08654985250ac61415df96889b4a4cf5f1b..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/research/autoencoder/AdditiveGaussianNoiseAutoencoderRunner.py +++ /dev/null @@ -1,58 +0,0 @@ -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function - -import numpy as np -import sklearn.preprocessing as prep -import tensorflow as tf -from tensorflow.examples.tutorials.mnist import input_data - -from autoencoder_models.DenoisingAutoencoder import AdditiveGaussianNoiseAutoencoder - -mnist = input_data.read_data_sets('MNIST_data', one_hot=True) - - -def 
standard_scale(X_train, X_test): - preprocessor = prep.StandardScaler().fit(X_train) - X_train = preprocessor.transform(X_train) - X_test = preprocessor.transform(X_test) - return X_train, X_test - - -def get_random_block_from_data(data, batch_size): - start_index = np.random.randint(0, len(data) - batch_size) - return data[start_index:(start_index + batch_size)] - - -X_train, X_test = standard_scale(mnist.train.images, mnist.test.images) - -n_samples = int(mnist.train.num_examples) -training_epochs = 20 -batch_size = 128 -display_step = 1 - -autoencoder = AdditiveGaussianNoiseAutoencoder( - n_input=784, - n_hidden=200, - transfer_function=tf.nn.softplus, - optimizer=tf.train.AdamOptimizer(learning_rate = 0.001), - scale=0.01) - -for epoch in range(training_epochs): - avg_cost = 0. - total_batch = int(n_samples / batch_size) - # Loop over all batches - for i in range(total_batch): - batch_xs = get_random_block_from_data(X_train, batch_size) - - # Fit training using batch data - cost = autoencoder.partial_fit(batch_xs) - # Compute average loss - avg_cost += cost / n_samples * batch_size - - # Display logs per epoch step - if epoch % display_step == 0: - print("Epoch:", '%d,' % (epoch + 1), - "Cost:", "{:.9f}".format(avg_cost)) - -print("Total cost: " + str(autoencoder.calc_total_cost(X_test))) diff --git a/spaces/NKU-AMT/AMT/networks/blocks/feat_enc.py b/spaces/NKU-AMT/AMT/networks/blocks/feat_enc.py deleted file mode 100644 index 983246b7aa16fb67a4d0f3ad4893204bb1e7f495..0000000000000000000000000000000000000000 --- a/spaces/NKU-AMT/AMT/networks/blocks/feat_enc.py +++ /dev/null @@ -1,346 +0,0 @@ -''' - This code is partially borrowed from RAFT (https://github.com/princeton-vl/RAFT). -''' -import torch -import torch.nn as nn - -class BottleneckBlock(nn.Module): - def __init__(self, in_planes, planes, norm_fn='group', stride=1): - super(BottleneckBlock, self).__init__() - - self.conv1 = nn.Conv2d(in_planes, planes//4, kernel_size=1, padding=0) - self.conv2 = nn.Conv2d(planes//4, planes//4, kernel_size=3, padding=1, stride=stride) - self.conv3 = nn.Conv2d(planes//4, planes, kernel_size=1, padding=0) - self.relu = nn.ReLU(inplace=True) - - num_groups = planes // 8 - - if norm_fn == 'group': - self.norm1 = nn.GroupNorm(num_groups=num_groups, num_channels=planes//4) - self.norm2 = nn.GroupNorm(num_groups=num_groups, num_channels=planes//4) - self.norm3 = nn.GroupNorm(num_groups=num_groups, num_channels=planes) - if not stride == 1: - self.norm4 = nn.GroupNorm(num_groups=num_groups, num_channels=planes) - - - elif norm_fn == 'batch': - self.norm1 = nn.BatchNorm2d(planes//4) - self.norm2 = nn.BatchNorm2d(planes//4) - self.norm3 = nn.BatchNorm2d(planes) - if not stride == 1: - self.norm4 = nn.BatchNorm2d(planes) - - elif norm_fn == 'instance': - self.norm1 = nn.InstanceNorm2d(planes//4) - self.norm2 = nn.InstanceNorm2d(planes//4) - self.norm3 = nn.InstanceNorm2d(planes) - if not stride == 1: - self.norm4 = nn.InstanceNorm2d(planes) - - elif norm_fn == 'none': - self.norm1 = nn.Sequential() - self.norm2 = nn.Sequential() - self.norm3 = nn.Sequential() - if not stride == 1: - self.norm4 = nn.Sequential() - - if stride == 1: - self.downsample = None - - else: - self.downsample = nn.Sequential( - nn.Conv2d(in_planes, planes, kernel_size=1, stride=stride), self.norm4) - - - def forward(self, x): - y = x - y = self.relu(self.norm1(self.conv1(y))) - y = self.relu(self.norm2(self.conv2(y))) - y = self.relu(self.norm3(self.conv3(y))) - - if self.downsample is not None: - x = self.downsample(x) - - return 
self.relu(x+y) - - -class ResidualBlock(nn.Module): - def __init__(self, in_planes, planes, norm_fn='group', stride=1): - super(ResidualBlock, self).__init__() - - self.conv1 = nn.Conv2d(in_planes, planes, kernel_size=3, padding=1, stride=stride) - self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, padding=1) - self.relu = nn.ReLU(inplace=True) - - num_groups = planes // 8 - - if norm_fn == 'group': - self.norm1 = nn.GroupNorm(num_groups=num_groups, num_channels=planes) - self.norm2 = nn.GroupNorm(num_groups=num_groups, num_channels=planes) - if not stride == 1: - self.norm3 = nn.GroupNorm(num_groups=num_groups, num_channels=planes) - - elif norm_fn == 'batch': - self.norm1 = nn.BatchNorm2d(planes) - self.norm2 = nn.BatchNorm2d(planes) - if not stride == 1: - self.norm3 = nn.BatchNorm2d(planes) - - elif norm_fn == 'instance': - self.norm1 = nn.InstanceNorm2d(planes) - self.norm2 = nn.InstanceNorm2d(planes) - if not stride == 1: - self.norm3 = nn.InstanceNorm2d(planes) - - elif norm_fn == 'none': - self.norm1 = nn.Sequential() - self.norm2 = nn.Sequential() - if not stride == 1: - self.norm3 = nn.Sequential() - - if stride == 1: - self.downsample = None - - else: - self.downsample = nn.Sequential( - nn.Conv2d(in_planes, planes, kernel_size=1, stride=stride), self.norm3) - - - def forward(self, x): - y = x - y = self.relu(self.norm1(self.conv1(y))) - y = self.relu(self.norm2(self.conv2(y))) - - if self.downsample is not None: - x = self.downsample(x) - - return self.relu(x+y) - - -class SmallEncoder(nn.Module): - def __init__(self, output_dim=128, norm_fn='batch', dropout=0.0): - super(SmallEncoder, self).__init__() - self.norm_fn = norm_fn - - if self.norm_fn == 'group': - self.norm1 = nn.GroupNorm(num_groups=8, num_channels=32) - - elif self.norm_fn == 'batch': - self.norm1 = nn.BatchNorm2d(32) - - elif self.norm_fn == 'instance': - self.norm1 = nn.InstanceNorm2d(32) - - elif self.norm_fn == 'none': - self.norm1 = nn.Sequential() - - self.conv1 = nn.Conv2d(3, 32, kernel_size=7, stride=2, padding=3) - self.relu1 = nn.ReLU(inplace=True) - - self.in_planes = 32 - self.layer1 = self._make_layer(32, stride=1) - self.layer2 = self._make_layer(64, stride=2) - self.layer3 = self._make_layer(96, stride=2) - - self.dropout = None - if dropout > 0: - self.dropout = nn.Dropout2d(p=dropout) - - self.conv2 = nn.Conv2d(96, output_dim, kernel_size=1) - - for m in self.modules(): - if isinstance(m, nn.Conv2d): - nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu') - elif isinstance(m, (nn.BatchNorm2d, nn.InstanceNorm2d, nn.GroupNorm)): - if m.weight is not None: - nn.init.constant_(m.weight, 1) - if m.bias is not None: - nn.init.constant_(m.bias, 0) - - def _make_layer(self, dim, stride=1): - layer1 = BottleneckBlock(self.in_planes, dim, self.norm_fn, stride=stride) - layer2 = BottleneckBlock(dim, dim, self.norm_fn, stride=1) - layers = (layer1, layer2) - - self.in_planes = dim - return nn.Sequential(*layers) - - - def forward(self, x): - - # if input is list, combine batch dimension - is_list = isinstance(x, tuple) or isinstance(x, list) - if is_list: - batch_dim = x[0].shape[0] - x = torch.cat(x, dim=0) - - x = self.conv1(x) - x = self.norm1(x) - x = self.relu1(x) - - x = self.layer1(x) - x = self.layer2(x) - x = self.layer3(x) - x = self.conv2(x) - - if self.training and self.dropout is not None: - x = self.dropout(x) - - if is_list: - x = torch.split(x, [batch_dim, batch_dim], dim=0) - - return x - -class BasicEncoder(nn.Module): - def __init__(self, output_dim=128, 
norm_fn='batch', dropout=0.0): - super(BasicEncoder, self).__init__() - self.norm_fn = norm_fn - - if self.norm_fn == 'group': - self.norm1 = nn.GroupNorm(num_groups=8, num_channels=64) - - elif self.norm_fn == 'batch': - self.norm1 = nn.BatchNorm2d(64) - - elif self.norm_fn == 'instance': - self.norm1 = nn.InstanceNorm2d(64) - - elif self.norm_fn == 'none': - self.norm1 = nn.Sequential() - - self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3) - self.relu1 = nn.ReLU(inplace=True) - - self.in_planes = 64 - self.layer1 = self._make_layer(64, stride=1) - self.layer2 = self._make_layer(72, stride=2) - self.layer3 = self._make_layer(128, stride=2) - - # output convolution - self.conv2 = nn.Conv2d(128, output_dim, kernel_size=1) - - self.dropout = None - if dropout > 0: - self.dropout = nn.Dropout2d(p=dropout) - - for m in self.modules(): - if isinstance(m, nn.Conv2d): - nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu') - elif isinstance(m, (nn.BatchNorm2d, nn.InstanceNorm2d, nn.GroupNorm)): - if m.weight is not None: - nn.init.constant_(m.weight, 1) - if m.bias is not None: - nn.init.constant_(m.bias, 0) - - def _make_layer(self, dim, stride=1): - layer1 = ResidualBlock(self.in_planes, dim, self.norm_fn, stride=stride) - layer2 = ResidualBlock(dim, dim, self.norm_fn, stride=1) - layers = (layer1, layer2) - - self.in_planes = dim - return nn.Sequential(*layers) - - - def forward(self, x): - - # if input is list, combine batch dimension - is_list = isinstance(x, tuple) or isinstance(x, list) - if is_list: - batch_dim = x[0].shape[0] - x = torch.cat(x, dim=0) - - x = self.conv1(x) - x = self.norm1(x) - x = self.relu1(x) - - x = self.layer1(x) - x = self.layer2(x) - x = self.layer3(x) - - x = self.conv2(x) - - if self.training and self.dropout is not None: - x = self.dropout(x) - - if is_list: - x = torch.split(x, [batch_dim, batch_dim], dim=0) - - return x - -class LargeEncoder(nn.Module): - def __init__(self, output_dim=128, norm_fn='batch', dropout=0.0): - super(LargeEncoder, self).__init__() - self.norm_fn = norm_fn - - if self.norm_fn == 'group': - self.norm1 = nn.GroupNorm(num_groups=8, num_channels=64) - - elif self.norm_fn == 'batch': - self.norm1 = nn.BatchNorm2d(64) - - elif self.norm_fn == 'instance': - self.norm1 = nn.InstanceNorm2d(64) - - elif self.norm_fn == 'none': - self.norm1 = nn.Sequential() - - self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3) - self.relu1 = nn.ReLU(inplace=True) - - self.in_planes = 64 - self.layer1 = self._make_layer(64, stride=1) - self.layer2 = self._make_layer(112, stride=2) - self.layer3 = self._make_layer(160, stride=2) - self.layer3_2 = self._make_layer(160, stride=1) - - # output convolution - self.conv2 = nn.Conv2d(self.in_planes, output_dim, kernel_size=1) - - self.dropout = None - if dropout > 0: - self.dropout = nn.Dropout2d(p=dropout) - - for m in self.modules(): - if isinstance(m, nn.Conv2d): - nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu') - elif isinstance(m, (nn.BatchNorm2d, nn.InstanceNorm2d, nn.GroupNorm)): - if m.weight is not None: - nn.init.constant_(m.weight, 1) - if m.bias is not None: - nn.init.constant_(m.bias, 0) - - def _make_layer(self, dim, stride=1): - layer1 = ResidualBlock(self.in_planes, dim, self.norm_fn, stride=stride) - layer2 = ResidualBlock(dim, dim, self.norm_fn, stride=1) - layers = (layer1, layer2) - - self.in_planes = dim - return nn.Sequential(*layers) - - - def forward(self, x): - - # if input is list, combine batch dimension - is_list = 
isinstance(x, tuple) or isinstance(x, list) - if is_list: - batch_dim = x[0].shape[0] - x = torch.cat(x, dim=0) - - x = self.conv1(x) - x = self.norm1(x) - x = self.relu1(x) - - x = self.layer1(x) - x = self.layer2(x) - x = self.layer3(x) - x = self.layer3_2(x) - - x = self.conv2(x) - - if self.training and self.dropout is not None: - x = self.dropout(x) - - if is_list: - x = torch.split(x, [batch_dim, batch_dim], dim=0) - - return x diff --git a/spaces/Nikhil0987/omm/qa.py b/spaces/Nikhil0987/omm/qa.py deleted file mode 100644 index 86ec1538137cd2528f8ab0cb8bd1348001184b37..0000000000000000000000000000000000000000 --- a/spaces/Nikhil0987/omm/qa.py +++ /dev/null @@ -1,44 +0,0 @@ -from transformers import pipeline -import streamlit as st - - -# def que(): -# question = st.text_input("ASk me a question") - -# oracle = pipeline(task= "question-answering",model="deepset/roberta-base-squad2") -# oracle(question="Where do I live?", context="My name is Wolfgang and I live in Berlin") - - -def question_answering(question, context): - """Answers a question given a context.""" - - # Load the question answering model. - - - qa_model = pipeline("question-answering") - - - # Prepare the inputs for the model. - inputs = { - "question": question, - "context": context, - } - - # Get the answer from the model. - output = qa_model(**inputs) - answer = output["answer_start"] - - # Return the answer. - return context[answer : answer + output["answer_length"]] - - - if __name__ == "__main__": - # Get the question and context. - question = "What is the capital of France?" - context = "The capital of France is Paris." - - # Get the answer. - answer = question_answering(question, context) - - # Print the answer. - print(answer) diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/data/base_wrapper_dataset.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/data/base_wrapper_dataset.py deleted file mode 100644 index 134d398b47dc73c8807759188504aee205b3b34d..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/data/base_wrapper_dataset.py +++ /dev/null @@ -1,78 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from torch.utils.data.dataloader import default_collate - -from . 
import FairseqDataset - - -class BaseWrapperDataset(FairseqDataset): - def __init__(self, dataset): - super().__init__() - self.dataset = dataset - - def __getitem__(self, index): - return self.dataset[index] - - def __len__(self): - return len(self.dataset) - - def collater(self, samples): - if hasattr(self.dataset, "collater"): - return self.dataset.collater(samples) - else: - return default_collate(samples) - - @property - def sizes(self): - return self.dataset.sizes - - def num_tokens(self, index): - return self.dataset.num_tokens(index) - - def size(self, index): - return self.dataset.size(index) - - def ordered_indices(self): - return self.dataset.ordered_indices() - - @property - def supports_prefetch(self): - return getattr(self.dataset, "supports_prefetch", False) - - def attr(self, attr: str, index: int): - return self.dataset.attr(attr, index) - - def prefetch(self, indices): - self.dataset.prefetch(indices) - - def get_batch_shapes(self): - return self.dataset.get_batch_shapes() - - def batch_by_size( - self, - indices, - max_tokens=None, - max_sentences=None, - required_batch_size_multiple=1, - ): - return self.dataset.batch_by_size( - indices, - max_tokens=max_tokens, - max_sentences=max_sentences, - required_batch_size_multiple=required_batch_size_multiple, - ) - - def filter_indices_by_size(self, indices, max_sizes): - return self.dataset.filter_indices_by_size(indices, max_sizes) - - @property - def can_reuse_epoch_itr_across_epochs(self): - return self.dataset.can_reuse_epoch_itr_across_epochs - - def set_epoch(self, epoch): - super().set_epoch(epoch) - if hasattr(self.dataset, "set_epoch"): - self.dataset.set_epoch(epoch) diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/tests/test_resampling_dataset.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/tests/test_resampling_dataset.py deleted file mode 100644 index ccb53a253ce6ca0d8e972adfa708144b4299b3cb..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/tests/test_resampling_dataset.py +++ /dev/null @@ -1,103 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import collections -import unittest - -import numpy as np -from fairseq.data import ListDataset, ResamplingDataset - - -class TestResamplingDataset(unittest.TestCase): - def setUp(self): - self.strings = ["ab", "c", "def", "ghij"] - self.weights = [4.0, 2.0, 7.0, 1.5] - self.size_ratio = 2 - self.dataset = ListDataset( - self.strings, np.array([len(s) for s in self.strings]) - ) - - def _test_common(self, resampling_dataset, iters): - assert len(self.dataset) == len(self.strings) == len(self.weights) - assert len(resampling_dataset) == self.size_ratio * len(self.strings) - - results = {"ordered_by_size": True, "max_distribution_diff": 0.0} - - totalfreqs = 0 - freqs = collections.defaultdict(int) - - for epoch_num in range(iters): - resampling_dataset.set_epoch(epoch_num) - - indices = resampling_dataset.ordered_indices() - assert len(indices) == len(resampling_dataset) - - prev_size = -1 - - for i in indices: - cur_size = resampling_dataset.size(i) - # Make sure indices map to same sequences within an epoch - assert resampling_dataset[i] == resampling_dataset[i] - - # Make sure length of sequence is correct - assert cur_size == len(resampling_dataset[i]) - - freqs[resampling_dataset[i]] += 1 - totalfreqs += 1 - - if prev_size > cur_size: - results["ordered_by_size"] = False - - prev_size = cur_size - - assert set(freqs.keys()) == set(self.strings) - for s, weight in zip(self.strings, self.weights): - freq = freqs[s] / totalfreqs - expected_freq = weight / sum(self.weights) - results["max_distribution_diff"] = max( - results["max_distribution_diff"], abs(expected_freq - freq) - ) - - return results - - def test_resampling_dataset_batch_by_size_false(self): - resampling_dataset = ResamplingDataset( - self.dataset, - self.weights, - size_ratio=self.size_ratio, - batch_by_size=False, - seed=0, - ) - - results = self._test_common(resampling_dataset, iters=1000) - - # For batch_by_size = False, the batches should be returned in - # arbitrary order of size. - assert not results["ordered_by_size"] - - # Allow tolerance in distribution error of 2%. - assert results["max_distribution_diff"] < 0.02 - - def test_resampling_dataset_batch_by_size_true(self): - resampling_dataset = ResamplingDataset( - self.dataset, - self.weights, - size_ratio=self.size_ratio, - batch_by_size=True, - seed=0, - ) - - results = self._test_common(resampling_dataset, iters=1000) - - # For batch_by_size = True, the batches should be returned in - # increasing order of size. - assert results["ordered_by_size"] - - # Allow tolerance in distribution error of 2%. - assert results["max_distribution_diff"] < 0.02 - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/tests/test_sequence_generator.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/tests/test_sequence_generator.py deleted file mode 100644 index 9273191962089816edffaa5d0c9c90cb0c3f3c1a..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/tests/test_sequence_generator.py +++ /dev/null @@ -1,799 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import argparse -import tempfile -import unittest -import math -import numpy as np - - -import tests.utils as test_utils -import torch -from fairseq import search -from fairseq.data.dictionary import Dictionary -from fairseq.models.transformer import TransformerModel -from fairseq.sequence_generator import EnsembleModel, SequenceGenerator -from fairseq.ngram_repeat_block import NGramRepeatBlock -from fairseq.tasks.fairseq_task import LegacyFairseqTask - - -DEFAULT_TEST_VOCAB_SIZE = 100 - - -class DummyTask(LegacyFairseqTask): - def __init__(self, args): - super().__init__(args) - self.dictionary = get_dummy_dictionary() - if getattr(self.args, "ctc", False): - self.dictionary.add_symbol("") - self.src_dict = self.dictionary - self.tgt_dict = self.dictionary - - @property - def source_dictionary(self): - return self.src_dict - - @property - def target_dictionary(self): - return self.dictionary - - -def get_dummy_dictionary(vocab_size=DEFAULT_TEST_VOCAB_SIZE): - dummy_dict = Dictionary() - # add dummy symbol to satisfy vocab size - for id, _ in enumerate(range(vocab_size)): - dummy_dict.add_symbol("{}".format(id), n=1000) - return dummy_dict - - -def get_dummy_task_and_parser(): - """ - to build a fariseq model, we need some dummy parse and task. This function - is used to create dummy task and parser to faciliate model/criterion test - - Note: we use FbSpeechRecognitionTask as the dummy task. You may want - to use other task by providing another function - """ - parser = argparse.ArgumentParser( - description="test_dummy_s2s_task", argument_default=argparse.SUPPRESS - ) - DummyTask.add_args(parser) - args = parser.parse_args([]) - task = DummyTask.setup_task(args) - return task, parser - - -class TestJitSequenceGeneratorBase(unittest.TestCase): - def setUp(self): - self.task, self.parser = get_dummy_task_and_parser() - eos = self.task.tgt_dict.eos() - src_tokens = torch.randint(3, 50, (2, 10)).long() - src_tokens = torch.cat((src_tokens, torch.LongTensor([[eos], [eos]])), -1) - src_lengths = torch.LongTensor([2, 10]) - self.sample = { - "net_input": {"src_tokens": src_tokens, "src_lengths": src_lengths} - } - TransformerModel.add_args(self.parser) - args = self.parser.parse_args([]) - args.encoder_layers = 2 - args.decoder_layers = 1 - self.transformer_model = TransformerModel.build_model(args, self.task) - - def assertOutputEqual(self, hypo, pos_probs): - pos_scores = torch.FloatTensor(pos_probs).log() - self.assertTensorSizeEqual(hypo["positional_scores"], pos_scores) - self.assertTensorSizeEqual(pos_scores.numel(), hypo["tokens"].numel()) - - def assertTensorSizeEqual(self, t1, t2): - self.assertEqual(t1.size(), t2.size(), "size mismatch") - - def assertAlmostEqual(self, t1, t2): - self.assertEqual(t1.size(), t2.size(), "size mismatch") - self.assertLess((t1 - t2).abs().max(), 1e-4) - - def assertTensorEqual(self, t1, t2): - self.assertEqual(t1.size(), t2.size(), "size mismatch") - self.assertEqual(t1.ne(t2).long().sum(), 0) - - def assertHypoEqual(self, h1, h2): - "Check two hypos are equal" - self.assertTensorEqual(h1["tokens"], h2["tokens"]) - self.assertAlmostEqual(h1["positional_scores"], h2["positional_scores"]) - self.assertLess(abs(h1["score"] - h2["score"]), 1e-6) - self.assertAlmostEqual(h1["attention"], h2["attention"]) - - def _test_save_and_load(self, scripted_module): - with tempfile.NamedTemporaryFile() as f: - scripted_module.save(f.name) - torch.jit.load(f.name) - - -JIT_MSG = "Targeting OSS scriptability for the 1.6 release" - - -@unittest.skipIf(torch.__version__ < 
"1.6.0", JIT_MSG) -class TestJitSequenceGenerator(TestJitSequenceGeneratorBase): - def test_export_transformer(self): - model = self.transformer_model - torch.jit.script(model) - - def test_ensemble_sequence_generator(self): - model = self.transformer_model - generator = SequenceGenerator( - [model], - self.task.tgt_dict, - beam_size=2, - no_repeat_ngram_size=2, - max_len_b=10, - ) - scripted_model = torch.jit.script(generator) - self._test_save_and_load(scripted_model) - - def test_export_ensemble_model(self): - model = self.transformer_model - ensemble_models = EnsembleModel([model]) - torch.jit.script(ensemble_models) - - -class TestExportSearch(unittest.TestCase): - def setUp(self): - task, _ = get_dummy_task_and_parser() - self.tgt_dict = task.tgt_dict - self.min_top1_prob = 0.4 - - def test_export_diverse_bs(self): - search_strategy = search.DiverseBeamSearch( - self.tgt_dict, num_groups=2, diversity_strength=0.0 - ) - torch.jit.script(search_strategy) - - def test_export_sampling(self): - low_sampling_topp = self.min_top1_prob / 2.0 - search_strategy = search.Sampling( - self.tgt_dict, sampling_topp=low_sampling_topp - ) - torch.jit.script(search_strategy) - - def test_export_diverse_siblings_search(self): - search_strategy = search.DiverseSiblingsSearch( - self.tgt_dict, diversity_rate=0.5 - ) - torch.jit.script(search_strategy) - - -class TestSequenceGeneratorBase(unittest.TestCase): - def assertHypoTokens(self, hypo, tokens): - self.assertTensorEqual(hypo["tokens"], torch.LongTensor(tokens)) - - def assertHypoScore(self, hypo, pos_probs, normalized=True, lenpen=1.0): - pos_scores = torch.FloatTensor(pos_probs).log() - self.assertAlmostEqual(hypo["positional_scores"], pos_scores) - self.assertEqual(pos_scores.numel(), hypo["tokens"].numel()) - score = pos_scores.sum() - if normalized: - score /= pos_scores.numel() ** lenpen - self.assertLess(abs(score - hypo["score"]), 1e-6) - - def assertAlmostEqual(self, t1, t2): - self.assertEqual(t1.size(), t2.size(), "size mismatch") - self.assertLess((t1 - t2).abs().max(), 1e-4) - - def assertTensorEqual(self, t1, t2): - self.assertEqual(t1.size(), t2.size(), "size mismatch") - self.assertEqual(t1.ne(t2).long().sum(), 0) - - -class TestSequenceGenerator(TestSequenceGeneratorBase): - def setUp(self): - ( - self.tgt_dict, - self.w1, - self.w2, - src_tokens, - src_lengths, - self.model, - ) = test_utils.sequence_generator_setup() - self.sample = { - "net_input": {"src_tokens": src_tokens, "src_lengths": src_lengths} - } - - def test_with_normalization(self): - generator = SequenceGenerator([self.model], self.tgt_dict, beam_size=2) - hypos = generator.forward(self.sample) - eos, w1, w2 = self.tgt_dict.eos(), self.w1, self.w2 - # sentence 1, beam 1 - self.assertHypoTokens(hypos[0][0], [w1, eos]) - self.assertHypoScore(hypos[0][0], [0.9, 1.0]) - # sentence 1, beam 2 - self.assertHypoTokens(hypos[0][1], [w2, w1, w2, eos]) - self.assertHypoScore(hypos[0][1], [0.1, 0.9, 0.9, 1.0]) - # sentence 2, beam 1 - self.assertHypoTokens(hypos[1][0], [w1, w2, w1, eos]) - self.assertHypoScore(hypos[1][0], [0.7, 0.4, 0.4, 1.0]) - # sentence 2, beam 2 - self.assertHypoTokens(hypos[1][1], [w1, w2, eos]) - self.assertHypoScore(hypos[1][1], [0.7, 0.4, 0.6]) - - def test_without_normalization(self): - # Sentence 1: unchanged from the normalized case - # Sentence 2: beams swap order - generator = SequenceGenerator( - [self.model], self.tgt_dict, beam_size=2, normalize_scores=False - ) - hypos = generator.forward(self.sample) - eos, w1, w2 = self.tgt_dict.eos(), 
self.w1, self.w2 - # sentence 1, beam 1 - self.assertHypoTokens(hypos[0][0], [w1, eos]) - self.assertHypoScore(hypos[0][0], [0.9, 1.0], normalized=False) - # sentence 1, beam 2 - self.assertHypoTokens(hypos[0][1], [w2, w1, w2, eos]) - self.assertHypoScore(hypos[0][1], [0.1, 0.9, 0.9, 1.0], normalized=False) - # sentence 2, beam 1 - self.assertHypoTokens(hypos[1][0], [w1, w2, eos]) - self.assertHypoScore(hypos[1][0], [0.7, 0.4, 0.6], normalized=False) - # sentence 2, beam 2 - self.assertHypoTokens(hypos[1][1], [w1, w2, w1, eos]) - self.assertHypoScore(hypos[1][1], [0.7, 0.4, 0.4, 1.0], normalized=False) - - def test_with_lenpen_favoring_short_hypos(self): - lenpen = 0.6 - generator = SequenceGenerator( - [self.model], self.tgt_dict, beam_size=2, len_penalty=lenpen - ) - hypos = generator.forward(self.sample) - eos, w1, w2 = self.tgt_dict.eos(), self.w1, self.w2 - # sentence 1, beam 1 - self.assertHypoTokens(hypos[0][0], [w1, eos]) - self.assertHypoScore(hypos[0][0], [0.9, 1.0], lenpen=lenpen) - # sentence 1, beam 2 - self.assertHypoTokens(hypos[0][1], [w2, w1, w2, eos]) - self.assertHypoScore(hypos[0][1], [0.1, 0.9, 0.9, 1.0], lenpen=lenpen) - # sentence 2, beam 1 - self.assertHypoTokens(hypos[1][0], [w1, w2, eos]) - self.assertHypoScore(hypos[1][0], [0.7, 0.4, 0.6], lenpen=lenpen) - # sentence 2, beam 2 - self.assertHypoTokens(hypos[1][1], [w1, w2, w1, eos]) - self.assertHypoScore(hypos[1][1], [0.7, 0.4, 0.4, 1.0], lenpen=lenpen) - - def test_with_lenpen_favoring_long_hypos(self): - lenpen = 5.0 - generator = SequenceGenerator( - [self.model], self.tgt_dict, beam_size=2, len_penalty=lenpen - ) - hypos = generator.forward(self.sample) - eos, w1, w2 = self.tgt_dict.eos(), self.w1, self.w2 - # sentence 1, beam 1 - self.assertHypoTokens(hypos[0][0], [w2, w1, w2, eos]) - self.assertHypoScore(hypos[0][0], [0.1, 0.9, 0.9, 1.0], lenpen=lenpen) - # sentence 1, beam 2 - self.assertHypoTokens(hypos[0][1], [w1, eos]) - self.assertHypoScore(hypos[0][1], [0.9, 1.0], lenpen=lenpen) - # sentence 2, beam 1 - self.assertHypoTokens(hypos[1][0], [w1, w2, w1, eos]) - self.assertHypoScore(hypos[1][0], [0.7, 0.4, 0.4, 1.0], lenpen=lenpen) - # sentence 2, beam 2 - self.assertHypoTokens(hypos[1][1], [w1, w2, eos]) - self.assertHypoScore(hypos[1][1], [0.7, 0.4, 0.6], lenpen=lenpen) - - def test_maxlen(self): - generator = SequenceGenerator( - [self.model], self.tgt_dict, beam_size=2, max_len_b=2 - ) - hypos = generator.forward(self.sample) - eos, w1, w2 = self.tgt_dict.eos(), self.w1, self.w2 - # sentence 1, beam 1 - self.assertHypoTokens(hypos[0][0], [w1, eos]) - self.assertHypoScore(hypos[0][0], [0.9, 1.0]) - # sentence 1, beam 2 - self.assertHypoTokens(hypos[0][1], [w2, w2, eos]) - self.assertHypoScore(hypos[0][1], [0.1, 0.1, 0.6]) - # sentence 2, beam 1 - self.assertHypoTokens(hypos[1][0], [w1, w2, eos]) - self.assertHypoScore(hypos[1][0], [0.7, 0.4, 0.6]) - # sentence 2, beam 2 - self.assertHypoTokens(hypos[1][1], [w2, w2, eos]) - self.assertHypoScore(hypos[1][1], [0.3, 0.9, 0.01]) - - def test_encoder_with_different_output_len(self): - args = self.model.encoder.args - task = test_utils.TestTranslationTask.setup_task( - args, self.tgt_dict, self.tgt_dict - ) - reshaping_model = test_utils.TestReshapingModel.build_model(args, task) - generator = SequenceGenerator( - [reshaping_model], self.tgt_dict, beam_size=2, max_len_b=2 - ) - hypos = generator.forward(self.sample) - for sent in [0, 1]: - for beam in [0, 1]: - assert hypos[sent][beam]["attention"] is not None - - def 
test_generation_with_additional_input(self): - args = self.model.encoder.args - task = test_utils.TestTranslationTask.setup_task( - args, self.tgt_dict, self.tgt_dict - ) - add_input_model = test_utils.TestAdditionalInputModel.build_model(args, task) - generator = SequenceGenerator([add_input_model], self.tgt_dict, beam_size=2) - sample = self.sample.copy() - sample["net_input"]["fancy_other_input"] = sample["net_input"]["src_tokens"] - hypos = generator.forward(self.sample) - eos, w1, w2 = self.tgt_dict.eos(), self.w1, self.w2 - # sentence 1, beam 1 - self.assertHypoTokens(hypos[0][0], [w1, eos]) - self.assertHypoScore(hypos[0][0], [0.9, 1.0]) - - -@unittest.skipUnless(torch.cuda.is_available(), "") -class TestRepeatNgramBlocking(TestSequenceGeneratorBase): - @classmethod - def setUpClass(cls): - ( - cls.tgt_dict, - cls.w1, - cls.w2, - src_tokens, - src_lengths, - cls.model, - ) = test_utils.sequence_generator_setup() - return cls - - def test_finds_repetitive_tokens(self): - bsz, vocab_size, beam_size, step = 2, 4, 1, 3 - generated_tok = torch.tensor( - [[2, 2, 2, 2], [3, 3, 3, 3]], dtype=torch.long, device="cuda" - ) - lprobs = torch.zeros((beam_size * bsz, vocab_size), device="cuda") - desired_result = lprobs.new_tensor( - [[0.0, 0.0, -math.inf, 0.0], [0.0, 0.0, 0.0, -math.inf]] - ) - - cuda_ext_result, baseline_result = self._compare_cuda_ext_to_default_implem( - bsz, beam_size, generated_tok, lprobs, step, 2 - ) - self.assertTensorEqual(cuda_ext_result, desired_result) - self.assertTensorEqual(baseline_result, desired_result) - - @unittest.skipIf(torch.__version__ < "1.6.0", JIT_MSG) - def test_jit_no_extension(self): - bsz, vocab_size, beam_size, step = 2, 4, 1, 3 - generated_tok = torch.tensor( - [[2, 2, 2, 2], [3, 3, 3, 3]], dtype=torch.long, device="cuda" - ) - lprobs = torch.zeros((beam_size * bsz, vocab_size), device="cuda") - blocker = NGramRepeatBlock(2, use_extension=False) - base_result = blocker(generated_tok, lprobs.clone(), bsz, beam_size, step) - scripted_blocker = torch.jit.script(blocker) - jit_result = scripted_blocker( - generated_tok, lprobs.clone(), bsz, beam_size, step - ) - self.assertTensorEqual(base_result, jit_result) - - def test_ngram_blocking_same_as_default_implem(self): - """Test that cuda extension returns same things as default impl in many settings.""" - vocab_size = 4 - step = 6 - for _ in range(2): - block_param = np.random.choice([1, 2, 3, 4]) - batch_size = np.random.randint(1, 8) - beam_size = np.random.choice([1, 2, 4, 8]) - lprobs = torch.zeros((beam_size * batch_size, vocab_size), device="cuda") - - generated_tok = torch.tensor( - np.random.randint( - 0, vocab_size, size=(batch_size * beam_size, step + 1) - ), - device="cuda", - dtype=torch.long, - ) - self._compare_cuda_ext_to_default_implem( - batch_size, - beam_size, - generated_tok, - lprobs, - step, - block_param, - ) - - def _compare_cuda_ext_to_default_implem( - self, bsz, beam_size, generated_tok, lprobs, step, block_param - ): - """Assert that cuda extension and default implem return the same thing.""" - blocker = NGramRepeatBlock(block_param) - assert blocker.use_extension, "Extension not compiled" - cuda_ext_result = blocker( - generated_tok, - lprobs.clone(), - bsz, - beam_size, - step, - ) - blocker.use_extension = False - baseline_result = blocker( - generated_tok, - lprobs.clone(), - bsz, - beam_size, - step, - ) - self.assertTensorEqual(cuda_ext_result, baseline_result) - blocker.use_extension = True - return cuda_ext_result, baseline_result - - -class 
TestDiverseBeamSearch(TestSequenceGeneratorBase): - def setUp(self): - # construct dummy dictionary - d = test_utils.dummy_dictionary(vocab_size=2) - self.assertEqual(d.pad(), 1) - self.assertEqual(d.eos(), 2) - self.assertEqual(d.unk(), 3) - self.eos = d.eos() - self.w1 = 4 - self.w2 = 5 - - # construct source data - self.src_tokens = torch.LongTensor( - [ - [self.w1, self.w2, self.eos], - [self.w1, self.w2, self.eos], - ] - ) - self.src_lengths = torch.LongTensor([2, 2]) - - args = argparse.Namespace() - unk = 0.0 - args.beam_probs = [ - # step 0: - torch.FloatTensor( - [ - # eos w1 w2 - # sentence 1: - [0.0, unk, 0.9, 0.1], # beam 1 - [0.0, unk, 0.9, 0.1], # beam 2 - # sentence 2: - [0.0, unk, 0.7, 0.3], - [0.0, unk, 0.7, 0.3], - ] - ), - # step 1: - torch.FloatTensor( - [ - # eos w1 w2 - # sentence 1: - [0.0, unk, 0.6, 0.4], - [0.0, unk, 0.6, 0.4], - # sentence 2: - [0.25, unk, 0.35, 0.4], - [0.25, unk, 0.35, 0.4], - ] - ), - # step 2: - torch.FloatTensor( - [ - # eos w1 w2 - # sentence 1: - [1.0, unk, 0.0, 0.0], - [1.0, unk, 0.0, 0.0], - # sentence 2: - [0.9, unk, 0.1, 0.0], - [0.9, unk, 0.1, 0.0], - ] - ), - ] - - task = test_utils.TestTranslationTask.setup_task(args, d, d) - self.model = task.build_model(args) - self.tgt_dict = task.target_dictionary - - def test_diverse_beam_search(self): - search_strategy = search.DiverseBeamSearch( - self.tgt_dict, num_groups=2, diversity_strength=0.0 - ) - generator = SequenceGenerator( - [self.model], - self.tgt_dict, - beam_size=2, - search_strategy=search_strategy, - ) - sample = { - "net_input": { - "src_tokens": self.src_tokens, - "src_lengths": self.src_lengths, - } - } - hypos = generator.forward(sample) - eos, w1, w2 = self.eos, self.w1, self.w2 - # sentence 1, beam 1 - self.assertHypoTokens(hypos[0][0], [w1, w1, eos]) - self.assertHypoScore(hypos[0][0], [0.9, 0.6, 1.0]) - # sentence 1, beam 2 - self.assertHypoTokens(hypos[0][1], [w1, w1, eos]) - self.assertHypoScore(hypos[0][1], [0.9, 0.6, 1.0]) - # sentence 2, beam 1 - self.assertHypoTokens(hypos[1][0], [w1, w2, eos]) - self.assertHypoScore(hypos[1][0], [0.7, 0.4, 0.9]) - # sentence 2, beam 2 - self.assertHypoTokens(hypos[1][1], [w1, w2, eos]) - self.assertHypoScore(hypos[1][1], [0.7, 0.4, 0.9]) - - -class TestDiverseSiblingsSearch(TestDiverseBeamSearch): - def assertHypoScore( - self, hypo, pos_probs, sibling_rank, diversity_rate, normalized=True, lenpen=1.0 - ): - pos_scores = torch.FloatTensor(pos_probs).log() - pos_scores.sub_(torch.Tensor(sibling_rank) * diversity_rate) - self.assertAlmostEqual(hypo["positional_scores"], pos_scores) - self.assertEqual(pos_scores.numel(), hypo["tokens"].numel()) - score = pos_scores.sum() - if normalized: - score /= pos_scores.numel() ** lenpen - self.assertLess(abs(score - hypo["score"]), 1e-6) - - def test_diverse_beam_search(self): - search_strategy = search.DiverseSiblingsSearch( - self.tgt_dict, diversity_rate=0.5 - ) - generator = SequenceGenerator( - [self.model], self.tgt_dict, beam_size=2, search_strategy=search_strategy - ) - sample = { - "net_input": { - "src_tokens": self.src_tokens, - "src_lengths": self.src_lengths, - } - } - hypos = generator.forward(sample) - eos, w1, w2 = self.eos, self.w1, self.w2 - # sentence 1, beam 1 - self.assertHypoTokens(hypos[0][0], [w1, w1, eos]) - self.assertHypoScore(hypos[0][0], [0.9, 0.6, 1.0], [0, 1, 1], 0.5) - # sentence 1, beam 2 - self.assertHypoTokens(hypos[0][1], [w1, w2, eos]) - self.assertHypoScore(hypos[0][1], [0.9, 0.4, 1.0], [0, 2, 1], 0.5) - # sentence 2, beam 1 - 
self.assertHypoTokens(hypos[1][0], [w1, w2, eos]) - self.assertHypoScore(hypos[1][0], [0.7, 0.4, 0.9], [0, 1, 1], 0.5) - # sentence 2, beam 2 - self.assertHypoTokens(hypos[1][1], [w1, w1, eos]) - self.assertHypoScore(hypos[1][1], [0.7, 0.35, 0.9], [0, 2, 1], 0.5) - - -class TestPrefixBeamSearch(TestSequenceGeneratorBase): - def setUp(self): - # construct dummy dictionary - vocab_size = 10 - d = test_utils.dummy_dictionary(vocab_size=vocab_size) - self.assertEqual(d.pad(), 1) - self.assertEqual(d.eos(), 2) - self.assertEqual(d.unk(), 3) - self.eos = d.eos() - self.w1 = 4 - self.w2 = 5 - self.beam_size = 3 - - # construct prefix data - self.tokens = torch.LongTensor( - [ - [self.w1, self.w2, self.eos], - ] - ) - self.token_lengths = torch.LongTensor([2]) - - args = argparse.Namespace() - unk = 0.0 - args.beam_probs = [ - # prefix step 0: - torch.FloatTensor( - [ - # eos - [0.0, unk] + [1.0 / vocab_size] * vocab_size # beam 1 - ] * self.beam_size - ), - ] * vocab_size - - task = test_utils.TestTranslationTask.setup_task(args, d, d) - self.model = task.build_model(args) - self.tgt_dict = task.target_dictionary - - def test_prefix_beam_search(self): - search_strategy = search.BeamSearch(self.tgt_dict) - generator = SequenceGenerator( - [self.model], - self.tgt_dict, - beam_size=self.beam_size, - search_strategy=search_strategy, - ) - sample = { - "net_input": { - "src_tokens": self.tokens, - "src_lengths": self.token_lengths, - } - } - # make sure test sample doesn't break any assertion - generator.forward(sample, prefix_tokens=self.tokens[:, :-1]) - -class TestTopPSamplingSearch(TestSequenceGeneratorBase): - def setUp(self): - # construct dummy dictionary - d = test_utils.dummy_dictionary(vocab_size=2) - self.assertEqual(d.pad(), 1) - self.assertEqual(d.eos(), 2) - self.assertEqual(d.unk(), 3) - self.eos = d.eos() - self.w1 = 4 - self.w2 = 5 - - # construct source data - self.src_tokens = torch.LongTensor( - [ - [self.w1, self.w2, self.eos], - [self.w1, self.w2, self.eos], - ] - ) - self.src_lengths = torch.LongTensor([2, 2]) - - args = argparse.Namespace() - unk = 0.0 - # The minimal probability of top 2 tokens. - self.min_top2_prob = 0.75 - # The minimal probability of the top 1 token. - self.min_top1_prob = 0.4 - - w1_prob = self.min_top1_prob - w2_prob = self.min_top2_prob - self.min_top1_prob - eos_prob = 1 - self.min_top2_prob - - args.beam_probs = [ - # step 0: - torch.FloatTensor( - [ - # eos w1 w2 - [0.0, unk, 1.0, 0.0], - [0.0, unk, 1.0, 0.0], - [0.0, unk, 1.0, 0.0], - [0.0, unk, 1.0, 0.0], - ] - ), - # step 1: - torch.FloatTensor( - [ - # eos w1 w2 - [eos_prob, unk, w1_prob, w2_prob], - [eos_prob, unk, w1_prob, w2_prob], - [eos_prob, unk, w1_prob, w2_prob], - [eos_prob, unk, w1_prob, w2_prob], - ] - ), - # step 2: - torch.FloatTensor( - [ - # eos w1 w2 - [1.0, unk, 0.0, 0.0], - [1.0, unk, 0.0, 0.0], - [1.0, unk, 0.0, 0.0], - [1.0, unk, 0.0, 0.0], - ] - ), - ] - - task = test_utils.TestTranslationTask.setup_task(args, d, d) - self.model = task.build_model(args) - self.tgt_dict = task.target_dictionary - - def test_topp_sampling_search_low_prob(self): - # Given a prob low enough to top-P sampling, we expect only the top - # 1 token to be sampled, which always results in the same output. 
- low_sampling_topp = self.min_top1_prob / 2.0 - search_strategy = search.Sampling( - self.tgt_dict, sampling_topp=low_sampling_topp - ) - generator = SequenceGenerator( - [self.model], self.tgt_dict, beam_size=2, search_strategy=search_strategy - ) - sample = { - "net_input": { - "src_tokens": self.src_tokens, - "src_lengths": self.src_lengths, - } - } - hypos = generator.forward(sample) - eos, w1 = self.eos, self.w1 - # sentence 1, beam 1 - self.assertHypoTokens(hypos[0][0], [w1, w1, eos]) - self.assertHypoScore(hypos[0][0], [1.0, 0.4, 1.0]) - # sentence 1, beam 2 - self.assertHypoTokens(hypos[0][1], [w1, w1, eos]) - self.assertHypoScore(hypos[0][1], [1.0, 0.4, 1.0]) - # sentence 2, beam 1 - self.assertHypoTokens(hypos[1][0], [w1, w1, eos]) - self.assertHypoScore(hypos[1][0], [1.0, 0.4, 1.0]) - # sentence 2, beam 2 - self.assertHypoTokens(hypos[1][1], [w1, w1, eos]) - self.assertHypoScore(hypos[1][1], [1.0, 0.4, 1.0]) - - def test_topp_sampling_search_high_prob(self): - # Given a prob high enough to top-P sampling, any of the top 2 - # tokens could be sampled. This can cause different outputs. - high_sampling_topp = (self.min_top1_prob + self.min_top2_prob) / 2.0 - search_strategy = search.Sampling( - self.tgt_dict, sampling_topp=high_sampling_topp - ) - generator = SequenceGenerator( - [self.model], self.tgt_dict, beam_size=2, search_strategy=search_strategy - ) - sample = { - "net_input": { - "src_tokens": self.src_tokens, - "src_lengths": self.src_lengths, - } - } - hypos = generator.forward(sample) - eos, w1, w2 = self.eos, self.w1, self.w2 - # sentence 1, beam 1 - self.assertTrue( - self.hypoTokens(hypos[0][0], [w1, w1, eos]) - or self.hypoTokens(hypos[0][0], [w1, w2, eos]) - ) - self.assertTrue( - self.hypoScore(hypos[0][0], [1.0, 0.4, 1.0]) - or self.hypoScore(hypos[0][0], [1.0, 0.35, 1.0]) - ) - - # sentence 1, beam 2 - self.assertTrue( - self.hypoTokens(hypos[0][1], [w1, w1, eos]) - or self.hypoTokens(hypos[0][1], [w1, w2, eos]) - ) - self.assertTrue( - self.hypoScore(hypos[0][1], [1.0, 0.4, 1.0]) - or self.hypoScore(hypos[0][1], [1.0, 0.35, 1.0]) - ) - - # sentence 2, beam 1 - self.assertTrue( - self.hypoTokens(hypos[1][0], [w1, w1, eos]) - or self.hypoTokens(hypos[1][0], [w1, w2, eos]) - ) - self.assertTrue( - self.hypoScore(hypos[1][0], [1.0, 0.4, 1.0]) - or self.hypoScore(hypos[1][0], [1.0, 0.35, 1.0]) - ) - - # sentence 2, beam 2 - self.assertTrue( - self.hypoTokens(hypos[1][1], [w1, w1, eos]) - or self.hypoTokens(hypos[1][1], [w1, w2, eos]) - ) - self.assertTrue( - self.hypoScore(hypos[1][1], [1.0, 0.4, 1.0]) - or self.hypoScore(hypos[1][1], [1.0, 0.35, 1.0]) - ) - - def hypoTokens(self, hypo, tokens): - return self.tensorEqual(hypo["tokens"], torch.LongTensor(tokens)) - - def hypoScore(self, hypo, pos_probs, normalized=True, lenpen=1.0): - pos_scores = torch.FloatTensor(pos_probs).log() - if not self.almostEqual(hypo["positional_scores"], pos_scores): - return False - if pos_scores.numel() != hypo["tokens"].numel(): - return False - score = pos_scores.sum() - if normalized: - score /= pos_scores.numel() ** lenpen - return abs(score - hypo["score"]) < 1e-6 - - def almostEqual(self, t1, t2): - return t1.size() == t2.size() and (t1 - t2).abs().max() < 1e-4 - - def tensorEqual(self, t1, t2): - return t1.size() == t2.size() and t1.ne(t2).long().sum() == 0 - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/conv_seq2seq/README.md b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/conv_seq2seq/README.md deleted file 
mode 100644 index 95fe7e7909a77ee0e50fe31d4b8be38daa8f3be7..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/conv_seq2seq/README.md +++ /dev/null @@ -1,25 +0,0 @@ -# Convolutional Sequence to Sequence Learning (Gehring et al., 2017) - -## Pre-trained models - -Description | Dataset | Model | Test set(s) ----|---|---|--- -Convolutional
([Gehring et al., 2017](https://arxiv.org/abs/1705.03122)) | [WMT14 English-French](http://statmt.org/wmt14/translation-task.html#Download) | [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/models/wmt14.v2.en-fr.fconv-py.tar.bz2) | newstest2014: <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt14.v2.en-fr.newstest2014.tar.bz2) <br> newstest2012/2013: <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt14.v2.en-fr.ntst1213.tar.bz2)
-Convolutional <br> ([Gehring et al., 2017](https://arxiv.org/abs/1705.03122)) | [WMT14 English-German](http://statmt.org/wmt14/translation-task.html#Download) | [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/models/wmt14.en-de.fconv-py.tar.bz2) | newstest2014: <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt14.en-de.newstest2014.tar.bz2)
-Convolutional <br> ([Gehring et al., 2017](https://arxiv.org/abs/1705.03122)) | [WMT17 English-German](http://statmt.org/wmt17/translation-task.html#Download) | [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/models/wmt17.v2.en-de.fconv-py.tar.bz2) | newstest2014: <br>
[download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt17.v2.en-de.newstest2014.tar.bz2) - -## Example usage - -See the [translation README](../translation/README.md) for instructions on reproducing results for WMT'14 En-De and -WMT'14 En-Fr using the `fconv_wmt_en_de` and `fconv_wmt_en_fr` model architectures. - -## Citation - -```bibtex -@inproceedings{gehring2017convs2s, - title = {Convolutional Sequence to Sequence Learning}, - author = {Gehring, Jonas, and Auli, Michael and Grangier, David and Yarats, Denis and Dauphin, Yann N}, - booktitle = {Proc. of ICML}, - year = 2017, -} -``` diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/wav2vec/unsupervised/models/wav2vec_u.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/wav2vec/unsupervised/models/wav2vec_u.py deleted file mode 100644 index 27792ebda842057e33fed3dc53dd9d8a594d0483..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/wav2vec/unsupervised/models/wav2vec_u.py +++ /dev/null @@ -1,637 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from dataclasses import dataclass -from enum import Enum, auto -import math -import numpy as np -from typing import Tuple, List, Optional, Dict - -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch import autograd - -from fairseq import checkpoint_utils, utils -from fairseq.dataclass import FairseqDataclass -from fairseq.models import BaseFairseqModel, register_model -from fairseq.modules import ( - SamePad, - TransposeLast, -) - - -class SegmentationType(Enum): - NONE = auto() - RANDOM = auto() - UNIFORM_RANDOM = auto() - UNIFORM_RANDOM_JOIN = auto() - JOIN = auto() - - -@dataclass -class SegmentationConfig(FairseqDataclass): - type: SegmentationType = SegmentationType.NONE - subsample_rate: float = 0.25 - mean_pool: bool = True - mean_pool_join: bool = False - remove_zeros: bool = False - - -@dataclass -class Wav2vec_UConfig(FairseqDataclass): - - discriminator_kernel: int = 3 - discriminator_dilation: int = 1 - discriminator_dim: int = 256 - discriminator_causal: bool = True - discriminator_linear_emb: bool = False - discriminator_depth: int = 1 - discriminator_max_pool: bool = False - discriminator_act_after_linear: bool = False - discriminator_dropout: float = 0.0 - discriminator_spectral_norm: bool = False - discriminator_weight_norm: bool = False - - generator_kernel: int = 4 - generator_dilation: int = 1 - generator_stride: int = 1 - generator_bias: bool = False - generator_dropout: float = 0.0 - - blank_weight: float = 0 - blank_mode: str = "add" - blank_is_sil: bool = False - no_softmax: bool = False - - smoothness_weight: float = 0.0 - smoothing: float = 0.0 - smoothing_one_sided: bool = False - gradient_penalty: float = 0.0 - probabilistic_grad_penalty_slicing: bool = False - code_penalty: float = 0.0 - gumbel: bool = False - hard_gumbel: bool = True - temp: Tuple[float, float, float] = (2, 0.1, 0.99995) - input_dim: int = 128 - - segmentation: SegmentationConfig = SegmentationConfig() - - -class Segmenter(nn.Module): - cfg: SegmentationConfig - - def __init__(self, cfg: SegmentationConfig): - super().__init__() - self.cfg = cfg - self.subsample_rate = cfg.subsample_rate - - def pre_segment(self, dense_x, dense_padding_mask): - return dense_x, dense_padding_mask - - def logit_segment(self, logits, padding_mask): - return logits, 
padding_mask - - -class RandomSegmenter(Segmenter): - def pre_segment(self, dense_x, dense_padding_mask): - target_num = math.ceil(dense_x.size(1) * self.subsample_rate) - ones = torch.ones(dense_x.shape[:-1], device=dense_x.device) - indices, _ = ones.multinomial(target_num).sort(dim=-1) - indices_ld = indices.unsqueeze(-1).expand(-1, -1, dense_x.size(-1)) - dense_x = dense_x.gather(1, indices_ld) - dense_padding_mask = dense_padding_mask.gather(1, index=indices) - return dense_x, dense_padding_mask - - -class UniformRandomSegmenter(Segmenter): - def pre_segment(self, dense_x, dense_padding_mask): - bsz, tsz, fsz = dense_x.shape - - target_num = math.ceil(tsz * self.subsample_rate) - - rem = tsz % target_num - - if rem > 0: - dense_x = F.pad(dense_x, [0, 0, 0, target_num - rem]) - dense_padding_mask = F.pad( - dense_padding_mask, [0, target_num - rem], value=True - ) - - dense_x = dense_x.view(bsz, target_num, -1, fsz) - dense_padding_mask = dense_padding_mask.view(bsz, target_num, -1) - - if self.cfg.mean_pool: - dense_x = dense_x.mean(dim=-2) - dense_padding_mask = dense_padding_mask.all(dim=-1) - else: - ones = torch.ones((bsz, dense_x.size(2)), device=dense_x.device) - indices = ones.multinomial(1) - indices = indices.unsqueeze(-1).expand(-1, target_num, -1) - indices_ld = indices.unsqueeze(-1).expand(-1, -1, -1, fsz) - dense_x = dense_x.gather(2, indices_ld).reshape(bsz, -1, fsz) - dense_padding_mask = dense_padding_mask.gather(2, index=indices).reshape( - bsz, -1 - ) - return dense_x, dense_padding_mask - - -class JoinSegmenter(Segmenter): - def logit_segment(self, logits, padding_mask): - preds = logits.argmax(dim=-1) - - if padding_mask.any(): - preds[padding_mask] = -1 # mark pad - uniques = [] - - bsz, tsz, csz = logits.shape - - for p in preds: - uniques.append( - p.cpu().unique_consecutive(return_inverse=True, return_counts=True) - ) - - new_tsz = max(u[0].numel() for u in uniques) - new_logits = logits.new_zeros(bsz, new_tsz, csz) - new_pad = padding_mask.new_zeros(bsz, new_tsz) - - for b in range(bsz): - u, idx, c = uniques[b] - keep = u != -1 - - if self.cfg.remove_zeros: - keep.logical_and_(u != 0) - - if self.training and not self.cfg.mean_pool_join: - u[0] = 0 - u[1:] = c.cumsum(0)[:-1] - m = c > 1 - r = torch.rand(m.sum()) - o = (c[m] * r).long() - u[m] += o - new_logits[b, : u.numel()] = logits[b, u] - else: - new_logits[b].index_add_( - dim=0, index=idx.to(new_logits.device), source=logits[b] - ) - new_logits[b, : c.numel()] /= c.unsqueeze(-1).to(new_logits.device) - - new_sz = keep.sum() - if not keep.all(): - kept_logits = new_logits[b, : c.numel()][keep] - new_logits[b, :new_sz] = kept_logits - - if new_sz < new_tsz: - pad = new_tsz - new_sz - new_logits[b, -pad:] = 0 - new_pad[b, -pad:] = True - - return new_logits, new_pad - - -class UniformRandomJoinSegmenter(UniformRandomSegmenter, JoinSegmenter): - pass - - -SEGMENT_FACTORY = { - SegmentationType.NONE: Segmenter, - SegmentationType.RANDOM: RandomSegmenter, - SegmentationType.UNIFORM_RANDOM: UniformRandomSegmenter, - SegmentationType.UNIFORM_RANDOM_JOIN: UniformRandomJoinSegmenter, - SegmentationType.JOIN: JoinSegmenter, -} - - -class Discriminator(nn.Module): - def __init__(self, dim, cfg: Wav2vec_UConfig): - super().__init__() - - inner_dim = cfg.discriminator_dim - kernel = cfg.discriminator_kernel - dilation = cfg.discriminator_dilation - self.max_pool = cfg.discriminator_max_pool - - if cfg.discriminator_causal: - padding = kernel - 1 - else: - padding = kernel // 2 - - def make_conv(in_d, out_d, k, p=0, 
has_dilation=True): - conv = nn.Conv1d( - in_d, - out_d, - kernel_size=k, - padding=p, - dilation=dilation if has_dilation else 1, - ) - if cfg.discriminator_spectral_norm: - conv = nn.utils.spectral_norm(conv) - elif cfg.discriminator_weight_norm: - conv = nn.utils.weight_norm(conv) - return conv - - inner_net = [ - nn.Sequential( - make_conv(inner_dim, inner_dim, kernel, padding), - SamePad(kernel_size=kernel, causal=cfg.discriminator_causal), - nn.Dropout(cfg.discriminator_dropout), - nn.GELU(), - ) - for _ in range(cfg.discriminator_depth - 1) - ] + [ - make_conv(inner_dim, 1, kernel, padding, has_dilation=False), - SamePad(kernel_size=kernel, causal=cfg.discriminator_causal), - ] - - if cfg.discriminator_linear_emb: - emb_net = [make_conv(dim, inner_dim, 1)] - else: - emb_net = [ - make_conv(dim, inner_dim, kernel, padding), - SamePad(kernel_size=kernel, causal=cfg.discriminator_causal), - ] - - if cfg.discriminator_act_after_linear: - emb_net.append(nn.GELU()) - - self.net = nn.Sequential( - *emb_net, - nn.Dropout(cfg.discriminator_dropout), - *inner_net, - ) - - def forward(self, x, padding_mask): - x = x.transpose(1, 2) # BTC -> BCT - x = self.net(x) - x = x.transpose(1, 2) - x_sz = x.size(1) - if padding_mask is not None and padding_mask.any() and padding_mask.dim() > 1: - padding_mask = padding_mask[:, : x.size(1)] - x[padding_mask] = float("-inf") if self.max_pool else 0 - x_sz = x_sz - padding_mask.sum(dim=-1) - x = x.squeeze(-1) - if self.max_pool: - x, _ = x.max(dim=-1) - else: - x = x.sum(dim=-1) - x = x / x_sz - return x - - -class Generator(nn.Module): - def __init__(self, input_dim, output_dim, cfg: Wav2vec_UConfig): - super().__init__() - - self.cfg = cfg - self.output_dim = output_dim - self.stride = cfg.generator_stride - self.dropout = nn.Dropout(cfg.generator_dropout) - - padding = cfg.generator_kernel // 2 - self.proj = nn.Sequential( - TransposeLast(), - nn.Conv1d( - input_dim, - output_dim, - kernel_size=cfg.generator_kernel, - stride=cfg.generator_stride, - dilation=cfg.generator_dilation, - padding=padding, - bias=cfg.generator_bias, - ), - TransposeLast(), - ) - - def forward(self, dense_x, tokens, dense_padding_mask): - dense_x = self.dropout(dense_x) - - dense_x = self.proj(dense_x) - if self.stride > 1: - dense_padding_mask = dense_padding_mask[:, :: self.stride] - - if dense_padding_mask.size(1) != dense_x.size(1): - new_padding = dense_padding_mask.new_zeros(dense_x.shape[:-1]) - diff = new_padding.size(1) - dense_padding_mask.size(1) - assert ( - diff > 0 - ), f"{new_padding.shape}, {dense_padding_mask.shape}, {dense_x.shape}, {diff}" - if diff > 0: - new_padding[:, diff:] = dense_padding_mask - else: - assert diff < 0 - new_padding = dense_padding_mask[:, :diff] - - dense_padding_mask = new_padding - - result = {} - - token_x = None - if tokens is not None: - token_x = dense_x.new_zeros(tokens.numel(), self.output_dim) - token_x.scatter_(1, tokens.view(-1, 1).long(), 1) - token_x = token_x.view(tokens.shape + (self.output_dim,)) - - result["dense_x"] = dense_x - result["token_x"] = token_x - result["dense_padding_mask"] = dense_padding_mask - - return result - - -@register_model("wav2vec_u", dataclass=Wav2vec_UConfig) -class Wav2vec_U(BaseFairseqModel): - def calc_gradient_penalty(self, real_data, fake_data): - - b_size = min(real_data.size(0), fake_data.size(0)) - t_size = min(real_data.size(1), fake_data.size(1)) - - if self.cfg.probabilistic_grad_penalty_slicing: - - def get_slice(data, dim, target_size): - - size = data.size(dim) - diff = size - 
target_size - if diff <= 0: - return data - - start = np.random.randint(0, diff + 1) - return data.narrow(dim=dim, start=start, length=target_size) - - real_data = get_slice(real_data, 0, b_size) - real_data = get_slice(real_data, 1, t_size) - fake_data = get_slice(fake_data, 0, b_size) - fake_data = get_slice(fake_data, 1, t_size) - - else: - real_data = real_data[:b_size, :t_size] - fake_data = fake_data[:b_size, :t_size] - - alpha = torch.rand(real_data.size(0), 1, 1) - alpha = alpha.expand(real_data.size()) - alpha = alpha.to(real_data.device) - - interpolates = alpha * real_data + ((1 - alpha) * fake_data) - - disc_interpolates = self.discriminator(interpolates, None) - - gradients = autograd.grad( - outputs=disc_interpolates, - inputs=interpolates, - grad_outputs=torch.ones(disc_interpolates.size(), device=real_data.device), - create_graph=True, - retain_graph=True, - only_inputs=True, - )[0] - - gradient_penalty = (gradients.norm(2, dim=1) - 1) ** 2 - return gradient_penalty - - def set_num_updates(self, num_updates): - super().set_num_updates(num_updates) - self.update_num = num_updates - self.curr_temp = max( - self.max_temp * self.temp_decay ** num_updates, self.min_temp - ) - - def discrim_step(self, num_updates): - return num_updates % 2 == 1 - - def get_groups_for_update(self, num_updates): - return "discriminator" if self.discrim_step(num_updates) else "generator" - - def __init__(self, cfg: Wav2vec_UConfig, target_dict): - super().__init__() - - self.cfg = cfg - self.zero_index = target_dict.index("") if "" in target_dict else 0 - self.smoothness_weight = cfg.smoothness_weight - - output_size = len(target_dict) - self.pad = target_dict.pad() - self.eos = target_dict.eos() - self.smoothing = cfg.smoothing - self.smoothing_one_sided = cfg.smoothing_one_sided - self.no_softmax = cfg.no_softmax - self.gumbel = cfg.gumbel - self.hard_gumbel = cfg.hard_gumbel - self.last_acc = None - - self.gradient_penalty = cfg.gradient_penalty - self.code_penalty = cfg.code_penalty - self.blank_weight = cfg.blank_weight - self.blank_mode = cfg.blank_mode - self.blank_index = target_dict.index("") if cfg.blank_is_sil else 0 - assert self.blank_index != target_dict.unk() - - self.discriminator = Discriminator(output_size, cfg) - for p in self.discriminator.parameters(): - p.param_group = "discriminator" - - self.pca_A = self.pca_b = None - d = cfg.input_dim - - self.segmenter = SEGMENT_FACTORY[cfg.segmentation.type](cfg.segmentation) - - self.generator = Generator(d, output_size, cfg) - - for p in self.generator.parameters(): - p.param_group = "generator" - - for p in self.segmenter.parameters(): - p.param_group = "generator" - - self.max_temp, self.min_temp, self.temp_decay = cfg.temp - self.curr_temp = self.max_temp - self.update_num = 0 - - @classmethod - def build_model(cls, cfg, task): - return cls(cfg, task.target_dictionary) - - def get_logits( - self, - net_output: Optional[Dict[str, List[Optional[torch.Tensor]]]], - normalize: bool = False, - ): - logits = net_output["logits"] - - if self.blank_weight != 0: - if self.blank_mode == "add": - logits[..., self.blank_index] += self.blank_weight - elif self.blank_mode == "set": - logits[..., self.blank_index] = self.blank_weight - else: - raise Exception(f"invalid blank mode {self.blank_mode}") - - padding = net_output["padding_mask"] - if padding.any(): - logits[padding] = float("-inf") - logits[padding][..., self.blank_index] = float("inf") - - if normalize: - logits = utils.log_softmax(logits.float(), dim=-1) - - return logits.transpose(0, 
1) - - def get_normalized_probs( - self, - net_output: Tuple[ - torch.Tensor, Optional[Dict[str, List[Optional[torch.Tensor]]]] - ], - log_probs: bool, - sample: Optional[Dict[str, torch.Tensor]] = None, - ): - logits = self.get_logits(net_output) - - probs = super().get_normalized_probs(logits, log_probs, sample) - # BTC -> TBC for ctc - probs = probs.transpose(0, 1) - return probs - - def normalize(self, dense_x): - - bsz, tsz, csz = dense_x.shape - - if dense_x.numel() == 0: - raise Exception(dense_x.shape) - _, k = dense_x.max(-1) - hard_x = ( - dense_x.new_zeros(bsz * tsz, csz) - .scatter_(-1, k.view(-1, 1), 1.0) - .view(-1, csz) - ) - hard_probs = torch.mean(hard_x.float(), dim=0) - code_perplexity = torch.exp( - -torch.sum(hard_probs * torch.log(hard_probs + 1e-7), dim=-1) - ) - - avg_probs = torch.softmax(dense_x.reshape(-1, csz).float(), dim=-1).mean(dim=0) - prob_perplexity = torch.exp( - -torch.sum(avg_probs * torch.log(avg_probs + 1e-7), dim=-1) - ) - - if not self.no_softmax: - if self.training and self.gumbel: - dense_x = F.gumbel_softmax( - dense_x.float(), tau=self.curr_temp, hard=self.hard_gumbel - ).type_as(dense_x) - else: - dense_x = dense_x.softmax(-1) - - return dense_x, code_perplexity, prob_perplexity - - def forward( - self, - features, - padding_mask, - random_label=None, - dense_x_only=False, - segment=True, - ): - if segment: - features, padding_mask = self.segmenter.pre_segment(features, padding_mask) - - orig_size = features.size(0) * features.size(1) - padding_mask.sum() - - gen_result = self.generator(features, random_label, padding_mask) - - orig_dense_x, token_x = gen_result["dense_x"], gen_result["token_x"] - orig_dense_padding_mask = gen_result["dense_padding_mask"] - - if segment: - dense_x, dense_padding_mask = self.segmenter.logit_segment( - orig_dense_x, orig_dense_padding_mask - ) - else: - dense_x = orig_dense_x - dense_padding_mask = orig_dense_padding_mask - - dense_logits = dense_x - prob_perplexity = None - code_perplexity = None - - if not (self.no_softmax and dense_x_only): - dense_x, code_perplexity, prob_perplexity = self.normalize(dense_logits) - - if dense_x_only or self.discriminator is None: - return { - "logits": dense_x, - "padding_mask": dense_padding_mask, - } - - token_padding_mask = random_label == self.pad - - dense_y = self.discriminator(dense_x, dense_padding_mask) - token_y = self.discriminator(token_x, token_padding_mask) - - sample_size = features.size(0) - - d_step = self.discrim_step(self.update_num) - - fake_smooth = self.smoothing - real_smooth = self.smoothing - if self.smoothing_one_sided: - fake_smooth = 0 - - zero_loss = None - smoothness_loss = None - code_pen = None - - if d_step: - loss_dense = F.binary_cross_entropy_with_logits( - dense_y, - dense_y.new_ones(dense_y.shape) - fake_smooth, - reduction="sum", - ) - loss_token = F.binary_cross_entropy_with_logits( - token_y, - token_y.new_zeros(token_y.shape) + real_smooth, - reduction="sum", - ) - if self.training and self.gradient_penalty > 0: - grad_pen = self.calc_gradient_penalty(token_x, dense_x) - grad_pen = grad_pen.sum() * self.gradient_penalty - else: - grad_pen = None - else: - grad_pen = None - loss_token = None - loss_dense = F.binary_cross_entropy_with_logits( - dense_y, - dense_y.new_zeros(dense_y.shape) + fake_smooth, - reduction="sum", - ) - num_vars = dense_x.size(-1) - if prob_perplexity is not None: - code_pen = (num_vars - prob_perplexity) / num_vars - code_pen = code_pen * sample_size * self.code_penalty - - if self.smoothness_weight > 0: - 
smoothness_loss = F.mse_loss( - dense_logits[:, :-1], dense_logits[:, 1:], reduction="none" - ) - smoothness_loss[dense_padding_mask[:, 1:]] = 0 - smoothness_loss = ( - smoothness_loss.mean() * sample_size * self.smoothness_weight - ) - - result = { - "losses": { - "grad_pen": grad_pen, - "code_pen": code_pen, - "smoothness": smoothness_loss, - }, - "temp": self.curr_temp, - "code_ppl": code_perplexity, - "prob_ppl": prob_perplexity, - "d_steps": int(d_step), - "sample_size": sample_size, - } - - suff = "_d" if d_step else "_g" - result["losses"]["dense" + suff] = loss_dense - result["losses"]["token" + suff] = loss_token - - return result diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/benchmark/dummy_model.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/benchmark/dummy_model.py deleted file mode 100644 index ff26e4fe655d8e8d7f9942c4bd3df7cd267405fb..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/benchmark/dummy_model.py +++ /dev/null @@ -1,96 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch.nn as nn -import torch.nn.functional as F -from fairseq.data import Dictionary -from fairseq.models import ( - FairseqDecoder, - FairseqLanguageModel, - register_model, - register_model_architecture, -) - - -@register_model("dummy_model") -class DummyModel(FairseqLanguageModel): - def __init__(self, args, encoder): - super().__init__(encoder) - self.args = args - - @staticmethod - def add_args(parser): - parser.add_argument("--num-layers", type=int, default=24) - parser.add_argument("--embed-dim", type=int, default=1024) - - @classmethod - def build_model(cls, args, task): - encoder = DummyEncoder( - num_embed=len(task.target_dictionary), - embed_dim=args.embed_dim, - num_layers=args.num_layers, - ) - return cls(args, encoder) - - def forward(self, src_tokens, masked_tokens=None, **kwargs): - return self.decoder(src_tokens, masked_tokens=masked_tokens) - - -class DummyEncoder(FairseqDecoder): - def __init__(self, num_embed=50000, embed_dim=1024, num_layers=24): - super().__init__(Dictionary()) - self.embed = nn.Embedding( - num_embeddings=num_embed, embedding_dim=embed_dim, padding_idx=0 - ) - self.layers_a = nn.ModuleList( - [ - nn.Sequential( - nn.LayerNorm(embed_dim), - nn.Linear(embed_dim, 3 * embed_dim), # q, k, v input projection - nn.Linear(3 * embed_dim, embed_dim), # skip self-attention - nn.Linear(embed_dim, embed_dim), # output projection - nn.Dropout(), - ) - for i in range(num_layers) - ] - ) - self.layers_b = nn.ModuleList( - [ - nn.Sequential( - nn.LayerNorm(embed_dim), - nn.Linear(embed_dim, 4 * embed_dim), # FFN - nn.ReLU(), - nn.Linear(4 * embed_dim, embed_dim), # FFN - nn.Dropout(0.1), - ) - for i in range(num_layers) - ] - ) - self.out_proj = nn.Linear(embed_dim, num_embed) - - def forward(self, tokens, masked_tokens=None): - x = self.embed(tokens) - for layer_a, layer_b in zip(self.layers_a, self.layers_b): - x = x + layer_a(x) - x = x + layer_b(x) - x = self.out_proj(x) - if masked_tokens is not None: - x = x[masked_tokens] - return (x,) - - def max_positions(self): - return 1024 - - def get_normalized_probs(self, net_output, log_probs, sample=None): - logits = net_output[0].float() - if log_probs: - return F.log_softmax(logits, dim=-1) - else: - return F.softmax(logits, dim=-1) - - -@register_model_architecture("dummy_model", "dummy_model") -def 
base_architecture(args): - pass diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/modules/quantization/quantization_options.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/modules/quantization/quantization_options.py deleted file mode 100644 index b46d682c0edaeaaf2a230e51d50da2a32d4bda98..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/modules/quantization/quantization_options.py +++ /dev/null @@ -1,44 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - - -def parse_config_yaml(yaml_data): - # Initialize to default options. - quantization_options = { - "n_centroids": { - "Linear": ["in_features", {"*": 256}], - "Embedding": ["embedding_dim", {"*": 256}], - }, - "block_sizes": { - "Linear": ["fuzzy_name", {"fc": 8, "attn": 4, "emb": 4}], - "Embedding": ["fuzzy_name", {"emb": 8}], - }, - "layers_to_quantize": [ - "decoder\\.layers\\.\\d+\\.fc[12]", - "decoder\\.embed_tokens\\.embeddings\\.[012]\\.[01]", - "decoder\\.layers\\.\\d+\\.self_attn\\.(k_proj|v_proj|q_proj|out_proj)", - ], - } - - if "n_centroids" in yaml_data: - quantization_options["n_centroids"] = { - layer: convert_yaml_to_tuple(layer_data) - for layer, layer_data in yaml_data["n_centroids"].items() - } - if "block_sizes" in yaml_data: - quantization_options["block_sizes"] = { - layer: convert_yaml_to_tuple(layer_data) - for layer, layer_data in yaml_data["block_sizes"].items() - } - if "layers_to_quantize" in yaml_data: - quantization_options["layers_to_quantize"] = yaml_data["layers_to_quantize"] - - return quantization_options - - -def convert_yaml_to_tuple(yaml_dictionary): - """Converts a yaml dictionary with two keys: `key` and `value` into a two - argument tuple of those values.""" - return (yaml_dictionary["key"], yaml_dictionary["value"]) diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/multilingual/data_scripts/download_wat19_my.sh b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/multilingual/data_scripts/download_wat19_my.sh deleted file mode 100644 index c1e2d47287a29af4576e7a63641e8152ecb63c44..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/multilingual/data_scripts/download_wat19_my.sh +++ /dev/null @@ -1,36 +0,0 @@ -#!/bin/bash -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - - -if [ -z $WORKDIR_ROOT ] ; -then - echo "please specify your working directory root in environment variable WORKDIR_ROOT. Exitting..." - exit -fi - - -SRCDIR=$WORKDIR_ROOT/indic_languages_corpus -DESTDIR=$WORKDIR_ROOT/ML50/raw -mkdir -p $SRCDIR -mkdir -p $DESTDIR - -WAT_MY_EN=wat2020.my-en.zip -cd $SRCDIR -# please refer to http://lotus.kuee.kyoto-u.ac.jp/WAT/my-en-data/ for latest URL if the following url expired -#- The data used for WAT2020 are identical to those used in WAT2019. 
-wget http://lotus.kuee.kyoto-u.ac.jp/WAT/my-en-data/$WAT_MY_EN -unzip $WAT_MY_EN - - -SRC_EXTRACT_DIR=$SRCDIR/wat2020.my-en/alt - -cp $SRC_EXTRACT_DIR/train.alt.en $DESTDIR/train.my_MM-en_XX.en_XX -cp $SRC_EXTRACT_DIR/train.alt.my $DESTDIR/train.my_MM-en_XX.my_MM -cp $SRC_EXTRACT_DIR/dev.alt.en $DESTDIR/valid.my_MM-en_XX.en_XX -cp $SRC_EXTRACT_DIR/dev.alt.my $DESTDIR/valid.my_MM-en_XX.my_MM -cp $SRC_EXTRACT_DIR/test.alt.en $DESTDIR/test.my_MM-en_XX.en_XX -cp $SRC_EXTRACT_DIR/test.alt.my $DESTDIR/test.my_MM-en_XX.my_MM diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/wav2vec/unsupervised/scripts/apply_pca.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/wav2vec/unsupervised/scripts/apply_pca.py deleted file mode 100644 index 10ad6ce47cfdf0a87ba089b299fe9551b29fa167..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/wav2vec/unsupervised/scripts/apply_pca.py +++ /dev/null @@ -1,76 +0,0 @@ -#!/usr/bin/env python3 -u -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import argparse -import os -import os.path as osp -import math -import numpy as np -import tqdm -import torch -from shutil import copyfile - -from npy_append_array import NpyAppendArray - - -def get_parser(): - parser = argparse.ArgumentParser( - description="transforms features via a given pca and stored them in target dir" - ) - # fmt: off - parser.add_argument('source', help='directory with features') - parser.add_argument('--split', help='which split to read', required=True) - parser.add_argument('--save-dir', help='where to save the output', required=True) - parser.add_argument('--pca-path', type=str, help='pca location. 
will append _A.npy and _b.npy', required=True) - parser.add_argument('--batch-size', type=int, default=2048000, help='batch size') - parser.add_argument('--unfiltered', action='store_true', help='process the unfiltered version') - # fmt: on - - return parser - - -def main(): - parser = get_parser() - args = parser.parse_args() - - source_path = osp.join(args.source, args.split) - data_poth = source_path + "_unfiltered" if args.unfiltered else source_path - - print(f"data path: {data_poth}") - - features = np.load(data_poth + ".npy", mmap_mode="r") - pca_A = torch.from_numpy(np.load(args.pca_path + "_A.npy")).cuda() - pca_b = torch.from_numpy(np.load(args.pca_path + "_b.npy")).cuda() - - os.makedirs(args.save_dir, exist_ok=True) - save_path = osp.join(args.save_dir, args.split) - - copyfile(source_path + ".tsv", save_path + ".tsv") - copyfile(data_poth + ".lengths", save_path + ".lengths") - - if osp.exists(source_path + ".phn"): - copyfile(source_path + ".phn", save_path + ".phn") - - if osp.exists(source_path + ".wrd"): - copyfile(source_path + ".wrd", save_path + ".wrd") - - if osp.exists(save_path + ".npy"): - os.remove(save_path + ".npy") - npaa = NpyAppendArray(save_path + ".npy") - - batches = math.ceil(features.shape[0] / args.batch_size) - - with torch.no_grad(): - for b in tqdm.trange(batches): - start = b * args.batch_size - end = start + args.batch_size - x = torch.from_numpy(features[start:end]).cuda() - x = torch.matmul(x, pca_A) + pca_b - npaa.append(x.cpu().numpy()) - - -if __name__ == "__main__": - main() diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq_cli/interactive.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq_cli/interactive.py deleted file mode 100644 index cadef2821a74a3b2f051c792d835129bf775714f..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq_cli/interactive.py +++ /dev/null @@ -1,316 +0,0 @@ -#!/usr/bin/env python3 -u -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -""" -Translate raw text with a trained model. Batches data on-the-fly. 
-""" - -import ast -import fileinput -import logging -import math -import os -import sys -import time -from argparse import Namespace -from collections import namedtuple - -import numpy as np -import torch -from fairseq import checkpoint_utils, distributed_utils, options, tasks, utils -from fairseq.dataclass.configs import FairseqConfig -from fairseq.dataclass.utils import convert_namespace_to_omegaconf -from fairseq.token_generation_constraints import pack_constraints, unpack_constraints -from fairseq_cli.generate import get_symbols_to_strip_from_output - - -logging.basicConfig( - format="%(asctime)s | %(levelname)s | %(name)s | %(message)s", - datefmt="%Y-%m-%d %H:%M:%S", - level=os.environ.get("LOGLEVEL", "INFO").upper(), - stream=sys.stdout, -) -logger = logging.getLogger("fairseq_cli.interactive") - - -Batch = namedtuple("Batch", "ids src_tokens src_lengths constraints") -Translation = namedtuple("Translation", "src_str hypos pos_scores alignments") - - -def buffered_read(input, buffer_size): - buffer = [] - with fileinput.input(files=[input], openhook=fileinput.hook_encoded("utf-8")) as h: - for src_str in h: - buffer.append(src_str.strip()) - if len(buffer) >= buffer_size: - yield buffer - buffer = [] - - if len(buffer) > 0: - yield buffer - - -def make_batches(lines, cfg, task, max_positions, encode_fn): - def encode_fn_target(x): - return encode_fn(x) - - if cfg.generation.constraints: - # Strip (tab-delimited) contraints, if present, from input lines, - # store them in batch_constraints - batch_constraints = [list() for _ in lines] - for i, line in enumerate(lines): - if "\t" in line: - lines[i], *batch_constraints[i] = line.split("\t") - - # Convert each List[str] to List[Tensor] - for i, constraint_list in enumerate(batch_constraints): - batch_constraints[i] = [ - task.target_dictionary.encode_line( - encode_fn_target(constraint), - append_eos=False, - add_if_not_exist=False, - ) - for constraint in constraint_list - ] - - if cfg.generation.constraints: - constraints_tensor = pack_constraints(batch_constraints) - else: - constraints_tensor = None - - tokens, lengths = task.get_interactive_tokens_and_lengths(lines, encode_fn) - - itr = task.get_batch_iterator( - dataset=task.build_dataset_for_inference( - tokens, lengths, constraints=constraints_tensor - ), - max_tokens=cfg.dataset.max_tokens, - max_sentences=cfg.dataset.batch_size, - max_positions=max_positions, - ignore_invalid_inputs=cfg.dataset.skip_invalid_size_inputs_valid_test, - ).next_epoch_itr(shuffle=False) - for batch in itr: - ids = batch["id"] - src_tokens = batch["net_input"]["src_tokens"] - src_lengths = batch["net_input"]["src_lengths"] - constraints = batch.get("constraints", None) - - yield Batch( - ids=ids, - src_tokens=src_tokens, - src_lengths=src_lengths, - constraints=constraints, - ) - - -def main(cfg: FairseqConfig): - if isinstance(cfg, Namespace): - cfg = convert_namespace_to_omegaconf(cfg) - - start_time = time.time() - total_translate_time = 0 - - utils.import_user_module(cfg.common) - - if cfg.interactive.buffer_size < 1: - cfg.interactive.buffer_size = 1 - if cfg.dataset.max_tokens is None and cfg.dataset.batch_size is None: - cfg.dataset.batch_size = 1 - - assert ( - not cfg.generation.sampling or cfg.generation.nbest == cfg.generation.beam - ), "--sampling requires --nbest to be equal to --beam" - assert ( - not cfg.dataset.batch_size - or cfg.dataset.batch_size <= cfg.interactive.buffer_size - ), "--batch-size cannot be larger than --buffer-size" - - logger.info(cfg) - - # Fix seed for 
stochastic decoding - if cfg.common.seed is not None and not cfg.generation.no_seed_provided: - np.random.seed(cfg.common.seed) - utils.set_torch_seed(cfg.common.seed) - - use_cuda = torch.cuda.is_available() and not cfg.common.cpu - - # Setup task, e.g., translation - task = tasks.setup_task(cfg.task) - - # Load ensemble - overrides = ast.literal_eval(cfg.common_eval.model_overrides) - logger.info("loading model(s) from {}".format(cfg.common_eval.path)) - models, _model_args = checkpoint_utils.load_model_ensemble( - utils.split_paths(cfg.common_eval.path), - arg_overrides=overrides, - task=task, - suffix=cfg.checkpoint.checkpoint_suffix, - strict=(cfg.checkpoint.checkpoint_shard_count == 1), - num_shards=cfg.checkpoint.checkpoint_shard_count, - ) - - # Set dictionaries - src_dict = task.source_dictionary - tgt_dict = task.target_dictionary - - # Optimize ensemble for generation - for model in models: - if model is None: - continue - if cfg.common.fp16: - model.half() - if use_cuda and not cfg.distributed_training.pipeline_model_parallel: - model.cuda() - model.prepare_for_inference_(cfg) - - # Initialize generator - generator = task.build_generator(models, cfg.generation) - - # Handle tokenization and BPE - tokenizer = task.build_tokenizer(cfg.tokenizer) - bpe = task.build_bpe(cfg.bpe) - - def encode_fn(x): - if tokenizer is not None: - x = tokenizer.encode(x) - if bpe is not None: - x = bpe.encode(x) - return x - - def decode_fn(x): - if bpe is not None: - x = bpe.decode(x) - if tokenizer is not None: - x = tokenizer.decode(x) - return x - - # Load alignment dictionary for unknown word replacement - # (None if no unknown word replacement, empty if no path to align dictionary) - align_dict = utils.load_align_dict(cfg.generation.replace_unk) - - max_positions = utils.resolve_max_positions( - task.max_positions(), *[model.max_positions() for model in models] - ) - - if cfg.generation.constraints: - logger.warning( - "NOTE: Constrained decoding currently assumes a shared subword vocabulary." 
- ) - - if cfg.interactive.buffer_size > 1: - logger.info("Sentence buffer size: %s", cfg.interactive.buffer_size) - logger.info("NOTE: hypothesis and token scores are output in base 2") - logger.info("Type the input sentence and press return:") - start_id = 0 - for inputs in buffered_read(cfg.interactive.input, cfg.interactive.buffer_size): - results = [] - for batch in make_batches(inputs, cfg, task, max_positions, encode_fn): - bsz = batch.src_tokens.size(0) - src_tokens = batch.src_tokens - src_lengths = batch.src_lengths - constraints = batch.constraints - if use_cuda: - src_tokens = src_tokens.cuda() - src_lengths = src_lengths.cuda() - if constraints is not None: - constraints = constraints.cuda() - - sample = { - "net_input": { - "src_tokens": src_tokens, - "src_lengths": src_lengths, - }, - } - translate_start_time = time.time() - translations = task.inference_step( - generator, models, sample, constraints=constraints - ) - translate_time = time.time() - translate_start_time - total_translate_time += translate_time - list_constraints = [[] for _ in range(bsz)] - if cfg.generation.constraints: - list_constraints = [unpack_constraints(c) for c in constraints] - for i, (id, hypos) in enumerate(zip(batch.ids.tolist(), translations)): - src_tokens_i = utils.strip_pad(src_tokens[i], tgt_dict.pad()) - constraints = list_constraints[i] - results.append( - ( - start_id + id, - src_tokens_i, - hypos, - { - "constraints": constraints, - "time": translate_time / len(translations), - }, - ) - ) - - # sort output to match input order - for id_, src_tokens, hypos, info in sorted(results, key=lambda x: x[0]): - src_str = '' - if src_dict is not None: - src_str = src_dict.string(src_tokens, cfg.common_eval.post_process) - print("S-{}\t{}".format(id_, src_str)) - print("W-{}\t{:.3f}\tseconds".format(id_, info["time"])) - for constraint in info["constraints"]: - print( - "C-{}\t{}".format( - id_, tgt_dict.string(constraint, cfg.common_eval.post_process) - ) - ) - - # Process top predictions - for hypo in hypos[: min(len(hypos), cfg.generation.nbest)]: - hypo_tokens, hypo_str, alignment = utils.post_process_prediction( - hypo_tokens=hypo["tokens"].int().cpu(), - src_str=src_str, - alignment=hypo["alignment"], - align_dict=align_dict, - tgt_dict=tgt_dict, - remove_bpe=cfg.common_eval.post_process, - extra_symbols_to_ignore=get_symbols_to_strip_from_output(generator), - ) - detok_hypo_str = decode_fn(hypo_str) - score = hypo["score"] / math.log(2) # convert to base 2 - # original hypothesis (after tokenization and BPE) - print("H-{}\t{}\t{}".format(id_, score, hypo_str)) - # detokenized hypothesis - print("D-{}\t{}\t{}".format(id_, score, detok_hypo_str)) - print( - "P-{}\t{}".format( - id_, - " ".join( - map( - lambda x: "{:.4f}".format(x), - # convert from base e to base 2 - hypo["positional_scores"].div_(math.log(2)).tolist(), - ) - ), - ) - ) - if cfg.generation.print_alignment: - alignment_str = " ".join( - ["{}-{}".format(src, tgt) for src, tgt in alignment] - ) - print("A-{}\t{}".format(id_, alignment_str)) - - # update running id_ counter - start_id += len(inputs) - - logger.info( - "Total time: {:.3f} seconds; translation time: {:.3f}".format( - time.time() - start_time, total_translate_time - ) - ) - - -def cli_main(): - parser = options.get_interactive_generation_parser() - args = options.parse_args_and_arch(parser) - distributed_utils.call_main(convert_namespace_to_omegaconf(args), main) - - -if __name__ == "__main__": - cli_main() diff --git 
a/spaces/OFA-Sys/OFA-vqa/fairseq/tests/test_inference_dropout.py b/spaces/OFA-Sys/OFA-vqa/fairseq/tests/test_inference_dropout.py deleted file mode 100644 index 353ac674780a9795492c75aa0a7bc0677b07a9c9..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/tests/test_inference_dropout.py +++ /dev/null @@ -1,70 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging -import unittest - -from fairseq.dataclass.utils import convert_namespace_to_omegaconf -from fairseq.models.transformer import TransformerModel -from tests.test_sequence_generator import get_dummy_task_and_parser - - -class TestInferenceDropout(unittest.TestCase): - def setUp(self): - self.task, self.parser = get_dummy_task_and_parser() - TransformerModel.add_args(self.parser) - self.args = self.parser.parse_args([]) - self.args.encoder_layers = 2 - self.args.decoder_layers = 1 - logging.disable(logging.CRITICAL) - - def tearDown(self): - logging.disable(logging.NOTSET) - - def test_sets_inference_dropout_to_true(self): - self.args.retain_dropout = True - self.transformer_model = TransformerModel.build_model(self.args, self.task) - cfg = convert_namespace_to_omegaconf(self.args) - self.transformer_model.prepare_for_inference_(cfg) - assert self.transformer_model.encoder.dropout_module.apply_during_inference - assert self.transformer_model.decoder.dropout_module.apply_during_inference - for layer in self.transformer_model.encoder.layers: - assert layer.dropout_module.apply_during_inference - - def test_inference_dropout_false_by_default(self): - self.transformer_model = TransformerModel.build_model(self.args, self.task) - cfg = convert_namespace_to_omegaconf(self.args) - self.transformer_model.prepare_for_inference_(cfg) - assert not self.transformer_model.encoder.dropout_module.apply_during_inference - assert not self.transformer_model.decoder.dropout_module.apply_during_inference - for layer in self.transformer_model.encoder.layers: - assert not layer.dropout_module.apply_during_inference - for layer in self.transformer_model.decoder.layers: - assert not layer.dropout_module.apply_during_inference - - def test_applies_training_mode(self): - self.transformer_model = TransformerModel.build_model(self.args, self.task) - assert self.transformer_model.encoder.dropout_module.training - for layer in self.transformer_model.encoder.layers: - assert layer.dropout_module.training - - self.transformer_model.eval() - assert not self.transformer_model.decoder.dropout_module.training - for layer in self.transformer_model.encoder.layers: - assert not layer.dropout_module.training - - def test_retain_modules(self): - self.args.retain_dropout = True - self.args.retain_dropout_modules = [ - "TransformerEncoder", - "TransformerEncoderLayer", - ] - self.transformer_model = TransformerModel.build_model(self.args, self.task) - cfg = convert_namespace_to_omegaconf(self.args) - self.transformer_model.prepare_for_inference_(cfg) - assert self.transformer_model.encoder.dropout_module.apply_during_inference - assert not self.transformer_model.decoder.dropout_module.apply_during_inference - for layer in self.transformer_model.decoder.layers: - assert not layer.dropout_module.apply_during_inference diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/ops/group_points.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/ops/group_points.py deleted file mode 100644 
index 6c3ec9d758ebe4e1c2205882af4be154008253a5..0000000000000000000000000000000000000000 --- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/ops/group_points.py +++ /dev/null @@ -1,224 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from typing import Tuple - -import torch -from torch import nn as nn -from torch.autograd import Function - -from ..utils import ext_loader -from .ball_query import ball_query -from .knn import knn - -ext_module = ext_loader.load_ext( - '_ext', ['group_points_forward', 'group_points_backward']) - - -class QueryAndGroup(nn.Module): - """Groups points with a ball query of radius. - - Args: - max_radius (float): The maximum radius of the balls. - If None is given, we will use kNN sampling instead of ball query. - sample_num (int): Maximum number of features to gather in the ball. - min_radius (float, optional): The minimum radius of the balls. - Default: 0. - use_xyz (bool, optional): Whether to use xyz. - Default: True. - return_grouped_xyz (bool, optional): Whether to return grouped xyz. - Default: False. - normalize_xyz (bool, optional): Whether to normalize xyz. - Default: False. - uniform_sample (bool, optional): Whether to sample uniformly. - Default: False - return_unique_cnt (bool, optional): Whether to return the count of - unique samples. Default: False. - return_grouped_idx (bool, optional): Whether to return grouped idx. - Default: False. - """ - - def __init__(self, - max_radius, - sample_num, - min_radius=0, - use_xyz=True, - return_grouped_xyz=False, - normalize_xyz=False, - uniform_sample=False, - return_unique_cnt=False, - return_grouped_idx=False): - super().__init__() - self.max_radius = max_radius - self.min_radius = min_radius - self.sample_num = sample_num - self.use_xyz = use_xyz - self.return_grouped_xyz = return_grouped_xyz - self.normalize_xyz = normalize_xyz - self.uniform_sample = uniform_sample - self.return_unique_cnt = return_unique_cnt - self.return_grouped_idx = return_grouped_idx - if self.return_unique_cnt: - assert self.uniform_sample, \ - 'uniform_sample should be True when ' \ - 'returning the count of unique samples' - if self.max_radius is None: - assert not self.normalize_xyz, \ - 'can not normalize grouped xyz when max_radius is None' - - def forward(self, points_xyz, center_xyz, features=None): - """ - Args: - points_xyz (Tensor): (B, N, 3) xyz coordinates of the features. - center_xyz (Tensor): (B, npoint, 3) coordinates of the centriods. - features (Tensor): (B, C, N) Descriptors of the features. - - Returns: - Tensor: (B, 3 + C, npoint, sample_num) Grouped feature. 
- """ - # if self.max_radius is None, we will perform kNN instead of ball query - # idx is of shape [B, npoint, sample_num] - if self.max_radius is None: - idx = knn(self.sample_num, points_xyz, center_xyz, False) - idx = idx.transpose(1, 2).contiguous() - else: - idx = ball_query(self.min_radius, self.max_radius, self.sample_num, - points_xyz, center_xyz) - - if self.uniform_sample: - unique_cnt = torch.zeros((idx.shape[0], idx.shape[1])) - for i_batch in range(idx.shape[0]): - for i_region in range(idx.shape[1]): - unique_ind = torch.unique(idx[i_batch, i_region, :]) - num_unique = unique_ind.shape[0] - unique_cnt[i_batch, i_region] = num_unique - sample_ind = torch.randint( - 0, - num_unique, (self.sample_num - num_unique, ), - dtype=torch.long) - all_ind = torch.cat((unique_ind, unique_ind[sample_ind])) - idx[i_batch, i_region, :] = all_ind - - xyz_trans = points_xyz.transpose(1, 2).contiguous() - # (B, 3, npoint, sample_num) - grouped_xyz = grouping_operation(xyz_trans, idx) - grouped_xyz_diff = grouped_xyz - \ - center_xyz.transpose(1, 2).unsqueeze(-1) # relative offsets - if self.normalize_xyz: - grouped_xyz_diff /= self.max_radius - - if features is not None: - grouped_features = grouping_operation(features, idx) - if self.use_xyz: - # (B, C + 3, npoint, sample_num) - new_features = torch.cat([grouped_xyz_diff, grouped_features], - dim=1) - else: - new_features = grouped_features - else: - assert (self.use_xyz - ), 'Cannot have not features and not use xyz as a feature!' - new_features = grouped_xyz_diff - - ret = [new_features] - if self.return_grouped_xyz: - ret.append(grouped_xyz) - if self.return_unique_cnt: - ret.append(unique_cnt) - if self.return_grouped_idx: - ret.append(idx) - if len(ret) == 1: - return ret[0] - else: - return tuple(ret) - - -class GroupAll(nn.Module): - """Group xyz with feature. - - Args: - use_xyz (bool): Whether to use xyz. - """ - - def __init__(self, use_xyz: bool = True): - super().__init__() - self.use_xyz = use_xyz - - def forward(self, - xyz: torch.Tensor, - new_xyz: torch.Tensor, - features: torch.Tensor = None): - """ - Args: - xyz (Tensor): (B, N, 3) xyz coordinates of the features. - new_xyz (Tensor): new xyz coordinates of the features. - features (Tensor): (B, C, N) features to group. - - Returns: - Tensor: (B, C + 3, 1, N) Grouped feature. - """ - grouped_xyz = xyz.transpose(1, 2).unsqueeze(2) - if features is not None: - grouped_features = features.unsqueeze(2) - if self.use_xyz: - # (B, 3 + C, 1, N) - new_features = torch.cat([grouped_xyz, grouped_features], - dim=1) - else: - new_features = grouped_features - else: - new_features = grouped_xyz - - return new_features - - -class GroupingOperation(Function): - """Group feature with given index.""" - - @staticmethod - def forward(ctx, features: torch.Tensor, - indices: torch.Tensor) -> torch.Tensor: - """ - Args: - features (Tensor): (B, C, N) tensor of features to group. - indices (Tensor): (B, npoint, nsample) the indices of - features to group with. - - Returns: - Tensor: (B, C, npoint, nsample) Grouped features. 
- """ - features = features.contiguous() - indices = indices.contiguous() - - B, nfeatures, nsample = indices.size() - _, C, N = features.size() - output = torch.cuda.FloatTensor(B, C, nfeatures, nsample) - - ext_module.group_points_forward(B, C, N, nfeatures, nsample, features, - indices, output) - - ctx.for_backwards = (indices, N) - return output - - @staticmethod - def backward(ctx, - grad_out: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]: - """ - Args: - grad_out (Tensor): (B, C, npoint, nsample) tensor of the gradients - of the output from forward. - - Returns: - Tensor: (B, C, N) gradient of the features. - """ - idx, N = ctx.for_backwards - - B, C, npoint, nsample = grad_out.size() - grad_features = torch.cuda.FloatTensor(B, C, N).zero_() - - grad_out_data = grad_out.data.contiguous() - ext_module.group_points_backward(B, C, N, npoint, nsample, - grad_out_data, idx, - grad_features.data) - return grad_features, None - - -grouping_operation = GroupingOperation.apply diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/scripts/read-text-outline.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/scripts/read-text-outline.go deleted file mode 100644 index ea32773d28ba991e1b9eeea4376fa49f0176f255..0000000000000000000000000000000000000000 Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/scripts/read-text-outline.go and /dev/null differ diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/lilypond/2.24.2/ccache/lily/auto-beam.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/lilypond/2.24.2/ccache/lily/auto-beam.go deleted file mode 100644 index 7f1b6d514703df873a07a292ebc64914794fc71c..0000000000000000000000000000000000000000 Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/lilypond/2.24.2/ccache/lily/auto-beam.go and /dev/null differ diff --git a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/ops/border_align.py b/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/ops/border_align.py deleted file mode 100644 index ff305be328e9b0a15e1bbb5e6b41beb940f55c81..0000000000000000000000000000000000000000 --- a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/ops/border_align.py +++ /dev/null @@ -1,109 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-# modified from -# https://github.com/Megvii-BaseDetection/cvpods/blob/master/cvpods/layers/border_align.py - -import torch -import torch.nn as nn -from torch.autograd import Function -from torch.autograd.function import once_differentiable - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext( - '_ext', ['border_align_forward', 'border_align_backward']) - - -class BorderAlignFunction(Function): - - @staticmethod - def symbolic(g, input, boxes, pool_size): - return g.op( - 'mmcv::MMCVBorderAlign', input, boxes, pool_size_i=pool_size) - - @staticmethod - def forward(ctx, input, boxes, pool_size): - ctx.pool_size = pool_size - ctx.input_shape = input.size() - - assert boxes.ndim == 3, 'boxes must be with shape [B, H*W, 4]' - assert boxes.size(2) == 4, \ - 'the last dimension of boxes must be (x1, y1, x2, y2)' - assert input.size(1) % 4 == 0, \ - 'the channel for input feature must be divisible by factor 4' - - # [B, C//4, H*W, 4] - output_shape = (input.size(0), input.size(1) // 4, boxes.size(1), 4) - output = input.new_zeros(output_shape) - # `argmax_idx` only used for backward - argmax_idx = input.new_zeros(output_shape).to(torch.int) - - ext_module.border_align_forward( - input, boxes, output, argmax_idx, pool_size=ctx.pool_size) - - ctx.save_for_backward(boxes, argmax_idx) - return output - - @staticmethod - @once_differentiable - def backward(ctx, grad_output): - boxes, argmax_idx = ctx.saved_tensors - grad_input = grad_output.new_zeros(ctx.input_shape) - # complex head architecture may cause grad_output uncontiguous - grad_output = grad_output.contiguous() - ext_module.border_align_backward( - grad_output, - boxes, - argmax_idx, - grad_input, - pool_size=ctx.pool_size) - return grad_input, None, None - - -border_align = BorderAlignFunction.apply - - -class BorderAlign(nn.Module): - r"""Border align pooling layer. - - Applies border_align over the input feature based on predicted bboxes. - The details were described in the paper - `BorderDet: Border Feature for Dense Object Detection - `_. - - For each border line (e.g. top, left, bottom or right) of each box, - border_align does the following: - 1. uniformly samples `pool_size`+1 positions on this line, involving \ - the start and end points. - 2. the corresponding features on these points are computed by \ - bilinear interpolation. - 3. max pooling over all the `pool_size`+1 positions are used for \ - computing pooled feature. - - Args: - pool_size (int): number of positions sampled over the boxes' borders - (e.g. top, bottom, left, right). - - """ - - def __init__(self, pool_size): - super(BorderAlign, self).__init__() - self.pool_size = pool_size - - def forward(self, input, boxes): - """ - Args: - input: Features with shape [N,4C,H,W]. Channels ranged in [0,C), - [C,2C), [2C,3C), [3C,4C) represent the top, left, bottom, - right features respectively. - boxes: Boxes with shape [N,H*W,4]. Coordinate format (x1,y1,x2,y2). - - Returns: - Tensor: Pooled features with shape [N,C,H*W,4]. The order is - (top,left,bottom,right) for the last dimension. 
- """ - return border_align(input, boxes, self.pool_size) - - def __repr__(self): - s = self.__class__.__name__ - s += f'(pool_size={self.pool_size})' - return s diff --git a/spaces/Pranjal12345/Text_to_Speech/tortoise/utils/text.py b/spaces/Pranjal12345/Text_to_Speech/tortoise/utils/text.py deleted file mode 100644 index e28c86786b2ca47823a25f3f251f9bc85bb3facd..0000000000000000000000000000000000000000 --- a/spaces/Pranjal12345/Text_to_Speech/tortoise/utils/text.py +++ /dev/null @@ -1,132 +0,0 @@ -import re - - -def split_and_recombine_text(text, desired_length=200, max_length=300): - """Split text it into chunks of a desired length trying to keep sentences intact.""" - # normalize text, remove redundant whitespace and convert non-ascii quotes to ascii - text = re.sub(r'\n\n+', '\n', text) - text = re.sub(r'\s+', ' ', text) - text = re.sub(r'[“”]', '"', text) - - rv = [] - in_quote = False - current = "" - split_pos = [] - pos = -1 - end_pos = len(text) - 1 - - def seek(delta): - nonlocal pos, in_quote, current - is_neg = delta < 0 - for _ in range(abs(delta)): - if is_neg: - pos -= 1 - current = current[:-1] - else: - pos += 1 - current += text[pos] - if text[pos] == '"': - in_quote = not in_quote - return text[pos] - - def peek(delta): - p = pos + delta - return text[p] if p < end_pos and p >= 0 else "" - - def commit(): - nonlocal rv, current, split_pos - rv.append(current) - current = "" - split_pos = [] - - while pos < end_pos: - c = seek(1) - # do we need to force a split? - if len(current) >= max_length: - if len(split_pos) > 0 and len(current) > (desired_length / 2): - # we have at least one sentence and we are over half the desired length, seek back to the last split - d = pos - split_pos[-1] - seek(-d) - else: - # no full sentences, seek back until we are not in the middle of a word and split there - while c not in '!?.\n ' and pos > 0 and len(current) > desired_length: - c = seek(-1) - commit() - # check for sentence boundaries - elif not in_quote and (c in '!?\n' or (c == '.' and peek(1) in '\n ')): - # seek forward if we have consecutive boundary markers but still within the max length - while pos < len(text) - 1 and len(current) < max_length and peek(1) in '!?.': - c = seek(1) - split_pos.append(pos) - if len(current) >= desired_length: - commit() - # treat end of quote as a boundary if its followed by a space or newline - elif in_quote and peek(1) == '"' and peek(2) in '\n ': - seek(2) - split_pos.append(pos) - rv.append(current) - - # clean up, remove lines with only whitespace or punctuation - rv = [s.strip() for s in rv] - rv = [s for s in rv if len(s) > 0 and not re.match(r'^[\s\.,;:!?]*$', s)] - - return rv - - -if __name__ == '__main__': - import os - import unittest - - class Test(unittest.TestCase): - def test_split_and_recombine_text(self): - text = """ - This is a sample sentence. - This is another sample sentence. - This is a longer sample sentence that should force a split inthemiddlebutinotinthislongword. - "Don't split my quote... please" - """ - self.assertEqual(split_and_recombine_text(text, desired_length=20, max_length=40), - ['This is a sample sentence.', - 'This is another sample sentence.', - 'This is a longer sample sentence that', - 'should force a split', - 'inthemiddlebutinotinthislongword.', - '"Don\'t split my quote... please"']) - - def test_split_and_recombine_text_2(self): - text = """ - When you are really angry sometimes you use consecutive exclamation marks!!!!!! Is this a good thing to do?!?!?! 
- I don't know but we should handle this situation.......................... - """ - self.assertEqual(split_and_recombine_text(text, desired_length=30, max_length=50), - ['When you are really angry sometimes you use', - 'consecutive exclamation marks!!!!!!', - 'Is this a good thing to do?!?!?!', - 'I don\'t know but we should handle this situation.']) - - def test_split_and_recombine_text_3(self): - text_src = os.path.join(os.path.dirname(__file__), '../data/riding_hood.txt') - with open(text_src, 'r') as f: - text = f.read() - self.assertEqual( - split_and_recombine_text(text), - [ - 'Once upon a time there lived in a certain village a little country girl, the prettiest creature who was ever seen. Her mother was excessively fond of her; and her grandmother doted on her still more. This good woman had a little red riding hood made for her.', - 'It suited the girl so extremely well that everybody called her Little Red Riding Hood. One day her mother, having made some cakes, said to her, "Go, my dear, and see how your grandmother is doing, for I hear she has been very ill. Take her a cake, and this little pot of butter."', - 'Little Red Riding Hood set out immediately to go to her grandmother, who lived in another village. As she was going through the wood, she met with a wolf, who had a very great mind to eat her up, but he dared not, because of some woodcutters working nearby in the forest.', - 'He asked her where she was going. The poor child, who did not know that it was dangerous to stay and talk to a wolf, said to him, "I am going to see my grandmother and carry her a cake and a little pot of butter from my mother." "Does she live far off?" said the wolf "Oh I say,"', - 'answered Little Red Riding Hood; "it is beyond that mill you see there, at the first house in the village." "Well," said the wolf, "and I\'ll go and see her too. I\'ll go this way and go you that, and we shall see who will be there first."', - 'The wolf ran as fast as he could, taking the shortest path, and the little girl took a roundabout way, entertaining herself by gathering nuts, running after butterflies, and gathering bouquets of little flowers.', - 'It was not long before the wolf arrived at the old woman\'s house. He knocked at the door: tap, tap. "Who\'s there?" "Your grandchild, Little Red Riding Hood," replied the wolf, counterfeiting her voice; "who has brought you a cake and a little pot of butter sent you by mother."', - 'The good grandmother, who was in bed, because she was somewhat ill, cried out, "Pull the bobbin, and the latch will go up."', - 'The wolf pulled the bobbin, and the door opened, and then he immediately fell upon the good woman and ate her up in a moment, for it been more than three days since he had eaten.', - 'He then shut the door and got into the grandmother\'s bed, expecting Little Red Riding Hood, who came some time afterwards and knocked at the door: tap, tap. "Who\'s there?"', - 'Little Red Riding Hood, hearing the big voice of the wolf, was at first afraid; but believing her grandmother had a cold and was hoarse, answered, "It is your grandchild Little Red Riding Hood, who has brought you a cake and a little pot of butter mother sends you."', - 'The wolf cried out to her, softening his voice as much as he could, "Pull the bobbin, and the latch will go up." 
Little Red Riding Hood pulled the bobbin, and the door opened.', - 'The wolf, seeing her come in, said to her, hiding himself under the bedclothes, "Put the cake and the little pot of butter upon the stool, and come get into bed with me." Little Red Riding Hood took off her clothes and got into bed.', - 'She was greatly amazed to see how her grandmother looked in her nightclothes, and said to her, "Grandmother, what big arms you have!" "All the better to hug you with, my dear." "Grandmother, what big legs you have!" "All the better to run with, my child." "Grandmother, what big ears you have!"', - '"All the better to hear with, my child." "Grandmother, what big eyes you have!" "All the better to see with, my child." "Grandmother, what big teeth you have got!" "All the better to eat you up with." And, saying these words, this wicked wolf fell upon Little Red Riding Hood, and ate her all up.', - ] - ) - - unittest.main() diff --git a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/data/zip.py b/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/data/zip.py deleted file mode 100644 index f0b17849d36991e7def35a14d3d518b9d867ce36..0000000000000000000000000000000000000000 --- a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/data/zip.py +++ /dev/null @@ -1,76 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. -"""Utility for reading some info from inside a zip file. -""" - -import typing -import zipfile - -from dataclasses import dataclass -from functools import lru_cache -from typing_extensions import Literal - - -DEFAULT_SIZE = 32 -MODE = Literal['r', 'w', 'x', 'a'] - - -@dataclass(order=True) -class PathInZip: - """Hold a path of file within a zip file. - - Args: - path (str): The convention is :. - Let's assume there is a zip file /some/location/foo.zip - and inside of it is a json file located at /data/file1.json, - Then we expect path = "/some/location/foo.zip:/data/file1.json". - """ - - INFO_PATH_SEP = ':' - zip_path: str - file_path: str - - def __init__(self, path: str) -> None: - split_path = path.split(self.INFO_PATH_SEP) - assert len(split_path) == 2 - self.zip_path, self.file_path = split_path - - @classmethod - def from_paths(cls, zip_path: str, file_path: str): - return cls(zip_path + cls.INFO_PATH_SEP + file_path) - - def __str__(self) -> str: - return self.zip_path + self.INFO_PATH_SEP + self.file_path - - -def _open_zip(path: str, mode: MODE = 'r'): - return zipfile.ZipFile(path, mode) - - -_cached_open_zip = lru_cache(DEFAULT_SIZE)(_open_zip) - - -def set_zip_cache_size(max_size: int): - """Sets the maximal LRU caching for zip file opening. - - Args: - max_size (int): the maximal LRU cache. - """ - global _cached_open_zip - _cached_open_zip = lru_cache(max_size)(_open_zip) - - -def open_file_in_zip(path_in_zip: PathInZip, mode: str = 'r') -> typing.IO: - """Opens a file stored inside a zip and returns a file-like object. - - Args: - path_in_zip (PathInZip): A PathInZip object representing the file to return a file-like object of. - mode (str): The mode in which to open the file with. - Returns: - A file-like object for PathInZip. 
- """ - zf = _cached_open_zip(path_in_zip.zip_path) - return zf.open(path_in_zip.file_path) diff --git a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/tests/quantization/test_vq.py b/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/tests/quantization/test_vq.py deleted file mode 100644 index c215099fedacae35c6798fdd9b8420a447aa16bb..0000000000000000000000000000000000000000 --- a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/tests/quantization/test_vq.py +++ /dev/null @@ -1,18 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import torch - -from audiocraft.quantization.vq import ResidualVectorQuantizer - - -class TestResidualVectorQuantizer: - - def test_rvq(self): - x = torch.randn(1, 16, 2048) - vq = ResidualVectorQuantizer(n_q=8, dimension=16, bins=8) - res = vq(x, 1.) - assert res.x.shape == torch.Size([1, 16, 2048]) diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/commands/install.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/commands/install.py deleted file mode 100644 index e081c27d2d2b05ee9820bb41c071ec9da4ad2106..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/commands/install.py +++ /dev/null @@ -1,860 +0,0 @@ -import errno -import json -import operator -import os -import shutil -import site -from optparse import SUPPRESS_HELP, Values -from typing import Iterable, List, Optional - -from pip._vendor.packaging.utils import canonicalize_name -from pip._vendor.rich import print_json - -from pip._internal.cache import WheelCache -from pip._internal.cli import cmdoptions -from pip._internal.cli.cmdoptions import make_target_python -from pip._internal.cli.req_command import ( - RequirementCommand, - warn_if_run_as_root, - with_cleanup, -) -from pip._internal.cli.status_codes import ERROR, SUCCESS -from pip._internal.exceptions import CommandError, InstallationError -from pip._internal.locations import get_scheme -from pip._internal.metadata import get_environment -from pip._internal.models.format_control import FormatControl -from pip._internal.models.installation_report import InstallationReport -from pip._internal.operations.build.build_tracker import get_build_tracker -from pip._internal.operations.check import ConflictDetails, check_install_conflicts -from pip._internal.req import install_given_reqs -from pip._internal.req.req_install import ( - InstallRequirement, - LegacySetupPyOptionsCheckMode, - check_legacy_setup_py_options, -) -from pip._internal.utils.compat import WINDOWS -from pip._internal.utils.deprecation import ( - LegacyInstallReasonFailedBdistWheel, - deprecated, -) -from pip._internal.utils.distutils_args import parse_distutils_args -from pip._internal.utils.filesystem import test_writable_dir -from pip._internal.utils.logging import getLogger -from pip._internal.utils.misc import ( - ensure_dir, - get_pip_version, - protect_pip_from_modification_on_windows, - write_output, -) -from pip._internal.utils.temp_dir import TempDirectory -from pip._internal.utils.virtualenv import ( - running_under_virtualenv, - virtualenv_no_global, -) -from pip._internal.wheel_builder import ( - BdistWheelAllowedPredicate, - build, - should_build_for_install_command, -) - -logger = getLogger(__name__) - - -def get_check_bdist_wheel_allowed( - format_control: FormatControl, -) 
-> BdistWheelAllowedPredicate: - def check_binary_allowed(req: InstallRequirement) -> bool: - canonical_name = canonicalize_name(req.name or "") - allowed_formats = format_control.get_allowed_formats(canonical_name) - return "binary" in allowed_formats - - return check_binary_allowed - - -class InstallCommand(RequirementCommand): - """ - Install packages from: - - - PyPI (and other indexes) using requirement specifiers. - - VCS project urls. - - Local project directories. - - Local or remote source archives. - - pip also supports installing from "requirements files", which provide - an easy way to specify a whole environment to be installed. - """ - - usage = """ - %prog [options] [package-index-options] ... - %prog [options] -r [package-index-options] ... - %prog [options] [-e] ... - %prog [options] [-e] ... - %prog [options] ...""" - - def add_options(self) -> None: - self.cmd_opts.add_option(cmdoptions.requirements()) - self.cmd_opts.add_option(cmdoptions.constraints()) - self.cmd_opts.add_option(cmdoptions.no_deps()) - self.cmd_opts.add_option(cmdoptions.pre()) - - self.cmd_opts.add_option(cmdoptions.editable()) - self.cmd_opts.add_option( - "--dry-run", - action="store_true", - dest="dry_run", - default=False, - help=( - "Don't actually install anything, just print what would be. " - "Can be used in combination with --ignore-installed " - "to 'resolve' the requirements." - ), - ) - self.cmd_opts.add_option( - "-t", - "--target", - dest="target_dir", - metavar="dir", - default=None, - help=( - "Install packages into . " - "By default this will not replace existing files/folders in " - ". Use --upgrade to replace existing packages in " - "with new versions." - ), - ) - cmdoptions.add_target_python_options(self.cmd_opts) - - self.cmd_opts.add_option( - "--user", - dest="use_user_site", - action="store_true", - help=( - "Install to the Python user install directory for your " - "platform. Typically ~/.local/, or %APPDATA%\\Python on " - "Windows. (See the Python documentation for site.USER_BASE " - "for full details.)" - ), - ) - self.cmd_opts.add_option( - "--no-user", - dest="use_user_site", - action="store_false", - help=SUPPRESS_HELP, - ) - self.cmd_opts.add_option( - "--root", - dest="root_path", - metavar="dir", - default=None, - help="Install everything relative to this alternate root directory.", - ) - self.cmd_opts.add_option( - "--prefix", - dest="prefix_path", - metavar="dir", - default=None, - help=( - "Installation prefix where lib, bin and other top-level " - "folders are placed" - ), - ) - - self.cmd_opts.add_option(cmdoptions.src()) - - self.cmd_opts.add_option( - "-U", - "--upgrade", - dest="upgrade", - action="store_true", - help=( - "Upgrade all specified packages to the newest available " - "version. The handling of dependencies depends on the " - "upgrade-strategy used." - ), - ) - - self.cmd_opts.add_option( - "--upgrade-strategy", - dest="upgrade_strategy", - default="only-if-needed", - choices=["only-if-needed", "eager"], - help=( - "Determines how dependency upgrading should be handled " - "[default: %default]. " - '"eager" - dependencies are upgraded regardless of ' - "whether the currently installed version satisfies the " - "requirements of the upgraded package(s). " - '"only-if-needed" - are upgraded only when they do not ' - "satisfy the requirements of the upgraded package(s)." 
- ), - ) - - self.cmd_opts.add_option( - "--force-reinstall", - dest="force_reinstall", - action="store_true", - help="Reinstall all packages even if they are already up-to-date.", - ) - - self.cmd_opts.add_option( - "-I", - "--ignore-installed", - dest="ignore_installed", - action="store_true", - help=( - "Ignore the installed packages, overwriting them. " - "This can break your system if the existing package " - "is of a different version or was installed " - "with a different package manager!" - ), - ) - - self.cmd_opts.add_option(cmdoptions.ignore_requires_python()) - self.cmd_opts.add_option(cmdoptions.no_build_isolation()) - self.cmd_opts.add_option(cmdoptions.use_pep517()) - self.cmd_opts.add_option(cmdoptions.no_use_pep517()) - self.cmd_opts.add_option(cmdoptions.check_build_deps()) - - self.cmd_opts.add_option(cmdoptions.config_settings()) - self.cmd_opts.add_option(cmdoptions.install_options()) - self.cmd_opts.add_option(cmdoptions.global_options()) - - self.cmd_opts.add_option( - "--compile", - action="store_true", - dest="compile", - default=True, - help="Compile Python source files to bytecode", - ) - - self.cmd_opts.add_option( - "--no-compile", - action="store_false", - dest="compile", - help="Do not compile Python source files to bytecode", - ) - - self.cmd_opts.add_option( - "--no-warn-script-location", - action="store_false", - dest="warn_script_location", - default=True, - help="Do not warn when installing scripts outside PATH", - ) - self.cmd_opts.add_option( - "--no-warn-conflicts", - action="store_false", - dest="warn_about_conflicts", - default=True, - help="Do not warn about broken dependencies", - ) - self.cmd_opts.add_option(cmdoptions.no_binary()) - self.cmd_opts.add_option(cmdoptions.only_binary()) - self.cmd_opts.add_option(cmdoptions.prefer_binary()) - self.cmd_opts.add_option(cmdoptions.require_hashes()) - self.cmd_opts.add_option(cmdoptions.progress_bar()) - self.cmd_opts.add_option(cmdoptions.root_user_action()) - - index_opts = cmdoptions.make_option_group( - cmdoptions.index_group, - self.parser, - ) - - self.parser.insert_option_group(0, index_opts) - self.parser.insert_option_group(0, self.cmd_opts) - - self.cmd_opts.add_option( - "--report", - dest="json_report_file", - metavar="file", - default=None, - help=( - "Generate a JSON file describing what pip did to install " - "the provided requirements. " - "Can be used in combination with --dry-run and --ignore-installed " - "to 'resolve' the requirements. " - "When - is used as file name it writes to stdout. " - "When writing to stdout, please combine with the --quiet option " - "to avoid mixing pip logging output with JSON output." 
- ), - ) - - @with_cleanup - def run(self, options: Values, args: List[str]) -> int: - if options.use_user_site and options.target_dir is not None: - raise CommandError("Can not combine '--user' and '--target'") - - upgrade_strategy = "to-satisfy-only" - if options.upgrade: - upgrade_strategy = options.upgrade_strategy - - cmdoptions.check_dist_restriction(options, check_target=True) - - install_options = options.install_options or [] - - logger.verbose("Using %s", get_pip_version()) - options.use_user_site = decide_user_install( - options.use_user_site, - prefix_path=options.prefix_path, - target_dir=options.target_dir, - root_path=options.root_path, - isolated_mode=options.isolated_mode, - ) - - target_temp_dir: Optional[TempDirectory] = None - target_temp_dir_path: Optional[str] = None - if options.target_dir: - options.ignore_installed = True - options.target_dir = os.path.abspath(options.target_dir) - if ( - # fmt: off - os.path.exists(options.target_dir) and - not os.path.isdir(options.target_dir) - # fmt: on - ): - raise CommandError( - "Target path exists but is not a directory, will not continue." - ) - - # Create a target directory for using with the target option - target_temp_dir = TempDirectory(kind="target") - target_temp_dir_path = target_temp_dir.path - self.enter_context(target_temp_dir) - - global_options = options.global_options or [] - - session = self.get_default_session(options) - - target_python = make_target_python(options) - finder = self._build_package_finder( - options=options, - session=session, - target_python=target_python, - ignore_requires_python=options.ignore_requires_python, - ) - build_tracker = self.enter_context(get_build_tracker()) - - directory = TempDirectory( - delete=not options.no_clean, - kind="install", - globally_managed=True, - ) - - try: - reqs = self.get_requirements(args, options, finder, session) - check_legacy_setup_py_options( - options, reqs, LegacySetupPyOptionsCheckMode.INSTALL - ) - - if "no-binary-enable-wheel-cache" in options.features_enabled: - # TODO: remove format_control from WheelCache when the deprecation cycle - # is over - wheel_cache = WheelCache(options.cache_dir) - else: - if options.format_control.no_binary: - deprecated( - reason=( - "--no-binary currently disables reading from " - "the cache of locally built wheels. In the future " - "--no-binary will not influence the wheel cache." - ), - replacement="to use the --no-cache-dir option", - feature_flag="no-binary-enable-wheel-cache", - issue=11453, - gone_in="23.1", - ) - wheel_cache = WheelCache(options.cache_dir, options.format_control) - - # Only when installing is it permitted to use PEP 660. - # In other circumstances (pip wheel, pip download) we generate - # regular (i.e. non editable) metadata and wheels. 
- for req in reqs: - req.permit_editable_wheels = True - - reject_location_related_install_options(reqs, options.install_options) - - preparer = self.make_requirement_preparer( - temp_build_dir=directory, - options=options, - build_tracker=build_tracker, - session=session, - finder=finder, - use_user_site=options.use_user_site, - verbosity=self.verbosity, - ) - resolver = self.make_resolver( - preparer=preparer, - finder=finder, - options=options, - wheel_cache=wheel_cache, - use_user_site=options.use_user_site, - ignore_installed=options.ignore_installed, - ignore_requires_python=options.ignore_requires_python, - force_reinstall=options.force_reinstall, - upgrade_strategy=upgrade_strategy, - use_pep517=options.use_pep517, - ) - - self.trace_basic_info(finder) - - requirement_set = resolver.resolve( - reqs, check_supported_wheels=not options.target_dir - ) - - if options.json_report_file: - logger.warning( - "--report is currently an experimental option. " - "The output format may change in a future release " - "without prior warning." - ) - - report = InstallationReport(requirement_set.requirements_to_install) - if options.json_report_file == "-": - print_json(data=report.to_dict()) - else: - with open(options.json_report_file, "w", encoding="utf-8") as f: - json.dump(report.to_dict(), f, indent=2, ensure_ascii=False) - - if options.dry_run: - would_install_items = sorted( - (r.metadata["name"], r.metadata["version"]) - for r in requirement_set.requirements_to_install - ) - if would_install_items: - write_output( - "Would install %s", - " ".join("-".join(item) for item in would_install_items), - ) - return SUCCESS - - try: - pip_req = requirement_set.get_requirement("pip") - except KeyError: - modifying_pip = False - else: - # If we're not replacing an already installed pip, - # we're not modifying it. - modifying_pip = pip_req.satisfied_by is None - protect_pip_from_modification_on_windows(modifying_pip=modifying_pip) - - check_bdist_wheel_allowed = get_check_bdist_wheel_allowed( - finder.format_control - ) - - reqs_to_build = [ - r - for r in requirement_set.requirements.values() - if should_build_for_install_command(r, check_bdist_wheel_allowed) - ] - - _, build_failures = build( - reqs_to_build, - wheel_cache=wheel_cache, - verify=True, - build_options=[], - global_options=global_options, - ) - - # If we're using PEP 517, we cannot do a legacy setup.py install - # so we fail here. - pep517_build_failure_names: List[str] = [ - r.name for r in build_failures if r.use_pep517 # type: ignore - ] - if pep517_build_failure_names: - raise InstallationError( - "Could not build wheels for {}, which is required to " - "install pyproject.toml-based projects".format( - ", ".join(pep517_build_failure_names) - ) - ) - - # For now, we just warn about failures building legacy - # requirements, as we'll fall through to a setup.py install for - # those. - for r in build_failures: - if not r.use_pep517: - r.legacy_install_reason = LegacyInstallReasonFailedBdistWheel - - to_install = resolver.get_installation_order(requirement_set) - - # Check for conflicts in the package set we're installing. 
- conflicts: Optional[ConflictDetails] = None - should_warn_about_conflicts = ( - not options.ignore_dependencies and options.warn_about_conflicts - ) - if should_warn_about_conflicts: - conflicts = self._determine_conflicts(to_install) - - # Don't warn about script install locations if - # --target or --prefix has been specified - warn_script_location = options.warn_script_location - if options.target_dir or options.prefix_path: - warn_script_location = False - - installed = install_given_reqs( - to_install, - install_options, - global_options, - root=options.root_path, - home=target_temp_dir_path, - prefix=options.prefix_path, - warn_script_location=warn_script_location, - use_user_site=options.use_user_site, - pycompile=options.compile, - ) - - lib_locations = get_lib_location_guesses( - user=options.use_user_site, - home=target_temp_dir_path, - root=options.root_path, - prefix=options.prefix_path, - isolated=options.isolated_mode, - ) - env = get_environment(lib_locations) - - installed.sort(key=operator.attrgetter("name")) - items = [] - for result in installed: - item = result.name - try: - installed_dist = env.get_distribution(item) - if installed_dist is not None: - item = f"{item}-{installed_dist.version}" - except Exception: - pass - items.append(item) - - if conflicts is not None: - self._warn_about_conflicts( - conflicts, - resolver_variant=self.determine_resolver_variant(options), - ) - - installed_desc = " ".join(items) - if installed_desc: - write_output( - "Successfully installed %s", - installed_desc, - ) - except OSError as error: - show_traceback = self.verbosity >= 1 - - message = create_os_error_message( - error, - show_traceback, - options.use_user_site, - ) - logger.error(message, exc_info=show_traceback) # noqa - - return ERROR - - if options.target_dir: - assert target_temp_dir - self._handle_target_dir( - options.target_dir, target_temp_dir, options.upgrade - ) - if options.root_user_action == "warn": - warn_if_run_as_root() - return SUCCESS - - def _handle_target_dir( - self, target_dir: str, target_temp_dir: TempDirectory, upgrade: bool - ) -> None: - ensure_dir(target_dir) - - # Checking both purelib and platlib directories for installed - # packages to be moved to target directory - lib_dir_list = [] - - # Checking both purelib and platlib directories for installed - # packages to be moved to target directory - scheme = get_scheme("", home=target_temp_dir.path) - purelib_dir = scheme.purelib - platlib_dir = scheme.platlib - data_dir = scheme.data - - if os.path.exists(purelib_dir): - lib_dir_list.append(purelib_dir) - if os.path.exists(platlib_dir) and platlib_dir != purelib_dir: - lib_dir_list.append(platlib_dir) - if os.path.exists(data_dir): - lib_dir_list.append(data_dir) - - for lib_dir in lib_dir_list: - for item in os.listdir(lib_dir): - if lib_dir == data_dir: - ddir = os.path.join(data_dir, item) - if any(s.startswith(ddir) for s in lib_dir_list[:-1]): - continue - target_item_dir = os.path.join(target_dir, item) - if os.path.exists(target_item_dir): - if not upgrade: - logger.warning( - "Target directory %s already exists. Specify " - "--upgrade to force replacement.", - target_item_dir, - ) - continue - if os.path.islink(target_item_dir): - logger.warning( - "Target directory %s already exists and is " - "a link. 
pip will not automatically replace " - "links, please remove if replacement is " - "desired.", - target_item_dir, - ) - continue - if os.path.isdir(target_item_dir): - shutil.rmtree(target_item_dir) - else: - os.remove(target_item_dir) - - shutil.move(os.path.join(lib_dir, item), target_item_dir) - - def _determine_conflicts( - self, to_install: List[InstallRequirement] - ) -> Optional[ConflictDetails]: - try: - return check_install_conflicts(to_install) - except Exception: - logger.exception( - "Error while checking for conflicts. Please file an issue on " - "pip's issue tracker: https://github.com/pypa/pip/issues/new" - ) - return None - - def _warn_about_conflicts( - self, conflict_details: ConflictDetails, resolver_variant: str - ) -> None: - package_set, (missing, conflicting) = conflict_details - if not missing and not conflicting: - return - - parts: List[str] = [] - if resolver_variant == "legacy": - parts.append( - "pip's legacy dependency resolver does not consider dependency " - "conflicts when selecting packages. This behaviour is the " - "source of the following dependency conflicts." - ) - else: - assert resolver_variant == "2020-resolver" - parts.append( - "pip's dependency resolver does not currently take into account " - "all the packages that are installed. This behaviour is the " - "source of the following dependency conflicts." - ) - - # NOTE: There is some duplication here, with commands/check.py - for project_name in missing: - version = package_set[project_name][0] - for dependency in missing[project_name]: - message = ( - "{name} {version} requires {requirement}, " - "which is not installed." - ).format( - name=project_name, - version=version, - requirement=dependency[1], - ) - parts.append(message) - - for project_name in conflicting: - version = package_set[project_name][0] - for dep_name, dep_version, req in conflicting[project_name]: - message = ( - "{name} {version} requires {requirement}, but {you} have " - "{dep_name} {dep_version} which is incompatible." - ).format( - name=project_name, - version=version, - requirement=req, - dep_name=dep_name, - dep_version=dep_version, - you=("you" if resolver_variant == "2020-resolver" else "you'll"), - ) - parts.append(message) - - logger.critical("\n".join(parts)) - - -def get_lib_location_guesses( - user: bool = False, - home: Optional[str] = None, - root: Optional[str] = None, - isolated: bool = False, - prefix: Optional[str] = None, -) -> List[str]: - scheme = get_scheme( - "", - user=user, - home=home, - root=root, - isolated=isolated, - prefix=prefix, - ) - return [scheme.purelib, scheme.platlib] - - -def site_packages_writable(root: Optional[str], isolated: bool) -> bool: - return all( - test_writable_dir(d) - for d in set(get_lib_location_guesses(root=root, isolated=isolated)) - ) - - -def decide_user_install( - use_user_site: Optional[bool], - prefix_path: Optional[str] = None, - target_dir: Optional[str] = None, - root_path: Optional[str] = None, - isolated_mode: bool = False, -) -> bool: - """Determine whether to do a user install based on the input options. - - If use_user_site is False, no additional checks are done. - If use_user_site is True, it is checked for compatibility with other - options. - If use_user_site is None, the default behaviour depends on the environment, - which is provided by the other arguments. - """ - # In some cases (config from tox), use_user_site can be set to an integer - # rather than a bool, which 'use_user_site is False' wouldn't catch. 
- if (use_user_site is not None) and (not use_user_site): - logger.debug("Non-user install by explicit request") - return False - - if use_user_site: - if prefix_path: - raise CommandError( - "Can not combine '--user' and '--prefix' as they imply " - "different installation locations" - ) - if virtualenv_no_global(): - raise InstallationError( - "Can not perform a '--user' install. User site-packages " - "are not visible in this virtualenv." - ) - logger.debug("User install by explicit request") - return True - - # If we are here, user installs have not been explicitly requested/avoided - assert use_user_site is None - - # user install incompatible with --prefix/--target - if prefix_path or target_dir: - logger.debug("Non-user install due to --prefix or --target option") - return False - - # If user installs are not enabled, choose a non-user install - if not site.ENABLE_USER_SITE: - logger.debug("Non-user install because user site-packages disabled") - return False - - # If we have permission for a non-user install, do that, - # otherwise do a user install. - if site_packages_writable(root=root_path, isolated=isolated_mode): - logger.debug("Non-user install because site-packages writeable") - return False - - logger.info( - "Defaulting to user installation because normal site-packages " - "is not writeable" - ) - return True - - -def reject_location_related_install_options( - requirements: List[InstallRequirement], options: Optional[List[str]] -) -> None: - """If any location-changing --install-option arguments were passed for - requirements or on the command-line, then show a deprecation warning. - """ - - def format_options(option_names: Iterable[str]) -> List[str]: - return ["--{}".format(name.replace("_", "-")) for name in option_names] - - offenders = [] - - for requirement in requirements: - install_options = requirement.install_options - location_options = parse_distutils_args(install_options) - if location_options: - offenders.append( - "{!r} from {}".format( - format_options(location_options.keys()), requirement - ) - ) - - if options: - location_options = parse_distutils_args(options) - if location_options: - offenders.append( - "{!r} from command line".format(format_options(location_options.keys())) - ) - - if not offenders: - return - - raise CommandError( - "Location-changing options found in --install-option: {}." - " This is unsupported, use pip-level options like --user," - " --prefix, --root, and --target instead.".format("; ".join(offenders)) - ) - - -def create_os_error_message( - error: OSError, show_traceback: bool, using_user_site: bool -) -> str: - """Format an error message for an OSError - - It may occur anytime during the execution of the install command. 
- """ - parts = [] - - # Mention the error if we are not going to show a traceback - parts.append("Could not install packages due to an OSError") - if not show_traceback: - parts.append(": ") - parts.append(str(error)) - else: - parts.append(".") - - # Spilt the error indication from a helper message (if any) - parts[-1] += "\n" - - # Suggest useful actions to the user: - # (1) using user site-packages or (2) verifying the permissions - if error.errno == errno.EACCES: - user_option_part = "Consider using the `--user` option" - permissions_part = "Check the permissions" - - if not running_under_virtualenv() and not using_user_site: - parts.extend( - [ - user_option_part, - " or ", - permissions_part.lower(), - ] - ) - else: - parts.append(permissions_part) - parts.append(".\n") - - # Suggest the user to enable Long Paths if path length is - # more than 260 - if ( - WINDOWS - and error.errno == errno.ENOENT - and error.filename - and len(error.filename) > 260 - ): - parts.append( - "HINT: This error might have occurred since " - "this system does not have Windows Long Path " - "support enabled. You can find information on " - "how to enable this at " - "https://pip.pypa.io/warnings/enable-long-paths\n" - ) - - return "".join(parts).strip() + "\n" diff --git a/spaces/Reha2704/VToonify/vtoonify/model/stylegan/op/conv2d_gradfix.py b/spaces/Reha2704/VToonify/vtoonify/model/stylegan/op/conv2d_gradfix.py deleted file mode 100644 index 5e4b83adac8e6a4b1caf522596666e4f5d0ee854..0000000000000000000000000000000000000000 --- a/spaces/Reha2704/VToonify/vtoonify/model/stylegan/op/conv2d_gradfix.py +++ /dev/null @@ -1,227 +0,0 @@ -import contextlib -import warnings - -import torch -from torch import autograd -from torch.nn import functional as F - -enabled = True -weight_gradients_disabled = False - - -@contextlib.contextmanager -def no_weight_gradients(): - global weight_gradients_disabled - - old = weight_gradients_disabled - weight_gradients_disabled = True - yield - weight_gradients_disabled = old - - -def conv2d(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1): - if could_use_op(input): - return conv2d_gradfix( - transpose=False, - weight_shape=weight.shape, - stride=stride, - padding=padding, - output_padding=0, - dilation=dilation, - groups=groups, - ).apply(input, weight, bias) - - return F.conv2d( - input=input, - weight=weight, - bias=bias, - stride=stride, - padding=padding, - dilation=dilation, - groups=groups, - ) - - -def conv_transpose2d( - input, - weight, - bias=None, - stride=1, - padding=0, - output_padding=0, - groups=1, - dilation=1, -): - if could_use_op(input): - return conv2d_gradfix( - transpose=True, - weight_shape=weight.shape, - stride=stride, - padding=padding, - output_padding=output_padding, - groups=groups, - dilation=dilation, - ).apply(input, weight, bias) - - return F.conv_transpose2d( - input=input, - weight=weight, - bias=bias, - stride=stride, - padding=padding, - output_padding=output_padding, - dilation=dilation, - groups=groups, - ) - - -def could_use_op(input): - if (not enabled) or (not torch.backends.cudnn.enabled): - return False - - if input.device.type != "cuda": - return False - - if any(torch.__version__.startswith(x) for x in ["1.7.", "1.8."]): - return True - - #warnings.warn( - # f"conv2d_gradfix not supported on PyTorch {torch.__version__}. Falling back to torch.nn.functional.conv2d()." 
- #) - - return False - - -def ensure_tuple(xs, ndim): - xs = tuple(xs) if isinstance(xs, (tuple, list)) else (xs,) * ndim - - return xs - - -conv2d_gradfix_cache = dict() - - -def conv2d_gradfix( - transpose, weight_shape, stride, padding, output_padding, dilation, groups -): - ndim = 2 - weight_shape = tuple(weight_shape) - stride = ensure_tuple(stride, ndim) - padding = ensure_tuple(padding, ndim) - output_padding = ensure_tuple(output_padding, ndim) - dilation = ensure_tuple(dilation, ndim) - - key = (transpose, weight_shape, stride, padding, output_padding, dilation, groups) - if key in conv2d_gradfix_cache: - return conv2d_gradfix_cache[key] - - common_kwargs = dict( - stride=stride, padding=padding, dilation=dilation, groups=groups - ) - - def calc_output_padding(input_shape, output_shape): - if transpose: - return [0, 0] - - return [ - input_shape[i + 2] - - (output_shape[i + 2] - 1) * stride[i] - - (1 - 2 * padding[i]) - - dilation[i] * (weight_shape[i + 2] - 1) - for i in range(ndim) - ] - - class Conv2d(autograd.Function): - @staticmethod - def forward(ctx, input, weight, bias): - if not transpose: - out = F.conv2d(input=input, weight=weight, bias=bias, **common_kwargs) - - else: - out = F.conv_transpose2d( - input=input, - weight=weight, - bias=bias, - output_padding=output_padding, - **common_kwargs, - ) - - ctx.save_for_backward(input, weight) - - return out - - @staticmethod - def backward(ctx, grad_output): - input, weight = ctx.saved_tensors - grad_input, grad_weight, grad_bias = None, None, None - - if ctx.needs_input_grad[0]: - p = calc_output_padding( - input_shape=input.shape, output_shape=grad_output.shape - ) - grad_input = conv2d_gradfix( - transpose=(not transpose), - weight_shape=weight_shape, - output_padding=p, - **common_kwargs, - ).apply(grad_output, weight, None) - - if ctx.needs_input_grad[1] and not weight_gradients_disabled: - grad_weight = Conv2dGradWeight.apply(grad_output, input) - - if ctx.needs_input_grad[2]: - grad_bias = grad_output.sum((0, 2, 3)) - - return grad_input, grad_weight, grad_bias - - class Conv2dGradWeight(autograd.Function): - @staticmethod - def forward(ctx, grad_output, input): - op = torch._C._jit_get_operation( - "aten::cudnn_convolution_backward_weight" - if not transpose - else "aten::cudnn_convolution_transpose_backward_weight" - ) - flags = [ - torch.backends.cudnn.benchmark, - torch.backends.cudnn.deterministic, - torch.backends.cudnn.allow_tf32, - ] - grad_weight = op( - weight_shape, - grad_output, - input, - padding, - stride, - dilation, - groups, - *flags, - ) - ctx.save_for_backward(grad_output, input) - - return grad_weight - - @staticmethod - def backward(ctx, grad_grad_weight): - grad_output, input = ctx.saved_tensors - grad_grad_output, grad_grad_input = None, None - - if ctx.needs_input_grad[0]: - grad_grad_output = Conv2d.apply(input, grad_grad_weight, None) - - if ctx.needs_input_grad[1]: - p = calc_output_padding( - input_shape=input.shape, output_shape=grad_output.shape - ) - grad_grad_input = conv2d_gradfix( - transpose=(not transpose), - weight_shape=weight_shape, - output_padding=p, - **common_kwargs, - ).apply(grad_output, grad_grad_weight, None) - - return grad_grad_output, grad_grad_input - - conv2d_gradfix_cache[key] = Conv2d - - return Conv2d diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/ops/deform_roi_pool.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/ops/deform_roi_pool.py deleted file mode 100644 index 
cc245ba91fee252226ba22e76bb94a35db9a629b..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/ops/deform_roi_pool.py +++ /dev/null @@ -1,204 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from torch import nn -from torch.autograd import Function -from torch.autograd.function import once_differentiable -from torch.nn.modules.utils import _pair - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext( - '_ext', ['deform_roi_pool_forward', 'deform_roi_pool_backward']) - - -class DeformRoIPoolFunction(Function): - - @staticmethod - def symbolic(g, input, rois, offset, output_size, spatial_scale, - sampling_ratio, gamma): - return g.op( - 'mmcv::MMCVDeformRoIPool', - input, - rois, - offset, - pooled_height_i=output_size[0], - pooled_width_i=output_size[1], - spatial_scale_f=spatial_scale, - sampling_ratio_f=sampling_ratio, - gamma_f=gamma) - - @staticmethod - def forward(ctx, - input, - rois, - offset, - output_size, - spatial_scale=1.0, - sampling_ratio=0, - gamma=0.1): - if offset is None: - offset = input.new_zeros(0) - ctx.output_size = _pair(output_size) - ctx.spatial_scale = float(spatial_scale) - ctx.sampling_ratio = int(sampling_ratio) - ctx.gamma = float(gamma) - - assert rois.size(1) == 5, 'RoI must be (idx, x1, y1, x2, y2)!' - - output_shape = (rois.size(0), input.size(1), ctx.output_size[0], - ctx.output_size[1]) - output = input.new_zeros(output_shape) - - ext_module.deform_roi_pool_forward( - input, - rois, - offset, - output, - pooled_height=ctx.output_size[0], - pooled_width=ctx.output_size[1], - spatial_scale=ctx.spatial_scale, - sampling_ratio=ctx.sampling_ratio, - gamma=ctx.gamma) - - ctx.save_for_backward(input, rois, offset) - return output - - @staticmethod - @once_differentiable - def backward(ctx, grad_output): - input, rois, offset = ctx.saved_tensors - grad_input = grad_output.new_zeros(input.shape) - grad_offset = grad_output.new_zeros(offset.shape) - - ext_module.deform_roi_pool_backward( - grad_output, - input, - rois, - offset, - grad_input, - grad_offset, - pooled_height=ctx.output_size[0], - pooled_width=ctx.output_size[1], - spatial_scale=ctx.spatial_scale, - sampling_ratio=ctx.sampling_ratio, - gamma=ctx.gamma) - if grad_offset.numel() == 0: - grad_offset = None - return grad_input, None, grad_offset, None, None, None, None - - -deform_roi_pool = DeformRoIPoolFunction.apply - - -class DeformRoIPool(nn.Module): - - def __init__(self, - output_size, - spatial_scale=1.0, - sampling_ratio=0, - gamma=0.1): - super(DeformRoIPool, self).__init__() - self.output_size = _pair(output_size) - self.spatial_scale = float(spatial_scale) - self.sampling_ratio = int(sampling_ratio) - self.gamma = float(gamma) - - def forward(self, input, rois, offset=None): - return deform_roi_pool(input, rois, offset, self.output_size, - self.spatial_scale, self.sampling_ratio, - self.gamma) - - -class DeformRoIPoolPack(DeformRoIPool): - - def __init__(self, - output_size, - output_channels, - deform_fc_channels=1024, - spatial_scale=1.0, - sampling_ratio=0, - gamma=0.1): - super(DeformRoIPoolPack, self).__init__(output_size, spatial_scale, - sampling_ratio, gamma) - - self.output_channels = output_channels - self.deform_fc_channels = deform_fc_channels - - self.offset_fc = nn.Sequential( - nn.Linear( - self.output_size[0] * self.output_size[1] * - self.output_channels, self.deform_fc_channels), - nn.ReLU(inplace=True), - nn.Linear(self.deform_fc_channels, self.deform_fc_channels), - nn.ReLU(inplace=True), - 
nn.Linear(self.deform_fc_channels, - self.output_size[0] * self.output_size[1] * 2)) - self.offset_fc[-1].weight.data.zero_() - self.offset_fc[-1].bias.data.zero_() - - def forward(self, input, rois): - assert input.size(1) == self.output_channels - x = deform_roi_pool(input, rois, None, self.output_size, - self.spatial_scale, self.sampling_ratio, - self.gamma) - rois_num = rois.size(0) - offset = self.offset_fc(x.view(rois_num, -1)) - offset = offset.view(rois_num, 2, self.output_size[0], - self.output_size[1]) - return deform_roi_pool(input, rois, offset, self.output_size, - self.spatial_scale, self.sampling_ratio, - self.gamma) - - -class ModulatedDeformRoIPoolPack(DeformRoIPool): - - def __init__(self, - output_size, - output_channels, - deform_fc_channels=1024, - spatial_scale=1.0, - sampling_ratio=0, - gamma=0.1): - super(ModulatedDeformRoIPoolPack, - self).__init__(output_size, spatial_scale, sampling_ratio, gamma) - - self.output_channels = output_channels - self.deform_fc_channels = deform_fc_channels - - self.offset_fc = nn.Sequential( - nn.Linear( - self.output_size[0] * self.output_size[1] * - self.output_channels, self.deform_fc_channels), - nn.ReLU(inplace=True), - nn.Linear(self.deform_fc_channels, self.deform_fc_channels), - nn.ReLU(inplace=True), - nn.Linear(self.deform_fc_channels, - self.output_size[0] * self.output_size[1] * 2)) - self.offset_fc[-1].weight.data.zero_() - self.offset_fc[-1].bias.data.zero_() - - self.mask_fc = nn.Sequential( - nn.Linear( - self.output_size[0] * self.output_size[1] * - self.output_channels, self.deform_fc_channels), - nn.ReLU(inplace=True), - nn.Linear(self.deform_fc_channels, - self.output_size[0] * self.output_size[1] * 1), - nn.Sigmoid()) - self.mask_fc[2].weight.data.zero_() - self.mask_fc[2].bias.data.zero_() - - def forward(self, input, rois): - assert input.size(1) == self.output_channels - x = deform_roi_pool(input, rois, None, self.output_size, - self.spatial_scale, self.sampling_ratio, - self.gamma) - rois_num = rois.size(0) - offset = self.offset_fc(x.view(rois_num, -1)) - offset = offset.view(rois_num, 2, self.output_size[0], - self.output_size[1]) - mask = self.mask_fc(x.view(rois_num, -1)) - mask = mask.view(rois_num, 1, self.output_size[0], self.output_size[1]) - d = deform_roi_pool(input, rois, offset, self.output_size, - self.spatial_scale, self.sampling_ratio, - self.gamma) - return d * mask diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/dense_heads/gfl_head.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/dense_heads/gfl_head.py deleted file mode 100644 index 961bc92237663ad5343d3d08eb9c0e4e811ada05..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/dense_heads/gfl_head.py +++ /dev/null @@ -1,647 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import ConvModule, Scale, bias_init_with_prob, normal_init -from mmcv.runner import force_fp32 - -from mmdet.core import (anchor_inside_flags, bbox2distance, bbox_overlaps, - build_assigner, build_sampler, distance2bbox, - images_to_levels, multi_apply, multiclass_nms, - reduce_mean, unmap) -from ..builder import HEADS, build_loss -from .anchor_head import AnchorHead - - -class Integral(nn.Module): - """A fixed layer for calculating integral result from distribution. 
- - This layer calculates the target location by :math: `sum{P(y_i) * y_i}`, - P(y_i) denotes the softmax vector that represents the discrete distribution - y_i denotes the discrete set, usually {0, 1, 2, ..., reg_max} - - Args: - reg_max (int): The maximal value of the discrete set. Default: 16. You - may want to reset it according to your new dataset or related - settings. - """ - - def __init__(self, reg_max=16): - super(Integral, self).__init__() - self.reg_max = reg_max - self.register_buffer('project', - torch.linspace(0, self.reg_max, self.reg_max + 1)) - - def forward(self, x): - """Forward feature from the regression head to get integral result of - bounding box location. - - Args: - x (Tensor): Features of the regression head, shape (N, 4*(n+1)), - n is self.reg_max. - - Returns: - x (Tensor): Integral result of box locations, i.e., distance - offsets from the box center in four directions, shape (N, 4). - """ - x = F.softmax(x.reshape(-1, self.reg_max + 1), dim=1) - x = F.linear(x, self.project.type_as(x)).reshape(-1, 4) - return x - - -@HEADS.register_module() -class GFLHead(AnchorHead): - """Generalized Focal Loss: Learning Qualified and Distributed Bounding - Boxes for Dense Object Detection. - - GFL head structure is similar with ATSS, however GFL uses - 1) joint representation for classification and localization quality, and - 2) flexible General distribution for bounding box locations, - which are supervised by - Quality Focal Loss (QFL) and Distribution Focal Loss (DFL), respectively - - https://arxiv.org/abs/2006.04388 - - Args: - num_classes (int): Number of categories excluding the background - category. - in_channels (int): Number of channels in the input feature map. - stacked_convs (int): Number of conv layers in cls and reg tower. - Default: 4. - conv_cfg (dict): dictionary to construct and config conv layer. - Default: None. - norm_cfg (dict): dictionary to construct and config norm layer. - Default: dict(type='GN', num_groups=32, requires_grad=True). - loss_qfl (dict): Config of Quality Focal Loss (QFL). - reg_max (int): Max value of integral set :math: `{0, ..., reg_max}` - in QFL setting. Default: 16. 
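As a side note on the Integral layer defined above, the following is a minimal, self-contained sketch of the same expectation trick: 4*(reg_max+1) distribution logits per box become 4 expected distances via a softmax-weighted sum over {0, ..., reg_max}. The batch size and random logits are assumed purely for illustration.

import torch
import torch.nn.functional as F

reg_max = 16                                        # discrete set {0, 1, ..., 16}
project = torch.linspace(0, reg_max, reg_max + 1)   # same buffer as in Integral
logits = torch.randn(8, 4 * (reg_max + 1))          # 8 boxes, 4 sides per box
probs = F.softmax(logits.reshape(-1, reg_max + 1), dim=1)    # P(y_i) per side
dists = (probs * project).sum(dim=1).reshape(-1, 4)          # sum_i P(y_i) * y_i
print(dists.shape)                                  # torch.Size([8, 4])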
- Example: - >>> self = GFLHead(11, 7) - >>> feats = [torch.rand(1, 7, s, s) for s in [4, 8, 16, 32, 64]] - >>> cls_quality_score, bbox_pred = self.forward(feats) - >>> assert len(cls_quality_score) == len(self.scales) - """ - - def __init__(self, - num_classes, - in_channels, - stacked_convs=4, - conv_cfg=None, - norm_cfg=dict(type='GN', num_groups=32, requires_grad=True), - loss_dfl=dict(type='DistributionFocalLoss', loss_weight=0.25), - reg_max=16, - **kwargs): - self.stacked_convs = stacked_convs - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.reg_max = reg_max - super(GFLHead, self).__init__(num_classes, in_channels, **kwargs) - - self.sampling = False - if self.train_cfg: - self.assigner = build_assigner(self.train_cfg.assigner) - # SSD sampling=False so use PseudoSampler - sampler_cfg = dict(type='PseudoSampler') - self.sampler = build_sampler(sampler_cfg, context=self) - - self.integral = Integral(self.reg_max) - self.loss_dfl = build_loss(loss_dfl) - - def _init_layers(self): - """Initialize layers of the head.""" - self.relu = nn.ReLU(inplace=True) - self.cls_convs = nn.ModuleList() - self.reg_convs = nn.ModuleList() - for i in range(self.stacked_convs): - chn = self.in_channels if i == 0 else self.feat_channels - self.cls_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg)) - self.reg_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg)) - assert self.num_anchors == 1, 'anchor free version' - self.gfl_cls = nn.Conv2d( - self.feat_channels, self.cls_out_channels, 3, padding=1) - self.gfl_reg = nn.Conv2d( - self.feat_channels, 4 * (self.reg_max + 1), 3, padding=1) - self.scales = nn.ModuleList( - [Scale(1.0) for _ in self.anchor_generator.strides]) - - def init_weights(self): - """Initialize weights of the head.""" - for m in self.cls_convs: - normal_init(m.conv, std=0.01) - for m in self.reg_convs: - normal_init(m.conv, std=0.01) - bias_cls = bias_init_with_prob(0.01) - normal_init(self.gfl_cls, std=0.01, bias=bias_cls) - normal_init(self.gfl_reg, std=0.01) - - def forward(self, feats): - """Forward features from the upstream network. - - Args: - feats (tuple[Tensor]): Features from the upstream network, each is - a 4D-tensor. - - Returns: - tuple: Usually a tuple of classification scores and bbox prediction - cls_scores (list[Tensor]): Classification and quality (IoU) - joint scores for all scale levels, each is a 4D-tensor, - the channel number is num_classes. - bbox_preds (list[Tensor]): Box distribution logits for all - scale levels, each is a 4D-tensor, the channel number is - 4*(n+1), n is max value of integral set. - """ - return multi_apply(self.forward_single, feats, self.scales) - - def forward_single(self, x, scale): - """Forward feature of a single scale level. - - Args: - x (Tensor): Features of a single scale level. - scale (:obj: `mmcv.cnn.Scale`): Learnable scale module to resize - the bbox prediction. - - Returns: - tuple: - cls_score (Tensor): Cls and quality joint scores for a single - scale level the channel number is num_classes. - bbox_pred (Tensor): Box distribution logits for a single scale - level, the channel number is 4*(n+1), n is max value of - integral set. 
- """ - cls_feat = x - reg_feat = x - for cls_conv in self.cls_convs: - cls_feat = cls_conv(cls_feat) - for reg_conv in self.reg_convs: - reg_feat = reg_conv(reg_feat) - cls_score = self.gfl_cls(cls_feat) - bbox_pred = scale(self.gfl_reg(reg_feat)).float() - return cls_score, bbox_pred - - def anchor_center(self, anchors): - """Get anchor centers from anchors. - - Args: - anchors (Tensor): Anchor list with shape (N, 4), "xyxy" format. - - Returns: - Tensor: Anchor centers with shape (N, 2), "xy" format. - """ - anchors_cx = (anchors[..., 2] + anchors[..., 0]) / 2 - anchors_cy = (anchors[..., 3] + anchors[..., 1]) / 2 - return torch.stack([anchors_cx, anchors_cy], dim=-1) - - def loss_single(self, anchors, cls_score, bbox_pred, labels, label_weights, - bbox_targets, stride, num_total_samples): - """Compute loss of a single scale level. - - Args: - anchors (Tensor): Box reference for each scale level with shape - (N, num_total_anchors, 4). - cls_score (Tensor): Cls and quality joint scores for each scale - level has shape (N, num_classes, H, W). - bbox_pred (Tensor): Box distribution logits for each scale - level with shape (N, 4*(n+1), H, W), n is max value of integral - set. - labels (Tensor): Labels of each anchors with shape - (N, num_total_anchors). - label_weights (Tensor): Label weights of each anchor with shape - (N, num_total_anchors) - bbox_targets (Tensor): BBox regression targets of each anchor wight - shape (N, num_total_anchors, 4). - stride (tuple): Stride in this scale level. - num_total_samples (int): Number of positive samples that is - reduced over all GPUs. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - assert stride[0] == stride[1], 'h stride is not equal to w stride!' - anchors = anchors.reshape(-1, 4) - cls_score = cls_score.permute(0, 2, 3, - 1).reshape(-1, self.cls_out_channels) - bbox_pred = bbox_pred.permute(0, 2, 3, - 1).reshape(-1, 4 * (self.reg_max + 1)) - bbox_targets = bbox_targets.reshape(-1, 4) - labels = labels.reshape(-1) - label_weights = label_weights.reshape(-1) - - # FG cat_id: [0, num_classes -1], BG cat_id: num_classes - bg_class_ind = self.num_classes - pos_inds = ((labels >= 0) - & (labels < bg_class_ind)).nonzero().squeeze(1) - score = label_weights.new_zeros(labels.shape) - - if len(pos_inds) > 0: - pos_bbox_targets = bbox_targets[pos_inds] - pos_bbox_pred = bbox_pred[pos_inds] - pos_anchors = anchors[pos_inds] - pos_anchor_centers = self.anchor_center(pos_anchors) / stride[0] - - weight_targets = cls_score.detach().sigmoid() - weight_targets = weight_targets.max(dim=1)[0][pos_inds] - pos_bbox_pred_corners = self.integral(pos_bbox_pred) - pos_decode_bbox_pred = distance2bbox(pos_anchor_centers, - pos_bbox_pred_corners) - pos_decode_bbox_targets = pos_bbox_targets / stride[0] - score[pos_inds] = bbox_overlaps( - pos_decode_bbox_pred.detach(), - pos_decode_bbox_targets, - is_aligned=True) - pred_corners = pos_bbox_pred.reshape(-1, self.reg_max + 1) - target_corners = bbox2distance(pos_anchor_centers, - pos_decode_bbox_targets, - self.reg_max).reshape(-1) - - # regression loss - loss_bbox = self.loss_bbox( - pos_decode_bbox_pred, - pos_decode_bbox_targets, - weight=weight_targets, - avg_factor=1.0) - - # dfl loss - loss_dfl = self.loss_dfl( - pred_corners, - target_corners, - weight=weight_targets[:, None].expand(-1, 4).reshape(-1), - avg_factor=4.0) - else: - loss_bbox = bbox_pred.sum() * 0 - loss_dfl = bbox_pred.sum() * 0 - weight_targets = bbox_pred.new_tensor(0) - - # cls (qfl) loss - loss_cls = self.loss_cls( - 
cls_score, (labels, score), - weight=label_weights, - avg_factor=num_total_samples) - - return loss_cls, loss_bbox, loss_dfl, weight_targets.sum() - - @force_fp32(apply_to=('cls_scores', 'bbox_preds')) - def loss(self, - cls_scores, - bbox_preds, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - """Compute losses of the head. - - Args: - cls_scores (list[Tensor]): Cls and quality scores for each scale - level has shape (N, num_classes, H, W). - bbox_preds (list[Tensor]): Box distribution logits for each scale - level with shape (N, 4*(n+1), H, W), n is max value of integral - set. - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (list[Tensor] | None): specify which bounding - boxes can be ignored when computing the loss. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - assert len(featmap_sizes) == self.anchor_generator.num_levels - - device = cls_scores[0].device - anchor_list, valid_flag_list = self.get_anchors( - featmap_sizes, img_metas, device=device) - label_channels = self.cls_out_channels if self.use_sigmoid_cls else 1 - - cls_reg_targets = self.get_targets( - anchor_list, - valid_flag_list, - gt_bboxes, - img_metas, - gt_bboxes_ignore_list=gt_bboxes_ignore, - gt_labels_list=gt_labels, - label_channels=label_channels) - if cls_reg_targets is None: - return None - - (anchor_list, labels_list, label_weights_list, bbox_targets_list, - bbox_weights_list, num_total_pos, num_total_neg) = cls_reg_targets - - num_total_samples = reduce_mean( - torch.tensor(num_total_pos, dtype=torch.float, - device=device)).item() - num_total_samples = max(num_total_samples, 1.0) - - losses_cls, losses_bbox, losses_dfl,\ - avg_factor = multi_apply( - self.loss_single, - anchor_list, - cls_scores, - bbox_preds, - labels_list, - label_weights_list, - bbox_targets_list, - self.anchor_generator.strides, - num_total_samples=num_total_samples) - - avg_factor = sum(avg_factor) - avg_factor = reduce_mean(avg_factor).item() - losses_bbox = list(map(lambda x: x / avg_factor, losses_bbox)) - losses_dfl = list(map(lambda x: x / avg_factor, losses_dfl)) - return dict( - loss_cls=losses_cls, loss_bbox=losses_bbox, loss_dfl=losses_dfl) - - def _get_bboxes(self, - cls_scores, - bbox_preds, - mlvl_anchors, - img_shapes, - scale_factors, - cfg, - rescale=False, - with_nms=True): - """Transform outputs for a single batch item into labeled boxes. - - Args: - cls_scores (list[Tensor]): Box scores for a single scale level - has shape (N, num_classes, H, W). - bbox_preds (list[Tensor]): Box distribution logits for a single - scale level with shape (N, 4*(n+1), H, W), n is max value of - integral set. - mlvl_anchors (list[Tensor]): Box reference for a single scale level - with shape (num_total_anchors, 4). - img_shapes (list[tuple[int]]): Shape of the input image, - list[(height, width, 3)]. - scale_factors (list[ndarray]): Scale factor of the image arange as - (w_scale, h_scale, w_scale, h_scale). - cfg (mmcv.Config | None): Test / postprocessing configuration, - if None, test_cfg would be used. - rescale (bool): If True, return boxes in original image space. - Default: False. - with_nms (bool): If True, do nms before return boxes. - Default: True. 
- - Returns: - list[tuple[Tensor, Tensor]]: Each item in result_list is 2-tuple. - The first item is an (n, 5) tensor, where 5 represent - (tl_x, tl_y, br_x, br_y, score) and the score between 0 and 1. - The shape of the second tensor in the tuple is (n,), and - each element represents the class label of the corresponding - box. - """ - cfg = self.test_cfg if cfg is None else cfg - assert len(cls_scores) == len(bbox_preds) == len(mlvl_anchors) - batch_size = cls_scores[0].shape[0] - - mlvl_bboxes = [] - mlvl_scores = [] - for cls_score, bbox_pred, stride, anchors in zip( - cls_scores, bbox_preds, self.anchor_generator.strides, - mlvl_anchors): - assert cls_score.size()[-2:] == bbox_pred.size()[-2:] - assert stride[0] == stride[1] - scores = cls_score.permute(0, 2, 3, 1).reshape( - batch_size, -1, self.cls_out_channels).sigmoid() - bbox_pred = bbox_pred.permute(0, 2, 3, 1) - - bbox_pred = self.integral(bbox_pred) * stride[0] - bbox_pred = bbox_pred.reshape(batch_size, -1, 4) - - nms_pre = cfg.get('nms_pre', -1) - if nms_pre > 0 and scores.shape[1] > nms_pre: - max_scores, _ = scores.max(-1) - _, topk_inds = max_scores.topk(nms_pre) - batch_inds = torch.arange(batch_size).view( - -1, 1).expand_as(topk_inds).long() - anchors = anchors[topk_inds, :] - bbox_pred = bbox_pred[batch_inds, topk_inds, :] - scores = scores[batch_inds, topk_inds, :] - else: - anchors = anchors.expand_as(bbox_pred) - - bboxes = distance2bbox( - self.anchor_center(anchors), bbox_pred, max_shape=img_shapes) - mlvl_bboxes.append(bboxes) - mlvl_scores.append(scores) - - batch_mlvl_bboxes = torch.cat(mlvl_bboxes, dim=1) - if rescale: - batch_mlvl_bboxes /= batch_mlvl_bboxes.new_tensor( - scale_factors).unsqueeze(1) - - batch_mlvl_scores = torch.cat(mlvl_scores, dim=1) - # Add a dummy background class to the backend when using sigmoid - # remind that we set FG labels to [0, num_class-1] since mmdet v2.0 - # BG cat_id: num_class - padding = batch_mlvl_scores.new_zeros(batch_size, - batch_mlvl_scores.shape[1], 1) - batch_mlvl_scores = torch.cat([batch_mlvl_scores, padding], dim=-1) - - if with_nms: - det_results = [] - for (mlvl_bboxes, mlvl_scores) in zip(batch_mlvl_bboxes, - batch_mlvl_scores): - det_bbox, det_label = multiclass_nms(mlvl_bboxes, mlvl_scores, - cfg.score_thr, cfg.nms, - cfg.max_per_img) - det_results.append(tuple([det_bbox, det_label])) - else: - det_results = [ - tuple(mlvl_bs) - for mlvl_bs in zip(batch_mlvl_bboxes, batch_mlvl_scores) - ] - return det_results - - def get_targets(self, - anchor_list, - valid_flag_list, - gt_bboxes_list, - img_metas, - gt_bboxes_ignore_list=None, - gt_labels_list=None, - label_channels=1, - unmap_outputs=True): - """Get targets for GFL head. - - This method is almost the same as `AnchorHead.get_targets()`. Besides - returning the targets as the parent method does, it also returns the - anchors as the first element of the returned tuple. 
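For readers skimming _get_bboxes above, here is a simplified, standalone sketch of the decode step it relies on: the integral output times the stride is read as left/top/right/bottom distances from the anchor centre. The max_shape clipping done by the real distance2bbox is omitted, and the numbers are made up.

import torch

def decode_distance(points, distances):
    # points: (N, 2) anchor centres; distances: (N, 4) as (left, top, right, bottom)
    x1 = points[:, 0] - distances[:, 0]
    y1 = points[:, 1] - distances[:, 1]
    x2 = points[:, 0] + distances[:, 2]
    y2 = points[:, 1] + distances[:, 3]
    return torch.stack([x1, y1, x2, y2], dim=-1)

centres = torch.tensor([[32.0, 32.0]])
dists = torch.tensor([[4.0, 6.0, 8.0, 2.0]])
print(decode_distance(centres, dists))    # tensor([[28., 26., 40., 34.]])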
- """ - num_imgs = len(img_metas) - assert len(anchor_list) == len(valid_flag_list) == num_imgs - - # anchor number of multi levels - num_level_anchors = [anchors.size(0) for anchors in anchor_list[0]] - num_level_anchors_list = [num_level_anchors] * num_imgs - - # concat all level anchors and flags to a single tensor - for i in range(num_imgs): - assert len(anchor_list[i]) == len(valid_flag_list[i]) - anchor_list[i] = torch.cat(anchor_list[i]) - valid_flag_list[i] = torch.cat(valid_flag_list[i]) - - # compute targets for each image - if gt_bboxes_ignore_list is None: - gt_bboxes_ignore_list = [None for _ in range(num_imgs)] - if gt_labels_list is None: - gt_labels_list = [None for _ in range(num_imgs)] - (all_anchors, all_labels, all_label_weights, all_bbox_targets, - all_bbox_weights, pos_inds_list, neg_inds_list) = multi_apply( - self._get_target_single, - anchor_list, - valid_flag_list, - num_level_anchors_list, - gt_bboxes_list, - gt_bboxes_ignore_list, - gt_labels_list, - img_metas, - label_channels=label_channels, - unmap_outputs=unmap_outputs) - # no valid anchors - if any([labels is None for labels in all_labels]): - return None - # sampled anchors of all images - num_total_pos = sum([max(inds.numel(), 1) for inds in pos_inds_list]) - num_total_neg = sum([max(inds.numel(), 1) for inds in neg_inds_list]) - # split targets to a list w.r.t. multiple levels - anchors_list = images_to_levels(all_anchors, num_level_anchors) - labels_list = images_to_levels(all_labels, num_level_anchors) - label_weights_list = images_to_levels(all_label_weights, - num_level_anchors) - bbox_targets_list = images_to_levels(all_bbox_targets, - num_level_anchors) - bbox_weights_list = images_to_levels(all_bbox_weights, - num_level_anchors) - return (anchors_list, labels_list, label_weights_list, - bbox_targets_list, bbox_weights_list, num_total_pos, - num_total_neg) - - def _get_target_single(self, - flat_anchors, - valid_flags, - num_level_anchors, - gt_bboxes, - gt_bboxes_ignore, - gt_labels, - img_meta, - label_channels=1, - unmap_outputs=True): - """Compute regression, classification targets for anchors in a single - image. - - Args: - flat_anchors (Tensor): Multi-level anchors of the image, which are - concatenated into a single tensor of shape (num_anchors, 4) - valid_flags (Tensor): Multi level valid flags of the image, - which are concatenated into a single tensor of - shape (num_anchors,). - num_level_anchors Tensor): Number of anchors of each scale level. - gt_bboxes (Tensor): Ground truth bboxes of the image, - shape (num_gts, 4). - gt_bboxes_ignore (Tensor): Ground truth bboxes to be - ignored, shape (num_ignored_gts, 4). - gt_labels (Tensor): Ground truth labels of each box, - shape (num_gts,). - img_meta (dict): Meta info of the image. - label_channels (int): Channel of label. - unmap_outputs (bool): Whether to map outputs back to the original - set of anchors. - - Returns: - tuple: N is the number of total anchors in the image. - anchors (Tensor): All anchors in the image with shape (N, 4). - labels (Tensor): Labels of all anchors in the image with shape - (N,). - label_weights (Tensor): Label weights of all anchor in the - image with shape (N,). - bbox_targets (Tensor): BBox targets of all anchors in the - image with shape (N, 4). - bbox_weights (Tensor): BBox weights of all anchors in the - image with shape (N, 4). - pos_inds (Tensor): Indices of positive anchor with shape - (num_pos,). - neg_inds (Tensor): Indices of negative anchor with shape - (num_neg,). 
- """ - inside_flags = anchor_inside_flags(flat_anchors, valid_flags, - img_meta['img_shape'][:2], - self.train_cfg.allowed_border) - if not inside_flags.any(): - return (None, ) * 7 - # assign gt and sample anchors - anchors = flat_anchors[inside_flags, :] - - num_level_anchors_inside = self.get_num_level_anchors_inside( - num_level_anchors, inside_flags) - assign_result = self.assigner.assign(anchors, num_level_anchors_inside, - gt_bboxes, gt_bboxes_ignore, - gt_labels) - - sampling_result = self.sampler.sample(assign_result, anchors, - gt_bboxes) - - num_valid_anchors = anchors.shape[0] - bbox_targets = torch.zeros_like(anchors) - bbox_weights = torch.zeros_like(anchors) - labels = anchors.new_full((num_valid_anchors, ), - self.num_classes, - dtype=torch.long) - label_weights = anchors.new_zeros(num_valid_anchors, dtype=torch.float) - - pos_inds = sampling_result.pos_inds - neg_inds = sampling_result.neg_inds - if len(pos_inds) > 0: - pos_bbox_targets = sampling_result.pos_gt_bboxes - bbox_targets[pos_inds, :] = pos_bbox_targets - bbox_weights[pos_inds, :] = 1.0 - if gt_labels is None: - # Only rpn gives gt_labels as None - # Foreground is the first class - labels[pos_inds] = 0 - else: - labels[pos_inds] = gt_labels[ - sampling_result.pos_assigned_gt_inds] - if self.train_cfg.pos_weight <= 0: - label_weights[pos_inds] = 1.0 - else: - label_weights[pos_inds] = self.train_cfg.pos_weight - if len(neg_inds) > 0: - label_weights[neg_inds] = 1.0 - - # map up to original set of anchors - if unmap_outputs: - num_total_anchors = flat_anchors.size(0) - anchors = unmap(anchors, num_total_anchors, inside_flags) - labels = unmap( - labels, num_total_anchors, inside_flags, fill=self.num_classes) - label_weights = unmap(label_weights, num_total_anchors, - inside_flags) - bbox_targets = unmap(bbox_targets, num_total_anchors, inside_flags) - bbox_weights = unmap(bbox_weights, num_total_anchors, inside_flags) - - return (anchors, labels, label_weights, bbox_targets, bbox_weights, - pos_inds, neg_inds) - - def get_num_level_anchors_inside(self, num_level_anchors, inside_flags): - split_inside_flags = torch.split(inside_flags, num_level_anchors) - num_level_anchors_inside = [ - int(flags.sum()) for flags in split_inside_flags - ] - return num_level_anchors_inside diff --git a/spaces/RobinZ2021/remove_background/README.md b/spaces/RobinZ2021/remove_background/README.md deleted file mode 100644 index 204d5bee629d81b958e5714fe33424da7ce074ed..0000000000000000000000000000000000000000 --- a/spaces/RobinZ2021/remove_background/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Remove Background -emoji: 📈 -colorFrom: blue -colorTo: indigo -sdk: gradio -sdk_version: 3.13.0 -app_file: app.py -pinned: false -license: openrail ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Rongjiehuang/GenerSpeech/modules/commons/ssim.py b/spaces/Rongjiehuang/GenerSpeech/modules/commons/ssim.py deleted file mode 100644 index 0d0241f267ef58b24979e022b05f2a9adf768826..0000000000000000000000000000000000000000 --- a/spaces/Rongjiehuang/GenerSpeech/modules/commons/ssim.py +++ /dev/null @@ -1,391 +0,0 @@ -# ''' -# https://github.com/One-sixth/ms_ssim_pytorch/blob/master/ssim.py -# ''' -# -# import torch -# import torch.jit -# import torch.nn.functional as F -# -# -# @torch.jit.script -# def create_window(window_size: int, sigma: float, channel: int): -# ''' -# Create 1-D gauss kernel -# :param window_size: the size of gauss kernel -# :param sigma: sigma of 
normal distribution -# :param channel: input channel -# :return: 1D kernel -# ''' -# coords = torch.arange(window_size, dtype=torch.float) -# coords -= window_size // 2 -# -# g = torch.exp(-(coords ** 2) / (2 * sigma ** 2)) -# g /= g.sum() -# -# g = g.reshape(1, 1, 1, -1).repeat(channel, 1, 1, 1) -# return g -# -# -# @torch.jit.script -# def _gaussian_filter(x, window_1d, use_padding: bool): -# ''' -# Blur input with 1-D kernel -# :param x: batch of tensors to be blured -# :param window_1d: 1-D gauss kernel -# :param use_padding: padding image before conv -# :return: blured tensors -# ''' -# C = x.shape[1] -# padding = 0 -# if use_padding: -# window_size = window_1d.shape[3] -# padding = window_size // 2 -# out = F.conv2d(x, window_1d, stride=1, padding=(0, padding), groups=C) -# out = F.conv2d(out, window_1d.transpose(2, 3), stride=1, padding=(padding, 0), groups=C) -# return out -# -# -# @torch.jit.script -# def ssim(X, Y, window, data_range: float, use_padding: bool = False): -# ''' -# Calculate ssim index for X and Y -# :param X: images [B, C, H, N_bins] -# :param Y: images [B, C, H, N_bins] -# :param window: 1-D gauss kernel -# :param data_range: value range of input images. (usually 1.0 or 255) -# :param use_padding: padding image before conv -# :return: -# ''' -# -# K1 = 0.01 -# K2 = 0.03 -# compensation = 1.0 -# -# C1 = (K1 * data_range) ** 2 -# C2 = (K2 * data_range) ** 2 -# -# mu1 = _gaussian_filter(X, window, use_padding) -# mu2 = _gaussian_filter(Y, window, use_padding) -# sigma1_sq = _gaussian_filter(X * X, window, use_padding) -# sigma2_sq = _gaussian_filter(Y * Y, window, use_padding) -# sigma12 = _gaussian_filter(X * Y, window, use_padding) -# -# mu1_sq = mu1.pow(2) -# mu2_sq = mu2.pow(2) -# mu1_mu2 = mu1 * mu2 -# -# sigma1_sq = compensation * (sigma1_sq - mu1_sq) -# sigma2_sq = compensation * (sigma2_sq - mu2_sq) -# sigma12 = compensation * (sigma12 - mu1_mu2) -# -# cs_map = (2 * sigma12 + C2) / (sigma1_sq + sigma2_sq + C2) -# # Fixed the issue that the negative value of cs_map caused ms_ssim to output Nan. -# cs_map = cs_map.clamp_min(0.) -# ssim_map = ((2 * mu1_mu2 + C1) / (mu1_sq + mu2_sq + C1)) * cs_map -# -# ssim_val = ssim_map.mean(dim=(1, 2, 3)) # reduce along CHW -# cs = cs_map.mean(dim=(1, 2, 3)) -# -# return ssim_val, cs -# -# -# @torch.jit.script -# def ms_ssim(X, Y, window, data_range: float, weights, use_padding: bool = False, eps: float = 1e-8): -# ''' -# interface of ms-ssim -# :param X: a batch of images, (N,C,H,W) -# :param Y: a batch of images, (N,C,H,W) -# :param window: 1-D gauss kernel -# :param data_range: value range of input images. (usually 1.0 or 255) -# :param weights: weights for different levels -# :param use_padding: padding image before conv -# :param eps: use for avoid grad nan. -# :return: -# ''' -# levels = weights.shape[0] -# cs_vals = [] -# ssim_vals = [] -# for _ in range(levels): -# ssim_val, cs = ssim(X, Y, window=window, data_range=data_range, use_padding=use_padding) -# # Use for fix a issue. When c = a ** b and a is 0, c.backward() will cause the a.grad become inf. 
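As a quick sanity check of the SSIM definition this file implements, the sketch below evaluates the formula over a single global window (no Gaussian weighting), with data_range assumed to be 1.0. It is for intuition only and is not the windowed implementation.

import torch

def toy_ssim(img1, img2, C1=0.01 ** 2, C2=0.03 ** 2):
    # one global window over the whole image, illustration only
    mu1, mu2 = img1.mean(), img2.mean()
    sigma1_sq = ((img1 - mu1) ** 2).mean()
    sigma2_sq = ((img2 - mu2) ** 2).mean()
    sigma12 = ((img1 - mu1) * (img2 - mu2)).mean()
    return ((2 * mu1 * mu2 + C1) * (2 * sigma12 + C2)) / \
           ((mu1 ** 2 + mu2 ** 2 + C1) * (sigma1_sq + sigma2_sq + C2))

x = torch.rand(1, 1, 32, 32)
print(float(toy_ssim(x, x)))                      # ~1.0 for identical inputs
print(float(toy_ssim(x, (x + 0.3).clamp(0, 1))))  # drops once the image is shifted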
-# ssim_val = ssim_val.clamp_min(eps) -# cs = cs.clamp_min(eps) -# cs_vals.append(cs) -# -# ssim_vals.append(ssim_val) -# padding = (X.shape[2] % 2, X.shape[3] % 2) -# X = F.avg_pool2d(X, kernel_size=2, stride=2, padding=padding) -# Y = F.avg_pool2d(Y, kernel_size=2, stride=2, padding=padding) -# -# cs_vals = torch.stack(cs_vals, dim=0) -# ms_ssim_val = torch.prod((cs_vals[:-1] ** weights[:-1].unsqueeze(1)) * (ssim_vals[-1] ** weights[-1]), dim=0) -# return ms_ssim_val -# -# -# class SSIM(torch.jit.ScriptModule): -# __constants__ = ['data_range', 'use_padding'] -# -# def __init__(self, window_size=11, window_sigma=1.5, data_range=255., channel=3, use_padding=False): -# ''' -# :param window_size: the size of gauss kernel -# :param window_sigma: sigma of normal distribution -# :param data_range: value range of input images. (usually 1.0 or 255) -# :param channel: input channels (default: 3) -# :param use_padding: padding image before conv -# ''' -# super().__init__() -# assert window_size % 2 == 1, 'Window size must be odd.' -# window = create_window(window_size, window_sigma, channel) -# self.register_buffer('window', window) -# self.data_range = data_range -# self.use_padding = use_padding -# -# @torch.jit.script_method -# def forward(self, X, Y): -# r = ssim(X, Y, window=self.window, data_range=self.data_range, use_padding=self.use_padding) -# return r[0] -# -# -# class MS_SSIM(torch.jit.ScriptModule): -# __constants__ = ['data_range', 'use_padding', 'eps'] -# -# def __init__(self, window_size=11, window_sigma=1.5, data_range=255., channel=3, use_padding=False, weights=None, -# levels=None, eps=1e-8): -# ''' -# class for ms-ssim -# :param window_size: the size of gauss kernel -# :param window_sigma: sigma of normal distribution -# :param data_range: value range of input images. (usually 1.0 or 255) -# :param channel: input channels -# :param use_padding: padding image before conv -# :param weights: weights for different levels. (default [0.0448, 0.2856, 0.3001, 0.2363, 0.1333]) -# :param levels: number of downsampling -# :param eps: Use for fix a issue. When c = a ** b and a is 0, c.backward() will cause the a.grad become inf. -# ''' -# super().__init__() -# assert window_size % 2 == 1, 'Window size must be odd.' 
-# self.data_range = data_range -# self.use_padding = use_padding -# self.eps = eps -# -# window = create_window(window_size, window_sigma, channel) -# self.register_buffer('window', window) -# -# if weights is None: -# weights = [0.0448, 0.2856, 0.3001, 0.2363, 0.1333] -# weights = torch.tensor(weights, dtype=torch.float) -# -# if levels is not None: -# weights = weights[:levels] -# weights = weights / weights.sum() -# -# self.register_buffer('weights', weights) -# -# @torch.jit.script_method -# def forward(self, X, Y): -# return ms_ssim(X, Y, window=self.window, data_range=self.data_range, weights=self.weights, -# use_padding=self.use_padding, eps=self.eps) -# -# -# if __name__ == '__main__': -# print('Simple Test') -# im = torch.randint(0, 255, (5, 3, 256, 256), dtype=torch.float, device='cuda') -# img1 = im / 255 -# img2 = img1 * 0.5 -# -# losser = SSIM(data_range=1.).cuda() -# loss = losser(img1, img2).mean() -# -# losser2 = MS_SSIM(data_range=1.).cuda() -# loss2 = losser2(img1, img2).mean() -# -# print(loss.item()) -# print(loss2.item()) -# -# if __name__ == '__main__': -# print('Training Test') -# import cv2 -# import torch.optim -# import numpy as np -# import imageio -# import time -# -# out_test_video = False -# # 最好不要直接输出gif图,会非常大,最好先输出mkv文件后用ffmpeg转换到GIF -# video_use_gif = False -# -# im = cv2.imread('test_img1.jpg', 1) -# t_im = torch.from_numpy(im).cuda().permute(2, 0, 1).float()[None] / 255. -# -# if out_test_video: -# if video_use_gif: -# fps = 0.5 -# out_wh = (im.shape[1] // 2, im.shape[0] // 2) -# suffix = '.gif' -# else: -# fps = 5 -# out_wh = (im.shape[1], im.shape[0]) -# suffix = '.mkv' -# video_last_time = time.perf_counter() -# video = imageio.get_writer('ssim_test' + suffix, fps=fps) -# -# # 测试ssim -# print('Training SSIM') -# rand_im = torch.randint_like(t_im, 0, 255, dtype=torch.float32) / 255. -# rand_im.requires_grad = True -# optim = torch.optim.Adam([rand_im], 0.003, eps=1e-8) -# losser = SSIM(data_range=1., channel=t_im.shape[1]).cuda() -# ssim_score = 0 -# while ssim_score < 0.999: -# optim.zero_grad() -# loss = losser(rand_im, t_im) -# (-loss).sum().backward() -# ssim_score = loss.item() -# optim.step() -# r_im = np.transpose(rand_im.detach().cpu().numpy().clip(0, 1) * 255, [0, 2, 3, 1]).astype(np.uint8)[0] -# r_im = cv2.putText(r_im, 'ssim %f' % ssim_score, (10, 30), cv2.FONT_HERSHEY_PLAIN, 2, (255, 0, 0), 2) -# -# if out_test_video: -# if time.perf_counter() - video_last_time > 1. / fps: -# video_last_time = time.perf_counter() -# out_frame = cv2.cvtColor(r_im, cv2.COLOR_BGR2RGB) -# out_frame = cv2.resize(out_frame, out_wh, interpolation=cv2.INTER_AREA) -# if isinstance(out_frame, cv2.UMat): -# out_frame = out_frame.get() -# video.append_data(out_frame) -# -# cv2.imshow('ssim', r_im) -# cv2.setWindowTitle('ssim', 'ssim %f' % ssim_score) -# cv2.waitKey(1) -# -# if out_test_video: -# video.close() -# -# # 测试ms_ssim -# if out_test_video: -# if video_use_gif: -# fps = 0.5 -# out_wh = (im.shape[1] // 2, im.shape[0] // 2) -# suffix = '.gif' -# else: -# fps = 5 -# out_wh = (im.shape[1], im.shape[0]) -# suffix = '.mkv' -# video_last_time = time.perf_counter() -# video = imageio.get_writer('ms_ssim_test' + suffix, fps=fps) -# -# print('Training MS_SSIM') -# rand_im = torch.randint_like(t_im, 0, 255, dtype=torch.float32) / 255. 
-# rand_im.requires_grad = True -# optim = torch.optim.Adam([rand_im], 0.003, eps=1e-8) -# losser = MS_SSIM(data_range=1., channel=t_im.shape[1]).cuda() -# ssim_score = 0 -# while ssim_score < 0.999: -# optim.zero_grad() -# loss = losser(rand_im, t_im) -# (-loss).sum().backward() -# ssim_score = loss.item() -# optim.step() -# r_im = np.transpose(rand_im.detach().cpu().numpy().clip(0, 1) * 255, [0, 2, 3, 1]).astype(np.uint8)[0] -# r_im = cv2.putText(r_im, 'ms_ssim %f' % ssim_score, (10, 30), cv2.FONT_HERSHEY_PLAIN, 2, (255, 0, 0), 2) -# -# if out_test_video: -# if time.perf_counter() - video_last_time > 1. / fps: -# video_last_time = time.perf_counter() -# out_frame = cv2.cvtColor(r_im, cv2.COLOR_BGR2RGB) -# out_frame = cv2.resize(out_frame, out_wh, interpolation=cv2.INTER_AREA) -# if isinstance(out_frame, cv2.UMat): -# out_frame = out_frame.get() -# video.append_data(out_frame) -# -# cv2.imshow('ms_ssim', r_im) -# cv2.setWindowTitle('ms_ssim', 'ms_ssim %f' % ssim_score) -# cv2.waitKey(1) -# -# if out_test_video: -# video.close() - -""" -Adapted from https://github.com/Po-Hsun-Su/pytorch-ssim -""" - -import torch -import torch.nn.functional as F -from torch.autograd import Variable -import numpy as np -from math import exp - - -def gaussian(window_size, sigma): - gauss = torch.Tensor([exp(-(x - window_size // 2) ** 2 / float(2 * sigma ** 2)) for x in range(window_size)]) - return gauss / gauss.sum() - - -def create_window(window_size, channel): - _1D_window = gaussian(window_size, 1.5).unsqueeze(1) - _2D_window = _1D_window.mm(_1D_window.t()).float().unsqueeze(0).unsqueeze(0) - window = Variable(_2D_window.expand(channel, 1, window_size, window_size).contiguous()) - return window - - -def _ssim(img1, img2, window, window_size, channel, size_average=True): - mu1 = F.conv2d(img1, window, padding=window_size // 2, groups=channel) - mu2 = F.conv2d(img2, window, padding=window_size // 2, groups=channel) - - mu1_sq = mu1.pow(2) - mu2_sq = mu2.pow(2) - mu1_mu2 = mu1 * mu2 - - sigma1_sq = F.conv2d(img1 * img1, window, padding=window_size // 2, groups=channel) - mu1_sq - sigma2_sq = F.conv2d(img2 * img2, window, padding=window_size // 2, groups=channel) - mu2_sq - sigma12 = F.conv2d(img1 * img2, window, padding=window_size // 2, groups=channel) - mu1_mu2 - - C1 = 0.01 ** 2 - C2 = 0.03 ** 2 - - ssim_map = ((2 * mu1_mu2 + C1) * (2 * sigma12 + C2)) / ((mu1_sq + mu2_sq + C1) * (sigma1_sq + sigma2_sq + C2)) - - if size_average: - return ssim_map.mean() - else: - return ssim_map.mean(1) - - -class SSIM(torch.nn.Module): - def __init__(self, window_size=11, size_average=True): - super(SSIM, self).__init__() - self.window_size = window_size - self.size_average = size_average - self.channel = 1 - self.window = create_window(window_size, self.channel) - - def forward(self, img1, img2): - (_, channel, _, _) = img1.size() - - if channel == self.channel and self.window.data.type() == img1.data.type(): - window = self.window - else: - window = create_window(self.window_size, channel) - - if img1.is_cuda: - window = window.cuda(img1.get_device()) - window = window.type_as(img1) - - self.window = window - self.channel = channel - - return _ssim(img1, img2, window, self.window_size, channel, self.size_average) - - -window = None - - -def ssim(img1, img2, window_size=11, size_average=True): - (_, channel, _, _) = img1.size() - global window - if window is None: - window = create_window(window_size, channel) - if img1.is_cuda: - window = window.cuda(img1.get_device()) - window = window.type_as(img1) - return 
_ssim(img1, img2, window, window_size, channel, size_average) diff --git a/spaces/S0h9l/Coherent_Speech/README.md b/spaces/S0h9l/Coherent_Speech/README.md deleted file mode 100644 index 6e9e0035d4b028a332b923a933606d1d579ec30c..0000000000000000000000000000000000000000 --- a/spaces/S0h9l/Coherent_Speech/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Coherent Speech -emoji: 🎙️ -colorFrom: gray -colorTo: purple -sdk: gradio -sdk_version: 3.15.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Sapphire-356/Video2MC/joints_detectors/Alphapose/SPPE/src/models/layers/PRM.py b/spaces/Sapphire-356/Video2MC/joints_detectors/Alphapose/SPPE/src/models/layers/PRM.py deleted file mode 100644 index 375bea4e45362ee240632c94ab6bfbf72f324e26..0000000000000000000000000000000000000000 --- a/spaces/Sapphire-356/Video2MC/joints_detectors/Alphapose/SPPE/src/models/layers/PRM.py +++ /dev/null @@ -1,135 +0,0 @@ -import torch.nn as nn -from .util_models import ConcatTable, CaddTable, Identity -import math -from opt import opt - - -class Residual(nn.Module): - def __init__(self, numIn, numOut, inputResH, inputResW, stride=1, - net_type='preact', useConv=False, baseWidth=9, cardinality=4): - super(Residual, self).__init__() - - self.con = ConcatTable([convBlock(numIn, numOut, inputResH, - inputResW, net_type, baseWidth, cardinality, stride), - skipLayer(numIn, numOut, stride, useConv)]) - self.cadd = CaddTable(True) - - def forward(self, x): - out = self.con(x) - out = self.cadd(out) - return out - - -def convBlock(numIn, numOut, inputResH, inputResW, net_type, baseWidth, cardinality, stride): - numIn = int(numIn) - numOut = int(numOut) - - addTable = ConcatTable() - s_list = [] - if net_type != 'no_preact': - s_list.append(nn.BatchNorm2d(numIn)) - s_list.append(nn.ReLU(True)) - - conv1 = nn.Conv2d(numIn, numOut // 2, kernel_size=1) - if opt.init: - nn.init.xavier_normal(conv1.weight, gain=math.sqrt(1 / 2)) - s_list.append(conv1) - - s_list.append(nn.BatchNorm2d(numOut // 2)) - s_list.append(nn.ReLU(True)) - - conv2 = nn.Conv2d(numOut // 2, numOut // 2, - kernel_size=3, stride=stride, padding=1) - if opt.init: - nn.init.xavier_normal(conv2.weight) - s_list.append(conv2) - - s = nn.Sequential(*s_list) - addTable.add(s) - - D = math.floor(numOut // baseWidth) - C = cardinality - s_list = [] - - if net_type != 'no_preact': - s_list.append(nn.BatchNorm2d(numIn)) - s_list.append(nn.ReLU(True)) - - conv1 = nn.Conv2d(numIn, D, kernel_size=1, stride=stride) - if opt.init: - nn.init.xavier_normal(conv1.weight, gain=math.sqrt(1 / C)) - - s_list.append(conv1) - s_list.append(nn.BatchNorm2d(D)) - s_list.append(nn.ReLU(True)) - s_list.append(pyramid(D, C, inputResH, inputResW)) - s_list.append(nn.BatchNorm2d(D)) - s_list.append(nn.ReLU(True)) - - a = nn.Conv2d(D, numOut // 2, kernel_size=1) - a.nBranchIn = C - if opt.init: - nn.init.xavier_normal(a.weight, gain=math.sqrt(1 / C)) - s_list.append(a) - - s = nn.Sequential(*s_list) - addTable.add(s) - - elewiswAdd = nn.Sequential( - addTable, - CaddTable(False) - ) - conv2 = nn.Conv2d(numOut // 2, numOut, kernel_size=1) - if opt.init: - nn.init.xavier_normal(conv2.weight, gain=math.sqrt(1 / 2)) - model = nn.Sequential( - elewiswAdd, - nn.BatchNorm2d(numOut // 2), - nn.ReLU(True), - conv2 - ) - return model - - -def pyramid(D, C, inputResH, inputResW): - pyraTable = ConcatTable() - sc = math.pow(2, 1 / C) - for i in range(C): - scaled = 1 / math.pow(sc, i + 1) - conv1 = 
nn.Conv2d(D, D, kernel_size=3, stride=1, padding=1) - if opt.init: - nn.init.xavier_normal(conv1.weight) - s = nn.Sequential( - nn.FractionalMaxPool2d(2, output_ratio=(scaled, scaled)), - conv1, - nn.UpsamplingBilinear2d(size=(int(inputResH), int(inputResW)))) - pyraTable.add(s) - pyra = nn.Sequential( - pyraTable, - CaddTable(False) - ) - return pyra - - -class skipLayer(nn.Module): - def __init__(self, numIn, numOut, stride, useConv): - super(skipLayer, self).__init__() - self.identity = False - - if numIn == numOut and stride == 1 and not useConv: - self.identity = True - else: - conv1 = nn.Conv2d(numIn, numOut, kernel_size=1, stride=stride) - if opt.init: - nn.init.xavier_normal(conv1.weight, gain=math.sqrt(1 / 2)) - self.m = nn.Sequential( - nn.BatchNorm2d(numIn), - nn.ReLU(True), - conv1 - ) - - def forward(self, x): - if self.identity: - return x - else: - return self.m(x) diff --git a/spaces/ServerX/PorcoDiaz/tensorlowest.py b/spaces/ServerX/PorcoDiaz/tensorlowest.py deleted file mode 100644 index eccd4dbf3494434e59f7defaae6ab91797263b90..0000000000000000000000000000000000000000 --- a/spaces/ServerX/PorcoDiaz/tensorlowest.py +++ /dev/null @@ -1,123 +0,0 @@ -from tensorboard.backend.event_processing import event_accumulator - -import os -from shutil import copy2 -from re import search as RSearch -import pandas as pd -from ast import literal_eval as LEval - -weights_dir = 'weights/' - -def find_biggest_tensorboard(tensordir): - try: - files = [f for f in os.listdir(tensordir) if f.endswith('.0')] - if not files: - print("No files with the '.0' extension found!") - return - - max_size = 0 - biggest_file = "" - - for file in files: - file_path = os.path.join(tensordir, file) - if os.path.isfile(file_path): - file_size = os.path.getsize(file_path) - if file_size > max_size: - max_size = file_size - biggest_file = file - - return biggest_file - - except FileNotFoundError: - print("Couldn't find your model!") - return - -def main(model_name, save_freq, lastmdls): - global lowestval_weight_dir, scl - - tensordir = os.path.join('logs', model_name) - lowestval_weight_dir = os.path.join(tensordir, "lowestvals") - - latest_file = find_biggest_tensorboard(tensordir) - - if latest_file is None: - print("Couldn't find a valid tensorboard file!") - return - - tfile = os.path.join(tensordir, latest_file) - - ea = event_accumulator.EventAccumulator(tfile, - size_guidance={ - event_accumulator.COMPRESSED_HISTOGRAMS: 500, - event_accumulator.IMAGES: 4, - event_accumulator.AUDIO: 4, - event_accumulator.SCALARS: 0, - event_accumulator.HISTOGRAMS: 1, - }) - - ea.Reload() - ea.Tags() - - scl = ea.Scalars('loss/g/total') - - listwstep = {} - - for val in scl: - if (val.step // save_freq) * save_freq in [val.step for val in scl]: - listwstep[float(val.value)] = (val.step // save_freq) * save_freq - - lowest_vals = sorted(listwstep.keys())[:lastmdls] - - sorted_dict = {value: step for value, step in listwstep.items() if value in lowest_vals} - - return sorted_dict - -def selectweights(model_name, file_dict, weights_dir, lowestval_weight_dir): - os.makedirs(lowestval_weight_dir, exist_ok=True) - logdir = [] - files = [] - lbldict = { - 'Values': {}, - 'Names': {} - } - weights_dir_path = os.path.join(weights_dir, "") - low_val_path = os.path.join(os.getcwd(), os.path.join(lowestval_weight_dir, "")) - - try: - file_dict = LEval(file_dict) - except Exception as e: - print(f"Error! {e}") - return f"Couldn't load tensorboard file! 
{e}" - - weights = [f for f in os.scandir(weights_dir)] - for key, value in file_dict.items(): - pattern = fr"^{model_name}_.*_s{value}\.pth$" - matching_weights = [f.name for f in weights if f.is_file() and RSearch(pattern, f.name)] - for weight in matching_weights: - source_path = weights_dir_path + weight - destination_path = os.path.join(lowestval_weight_dir, weight) - - copy2(source_path, destination_path) - - logdir.append(f"File = {weight} Value: {key}, Step: {value}") - - lbldict['Names'][weight] = weight - lbldict['Values'][weight] = key - - files.append(low_val_path + weight) - - print(f"File = {weight} Value: {key}, Step: {value}") - - yield ('\n'.join(logdir), files, pd.DataFrame(lbldict)) - - - return ''.join(logdir), files, pd.DataFrame(lbldict) - - -if __name__ == "__main__": - model = str(input("Enter the name of the model: ")) - sav_freq = int(input("Enter save frequency of the model: ")) - ds = main(model, sav_freq) - - if ds: selectweights(model, ds, weights_dir, lowestval_weight_dir) - \ No newline at end of file diff --git a/spaces/Shad0ws/ImageModelTestEnvironment/README.md b/spaces/Shad0ws/ImageModelTestEnvironment/README.md deleted file mode 100644 index 7a6cea2e3ea5f93119b8c780d7508617a5a4a63f..0000000000000000000000000000000000000000 --- a/spaces/Shad0ws/ImageModelTestEnvironment/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Maximum Multiplier -emoji: 🛕🛕 -colorFrom: green -colorTo: blue -sdk: gradio -sdk_version: 3.15.0 -app_file: app.py -pinned: true -duplicated_from: rankjet/BulkImgVariations ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ShilongLiu/Grounding_DINO_demo/groundingdino/util/box_ops.py b/spaces/ShilongLiu/Grounding_DINO_demo/groundingdino/util/box_ops.py deleted file mode 100644 index 781068d294e576954edb4bd07b6e0f30e4e1bcd9..0000000000000000000000000000000000000000 --- a/spaces/ShilongLiu/Grounding_DINO_demo/groundingdino/util/box_ops.py +++ /dev/null @@ -1,140 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -""" -Utilities for bounding box manipulation and GIoU. 
-""" -import torch -from torchvision.ops.boxes import box_area - - -def box_cxcywh_to_xyxy(x): - x_c, y_c, w, h = x.unbind(-1) - b = [(x_c - 0.5 * w), (y_c - 0.5 * h), (x_c + 0.5 * w), (y_c + 0.5 * h)] - return torch.stack(b, dim=-1) - - -def box_xyxy_to_cxcywh(x): - x0, y0, x1, y1 = x.unbind(-1) - b = [(x0 + x1) / 2, (y0 + y1) / 2, (x1 - x0), (y1 - y0)] - return torch.stack(b, dim=-1) - - -# modified from torchvision to also return the union -def box_iou(boxes1, boxes2): - area1 = box_area(boxes1) - area2 = box_area(boxes2) - - # import ipdb; ipdb.set_trace() - lt = torch.max(boxes1[:, None, :2], boxes2[:, :2]) # [N,M,2] - rb = torch.min(boxes1[:, None, 2:], boxes2[:, 2:]) # [N,M,2] - - wh = (rb - lt).clamp(min=0) # [N,M,2] - inter = wh[:, :, 0] * wh[:, :, 1] # [N,M] - - union = area1[:, None] + area2 - inter - - iou = inter / (union + 1e-6) - return iou, union - - -def generalized_box_iou(boxes1, boxes2): - """ - Generalized IoU from https://giou.stanford.edu/ - - The boxes should be in [x0, y0, x1, y1] format - - Returns a [N, M] pairwise matrix, where N = len(boxes1) - and M = len(boxes2) - """ - # degenerate boxes gives inf / nan results - # so do an early check - assert (boxes1[:, 2:] >= boxes1[:, :2]).all() - assert (boxes2[:, 2:] >= boxes2[:, :2]).all() - # except: - # import ipdb; ipdb.set_trace() - iou, union = box_iou(boxes1, boxes2) - - lt = torch.min(boxes1[:, None, :2], boxes2[:, :2]) - rb = torch.max(boxes1[:, None, 2:], boxes2[:, 2:]) - - wh = (rb - lt).clamp(min=0) # [N,M,2] - area = wh[:, :, 0] * wh[:, :, 1] - - return iou - (area - union) / (area + 1e-6) - - -# modified from torchvision to also return the union -def box_iou_pairwise(boxes1, boxes2): - area1 = box_area(boxes1) - area2 = box_area(boxes2) - - lt = torch.max(boxes1[:, :2], boxes2[:, :2]) # [N,2] - rb = torch.min(boxes1[:, 2:], boxes2[:, 2:]) # [N,2] - - wh = (rb - lt).clamp(min=0) # [N,2] - inter = wh[:, 0] * wh[:, 1] # [N] - - union = area1 + area2 - inter - - iou = inter / union - return iou, union - - -def generalized_box_iou_pairwise(boxes1, boxes2): - """ - Generalized IoU from https://giou.stanford.edu/ - - Input: - - boxes1, boxes2: N,4 - Output: - - giou: N, 4 - """ - # degenerate boxes gives inf / nan results - # so do an early check - assert (boxes1[:, 2:] >= boxes1[:, :2]).all() - assert (boxes2[:, 2:] >= boxes2[:, :2]).all() - assert boxes1.shape == boxes2.shape - iou, union = box_iou_pairwise(boxes1, boxes2) # N, 4 - - lt = torch.min(boxes1[:, :2], boxes2[:, :2]) - rb = torch.max(boxes1[:, 2:], boxes2[:, 2:]) - - wh = (rb - lt).clamp(min=0) # [N,2] - area = wh[:, 0] * wh[:, 1] - - return iou - (area - union) / area - - -def masks_to_boxes(masks): - """Compute the bounding boxes around the provided masks - - The masks should be in format [N, H, W] where N is the number of masks, (H, W) are the spatial dimensions. 
- - Returns a [N, 4] tensors, with the boxes in xyxy format - """ - if masks.numel() == 0: - return torch.zeros((0, 4), device=masks.device) - - h, w = masks.shape[-2:] - - y = torch.arange(0, h, dtype=torch.float) - x = torch.arange(0, w, dtype=torch.float) - y, x = torch.meshgrid(y, x) - - x_mask = masks * x.unsqueeze(0) - x_max = x_mask.flatten(1).max(-1)[0] - x_min = x_mask.masked_fill(~(masks.bool()), 1e8).flatten(1).min(-1)[0] - - y_mask = masks * y.unsqueeze(0) - y_max = y_mask.flatten(1).max(-1)[0] - y_min = y_mask.masked_fill(~(masks.bool()), 1e8).flatten(1).min(-1)[0] - - return torch.stack([x_min, y_min, x_max, y_max], 1) - - -if __name__ == "__main__": - x = torch.rand(5, 4) - y = torch.rand(3, 4) - iou, union = box_iou(x, y) - import ipdb - - ipdb.set_trace() diff --git a/spaces/Shreyas3006/Text-Summarizer-sdp/README.md b/spaces/Shreyas3006/Text-Summarizer-sdp/README.md deleted file mode 100644 index ba5825f953cffde79af227a4cd816a22c37029a0..0000000000000000000000000000000000000000 --- a/spaces/Shreyas3006/Text-Summarizer-sdp/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Text Summarizer -emoji: 🌍 -colorFrom: blue -colorTo: yellow -sdk: streamlit -app_file: app.py -pinned: false ---- - -# Text Summarizer -Text summarizer using Transformers \ No newline at end of file diff --git a/spaces/Silentlin/DiffSinger/modules/parallel_wavegan/models/melgan.py b/spaces/Silentlin/DiffSinger/modules/parallel_wavegan/models/melgan.py deleted file mode 100644 index e021ae4817a8c1c97338e61b00b230c881836fd8..0000000000000000000000000000000000000000 --- a/spaces/Silentlin/DiffSinger/modules/parallel_wavegan/models/melgan.py +++ /dev/null @@ -1,427 +0,0 @@ -# -*- coding: utf-8 -*- - -# Copyright 2020 Tomoki Hayashi -# MIT License (https://opensource.org/licenses/MIT) - -"""MelGAN Modules.""" - -import logging - -import numpy as np -import torch - -from modules.parallel_wavegan.layers import CausalConv1d -from modules.parallel_wavegan.layers import CausalConvTranspose1d -from modules.parallel_wavegan.layers import ResidualStack - - -class MelGANGenerator(torch.nn.Module): - """MelGAN generator module.""" - - def __init__(self, - in_channels=80, - out_channels=1, - kernel_size=7, - channels=512, - bias=True, - upsample_scales=[8, 8, 2, 2], - stack_kernel_size=3, - stacks=3, - nonlinear_activation="LeakyReLU", - nonlinear_activation_params={"negative_slope": 0.2}, - pad="ReflectionPad1d", - pad_params={}, - use_final_nonlinear_activation=True, - use_weight_norm=True, - use_causal_conv=False, - ): - """Initialize MelGANGenerator module. - - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of output channels. - kernel_size (int): Kernel size of initial and final conv layer. - channels (int): Initial number of channels for conv layer. - bias (bool): Whether to add bias parameter in convolution layers. - upsample_scales (list): List of upsampling scales. - stack_kernel_size (int): Kernel size of dilated conv layers in residual stack. - stacks (int): Number of stacks in a single residual stack. - nonlinear_activation (str): Activation function module name. - nonlinear_activation_params (dict): Hyperparameters for activation function. - pad (str): Padding function module name before dilated convolution layer. - pad_params (dict): Hyperparameters for padding function. - use_final_nonlinear_activation (torch.nn.Module): Activation function for the final layer. - use_weight_norm (bool): Whether to use weight norm. 
- If set to true, it will be applied to all of the conv layers. - use_causal_conv (bool): Whether to use causal convolution. - - """ - super(MelGANGenerator, self).__init__() - - # check hyper parameters is valid - assert channels >= np.prod(upsample_scales) - assert channels % (2 ** len(upsample_scales)) == 0 - if not use_causal_conv: - assert (kernel_size - 1) % 2 == 0, "Not support even number kernel size." - - # add initial layer - layers = [] - if not use_causal_conv: - layers += [ - getattr(torch.nn, pad)((kernel_size - 1) // 2, **pad_params), - torch.nn.Conv1d(in_channels, channels, kernel_size, bias=bias), - ] - else: - layers += [ - CausalConv1d(in_channels, channels, kernel_size, - bias=bias, pad=pad, pad_params=pad_params), - ] - - for i, upsample_scale in enumerate(upsample_scales): - # add upsampling layer - layers += [getattr(torch.nn, nonlinear_activation)(**nonlinear_activation_params)] - if not use_causal_conv: - layers += [ - torch.nn.ConvTranspose1d( - channels // (2 ** i), - channels // (2 ** (i + 1)), - upsample_scale * 2, - stride=upsample_scale, - padding=upsample_scale // 2 + upsample_scale % 2, - output_padding=upsample_scale % 2, - bias=bias, - ) - ] - else: - layers += [ - CausalConvTranspose1d( - channels // (2 ** i), - channels // (2 ** (i + 1)), - upsample_scale * 2, - stride=upsample_scale, - bias=bias, - ) - ] - - # add residual stack - for j in range(stacks): - layers += [ - ResidualStack( - kernel_size=stack_kernel_size, - channels=channels // (2 ** (i + 1)), - dilation=stack_kernel_size ** j, - bias=bias, - nonlinear_activation=nonlinear_activation, - nonlinear_activation_params=nonlinear_activation_params, - pad=pad, - pad_params=pad_params, - use_causal_conv=use_causal_conv, - ) - ] - - # add final layer - layers += [getattr(torch.nn, nonlinear_activation)(**nonlinear_activation_params)] - if not use_causal_conv: - layers += [ - getattr(torch.nn, pad)((kernel_size - 1) // 2, **pad_params), - torch.nn.Conv1d(channels // (2 ** (i + 1)), out_channels, kernel_size, bias=bias), - ] - else: - layers += [ - CausalConv1d(channels // (2 ** (i + 1)), out_channels, kernel_size, - bias=bias, pad=pad, pad_params=pad_params), - ] - if use_final_nonlinear_activation: - layers += [torch.nn.Tanh()] - - # define the model as a single function - self.melgan = torch.nn.Sequential(*layers) - - # apply weight norm - if use_weight_norm: - self.apply_weight_norm() - - # reset parameters - self.reset_parameters() - - def forward(self, c): - """Calculate forward propagation. - - Args: - c (Tensor): Input tensor (B, channels, T). - - Returns: - Tensor: Output tensor (B, 1, T ** prod(upsample_scales)). - - """ - return self.melgan(c) - - def remove_weight_norm(self): - """Remove weight normalization module from all of the layers.""" - def _remove_weight_norm(m): - try: - logging.debug(f"Weight norm is removed from {m}.") - torch.nn.utils.remove_weight_norm(m) - except ValueError: # this module didn't have weight norm - return - - self.apply(_remove_weight_norm) - - def apply_weight_norm(self): - """Apply weight normalization module from all of the layers.""" - def _apply_weight_norm(m): - if isinstance(m, torch.nn.Conv1d) or isinstance(m, torch.nn.ConvTranspose1d): - torch.nn.utils.weight_norm(m) - logging.debug(f"Weight norm is applied to {m}.") - - self.apply(_apply_weight_norm) - - def reset_parameters(self): - """Reset parameters. - - This initialization follows official implementation manner. 
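One practical consequence of the defaults above: the generator upsamples by prod(upsample_scales) = 8*8*2*2 = 256, so a mel input of T frames yields T * 256 waveform samples (the `**` in the forward docstring presumably reads as a typo for `*`). The frame count below is assumed, and the commented lines only sketch the intended call; they would need this module and torch importable.

import numpy as np

upsample_scales = [8, 8, 2, 2]              # default scales from __init__ above
hop_size = int(np.prod(upsample_scales))    # 256 samples per mel frame
T = 100                                     # number of mel frames (assumed)
print(hop_size, T * hop_size)               # 256 25600
# generator = MelGANGenerator()
# wav = generator(torch.randn(1, 80, T))    # expected output shape: (1, 1, 25600)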
- https://github.com/descriptinc/melgan-neurips/blob/master/spec2wav/modules.py - - """ - def _reset_parameters(m): - if isinstance(m, torch.nn.Conv1d) or isinstance(m, torch.nn.ConvTranspose1d): - m.weight.data.normal_(0.0, 0.02) - logging.debug(f"Reset parameters in {m}.") - - self.apply(_reset_parameters) - - -class MelGANDiscriminator(torch.nn.Module): - """MelGAN discriminator module.""" - - def __init__(self, - in_channels=1, - out_channels=1, - kernel_sizes=[5, 3], - channels=16, - max_downsample_channels=1024, - bias=True, - downsample_scales=[4, 4, 4, 4], - nonlinear_activation="LeakyReLU", - nonlinear_activation_params={"negative_slope": 0.2}, - pad="ReflectionPad1d", - pad_params={}, - ): - """Initilize MelGAN discriminator module. - - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of output channels. - kernel_sizes (list): List of two kernel sizes. The prod will be used for the first conv layer, - and the first and the second kernel sizes will be used for the last two layers. - For example if kernel_sizes = [5, 3], the first layer kernel size will be 5 * 3 = 15, - the last two layers' kernel size will be 5 and 3, respectively. - channels (int): Initial number of channels for conv layer. - max_downsample_channels (int): Maximum number of channels for downsampling layers. - bias (bool): Whether to add bias parameter in convolution layers. - downsample_scales (list): List of downsampling scales. - nonlinear_activation (str): Activation function module name. - nonlinear_activation_params (dict): Hyperparameters for activation function. - pad (str): Padding function module name before dilated convolution layer. - pad_params (dict): Hyperparameters for padding function. - - """ - super(MelGANDiscriminator, self).__init__() - self.layers = torch.nn.ModuleList() - - # check kernel size is valid - assert len(kernel_sizes) == 2 - assert kernel_sizes[0] % 2 == 1 - assert kernel_sizes[1] % 2 == 1 - - # add first layer - self.layers += [ - torch.nn.Sequential( - getattr(torch.nn, pad)((np.prod(kernel_sizes) - 1) // 2, **pad_params), - torch.nn.Conv1d(in_channels, channels, np.prod(kernel_sizes), bias=bias), - getattr(torch.nn, nonlinear_activation)(**nonlinear_activation_params), - ) - ] - - # add downsample layers - in_chs = channels - for downsample_scale in downsample_scales: - out_chs = min(in_chs * downsample_scale, max_downsample_channels) - self.layers += [ - torch.nn.Sequential( - torch.nn.Conv1d( - in_chs, out_chs, - kernel_size=downsample_scale * 10 + 1, - stride=downsample_scale, - padding=downsample_scale * 5, - groups=in_chs // 4, - bias=bias, - ), - getattr(torch.nn, nonlinear_activation)(**nonlinear_activation_params), - ) - ] - in_chs = out_chs - - # add final layers - out_chs = min(in_chs * 2, max_downsample_channels) - self.layers += [ - torch.nn.Sequential( - torch.nn.Conv1d( - in_chs, out_chs, kernel_sizes[0], - padding=(kernel_sizes[0] - 1) // 2, - bias=bias, - ), - getattr(torch.nn, nonlinear_activation)(**nonlinear_activation_params), - ) - ] - self.layers += [ - torch.nn.Conv1d( - out_chs, out_channels, kernel_sizes[1], - padding=(kernel_sizes[1] - 1) // 2, - bias=bias, - ), - ] - - def forward(self, x): - """Calculate forward propagation. - - Args: - x (Tensor): Input noise signal (B, 1, T). - - Returns: - List: List of output tensors of each layer. 
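To make the kernel-size convention described above concrete, a quick arithmetic check with the defaults (the values below just restate what __init__ computes):

import numpy as np

kernel_sizes = [5, 3]
print(int(np.prod(kernel_sizes)))            # first conv kernel: 5 * 3 = 15
for scale in [4, 4, 4, 4]:                   # default downsample_scales
    print(scale * 10 + 1, scale, scale * 5)  # kernel 41, stride 4, padding 20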
- - """ - outs = [] - for f in self.layers: - x = f(x) - outs += [x] - - return outs - - -class MelGANMultiScaleDiscriminator(torch.nn.Module): - """MelGAN multi-scale discriminator module.""" - - def __init__(self, - in_channels=1, - out_channels=1, - scales=3, - downsample_pooling="AvgPool1d", - # follow the official implementation setting - downsample_pooling_params={ - "kernel_size": 4, - "stride": 2, - "padding": 1, - "count_include_pad": False, - }, - kernel_sizes=[5, 3], - channels=16, - max_downsample_channels=1024, - bias=True, - downsample_scales=[4, 4, 4, 4], - nonlinear_activation="LeakyReLU", - nonlinear_activation_params={"negative_slope": 0.2}, - pad="ReflectionPad1d", - pad_params={}, - use_weight_norm=True, - ): - """Initilize MelGAN multi-scale discriminator module. - - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of output channels. - downsample_pooling (str): Pooling module name for downsampling of the inputs. - downsample_pooling_params (dict): Parameters for the above pooling module. - kernel_sizes (list): List of two kernel sizes. The sum will be used for the first conv layer, - and the first and the second kernel sizes will be used for the last two layers. - channels (int): Initial number of channels for conv layer. - max_downsample_channels (int): Maximum number of channels for downsampling layers. - bias (bool): Whether to add bias parameter in convolution layers. - downsample_scales (list): List of downsampling scales. - nonlinear_activation (str): Activation function module name. - nonlinear_activation_params (dict): Hyperparameters for activation function. - pad (str): Padding function module name before dilated convolution layer. - pad_params (dict): Hyperparameters for padding function. - use_causal_conv (bool): Whether to use causal convolution. - - """ - super(MelGANMultiScaleDiscriminator, self).__init__() - self.discriminators = torch.nn.ModuleList() - - # add discriminators - for _ in range(scales): - self.discriminators += [ - MelGANDiscriminator( - in_channels=in_channels, - out_channels=out_channels, - kernel_sizes=kernel_sizes, - channels=channels, - max_downsample_channels=max_downsample_channels, - bias=bias, - downsample_scales=downsample_scales, - nonlinear_activation=nonlinear_activation, - nonlinear_activation_params=nonlinear_activation_params, - pad=pad, - pad_params=pad_params, - ) - ] - self.pooling = getattr(torch.nn, downsample_pooling)(**downsample_pooling_params) - - # apply weight norm - if use_weight_norm: - self.apply_weight_norm() - - # reset parameters - self.reset_parameters() - - def forward(self, x): - """Calculate forward propagation. - - Args: - x (Tensor): Input noise signal (B, 1, T). - - Returns: - List: List of list of each discriminator outputs, which consists of each layer output tensors. 
- - """ - outs = [] - for f in self.discriminators: - outs += [f(x)] - x = self.pooling(x) - - return outs - - def remove_weight_norm(self): - """Remove weight normalization module from all of the layers.""" - def _remove_weight_norm(m): - try: - logging.debug(f"Weight norm is removed from {m}.") - torch.nn.utils.remove_weight_norm(m) - except ValueError: # this module didn't have weight norm - return - - self.apply(_remove_weight_norm) - - def apply_weight_norm(self): - """Apply weight normalization module from all of the layers.""" - def _apply_weight_norm(m): - if isinstance(m, torch.nn.Conv1d) or isinstance(m, torch.nn.ConvTranspose1d): - torch.nn.utils.weight_norm(m) - logging.debug(f"Weight norm is applied to {m}.") - - self.apply(_apply_weight_norm) - - def reset_parameters(self): - """Reset parameters. - - This initialization follows official implementation manner. - https://github.com/descriptinc/melgan-neurips/blob/master/spec2wav/modules.py - - """ - def _reset_parameters(m): - if isinstance(m, torch.nn.Conv1d) or isinstance(m, torch.nn.ConvTranspose1d): - m.weight.data.normal_(0.0, 0.02) - logging.debug(f"Reset parameters in {m}.") - - self.apply(_reset_parameters) diff --git a/spaces/Silentlin/DiffSinger/utils/pitch_utils.py b/spaces/Silentlin/DiffSinger/utils/pitch_utils.py deleted file mode 100644 index f7fd166abd3a03bac5909e498669b482447435cf..0000000000000000000000000000000000000000 --- a/spaces/Silentlin/DiffSinger/utils/pitch_utils.py +++ /dev/null @@ -1,76 +0,0 @@ -######### -# world -########## -import librosa -import numpy as np -import torch - -gamma = 0 -mcepInput = 3 # 0 for dB, 3 for magnitude -alpha = 0.45 -en_floor = 10 ** (-80 / 20) -FFT_SIZE = 2048 - - -f0_bin = 256 -f0_max = 1100.0 -f0_min = 50.0 -f0_mel_min = 1127 * np.log(1 + f0_min / 700) -f0_mel_max = 1127 * np.log(1 + f0_max / 700) - - -def f0_to_coarse(f0): - is_torch = isinstance(f0, torch.Tensor) - f0_mel = 1127 * (1 + f0 / 700).log() if is_torch else 1127 * np.log(1 + f0 / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * (f0_bin - 2) / (f0_mel_max - f0_mel_min) + 1 - - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > f0_bin - 1] = f0_bin - 1 - f0_coarse = (f0_mel + 0.5).long() if is_torch else np.rint(f0_mel).astype(np.int) - assert f0_coarse.max() <= 255 and f0_coarse.min() >= 1, (f0_coarse.max(), f0_coarse.min()) - return f0_coarse - - -def norm_f0(f0, uv, hparams): - is_torch = isinstance(f0, torch.Tensor) - if hparams['pitch_norm'] == 'standard': - f0 = (f0 - hparams['f0_mean']) / hparams['f0_std'] - if hparams['pitch_norm'] == 'log': - f0 = torch.log2(f0) if is_torch else np.log2(f0) - if uv is not None and hparams['use_uv']: - f0[uv > 0] = 0 - return f0 - - -def norm_interp_f0(f0, hparams): - is_torch = isinstance(f0, torch.Tensor) - if is_torch: - device = f0.device - f0 = f0.data.cpu().numpy() - uv = f0 == 0 - f0 = norm_f0(f0, uv, hparams) - if sum(uv) == len(f0): - f0[uv] = 0 - elif sum(uv) > 0: - f0[uv] = np.interp(np.where(uv)[0], np.where(~uv)[0], f0[~uv]) - uv = torch.FloatTensor(uv) - f0 = torch.FloatTensor(f0) - if is_torch: - f0 = f0.to(device) - return f0, uv - - -def denorm_f0(f0, uv, hparams, pitch_padding=None, min=None, max=None): - if hparams['pitch_norm'] == 'standard': - f0 = f0 * hparams['f0_std'] + hparams['f0_mean'] - if hparams['pitch_norm'] == 'log': - f0 = 2 ** f0 - if min is not None: - f0 = f0.clamp(min=min) - if max is not None: - f0 = f0.clamp(max=max) - if uv is not None and hparams['use_uv']: - f0[uv > 0] = 0 - if pitch_padding is not None: - 
f0[pitch_padding] = 0 - return f0 diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/aiohttp/web_app.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/aiohttp/web_app.py deleted file mode 100644 index 8fd4471d3af019c6e3bd01fcb9838ee99636238e..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/aiohttp/web_app.py +++ /dev/null @@ -1,557 +0,0 @@ -import asyncio -import logging -import warnings -from functools import partial, update_wrapper -from typing import ( - TYPE_CHECKING, - Any, - AsyncIterator, - Awaitable, - Callable, - Dict, - Iterable, - Iterator, - List, - Mapping, - MutableMapping, - Optional, - Sequence, - Tuple, - Type, - Union, - cast, -) - -from aiosignal import Signal -from frozenlist import FrozenList - -from . import hdrs -from .abc import ( - AbstractAccessLogger, - AbstractMatchInfo, - AbstractRouter, - AbstractStreamWriter, -) -from .helpers import DEBUG -from .http_parser import RawRequestMessage -from .log import web_logger -from .streams import StreamReader -from .web_log import AccessLogger -from .web_middlewares import _fix_request_current_app -from .web_protocol import RequestHandler -from .web_request import Request -from .web_response import StreamResponse -from .web_routedef import AbstractRouteDef -from .web_server import Server -from .web_urldispatcher import ( - AbstractResource, - AbstractRoute, - Domain, - MaskDomain, - MatchedSubAppResource, - PrefixedSubAppResource, - UrlDispatcher, -) - -__all__ = ("Application", "CleanupError") - - -if TYPE_CHECKING: # pragma: no cover - from .typedefs import Handler - - _AppSignal = Signal[Callable[["Application"], Awaitable[None]]] - _RespPrepareSignal = Signal[Callable[[Request, StreamResponse], Awaitable[None]]] - _Middleware = Union[ - Callable[[Request, Handler], Awaitable[StreamResponse]], - Callable[["Application", Handler], Awaitable[Handler]], # old-style - ] - _Middlewares = FrozenList[_Middleware] - _MiddlewaresHandlers = Optional[Sequence[Tuple[_Middleware, bool]]] - _Subapps = List["Application"] -else: - # No type checker mode, skip types - _AppSignal = Signal - _RespPrepareSignal = Signal - _Middleware = Callable - _Middlewares = FrozenList - _MiddlewaresHandlers = Optional[Sequence] - _Subapps = List - - -class Application(MutableMapping[str, Any]): - ATTRS = frozenset( - [ - "logger", - "_debug", - "_router", - "_loop", - "_handler_args", - "_middlewares", - "_middlewares_handlers", - "_run_middlewares", - "_state", - "_frozen", - "_pre_frozen", - "_subapps", - "_on_response_prepare", - "_on_startup", - "_on_shutdown", - "_on_cleanup", - "_client_max_size", - "_cleanup_ctx", - ] - ) - - def __init__( - self, - *, - logger: logging.Logger = web_logger, - router: Optional[UrlDispatcher] = None, - middlewares: Iterable[_Middleware] = (), - handler_args: Optional[Mapping[str, Any]] = None, - client_max_size: int = 1024**2, - loop: Optional[asyncio.AbstractEventLoop] = None, - debug: Any = ..., # mypy doesn't support ellipsis - ) -> None: - if router is None: - router = UrlDispatcher() - else: - warnings.warn( - "router argument is deprecated", DeprecationWarning, stacklevel=2 - ) - assert isinstance(router, AbstractRouter), router - - if loop is not None: - warnings.warn( - "loop argument is deprecated", DeprecationWarning, stacklevel=2 - ) - - if debug is not ...: - warnings.warn( - "debug argument is deprecated", DeprecationWarning, stacklevel=2 - ) - self._debug = debug - self._router: UrlDispatcher = router - self._loop = 
loop - self._handler_args = handler_args - self.logger = logger - - self._middlewares: _Middlewares = FrozenList(middlewares) - - # initialized on freezing - self._middlewares_handlers: _MiddlewaresHandlers = None - # initialized on freezing - self._run_middlewares: Optional[bool] = None - - self._state: Dict[str, Any] = {} - self._frozen = False - self._pre_frozen = False - self._subapps: _Subapps = [] - - self._on_response_prepare: _RespPrepareSignal = Signal(self) - self._on_startup: _AppSignal = Signal(self) - self._on_shutdown: _AppSignal = Signal(self) - self._on_cleanup: _AppSignal = Signal(self) - self._cleanup_ctx = CleanupContext() - self._on_startup.append(self._cleanup_ctx._on_startup) - self._on_cleanup.append(self._cleanup_ctx._on_cleanup) - self._client_max_size = client_max_size - - def __init_subclass__(cls: Type["Application"]) -> None: - warnings.warn( - "Inheritance class {} from web.Application " - "is discouraged".format(cls.__name__), - DeprecationWarning, - stacklevel=2, - ) - - if DEBUG: # pragma: no cover - - def __setattr__(self, name: str, val: Any) -> None: - if name not in self.ATTRS: - warnings.warn( - "Setting custom web.Application.{} attribute " - "is discouraged".format(name), - DeprecationWarning, - stacklevel=2, - ) - super().__setattr__(name, val) - - # MutableMapping API - - def __eq__(self, other: object) -> bool: - return self is other - - def __getitem__(self, key: str) -> Any: - return self._state[key] - - def _check_frozen(self) -> None: - if self._frozen: - warnings.warn( - "Changing state of started or joined " "application is deprecated", - DeprecationWarning, - stacklevel=3, - ) - - def __setitem__(self, key: str, value: Any) -> None: - self._check_frozen() - self._state[key] = value - - def __delitem__(self, key: str) -> None: - self._check_frozen() - del self._state[key] - - def __len__(self) -> int: - return len(self._state) - - def __iter__(self) -> Iterator[str]: - return iter(self._state) - - ######## - @property - def loop(self) -> asyncio.AbstractEventLoop: - # Technically the loop can be None - # but we mask it by explicit type cast - # to provide more convinient type annotation - warnings.warn("loop property is deprecated", DeprecationWarning, stacklevel=2) - return cast(asyncio.AbstractEventLoop, self._loop) - - def _set_loop(self, loop: Optional[asyncio.AbstractEventLoop]) -> None: - if loop is None: - loop = asyncio.get_event_loop() - if self._loop is not None and self._loop is not loop: - raise RuntimeError( - "web.Application instance initialized with different loop" - ) - - self._loop = loop - - # set loop debug - if self._debug is ...: - self._debug = loop.get_debug() - - # set loop to sub applications - for subapp in self._subapps: - subapp._set_loop(loop) - - @property - def pre_frozen(self) -> bool: - return self._pre_frozen - - def pre_freeze(self) -> None: - if self._pre_frozen: - return - - self._pre_frozen = True - self._middlewares.freeze() - self._router.freeze() - self._on_response_prepare.freeze() - self._cleanup_ctx.freeze() - self._on_startup.freeze() - self._on_shutdown.freeze() - self._on_cleanup.freeze() - self._middlewares_handlers = tuple(self._prepare_middleware()) - - # If current app and any subapp do not have middlewares avoid run all - # of the code footprint that it implies, which have a middleware - # hardcoded per app that sets up the current_app attribute. If no - # middlewares are configured the handler will receive the proper - # current_app without needing all of this code. 
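- # (Concretely: self._run_middlewares stays False only when neither this application nor any of its sub-applications defines middlewares.)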
- self._run_middlewares = True if self.middlewares else False - - for subapp in self._subapps: - subapp.pre_freeze() - self._run_middlewares = self._run_middlewares or subapp._run_middlewares - - @property - def frozen(self) -> bool: - return self._frozen - - def freeze(self) -> None: - if self._frozen: - return - - self.pre_freeze() - self._frozen = True - for subapp in self._subapps: - subapp.freeze() - - @property - def debug(self) -> bool: - warnings.warn("debug property is deprecated", DeprecationWarning, stacklevel=2) - return self._debug # type: ignore[no-any-return] - - def _reg_subapp_signals(self, subapp: "Application") -> None: - def reg_handler(signame: str) -> None: - subsig = getattr(subapp, signame) - - async def handler(app: "Application") -> None: - await subsig.send(subapp) - - appsig = getattr(self, signame) - appsig.append(handler) - - reg_handler("on_startup") - reg_handler("on_shutdown") - reg_handler("on_cleanup") - - def add_subapp(self, prefix: str, subapp: "Application") -> AbstractResource: - if not isinstance(prefix, str): - raise TypeError("Prefix must be str") - prefix = prefix.rstrip("/") - if not prefix: - raise ValueError("Prefix cannot be empty") - factory = partial(PrefixedSubAppResource, prefix, subapp) - return self._add_subapp(factory, subapp) - - def _add_subapp( - self, resource_factory: Callable[[], AbstractResource], subapp: "Application" - ) -> AbstractResource: - if self.frozen: - raise RuntimeError("Cannot add sub application to frozen application") - if subapp.frozen: - raise RuntimeError("Cannot add frozen application") - resource = resource_factory() - self.router.register_resource(resource) - self._reg_subapp_signals(subapp) - self._subapps.append(subapp) - subapp.pre_freeze() - if self._loop is not None: - subapp._set_loop(self._loop) - return resource - - def add_domain(self, domain: str, subapp: "Application") -> AbstractResource: - if not isinstance(domain, str): - raise TypeError("Domain must be str") - elif "*" in domain: - rule: Domain = MaskDomain(domain) - else: - rule = Domain(domain) - factory = partial(MatchedSubAppResource, rule, subapp) - return self._add_subapp(factory, subapp) - - def add_routes(self, routes: Iterable[AbstractRouteDef]) -> List[AbstractRoute]: - return self.router.add_routes(routes) - - @property - def on_response_prepare(self) -> _RespPrepareSignal: - return self._on_response_prepare - - @property - def on_startup(self) -> _AppSignal: - return self._on_startup - - @property - def on_shutdown(self) -> _AppSignal: - return self._on_shutdown - - @property - def on_cleanup(self) -> _AppSignal: - return self._on_cleanup - - @property - def cleanup_ctx(self) -> "CleanupContext": - return self._cleanup_ctx - - @property - def router(self) -> UrlDispatcher: - return self._router - - @property - def middlewares(self) -> _Middlewares: - return self._middlewares - - def _make_handler( - self, - *, - loop: Optional[asyncio.AbstractEventLoop] = None, - access_log_class: Type[AbstractAccessLogger] = AccessLogger, - **kwargs: Any, - ) -> Server: - - if not issubclass(access_log_class, AbstractAccessLogger): - raise TypeError( - "access_log_class must be subclass of " - "aiohttp.abc.AbstractAccessLogger, got {}".format(access_log_class) - ) - - self._set_loop(loop) - self.freeze() - - kwargs["debug"] = self._debug - kwargs["access_log_class"] = access_log_class - if self._handler_args: - for k, v in self._handler_args.items(): - kwargs[k] = v - - return Server( - self._handle, # type: ignore[arg-type] - 
request_factory=self._make_request, - loop=self._loop, - **kwargs, - ) - - def make_handler( - self, - *, - loop: Optional[asyncio.AbstractEventLoop] = None, - access_log_class: Type[AbstractAccessLogger] = AccessLogger, - **kwargs: Any, - ) -> Server: - - warnings.warn( - "Application.make_handler(...) is deprecated, " "use AppRunner API instead", - DeprecationWarning, - stacklevel=2, - ) - - return self._make_handler( - loop=loop, access_log_class=access_log_class, **kwargs - ) - - async def startup(self) -> None: - """Causes on_startup signal - - Should be called in the event loop along with the request handler. - """ - await self.on_startup.send(self) - - async def shutdown(self) -> None: - """Causes on_shutdown signal - - Should be called before cleanup() - """ - await self.on_shutdown.send(self) - - async def cleanup(self) -> None: - """Causes on_cleanup signal - - Should be called after shutdown() - """ - if self.on_cleanup.frozen: - await self.on_cleanup.send(self) - else: - # If an exception occurs in startup, ensure cleanup contexts are completed. - await self._cleanup_ctx._on_cleanup(self) - - def _make_request( - self, - message: RawRequestMessage, - payload: StreamReader, - protocol: RequestHandler, - writer: AbstractStreamWriter, - task: "asyncio.Task[None]", - _cls: Type[Request] = Request, - ) -> Request: - return _cls( - message, - payload, - protocol, - writer, - task, - self._loop, - client_max_size=self._client_max_size, - ) - - def _prepare_middleware(self) -> Iterator[Tuple[_Middleware, bool]]: - for m in reversed(self._middlewares): - if getattr(m, "__middleware_version__", None) == 1: - yield m, True - else: - warnings.warn( - 'old-style middleware "{!r}" deprecated, ' "see #2252".format(m), - DeprecationWarning, - stacklevel=2, - ) - yield m, False - - yield _fix_request_current_app(self), True - - async def _handle(self, request: Request) -> StreamResponse: - loop = asyncio.get_event_loop() - debug = loop.get_debug() - match_info = await self._router.resolve(request) - if debug: # pragma: no cover - if not isinstance(match_info, AbstractMatchInfo): - raise TypeError( - "match_info should be AbstractMatchInfo " - "instance, not {!r}".format(match_info) - ) - match_info.add_app(self) - - match_info.freeze() - - resp = None - request._match_info = match_info - expect = request.headers.get(hdrs.EXPECT) - if expect: - resp = await match_info.expect_handler(request) - await request.writer.drain() - - if resp is None: - handler = match_info.handler - - if self._run_middlewares: - for app in match_info.apps[::-1]: - for m, new_style in app._middlewares_handlers: # type: ignore[union-attr] # noqa - if new_style: - handler = update_wrapper( - partial(m, handler=handler), handler - ) - else: - handler = await m(app, handler) # type: ignore[arg-type] - - resp = await handler(request) - - return resp - - def __call__(self) -> "Application": - """gunicorn compatibility""" - return self - - def __repr__(self) -> str: - return f"" - - def __bool__(self) -> bool: - return True - - -class CleanupError(RuntimeError): - @property - def exceptions(self) -> List[BaseException]: - return cast(List[BaseException], self.args[1]) - - -if TYPE_CHECKING: # pragma: no cover - _CleanupContextBase = FrozenList[Callable[[Application], AsyncIterator[None]]] -else: - _CleanupContextBase = FrozenList - - -class CleanupContext(_CleanupContextBase): - def __init__(self) -> None: - super().__init__() - self._exits: List[AsyncIterator[None]] = [] - - async def _on_startup(self, app: Application) -> 
None: - for cb in self: - it = cb(app).__aiter__() - await it.__anext__() - self._exits.append(it) - - async def _on_cleanup(self, app: Application) -> None: - errors = [] - for it in reversed(self._exits): - try: - await it.__anext__() - except StopAsyncIteration: - pass - except Exception as exc: - errors.append(exc) - else: - errors.append(RuntimeError(f"{it!r} has more than one 'yield'")) - if errors: - if len(errors) == 1: - raise errors[0] - else: - raise CleanupError("Multiple errors on cleanup stage", errors) diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/charset_normalizer/cli/normalizer.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/charset_normalizer/cli/normalizer.py deleted file mode 100644 index f4bcbaac049b542a004918a0b1499122fcca9cc0..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/charset_normalizer/cli/normalizer.py +++ /dev/null @@ -1,296 +0,0 @@ -import argparse -import sys -from json import dumps -from os.path import abspath, basename, dirname, join, realpath -from platform import python_version -from typing import List, Optional -from unicodedata import unidata_version - -import charset_normalizer.md as md_module -from charset_normalizer import from_fp -from charset_normalizer.models import CliDetectionResult -from charset_normalizer.version import __version__ - - -def query_yes_no(question: str, default: str = "yes") -> bool: - """Ask a yes/no question via input() and return their answer. - - "question" is a string that is presented to the user. - "default" is the presumed answer if the user just hits . - It must be "yes" (the default), "no" or None (meaning - an answer is required of the user). - - The "answer" return value is True for "yes" or False for "no". - - Credit goes to (c) https://stackoverflow.com/questions/3041986/apt-command-line-interface-like-yes-no-input - """ - valid = {"yes": True, "y": True, "ye": True, "no": False, "n": False} - if default is None: - prompt = " [y/n] " - elif default == "yes": - prompt = " [Y/n] " - elif default == "no": - prompt = " [y/N] " - else: - raise ValueError("invalid default answer: '%s'" % default) - - while True: - sys.stdout.write(question + prompt) - choice = input().lower() - if default is not None and choice == "": - return valid[default] - elif choice in valid: - return valid[choice] - else: - sys.stdout.write("Please respond with 'yes' or 'no' " "(or 'y' or 'n').\n") - - -def cli_detect(argv: Optional[List[str]] = None) -> int: - """ - CLI assistant using ARGV and ArgumentParser - :param argv: - :return: 0 if everything is fine, anything else equal trouble - """ - parser = argparse.ArgumentParser( - description="The Real First Universal Charset Detector. " - "Discover originating encoding used on text file. " - "Normalize text to unicode." - ) - - parser.add_argument( - "files", type=argparse.FileType("rb"), nargs="+", help="File(s) to be analysed" - ) - parser.add_argument( - "-v", - "--verbose", - action="store_true", - default=False, - dest="verbose", - help="Display complementary information about file if any. " - "Stdout will contain logs about the detection process.", - ) - parser.add_argument( - "-a", - "--with-alternative", - action="store_true", - default=False, - dest="alternatives", - help="Output complementary possibilities if any. 
Top-level JSON WILL be a list.", - ) - parser.add_argument( - "-n", - "--normalize", - action="store_true", - default=False, - dest="normalize", - help="Permit to normalize input file. If not set, program does not write anything.", - ) - parser.add_argument( - "-m", - "--minimal", - action="store_true", - default=False, - dest="minimal", - help="Only output the charset detected to STDOUT. Disabling JSON output.", - ) - parser.add_argument( - "-r", - "--replace", - action="store_true", - default=False, - dest="replace", - help="Replace file when trying to normalize it instead of creating a new one.", - ) - parser.add_argument( - "-f", - "--force", - action="store_true", - default=False, - dest="force", - help="Replace file without asking if you are sure, use this flag with caution.", - ) - parser.add_argument( - "-t", - "--threshold", - action="store", - default=0.2, - type=float, - dest="threshold", - help="Define a custom maximum amount of chaos allowed in decoded content. 0. <= chaos <= 1.", - ) - parser.add_argument( - "--version", - action="version", - version="Charset-Normalizer {} - Python {} - Unicode {} - SpeedUp {}".format( - __version__, - python_version(), - unidata_version, - "OFF" if md_module.__file__.lower().endswith(".py") else "ON", - ), - help="Show version information and exit.", - ) - - args = parser.parse_args(argv) - - if args.replace is True and args.normalize is False: - print("Use --replace in addition of --normalize only.", file=sys.stderr) - return 1 - - if args.force is True and args.replace is False: - print("Use --force in addition of --replace only.", file=sys.stderr) - return 1 - - if args.threshold < 0.0 or args.threshold > 1.0: - print("--threshold VALUE should be between 0. AND 1.", file=sys.stderr) - return 1 - - x_ = [] - - for my_file in args.files: - matches = from_fp(my_file, threshold=args.threshold, explain=args.verbose) - - best_guess = matches.best() - - if best_guess is None: - print( - 'Unable to identify originating encoding for "{}". {}'.format( - my_file.name, - "Maybe try increasing maximum amount of chaos." 
- if args.threshold < 1.0 - else "", - ), - file=sys.stderr, - ) - x_.append( - CliDetectionResult( - abspath(my_file.name), - None, - [], - [], - "Unknown", - [], - False, - 1.0, - 0.0, - None, - True, - ) - ) - else: - x_.append( - CliDetectionResult( - abspath(my_file.name), - best_guess.encoding, - best_guess.encoding_aliases, - [ - cp - for cp in best_guess.could_be_from_charset - if cp != best_guess.encoding - ], - best_guess.language, - best_guess.alphabets, - best_guess.bom, - best_guess.percent_chaos, - best_guess.percent_coherence, - None, - True, - ) - ) - - if len(matches) > 1 and args.alternatives: - for el in matches: - if el != best_guess: - x_.append( - CliDetectionResult( - abspath(my_file.name), - el.encoding, - el.encoding_aliases, - [ - cp - for cp in el.could_be_from_charset - if cp != el.encoding - ], - el.language, - el.alphabets, - el.bom, - el.percent_chaos, - el.percent_coherence, - None, - False, - ) - ) - - if args.normalize is True: - if best_guess.encoding.startswith("utf") is True: - print( - '"{}" file does not need to be normalized, as it already came from unicode.'.format( - my_file.name - ), - file=sys.stderr, - ) - if my_file.closed is False: - my_file.close() - continue - - dir_path = dirname(realpath(my_file.name)) - file_name = basename(realpath(my_file.name)) - - o_: List[str] = file_name.split(".") - - if args.replace is False: - o_.insert(-1, best_guess.encoding) - if my_file.closed is False: - my_file.close() - elif ( - args.force is False - and query_yes_no( - 'Are you sure to normalize "{}" by replacing it ?'.format( - my_file.name - ), - "no", - ) - is False - ): - if my_file.closed is False: - my_file.close() - continue - - try: - x_[0].unicode_path = join(dir_path, ".".join(o_)) - - with open(x_[0].unicode_path, "w", encoding="utf-8") as fp: - fp.write(str(best_guess)) - except IOError as e: - print(str(e), file=sys.stderr) - if my_file.closed is False: - my_file.close() - return 2 - - if my_file.closed is False: - my_file.close() - - if args.minimal is False: - print( - dumps( - [el.__dict__ for el in x_] if len(x_) > 1 else x_[0].__dict__, - ensure_ascii=True, - indent=4, - ) - ) - else: - for my_file in args.files: - print( - ", ".join( - [ - el.encoding or "undefined" - for el in x_ - if el.path == abspath(my_file.name) - ] - ) - ) - - return 0 - - -if __name__ == "__main__": - cli_detect() diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/pydevd_attach_to_process/_test_attach_to_process.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/pydevd_attach_to_process/_test_attach_to_process.py deleted file mode 100644 index daeee93f471786d2b05c331829afcf0dae1f3fc1..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/pydevd_attach_to_process/_test_attach_to_process.py +++ /dev/null @@ -1,9 +0,0 @@ -import subprocess -import sys -print(sys.executable) - -if __name__ == '__main__': - p = subprocess.Popen([sys.executable, '-u', '_always_live_program.py']) - import attach_pydevd - attach_pydevd.main(attach_pydevd.process_command_line(['--pid', str(p.pid), '--protocol', 'http'])) - p.wait() diff --git a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/tracking/iou_weighted_hungarian_bbox_iou_tracker.py b/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/tracking/iou_weighted_hungarian_bbox_iou_tracker.py deleted file mode 100644 index 
e9b40f8a9c269029e220d5dfa8df1e8372d05007..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/tracking/iou_weighted_hungarian_bbox_iou_tracker.py +++ /dev/null @@ -1,102 +0,0 @@ -#!/usr/bin/env python3 -# Copyright 2004-present Facebook. All Rights Reserved. - -import numpy as np -from typing import List - -from annotator.oneformer.detectron2.config import CfgNode as CfgNode_ -from annotator.oneformer.detectron2.config import configurable - -from .base_tracker import TRACKER_HEADS_REGISTRY -from .vanilla_hungarian_bbox_iou_tracker import VanillaHungarianBBoxIOUTracker - - -@TRACKER_HEADS_REGISTRY.register() -class IOUWeightedHungarianBBoxIOUTracker(VanillaHungarianBBoxIOUTracker): - """ - A tracker using IoU as weight in Hungarian algorithm, also known - as Munkres or Kuhn-Munkres algorithm - """ - - @configurable - def __init__( - self, - *, - video_height: int, - video_width: int, - max_num_instances: int = 200, - max_lost_frame_count: int = 0, - min_box_rel_dim: float = 0.02, - min_instance_period: int = 1, - track_iou_threshold: float = 0.5, - **kwargs, - ): - """ - Args: - video_height: height the video frame - video_width: width of the video frame - max_num_instances: maximum number of id allowed to be tracked - max_lost_frame_count: maximum number of frame an id can lost tracking - exceed this number, an id is considered as lost - forever - min_box_rel_dim: a percentage, smaller than this dimension, a bbox is - removed from tracking - min_instance_period: an instance will be shown after this number of period - since its first showing up in the video - track_iou_threshold: iou threshold, below this number a bbox pair is removed - from tracking - """ - super().__init__( - video_height=video_height, - video_width=video_width, - max_num_instances=max_num_instances, - max_lost_frame_count=max_lost_frame_count, - min_box_rel_dim=min_box_rel_dim, - min_instance_period=min_instance_period, - track_iou_threshold=track_iou_threshold, - ) - - @classmethod - def from_config(cls, cfg: CfgNode_): - """ - Old style initialization using CfgNode - - Args: - cfg: D2 CfgNode, config file - Return: - dictionary storing arguments for __init__ method - """ - assert "VIDEO_HEIGHT" in cfg.TRACKER_HEADS - assert "VIDEO_WIDTH" in cfg.TRACKER_HEADS - video_height = cfg.TRACKER_HEADS.get("VIDEO_HEIGHT") - video_width = cfg.TRACKER_HEADS.get("VIDEO_WIDTH") - max_num_instances = cfg.TRACKER_HEADS.get("MAX_NUM_INSTANCES", 200) - max_lost_frame_count = cfg.TRACKER_HEADS.get("MAX_LOST_FRAME_COUNT", 0) - min_box_rel_dim = cfg.TRACKER_HEADS.get("MIN_BOX_REL_DIM", 0.02) - min_instance_period = cfg.TRACKER_HEADS.get("MIN_INSTANCE_PERIOD", 1) - track_iou_threshold = cfg.TRACKER_HEADS.get("TRACK_IOU_THRESHOLD", 0.5) - return { - "_target_": "detectron2.tracking.iou_weighted_hungarian_bbox_iou_tracker.IOUWeightedHungarianBBoxIOUTracker", # noqa - "video_height": video_height, - "video_width": video_width, - "max_num_instances": max_num_instances, - "max_lost_frame_count": max_lost_frame_count, - "min_box_rel_dim": min_box_rel_dim, - "min_instance_period": min_instance_period, - "track_iou_threshold": track_iou_threshold, - } - - def assign_cost_matrix_values(self, cost_matrix: np.ndarray, bbox_pairs: List) -> np.ndarray: - """ - Based on IoU for each pair of bbox, assign the associated value in cost matrix - - Args: - cost_matrix: np.ndarray, initialized 2D array with target dimensions - bbox_pairs: list of bbox pair, in each pair, iou value is stored - Return: - 
np.ndarray, cost_matrix with assigned values - """ - for pair in bbox_pairs: - # assign (-1 * IoU) for above threshold pairs, algorithms will minimize cost - cost_matrix[pair["idx"]][pair["prev_idx"]] = -1 * pair["IoU"] - return cost_matrix diff --git a/spaces/Superlang/ImageProcessor/annotator/zoe/zoedepth/models/base_models/midas_repo/midas/midas_net_custom.py b/spaces/Superlang/ImageProcessor/annotator/zoe/zoedepth/models/base_models/midas_repo/midas/midas_net_custom.py deleted file mode 100644 index 50e4acb5e53d5fabefe3dde16ab49c33c2b7797c..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/zoe/zoedepth/models/base_models/midas_repo/midas/midas_net_custom.py +++ /dev/null @@ -1,128 +0,0 @@ -"""MidashNet: Network for monocular depth estimation trained by mixing several datasets. -This file contains code that is adapted from -https://github.com/thomasjpfan/pytorch_refinenet/blob/master/pytorch_refinenet/refinenet/refinenet_4cascade.py -""" -import torch -import torch.nn as nn - -from .base_model import BaseModel -from .blocks import FeatureFusionBlock, FeatureFusionBlock_custom, Interpolate, _make_encoder - - -class MidasNet_small(BaseModel): - """Network for monocular depth estimation. - """ - - def __init__(self, path=None, features=64, backbone="efficientnet_lite3", non_negative=True, exportable=True, channels_last=False, align_corners=True, - blocks={'expand': True}): - """Init. - - Args: - path (str, optional): Path to saved model. Defaults to None. - features (int, optional): Number of features. Defaults to 256. - backbone (str, optional): Backbone network for encoder. Defaults to resnet50 - """ - print("Loading weights: ", path) - - super(MidasNet_small, self).__init__() - - use_pretrained = False if path else True - - self.channels_last = channels_last - self.blocks = blocks - self.backbone = backbone - - self.groups = 1 - - features1=features - features2=features - features3=features - features4=features - self.expand = False - if "expand" in self.blocks and self.blocks['expand'] == True: - self.expand = True - features1=features - features2=features*2 - features3=features*4 - features4=features*8 - - self.pretrained, self.scratch = _make_encoder(self.backbone, features, use_pretrained, groups=self.groups, expand=self.expand, exportable=exportable) - - self.scratch.activation = nn.ReLU(False) - - self.scratch.refinenet4 = FeatureFusionBlock_custom(features4, self.scratch.activation, deconv=False, bn=False, expand=self.expand, align_corners=align_corners) - self.scratch.refinenet3 = FeatureFusionBlock_custom(features3, self.scratch.activation, deconv=False, bn=False, expand=self.expand, align_corners=align_corners) - self.scratch.refinenet2 = FeatureFusionBlock_custom(features2, self.scratch.activation, deconv=False, bn=False, expand=self.expand, align_corners=align_corners) - self.scratch.refinenet1 = FeatureFusionBlock_custom(features1, self.scratch.activation, deconv=False, bn=False, align_corners=align_corners) - - - self.scratch.output_conv = nn.Sequential( - nn.Conv2d(features, features//2, kernel_size=3, stride=1, padding=1, groups=self.groups), - Interpolate(scale_factor=2, mode="bilinear"), - nn.Conv2d(features//2, 32, kernel_size=3, stride=1, padding=1), - self.scratch.activation, - nn.Conv2d(32, 1, kernel_size=1, stride=1, padding=0), - nn.ReLU(True) if non_negative else nn.Identity(), - nn.Identity(), - ) - - if path: - self.load(path) - - - def forward(self, x): - """Forward pass. 
- - Args: - x (tensor): input data (image) - - Returns: - tensor: depth - """ - if self.channels_last==True: - print("self.channels_last = ", self.channels_last) - x.contiguous(memory_format=torch.channels_last) - - - layer_1 = self.pretrained.layer1(x) - layer_2 = self.pretrained.layer2(layer_1) - layer_3 = self.pretrained.layer3(layer_2) - layer_4 = self.pretrained.layer4(layer_3) - - layer_1_rn = self.scratch.layer1_rn(layer_1) - layer_2_rn = self.scratch.layer2_rn(layer_2) - layer_3_rn = self.scratch.layer3_rn(layer_3) - layer_4_rn = self.scratch.layer4_rn(layer_4) - - - path_4 = self.scratch.refinenet4(layer_4_rn) - path_3 = self.scratch.refinenet3(path_4, layer_3_rn) - path_2 = self.scratch.refinenet2(path_3, layer_2_rn) - path_1 = self.scratch.refinenet1(path_2, layer_1_rn) - - out = self.scratch.output_conv(path_1) - - return torch.squeeze(out, dim=1) - - - -def fuse_model(m): - prev_previous_type = nn.Identity() - prev_previous_name = '' - previous_type = nn.Identity() - previous_name = '' - for name, module in m.named_modules(): - if prev_previous_type == nn.Conv2d and previous_type == nn.BatchNorm2d and type(module) == nn.ReLU: - # print("FUSED ", prev_previous_name, previous_name, name) - torch.quantization.fuse_modules(m, [prev_previous_name, previous_name, name], inplace=True) - elif prev_previous_type == nn.Conv2d and previous_type == nn.BatchNorm2d: - # print("FUSED ", prev_previous_name, previous_name) - torch.quantization.fuse_modules(m, [prev_previous_name, previous_name], inplace=True) - # elif previous_type == nn.Conv2d and type(module) == nn.ReLU: - # print("FUSED ", previous_name, name) - # torch.quantization.fuse_modules(m, [previous_name, name], inplace=True) - - prev_previous_type = previous_type - prev_previous_name = previous_name - previous_type = type(module) - previous_name = name \ No newline at end of file diff --git a/spaces/TabPFN/TabPFNPrediction/TabPFN/initializers.py b/spaces/TabPFN/TabPFNPrediction/TabPFN/initializers.py deleted file mode 100644 index 4a2de2711a62676223950c35e5ce88cabcb086a0..0000000000000000000000000000000000000000 --- a/spaces/TabPFN/TabPFNPrediction/TabPFN/initializers.py +++ /dev/null @@ -1,9 +0,0 @@ -import torch -from torch import nn - -def get_NormalInitializer(std): - def initializer(m): - if isinstance(m, nn.Linear): - nn.init.normal_(m.weight, 0, std) - nn.init.normal_(m.bias, 0, std) - return initializer \ No newline at end of file diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/command/rotate.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/command/rotate.py deleted file mode 100644 index 74795ba922bb376e24858760e63dc9124ef22b9f..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/command/rotate.py +++ /dev/null @@ -1,64 +0,0 @@ -from distutils.util import convert_path -from distutils import log -from distutils.errors import DistutilsOptionError -import os -import shutil - -from setuptools import Command - - -class rotate(Command): - """Delete older distributions""" - - description = "delete older distributions, keeping N newest files" - user_options = [ - ('match=', 'm', "patterns to match (required)"), - ('dist-dir=', 'd', "directory where the distributions are"), - ('keep=', 'k', "number of matching distributions to keep"), - ] - - boolean_options = [] - - def initialize_options(self): - self.match = None - self.dist_dir = None - self.keep = None - - def 
finalize_options(self): - if self.match is None: - raise DistutilsOptionError( - "Must specify one or more (comma-separated) match patterns " - "(e.g. '.zip' or '.egg')" - ) - if self.keep is None: - raise DistutilsOptionError("Must specify number of files to keep") - try: - self.keep = int(self.keep) - except ValueError as e: - raise DistutilsOptionError("--keep must be an integer") from e - if isinstance(self.match, str): - self.match = [ - convert_path(p.strip()) for p in self.match.split(',') - ] - self.set_undefined_options('bdist', ('dist_dir', 'dist_dir')) - - def run(self): - self.run_command("egg_info") - from glob import glob - - for pattern in self.match: - pattern = self.distribution.get_name() + '*' + pattern - files = glob(os.path.join(self.dist_dir, pattern)) - files = [(os.path.getmtime(f), f) for f in files] - files.sort() - files.reverse() - - log.info("%d file(s) matching %s", len(files), pattern) - files = files[self.keep:] - for (t, f) in files: - log.info("Deleting %s", f) - if not self.dry_run: - if os.path.isdir(f): - shutil.rmtree(f) - else: - os.unlink(f) diff --git a/spaces/ThomasSimonini/Huggy/style.css b/spaces/ThomasSimonini/Huggy/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/ThomasSimonini/Huggy/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/Tinny-Robot/Tinny-Robot-NCAIR-ChatBot/app.py b/spaces/Tinny-Robot/Tinny-Robot-NCAIR-ChatBot/app.py deleted file mode 100644 index 204cdb460bb68b5f3f91bfda52cae05e2848f8ec..0000000000000000000000000000000000000000 --- a/spaces/Tinny-Robot/Tinny-Robot-NCAIR-ChatBot/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/Tinny-Robot/NCAIR-ChatBot").launch() \ No newline at end of file diff --git a/spaces/VasudevaK/Information_Extractor/README.md b/spaces/VasudevaK/Information_Extractor/README.md deleted file mode 100644 index b806d44f9f37fc1d786e2655e0edce08a715258a..0000000000000000000000000000000000000000 --- a/spaces/VasudevaK/Information_Extractor/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Information Extractor -emoji: 🌖 -colorFrom: purple -colorTo: red -sdk: streamlit -sdk_version: 1.10.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Vegecken/sovits4dzl/README.md b/spaces/Vegecken/sovits4dzl/README.md deleted file mode 100644 index 90bf70dbb0d0dde34087cc52b3ca591099dffffd..0000000000000000000000000000000000000000 --- a/spaces/Vegecken/sovits4dzl/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Sovits4 -emoji: 🐨 -colorFrom: gray -colorTo: pink -sdk: gradio -sdk_version: 3.18.0 -app_file: app.py -pinned: false -license: mit -duplicated_from: chilge/sovits4nemo ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Vijish/Crop-CLIP/app.py b/spaces/Vijish/Crop-CLIP/app.py deleted file mode 100644 index 
1e549f630ad5410ee787731796d88f0bb6b054fe..0000000000000000000000000000000000000000 --- a/spaces/Vijish/Crop-CLIP/app.py +++ /dev/null @@ -1,95 +0,0 @@ -import csv -import gradio as gr -import glob -import pprint as pp -from sys import excepthook -from re import T -from urllib.parse import parse_qs, urlparse -import clip -import numpy as np -import requests -import torch -import io - - -from IPython.display import Image, display -from PIL import Image, ImageFont -import os -import cv2 -import torch -import glob - -# Model - -def predict(img,text): - import tempfile - model = torch.hub.load('ultralytics/yolov5', 'yolov5s') - results = model(img) - dirpath = tempfile.mkdtemp() - results.crop(save_dir=dirpath) - path= dirpath+'/crops/**/*.jpg' - txtfiles = [] - for file in glob.glob(path): - txtfiles.append(file) - - from PIL import Image - l = [] - #keyList = list(range(len(txtfiles))) - for filename in glob.glob(path): - foo = Image.open(filename).convert('RGB') - #resized_image = foo.resize((250,250)) - l.append(foo) - - device = "cuda" if torch.cuda.is_available() else "cpu" - model, preprocess = clip.load("ViT-B/32", device=device) - - images = torch.stack([preprocess(im) for im in l]).to(device) - with torch.no_grad(): - image_features = model.encode_image(images) - image_features /= image_features.norm(dim=-1, keepdim=True) - - image_features.cpu().numpy() - - image_mean = torch.tensor([0.48145466, 0.4578275, 0.40821073]) - image_std = torch.tensor([0.26862954, 0.26130258, 0.27577711]) - - images = [preprocess(im) for im in l] - image_input = torch.tensor(np.stack(images)) - image_input -= image_mean[:, None, None] - image_input /= image_std[:, None, None] - with torch.no_grad(): - image_features = model.encode_image(image_input).float() - image_features /= image_features.norm(dim=-1, keepdim=True) - - def get_top_N_semantic_similarity(similarity_list,N): - results = zip(range(len(similarity_list)), similarity_list) - results = sorted(results, key=lambda x: x[1],reverse= True) - top_N_images = [] - scores=[] - for index,score in results[:N]: - scores.append(score) - top_N_images.append(l[index]) - return scores,top_N_images - - #search_query = text - - with torch.no_grad(): - # Encode and normalize the description using CLIP - text_encoded = model.encode_text(clip.tokenize(text).to(device)) - text_encoded /= text_encoded.norm(dim=-1, keepdim=True) - - similarity = text_encoded.cpu().numpy() @ image_features.cpu().numpy().T - similarity = similarity[0] - scores,imgs= get_top_N_semantic_similarity(similarity,N=1) - #print ("scores ",scores) - #ipyplot.plot_images(imgs,img_width=350) - return imgs[0] - -#text = gr.inputs.Textbox(lines=1, label="Text query", placeholder="Introduce the search text...",) -#img = gr.inputs.Image() - -#img = "image" - - - -gr.Interface(predict, ["image", gr.inputs.Textbox(lines=1, label="Text query", placeholder="Type here...",)], outputs="image", title="Crop-CLIP", description ="Search subjects/objects in an image using simple text description and get cropped results.This is done by combining Object detection Yolov5 and OpenAI's CLIP model.").launch(); diff --git a/spaces/Wayben/ChatGPT/modules/overwrites.py b/spaces/Wayben/ChatGPT/modules/overwrites.py deleted file mode 100644 index bfcd4d01b7d7bec1184a8d09113933bca860530b..0000000000000000000000000000000000000000 --- a/spaces/Wayben/ChatGPT/modules/overwrites.py +++ /dev/null @@ -1,56 +0,0 @@ -from __future__ import annotations -import logging - -from llama_index import Prompt -from typing import List, Tuple 
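-# NOTE: the helpers below appear to be drop-in overrides (presumably bound
-# elsewhere in the app): compact_text_chunks mirrors llama_index's prompt
-# chunking helper, postprocess mirrors gradio's Chatbot postprocessing, and
-# reload_javascript patches gradio's TemplateResponse to inject the custom JS
-# read further down in this module.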
-import mdtex2html - -from modules.presets import * -from modules.llama_func import * - - -def compact_text_chunks(self, prompt: Prompt, text_chunks: List[str]) -> List[str]: - logging.debug("Compacting text chunks...🚀🚀🚀") - combined_str = [c.strip() for c in text_chunks if c.strip()] - combined_str = [f"[{index+1}] {c}" for index, c in enumerate(combined_str)] - combined_str = "\n\n".join(combined_str) - # resplit based on self.max_chunk_overlap - text_splitter = self.get_text_splitter_given_prompt(prompt, 1, padding=1) - return text_splitter.split_text(combined_str) - - -def postprocess( - self, y: List[Tuple[str | None, str | None]] -) -> List[Tuple[str | None, str | None]]: - """ - Parameters: - y: List of tuples representing the message and response pairs. Each message and response should be a string, which may be in Markdown format. - Returns: - List of tuples representing the message and response. Each message and response will be a string of HTML. - """ - if y is None or y == []: - return [] - user, bot = y[-1] - if not detect_converted_mark(user): - user = convert_asis(user) - if not detect_converted_mark(bot): - bot = convert_mdtext(bot) - y[-1] = (user, bot) - return y - -with open("./assets/custom.js", "r", encoding="utf-8") as f, open("./assets/Kelpy-Codos.js", "r", encoding="utf-8") as f2: - customJS = f.read() - kelpyCodos = f2.read() - -def reload_javascript(): - print("Reloading javascript...") - js = f'' - def template_response(*args, **kwargs): - res = GradioTemplateResponseOriginal(*args, **kwargs) - res.body = res.body.replace(b'', f'{js}'.encode("utf8")) - res.init_headers() - return res - - gr.routes.templates.TemplateResponse = template_response - -GradioTemplateResponseOriginal = gr.routes.templates.TemplateResponse \ No newline at end of file diff --git a/spaces/Woocy/541GPT/chat_func.py b/spaces/Woocy/541GPT/chat_func.py deleted file mode 100644 index beb166a3deb254201fb2bb63aa7d6c520b838e36..0000000000000000000000000000000000000000 --- a/spaces/Woocy/541GPT/chat_func.py +++ /dev/null @@ -1,491 +0,0 @@ -# -*- coding:utf-8 -*- -from __future__ import annotations -from typing import TYPE_CHECKING, List - -import logging -import json -import os -import requests -import urllib3 - -from tqdm import tqdm -import colorama -from duckduckgo_search import ddg -import asyncio -import aiohttp - -from presets import * -from llama_func import * -from utils import * - -# logging.basicConfig(level=logging.INFO, format="%(asctime)s [%(levelname)s] [%(filename)s:%(lineno)d] %(message)s") - -if TYPE_CHECKING: - from typing import TypedDict - - class DataframeData(TypedDict): - headers: List[str] - data: List[List[str | int | bool]] - - -initial_prompt = "You are a helpful assistant." 
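-# NOTE: the chat history handled below is a list of OpenAI-style chat messages,
-# i.e. dicts of the form {"role": "system" | "user" | "assistant", "content": "..."};
-# construct_system/construct_user (imported from utils) are assumed to build such
-# dicts, and get_response() sends them to API_URL as the "messages" field of a
-# Chat Completions payload.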
-API_URL = "https://api.openai.com/v1/chat/completions" -HISTORY_DIR = "history" -TEMPLATES_DIR = "templates" - -def get_response( - openai_api_key, system_prompt, history, temperature, top_p, stream, selected_model -): - headers = { - "Content-Type": "application/json", - "Authorization": f"Bearer {openai_api_key}", - } - - history = [construct_system(system_prompt), *history] - - payload = { - "model": selected_model, - "messages": history, # [{"role": "user", "content": f"{inputs}"}], - "temperature": temperature, # 1.0, - "top_p": top_p, # 1.0, - "n": 1, - "stream": stream, - "presence_penalty": 0, - "frequency_penalty": 0, - } - if stream: - timeout = timeout_streaming - else: - timeout = timeout_all - - # 获取环境变量中的代理设置 - http_proxy = os.environ.get("HTTP_PROXY") or os.environ.get("http_proxy") - https_proxy = os.environ.get("HTTPS_PROXY") or os.environ.get("https_proxy") - - # 如果存在代理设置,使用它们 - proxies = {} - if http_proxy: - logging.info(f"Using HTTP proxy: {http_proxy}") - proxies["http"] = http_proxy - if https_proxy: - logging.info(f"Using HTTPS proxy: {https_proxy}") - proxies["https"] = https_proxy - - # 如果有代理,使用代理发送请求,否则使用默认设置发送请求 - if proxies: - response = requests.post( - API_URL, - headers=headers, - json=payload, - stream=True, - timeout=timeout, - proxies=proxies, - ) - else: - response = requests.post( - API_URL, - headers=headers, - json=payload, - stream=True, - timeout=timeout, - ) - return response - - -def stream_predict( - openai_api_key, - system_prompt, - history, - inputs, - chatbot, - all_token_counts, - top_p, - temperature, - selected_model, - fake_input=None, - display_append="" -): - def get_return_value(): - return chatbot, history, status_text, all_token_counts - - logging.info("实时回答模式") - partial_words = "" - counter = 0 - status_text = "开始实时传输回答……" - history.append(construct_user(inputs)) - history.append(construct_assistant("")) - if fake_input: - chatbot.append((fake_input, "")) - else: - chatbot.append((inputs, "")) - user_token_count = 0 - if len(all_token_counts) == 0: - system_prompt_token_count = count_token(construct_system(system_prompt)) - user_token_count = ( - count_token(construct_user(inputs)) + system_prompt_token_count - ) - else: - user_token_count = count_token(construct_user(inputs)) - all_token_counts.append(user_token_count) - logging.info(f"输入token计数: {user_token_count}") - yield get_return_value() - try: - response = get_response( - openai_api_key, - system_prompt, - history, - temperature, - top_p, - True, - selected_model, - ) - except requests.exceptions.ConnectTimeout: - status_text = ( - standard_error_msg + connection_timeout_prompt + error_retrieve_prompt - ) - yield get_return_value() - return - except requests.exceptions.ReadTimeout: - status_text = standard_error_msg + read_timeout_prompt + error_retrieve_prompt - yield get_return_value() - return - - yield get_return_value() - error_json_str = "" - - for chunk in tqdm(response.iter_lines()): - if counter == 0: - counter += 1 - continue - counter += 1 - # check whether each line is non-empty - if chunk: - chunk = chunk.decode() - chunklength = len(chunk) - try: - chunk = json.loads(chunk[6:]) - except json.JSONDecodeError: - logging.info(chunk) - error_json_str += chunk - status_text = f"JSON解析错误。请重置对话。收到的内容: {error_json_str}" - yield get_return_value() - continue - # decode each line as response data is in bytes - if chunklength > 6 and "delta" in chunk["choices"][0]: - finish_reason = chunk["choices"][0]["finish_reason"] - status_text = construct_token_message( - 
sum(all_token_counts), stream=True - ) - if finish_reason == "stop": - yield get_return_value() - break - try: - partial_words = ( - partial_words + chunk["choices"][0]["delta"]["content"] - ) - except KeyError: - status_text = ( - standard_error_msg - + "API回复中找不到内容。很可能是Token计数达到上限了。请重置对话。当前Token计数: " - + str(sum(all_token_counts)) - ) - yield get_return_value() - break - history[-1] = construct_assistant(partial_words) - chatbot[-1] = (chatbot[-1][0], partial_words+display_append) - all_token_counts[-1] += 1 - yield get_return_value() - return all_token_counts - -#def main(): - # 调用stream_predict函数获取结果... -# with open("token_counts.txt", "a") as f: -# for token_count in all_token_counts: -# f.write(str(token_count) + "\n") -#if __name__ == '__main__': -# main() - -def predict_all( - openai_api_key, - system_prompt, - history, - inputs, - chatbot, - all_token_counts, - top_p, - temperature, - selected_model, - fake_input=None, - display_append="" -): - logging.info("一次性回答模式") - history.append(construct_user(inputs)) - history.append(construct_assistant("")) - if fake_input: - chatbot.append((fake_input, "")) - else: - chatbot.append((inputs, "")) - all_token_counts.append(count_token(construct_user(inputs))) - try: - response = get_response( - openai_api_key, - system_prompt, - history, - temperature, - top_p, - False, - selected_model, - ) - except requests.exceptions.ConnectTimeout: - status_text = ( - standard_error_msg + connection_timeout_prompt + error_retrieve_prompt - ) - return chatbot, history, status_text, all_token_counts - except requests.exceptions.ProxyError: - status_text = standard_error_msg + proxy_error_prompt + error_retrieve_prompt - return chatbot, history, status_text, all_token_counts - except requests.exceptions.SSLError: - status_text = standard_error_msg + ssl_error_prompt + error_retrieve_prompt - return chatbot, history, status_text, all_token_counts - response = json.loads(response.text) - content = response["choices"][0]["message"]["content"] - history[-1] = construct_assistant(content) - chatbot[-1] = (chatbot[-1][0], content+display_append) - total_token_count = response["usage"]["total_tokens"] - all_token_counts[-1] = total_token_count - sum(all_token_counts) - status_text = construct_token_message(total_token_count) - return chatbot, history, status_text, all_token_counts - - -def predict( - openai_api_key, - system_prompt, - history, - inputs, - chatbot, - all_token_counts, - top_p, - temperature, - stream=False, - selected_model=MODELS[0], - use_websearch=False, - files = None, - should_check_token_count=True, -): # repetition_penalty, top_k - logging.info("输入为:" + colorama.Fore.BLUE + f"{inputs}" + colorama.Style.RESET_ALL) - if files: - msg = "构建索引中……(这可能需要比较久的时间)" - logging.info(msg) - yield chatbot, history, msg, all_token_counts - index = construct_index(openai_api_key, file_src=files) - msg = "索引构建完成,获取回答中……" - yield chatbot, history, msg, all_token_counts - history, chatbot, status_text = chat_ai(openai_api_key, index, inputs, history, chatbot) - yield chatbot, history, status_text, all_token_counts - return - - old_inputs = "" - link_references = [] - if use_websearch: - search_results = ddg(inputs, max_results=5) - old_inputs = inputs - web_results = [] - for idx, result in enumerate(search_results): - logging.info(f"搜索结果{idx + 1}:{result}") - domain_name = urllib3.util.parse_url(result["href"]).host - web_results.append(f'[{idx+1}]"{result["body"]}"\nURL: {result["href"]}') - link_references.append(f"{idx+1}. 
[{domain_name}]({result['href']})\n") - link_references = "\n\n" + "".join(link_references) - inputs = ( - replace_today(WEBSEARCH_PTOMPT_TEMPLATE) - .replace("{query}", inputs) - .replace("{web_results}", "\n\n".join(web_results)) - ) - else: - link_references = "" - - if len(openai_api_key) != 51: - status_text = standard_error_msg + no_apikey_msg - logging.info(status_text) - chatbot.append((inputs, "")) - if len(history) == 0: - history.append(construct_user(inputs)) - history.append("") - all_token_counts.append(0) - else: - history[-2] = construct_user(inputs) - yield chatbot, history, status_text, all_token_counts - return - - yield chatbot, history, "开始生成回答……", all_token_counts - - if stream: - logging.info("使用流式传输") - iter = stream_predict( - openai_api_key, - system_prompt, - history, - inputs, - chatbot, - all_token_counts, - top_p, - temperature, - selected_model, - fake_input=old_inputs, - display_append=link_references - ) - for chatbot, history, status_text, all_token_counts in iter: - yield chatbot, history, status_text, all_token_counts - else: - logging.info("不使用流式传输") - chatbot, history, status_text, all_token_counts = predict_all( - openai_api_key, - system_prompt, - history, - inputs, - chatbot, - all_token_counts, - top_p, - temperature, - selected_model, - fake_input=old_inputs, - display_append=link_references - ) - yield chatbot, history, status_text, all_token_counts - - logging.info(f"传输完毕。当前token计数为{all_token_counts}") - if len(history) > 1 and history[-1]["content"] != inputs: - logging.info( - "回答为:" - + colorama.Fore.BLUE - + f"{history[-1]['content']}" - + colorama.Style.RESET_ALL - + str(os.environ.get('USERNAME')) - ) - - if stream: - max_token = max_token_streaming - else: - max_token = max_token_all - - if sum(all_token_counts) > max_token and should_check_token_count: - status_text = f"精简token中{all_token_counts}/{max_token}" - logging.info(status_text) - yield chatbot, history, status_text, all_token_counts - iter = reduce_token_size( - openai_api_key, - system_prompt, - history, - chatbot, - all_token_counts, - top_p, - temperature, - max_token//2, - selected_model=selected_model, - ) - for chatbot, history, status_text, all_token_counts in iter: - status_text = f"Token 达到上限,已自动降低Token计数至 {status_text}" - yield chatbot, history, status_text, all_token_counts - - -def retry( - openai_api_key, - system_prompt, - history, - chatbot, - token_count, - top_p, - temperature, - stream=False, - selected_model=MODELS[0], -): - logging.info("重试中……") - if len(history) == 0: - yield chatbot, history, f"{standard_error_msg}上下文是空的", token_count - return - history.pop() - inputs = history.pop()["content"] - token_count.pop() - iter = predict( - openai_api_key, - system_prompt, - history, - inputs, - chatbot, - token_count, - top_p, - temperature, - stream=stream, - selected_model=selected_model, - ) - logging.info("重试中……") - for x in iter: - yield x - logging.info("重试完毕") - - -def reduce_token_size( - openai_api_key, - system_prompt, - history, - chatbot, - token_count, - top_p, - temperature, - max_token_count, - selected_model=MODELS[0], -): - logging.info("开始减少token数量……") - iter = predict( - openai_api_key, - system_prompt, - history, - summarize_prompt, - chatbot, - token_count, - top_p, - temperature, - selected_model=selected_model, - should_check_token_count=False, - ) - logging.info(f"chatbot: {chatbot}") - flag = False - for chatbot, history, status_text, previous_token_count in iter: - num_chat = find_n(previous_token_count, max_token_count) - if flag: - 
chatbot = chatbot[:-1] - flag = True - history = history[-2*num_chat:] if num_chat > 0 else [] - token_count = previous_token_count[-num_chat:] if num_chat > 0 else [] - msg = f"保留了最近{num_chat}轮对话" - yield chatbot, history, msg + "," + construct_token_message( - sum(token_count) if len(token_count) > 0 else 0, - ), token_count - logging.info(msg) - logging.info("减少token数量完毕") - - - -# 获取 Hugging Face 的 IP 地址 -hf_ip = os.getenv("HF_IP_ADDRESS") - -# 获取 Hugging Face 的用户名 -hf_username = os.getenv("HF_USERNAME") - -user_id = os.environ.get("USERNAME") - -print(user_id) -print(f"Hugging Face IP address: {hf_ip}") -print(f"Hugging Face username: {hf_username}") -logging.info(f"Hugging Face IP address: {hf_ip}") -logging.info(f"Hugging Face username: {hf_username}") -# 将获取到的信息写入txt文件 -#all_token_counts = 10 - -#def write_to_file(user_id, all_token_counts): -# log_file = "user_info.txt" -# log_str = f"{user_id}\t{all_token_counts}\n" -# with open(log_file, 'a') as f: -# f.write(log_str) - -#write_to_file(user_id,all_token_counts) \ No newline at end of file diff --git a/spaces/Wrathless/Dkrotzer-MusicalMagic/tests/data/test_audio_utils.py b/spaces/Wrathless/Dkrotzer-MusicalMagic/tests/data/test_audio_utils.py deleted file mode 100644 index 0480671bb17281d61ce02bce6373a5ccec89fece..0000000000000000000000000000000000000000 --- a/spaces/Wrathless/Dkrotzer-MusicalMagic/tests/data/test_audio_utils.py +++ /dev/null @@ -1,110 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import julius -import torch -import pytest - -from audiocraft.data.audio_utils import ( - _clip_wav, - convert_audio_channels, - convert_audio, - normalize_audio -) -from ..common_utils import get_batch_white_noise - - -class TestConvertAudioChannels: - - def test_convert_audio_channels_downmix(self): - b, c, t = 2, 3, 100 - audio = get_batch_white_noise(b, c, t) - mixed = convert_audio_channels(audio, channels=2) - assert list(mixed.shape) == [b, 2, t] - - def test_convert_audio_channels_nochange(self): - b, c, t = 2, 3, 100 - audio = get_batch_white_noise(b, c, t) - mixed = convert_audio_channels(audio, channels=c) - assert list(mixed.shape) == list(audio.shape) - - def test_convert_audio_channels_upmix(self): - b, c, t = 2, 1, 100 - audio = get_batch_white_noise(b, c, t) - mixed = convert_audio_channels(audio, channels=3) - assert list(mixed.shape) == [b, 3, t] - - def test_convert_audio_channels_upmix_error(self): - b, c, t = 2, 2, 100 - audio = get_batch_white_noise(b, c, t) - with pytest.raises(ValueError): - convert_audio_channels(audio, channels=3) - - -class TestConvertAudio: - - def test_convert_audio_channels_downmix(self): - b, c, dur = 2, 3, 4. - sr = 128 - audio = get_batch_white_noise(b, c, int(sr * dur)) - out = convert_audio(audio, from_rate=sr, to_rate=sr, to_channels=2) - assert list(out.shape) == [audio.shape[0], 2, audio.shape[-1]] - - def test_convert_audio_channels_upmix(self): - b, c, dur = 2, 1, 4. - sr = 128 - audio = get_batch_white_noise(b, c, int(sr * dur)) - out = convert_audio(audio, from_rate=sr, to_rate=sr, to_channels=3) - assert list(out.shape) == [audio.shape[0], 3, audio.shape[-1]] - - def test_convert_audio_upsample(self): - b, c, dur = 2, 1, 4. 
- sr = 2 - new_sr = 3 - audio = get_batch_white_noise(b, c, int(sr * dur)) - out = convert_audio(audio, from_rate=sr, to_rate=new_sr, to_channels=c) - out_j = julius.resample.resample_frac(audio, old_sr=sr, new_sr=new_sr) - assert torch.allclose(out, out_j) - - def test_convert_audio_resample(self): - b, c, dur = 2, 1, 4. - sr = 3 - new_sr = 2 - audio = get_batch_white_noise(b, c, int(sr * dur)) - out = convert_audio(audio, from_rate=sr, to_rate=new_sr, to_channels=c) - out_j = julius.resample.resample_frac(audio, old_sr=sr, new_sr=new_sr) - assert torch.allclose(out, out_j) - - -class TestNormalizeAudio: - - def test_clip_wav(self): - b, c, dur = 2, 1, 4. - sr = 3 - audio = 10.0 * get_batch_white_noise(b, c, int(sr * dur)) - _clip_wav(audio) - assert audio.abs().max() <= 1 - - def test_normalize_audio_clip(self): - b, c, dur = 2, 1, 4. - sr = 3 - audio = 10.0 * get_batch_white_noise(b, c, int(sr * dur)) - norm_audio = normalize_audio(audio, strategy='clip') - assert norm_audio.abs().max() <= 1 - - def test_normalize_audio_rms(self): - b, c, dur = 2, 1, 4. - sr = 3 - audio = 10.0 * get_batch_white_noise(b, c, int(sr * dur)) - norm_audio = normalize_audio(audio, strategy='rms') - assert norm_audio.abs().max() <= 1 - - def test_normalize_audio_peak(self): - b, c, dur = 2, 1, 4. - sr = 3 - audio = 10.0 * get_batch_white_noise(b, c, int(sr * dur)) - norm_audio = normalize_audio(audio, strategy='peak') - assert norm_audio.abs().max() <= 1 diff --git a/spaces/XingHe0127/Chatbot/modules/webui_locale.py b/spaces/XingHe0127/Chatbot/modules/webui_locale.py deleted file mode 100644 index 97a35b00eed94187ded21b1f6176bb66b287f7d4..0000000000000000000000000000000000000000 --- a/spaces/XingHe0127/Chatbot/modules/webui_locale.py +++ /dev/null @@ -1,25 +0,0 @@ -import os -import locale -import commentjson as json - -class I18nAuto: - def __init__(self): - if os.path.exists("config.json"): - with open("config.json", "r", encoding='utf-8') as f: - config = json.load(f) - else: - config = {} - language = config.get("language", "auto") - if language == "auto": - language = locale.getdefaultlocale()[0] # get the language code of the system (ex. 
zh_CN) - self.language_map = {} - self.file_is_exists = os.path.isfile(f"./locale/{language}.json") - if self.file_is_exists: - with open(f"./locale/{language}.json", "r", encoding="utf-8") as f: - self.language_map.update(json.load(f)) - - def __call__(self, key): - if self.file_is_exists and key in self.language_map: - return self.language_map[key] - else: - return key diff --git a/spaces/XzJosh/Eileen-Bert-VITS2/utils.py b/spaces/XzJosh/Eileen-Bert-VITS2/utils.py deleted file mode 100644 index c6aa6cfc64c33e2eed33e9845239e831fc1c4a1a..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Eileen-Bert-VITS2/utils.py +++ /dev/null @@ -1,293 +0,0 @@ -import os -import glob -import sys -import argparse -import logging -import json -import subprocess -import numpy as np -from scipy.io.wavfile import read -import torch - -MATPLOTLIB_FLAG = False - -logging.basicConfig(stream=sys.stdout, level=logging.DEBUG) -logger = logging - - -def load_checkpoint(checkpoint_path, model, optimizer=None, skip_optimizer=False): - assert os.path.isfile(checkpoint_path) - checkpoint_dict = torch.load(checkpoint_path, map_location='cpu') - iteration = checkpoint_dict['iteration'] - learning_rate = checkpoint_dict['learning_rate'] - if optimizer is not None and not skip_optimizer and checkpoint_dict['optimizer'] is not None: - optimizer.load_state_dict(checkpoint_dict['optimizer']) - elif optimizer is None and not skip_optimizer: - #else: #Disable this line if Infer ,and enable the line upper - new_opt_dict = optimizer.state_dict() - new_opt_dict_params = new_opt_dict['param_groups'][0]['params'] - new_opt_dict['param_groups'] = checkpoint_dict['optimizer']['param_groups'] - new_opt_dict['param_groups'][0]['params'] = new_opt_dict_params - optimizer.load_state_dict(new_opt_dict) - saved_state_dict = checkpoint_dict['model'] - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - new_state_dict = {} - for k, v in state_dict.items(): - try: - #assert "emb_g" not in k - # print("load", k) - new_state_dict[k] = saved_state_dict[k] - assert saved_state_dict[k].shape == v.shape, (saved_state_dict[k].shape, v.shape) - except: - print("error, %s is not in the checkpoint" % k) - new_state_dict[k] = v - if hasattr(model, 'module'): - model.module.load_state_dict(new_state_dict, strict=False) - else: - model.load_state_dict(new_state_dict, strict=False) - print("load ") - logger.info("Loaded checkpoint '{}' (iteration {})".format( - checkpoint_path, iteration)) - return model, optimizer, learning_rate, iteration - - -def save_checkpoint(model, optimizer, learning_rate, iteration, checkpoint_path): - logger.info("Saving model and optimizer state at iteration {} to {}".format( - iteration, checkpoint_path)) - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - torch.save({'model': state_dict, - 'iteration': iteration, - 'optimizer': optimizer.state_dict(), - 'learning_rate': learning_rate}, checkpoint_path) - - -def summarize(writer, global_step, scalars={}, histograms={}, images={}, audios={}, audio_sampling_rate=22050): - for k, v in scalars.items(): - writer.add_scalar(k, v, global_step) - for k, v in histograms.items(): - writer.add_histogram(k, v, global_step) - for k, v in images.items(): - writer.add_image(k, v, global_step, dataformats='HWC') - for k, v in audios.items(): - writer.add_audio(k, v, global_step, audio_sampling_rate) - - -def latest_checkpoint_path(dir_path, regex="G_*.pth"): - 
f_list = glob.glob(os.path.join(dir_path, regex)) - f_list.sort(key=lambda f: int("".join(filter(str.isdigit, f)))) - x = f_list[-1] - print(x) - return x - - -def plot_spectrogram_to_numpy(spectrogram): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(10, 2)) - im = ax.imshow(spectrogram, aspect="auto", origin="lower", - interpolation='none') - plt.colorbar(im, ax=ax) - plt.xlabel("Frames") - plt.ylabel("Channels") - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def plot_alignment_to_numpy(alignment, info=None): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(6, 4)) - im = ax.imshow(alignment.transpose(), aspect='auto', origin='lower', - interpolation='none') - fig.colorbar(im, ax=ax) - xlabel = 'Decoder timestep' - if info is not None: - xlabel += '\n\n' + info - plt.xlabel(xlabel) - plt.ylabel('Encoder timestep') - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def load_wav_to_torch(full_path): - sampling_rate, data = read(full_path) - return torch.FloatTensor(data.astype(np.float32)), sampling_rate - - -def load_filepaths_and_text(filename, split="|"): - with open(filename, encoding='utf-8') as f: - filepaths_and_text = [line.strip().split(split) for line in f] - return filepaths_and_text - - -def get_hparams(init=True): - parser = argparse.ArgumentParser() - parser.add_argument('-c', '--config', type=str, default="./configs/base.json", - help='JSON file for configuration') - parser.add_argument('-m', '--model', type=str, default="./OUTPUT_MODEL", - help='Model name') - parser.add_argument('--cont', dest='cont', action="store_true", default=False, help="whether to continue training on the latest checkpoint") - - args = parser.parse_args() - model_dir = os.path.join("./logs", args.model) - - if not os.path.exists(model_dir): - os.makedirs(model_dir) - - config_path = args.config - config_save_path = os.path.join(model_dir, "config.json") - if init: - with open(config_path, "r") as f: - data = f.read() - with open(config_save_path, "w") as f: - f.write(data) - else: - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - hparams.cont = args.cont - return hparams - - -def clean_checkpoints(path_to_models='logs/44k/', n_ckpts_to_keep=2, sort_by_time=True): - """Freeing up space by deleting saved ckpts - - Arguments: - path_to_models -- Path to the model directory - n_ckpts_to_keep -- Number of ckpts to keep, excluding G_0.pth and D_0.pth - sort_by_time -- True -> chronologically delete ckpts - False -> lexicographically delete ckpts - """ - import re - ckpts_files = [f for f in os.listdir(path_to_models) if os.path.isfile(os.path.join(path_to_models, f))] - name_key = (lambda _f: 
int(re.compile('._(\d+)\.pth').match(_f).group(1))) - time_key = (lambda _f: os.path.getmtime(os.path.join(path_to_models, _f))) - sort_key = time_key if sort_by_time else name_key - x_sorted = lambda _x: sorted([f for f in ckpts_files if f.startswith(_x) and not f.endswith('_0.pth')], - key=sort_key) - to_del = [os.path.join(path_to_models, fn) for fn in - (x_sorted('G')[:-n_ckpts_to_keep] + x_sorted('D')[:-n_ckpts_to_keep])] - del_info = lambda fn: logger.info(f".. Free up space by deleting ckpt {fn}") - del_routine = lambda x: [os.remove(x), del_info(x)] - rs = [del_routine(fn) for fn in to_del] - -def get_hparams_from_dir(model_dir): - config_save_path = os.path.join(model_dir, "config.json") - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_file(config_path): - with open(config_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - return hparams - - -def check_git_hash(model_dir): - source_dir = os.path.dirname(os.path.realpath(__file__)) - if not os.path.exists(os.path.join(source_dir, ".git")): - logger.warn("{} is not a git repository, therefore hash value comparison will be ignored.".format( - source_dir - )) - return - - cur_hash = subprocess.getoutput("git rev-parse HEAD") - - path = os.path.join(model_dir, "githash") - if os.path.exists(path): - saved_hash = open(path).read() - if saved_hash != cur_hash: - logger.warn("git hash values are different. {}(saved) != {}(current)".format( - saved_hash[:8], cur_hash[:8])) - else: - open(path, "w").write(cur_hash) - - -def get_logger(model_dir, filename="train.log"): - global logger - logger = logging.getLogger(os.path.basename(model_dir)) - logger.setLevel(logging.DEBUG) - - formatter = logging.Formatter("%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s") - if not os.path.exists(model_dir): - os.makedirs(model_dir) - h = logging.FileHandler(os.path.join(model_dir, filename)) - h.setLevel(logging.DEBUG) - h.setFormatter(formatter) - logger.addHandler(h) - return logger - - -class HParams(): - def __init__(self, **kwargs): - for k, v in kwargs.items(): - if type(v) == dict: - v = HParams(**v) - self[k] = v - - def keys(self): - return self.__dict__.keys() - - def items(self): - return self.__dict__.items() - - def values(self): - return self.__dict__.values() - - def __len__(self): - return len(self.__dict__) - - def __getitem__(self, key): - return getattr(self, key) - - def __setitem__(self, key, value): - return setattr(self, key, value) - - def __contains__(self, key): - return key in self.__dict__ - - def __repr__(self): - return self.__dict__.__repr__() diff --git a/spaces/XzJosh/LAPLACE-Bert-VITS2/train_ms.py b/spaces/XzJosh/LAPLACE-Bert-VITS2/train_ms.py deleted file mode 100644 index 5d109003d40497ea4493e7c73f47c1eb7370a81e..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/LAPLACE-Bert-VITS2/train_ms.py +++ /dev/null @@ -1,402 +0,0 @@ -import os -import json -import argparse -import itertools -import math -import torch -import shutil -from torch import nn, optim -from torch.nn import functional as F -from torch.utils.data import DataLoader -from torch.utils.tensorboard import SummaryWriter -import torch.multiprocessing as mp -import torch.distributed as dist -from torch.nn.parallel import DistributedDataParallel as DDP -from torch.cuda.amp import autocast, GradScaler -from tqdm import tqdm -import logging 
-logging.getLogger('numba').setLevel(logging.WARNING) -import commons -import utils -from data_utils import ( - TextAudioSpeakerLoader, - TextAudioSpeakerCollate, - DistributedBucketSampler -) -from models import ( - SynthesizerTrn, - MultiPeriodDiscriminator, - DurationDiscriminator, -) -from losses import ( - generator_loss, - discriminator_loss, - feature_loss, - kl_loss -) -from mel_processing import mel_spectrogram_torch, spec_to_mel_torch -from text.symbols import symbols - -torch.backends.cudnn.benchmark = True -torch.backends.cuda.matmul.allow_tf32 = True -torch.backends.cudnn.allow_tf32 = True -torch.set_float32_matmul_precision('medium') -global_step = 0 - - -def main(): - """Assume Single Node Multi GPUs Training Only""" - assert torch.cuda.is_available(), "CPU training is not allowed." - - n_gpus = torch.cuda.device_count() - os.environ['MASTER_ADDR'] = 'localhost' - os.environ['MASTER_PORT'] = '65280' - - hps = utils.get_hparams() - if not hps.cont: - shutil.copy('./pretrained_models/D_0.pth','./logs/OUTPUT_MODEL/D_0.pth') - shutil.copy('./pretrained_models/G_0.pth','./logs/OUTPUT_MODEL/G_0.pth') - shutil.copy('./pretrained_models/DUR_0.pth','./logs/OUTPUT_MODEL/DUR_0.pth') - mp.spawn(run, nprocs=n_gpus, args=(n_gpus, hps,)) - - -def run(rank, n_gpus, hps): - global global_step - if rank == 0: - logger = utils.get_logger(hps.model_dir) - logger.info(hps) - utils.check_git_hash(hps.model_dir) - writer = SummaryWriter(log_dir=hps.model_dir) - writer_eval = SummaryWriter(log_dir=os.path.join(hps.model_dir, "eval")) - - dist.init_process_group(backend= 'gloo' if os.name == 'nt' else 'nccl', init_method='env://', world_size=n_gpus, rank=rank) - torch.manual_seed(hps.train.seed) - torch.cuda.set_device(rank) - - train_dataset = TextAudioSpeakerLoader(hps.data.training_files, hps.data) - train_sampler = DistributedBucketSampler( - train_dataset, - hps.train.batch_size, - [32, 300, 400, 500, 600, 700, 800, 900, 1000], - num_replicas=n_gpus, - rank=rank, - shuffle=True) - collate_fn = TextAudioSpeakerCollate() - train_loader = DataLoader(train_dataset, num_workers=2, shuffle=False, pin_memory=True, - collate_fn=collate_fn, batch_sampler=train_sampler) - if rank == 0: - eval_dataset = TextAudioSpeakerLoader(hps.data.validation_files, hps.data) - eval_loader = DataLoader(eval_dataset, num_workers=0, shuffle=False, - batch_size=1, pin_memory=True, - drop_last=False, collate_fn=collate_fn) - if "use_noise_scaled_mas" in hps.model.keys() and hps.model.use_noise_scaled_mas == True: - print("Using noise scaled MAS for VITS2") - use_noise_scaled_mas = True - mas_noise_scale_initial = 0.01 - noise_scale_delta = 2e-6 - else: - print("Using normal MAS for VITS1") - use_noise_scaled_mas = False - mas_noise_scale_initial = 0.0 - noise_scale_delta = 0.0 - if "use_duration_discriminator" in hps.model.keys() and hps.model.use_duration_discriminator == True: - print("Using duration discriminator for VITS2") - use_duration_discriminator = True - net_dur_disc = DurationDiscriminator( - hps.model.hidden_channels, - hps.model.hidden_channels, - 3, - 0.1, - gin_channels=hps.model.gin_channels if hps.data.n_speakers != 0 else 0, - ).cuda(rank) - if "use_spk_conditioned_encoder" in hps.model.keys() and hps.model.use_spk_conditioned_encoder == True: - if hps.data.n_speakers == 0: - raise ValueError("n_speakers must be > 0 when using spk conditioned encoder to train multi-speaker model") - use_spk_conditioned_encoder = True - else: - print("Using normal encoder for VITS1") - use_spk_conditioned_encoder = False 
- - net_g = SynthesizerTrn( - len(symbols), - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - n_speakers=hps.data.n_speakers, - mas_noise_scale_initial = mas_noise_scale_initial, - noise_scale_delta = noise_scale_delta, - **hps.model).cuda(rank) - - freeze_enc = getattr(hps.model, "freeze_enc", False) - if freeze_enc: - print("freeze encoder !!!") - for param in net_g.enc_p.parameters(): - param.requires_grad = False - - net_d = MultiPeriodDiscriminator(hps.model.use_spectral_norm).cuda(rank) - optim_g = torch.optim.AdamW( - filter(lambda p: p.requires_grad, net_g.parameters()), - hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps) - optim_d = torch.optim.AdamW( - net_d.parameters(), - hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps) - if net_dur_disc is not None: - optim_dur_disc = torch.optim.AdamW( - net_dur_disc.parameters(), - hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps) - else: - optim_dur_disc = None - net_g = DDP(net_g, device_ids=[rank], find_unused_parameters=True) - net_d = DDP(net_d, device_ids=[rank], find_unused_parameters=True) - if net_dur_disc is not None: - net_dur_disc = DDP(net_dur_disc, device_ids=[rank], find_unused_parameters=True) - - pretrain_dir = None - if pretrain_dir is None: - try: - if net_dur_disc is not None: - _, optim_dur_disc, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "DUR_*.pth"), net_dur_disc, optim_dur_disc, skip_optimizer=not hps.cont) - _, optim_g, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "G_*.pth"), net_g, - optim_g, skip_optimizer=not hps.cont) - _, optim_d, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "D_*.pth"), net_d, - optim_d, skip_optimizer=not hps.cont) - - epoch_str = max(epoch_str, 1) - global_step = (epoch_str - 1) * len(train_loader) - except Exception as e: - print(e) - epoch_str = 1 - global_step = 0 - else: - _, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(pretrain_dir, "G_*.pth"), net_g, - optim_g, True) - _, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(pretrain_dir, "D_*.pth"), net_d, - optim_d, True) - - - - scheduler_g = torch.optim.lr_scheduler.ExponentialLR(optim_g, gamma=hps.train.lr_decay, last_epoch=epoch_str - 2) - scheduler_d = torch.optim.lr_scheduler.ExponentialLR(optim_d, gamma=hps.train.lr_decay, last_epoch=epoch_str - 2) - if net_dur_disc is not None: - scheduler_dur_disc = torch.optim.lr_scheduler.ExponentialLR(optim_dur_disc, gamma=hps.train.lr_decay, last_epoch=epoch_str-2) - else: - scheduler_dur_disc = None - scaler = GradScaler(enabled=hps.train.fp16_run) - - for epoch in range(epoch_str, hps.train.epochs + 1): - if rank == 0: - train_and_evaluate(rank, epoch, hps, [net_g, net_d, net_dur_disc], [optim_g, optim_d, optim_dur_disc], [scheduler_g, scheduler_d, scheduler_dur_disc], scaler, [train_loader, eval_loader], logger, [writer, writer_eval]) - else: - train_and_evaluate(rank, epoch, hps, [net_g, net_d, net_dur_disc], [optim_g, optim_d, optim_dur_disc], [scheduler_g, scheduler_d, scheduler_dur_disc], scaler, [train_loader, None], None, None) - scheduler_g.step() - scheduler_d.step() - if net_dur_disc is not None: - scheduler_dur_disc.step() - - -def train_and_evaluate(rank, epoch, hps, nets, optims, schedulers, scaler, loaders, logger, writers): - net_g, net_d, net_dur_disc = nets - optim_g, optim_d, optim_dur_disc = optims - scheduler_g, 
scheduler_d, scheduler_dur_disc = schedulers - train_loader, eval_loader = loaders - if writers is not None: - writer, writer_eval = writers - - train_loader.batch_sampler.set_epoch(epoch) - global global_step - - net_g.train() - net_d.train() - if net_dur_disc is not None: - net_dur_disc.train() - for batch_idx, (x, x_lengths, spec, spec_lengths, y, y_lengths, speakers, tone, language, bert) in tqdm(enumerate(train_loader)): - if net_g.module.use_noise_scaled_mas: - current_mas_noise_scale = net_g.module.mas_noise_scale_initial - net_g.module.noise_scale_delta * global_step - net_g.module.current_mas_noise_scale = max(current_mas_noise_scale, 0.0) - x, x_lengths = x.cuda(rank, non_blocking=True), x_lengths.cuda(rank, non_blocking=True) - spec, spec_lengths = spec.cuda(rank, non_blocking=True), spec_lengths.cuda(rank, non_blocking=True) - y, y_lengths = y.cuda(rank, non_blocking=True), y_lengths.cuda(rank, non_blocking=True) - speakers = speakers.cuda(rank, non_blocking=True) - tone = tone.cuda(rank, non_blocking=True) - language = language.cuda(rank, non_blocking=True) - bert = bert.cuda(rank, non_blocking=True) - - with autocast(enabled=hps.train.fp16_run): - y_hat, l_length, attn, ids_slice, x_mask, z_mask, \ - (z, z_p, m_p, logs_p, m_q, logs_q), (hidden_x, logw, logw_) = net_g(x, x_lengths, spec, spec_lengths, speakers, tone, language, bert) - mel = spec_to_mel_torch( - spec, - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.mel_fmin, - hps.data.mel_fmax) - y_mel = commons.slice_segments(mel, ids_slice, hps.train.segment_size // hps.data.hop_length) - y_hat_mel = mel_spectrogram_torch( - y_hat.squeeze(1), - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.hop_length, - hps.data.win_length, - hps.data.mel_fmin, - hps.data.mel_fmax - ) - - y = commons.slice_segments(y, ids_slice * hps.data.hop_length, hps.train.segment_size) # slice - - # Discriminator - y_d_hat_r, y_d_hat_g, _, _ = net_d(y, y_hat.detach()) - with autocast(enabled=False): - loss_disc, losses_disc_r, losses_disc_g = discriminator_loss(y_d_hat_r, y_d_hat_g) - loss_disc_all = loss_disc - if net_dur_disc is not None: - y_dur_hat_r, y_dur_hat_g = net_dur_disc(hidden_x.detach(), x_mask.detach(), logw.detach(), logw_.detach()) - with autocast(enabled=False): - # TODO: I think need to mean using the mask, but for now, just mean all - loss_dur_disc, losses_dur_disc_r, losses_dur_disc_g = discriminator_loss(y_dur_hat_r, y_dur_hat_g) - loss_dur_disc_all = loss_dur_disc - optim_dur_disc.zero_grad() - scaler.scale(loss_dur_disc_all).backward() - scaler.unscale_(optim_dur_disc) - grad_norm_dur_disc = commons.clip_grad_value_(net_dur_disc.parameters(), None) - scaler.step(optim_dur_disc) - - optim_d.zero_grad() - scaler.scale(loss_disc_all).backward() - scaler.unscale_(optim_d) - grad_norm_d = commons.clip_grad_value_(net_d.parameters(), None) - scaler.step(optim_d) - - with autocast(enabled=hps.train.fp16_run): - # Generator - y_d_hat_r, y_d_hat_g, fmap_r, fmap_g = net_d(y, y_hat) - if net_dur_disc is not None: - y_dur_hat_r, y_dur_hat_g = net_dur_disc(hidden_x, x_mask, logw, logw_) - with autocast(enabled=False): - loss_dur = torch.sum(l_length.float()) - loss_mel = F.l1_loss(y_mel, y_hat_mel) * hps.train.c_mel - loss_kl = kl_loss(z_p, logs_q, m_p, logs_p, z_mask) * hps.train.c_kl - - loss_fm = feature_loss(fmap_r, fmap_g) - loss_gen, losses_gen = generator_loss(y_d_hat_g) - loss_gen_all = loss_gen + loss_fm + loss_mel + loss_dur + loss_kl - if 
net_dur_disc is not None: - loss_dur_gen, losses_dur_gen = generator_loss(y_dur_hat_g) - loss_gen_all += loss_dur_gen - optim_g.zero_grad() - scaler.scale(loss_gen_all).backward() - scaler.unscale_(optim_g) - grad_norm_g = commons.clip_grad_value_(net_g.parameters(), None) - scaler.step(optim_g) - scaler.update() - - if rank == 0: - if global_step % hps.train.log_interval == 0: - lr = optim_g.param_groups[0]['lr'] - losses = [loss_disc, loss_gen, loss_fm, loss_mel, loss_dur, loss_kl] - logger.info('Train Epoch: {} [{:.0f}%]'.format( - epoch, - 100. * batch_idx / len(train_loader))) - logger.info([x.item() for x in losses] + [global_step, lr]) - - scalar_dict = {"loss/g/total": loss_gen_all, "loss/d/total": loss_disc_all, "learning_rate": lr, - "grad_norm_d": grad_norm_d, "grad_norm_g": grad_norm_g} - scalar_dict.update( - {"loss/g/fm": loss_fm, "loss/g/mel": loss_mel, "loss/g/dur": loss_dur, "loss/g/kl": loss_kl}) - scalar_dict.update({"loss/g/{}".format(i): v for i, v in enumerate(losses_gen)}) - scalar_dict.update({"loss/d_r/{}".format(i): v for i, v in enumerate(losses_disc_r)}) - scalar_dict.update({"loss/d_g/{}".format(i): v for i, v in enumerate(losses_disc_g)}) - - image_dict = { - "slice/mel_org": utils.plot_spectrogram_to_numpy(y_mel[0].data.cpu().numpy()), - "slice/mel_gen": utils.plot_spectrogram_to_numpy(y_hat_mel[0].data.cpu().numpy()), - "all/mel": utils.plot_spectrogram_to_numpy(mel[0].data.cpu().numpy()), - "all/attn": utils.plot_alignment_to_numpy(attn[0, 0].data.cpu().numpy()) - } - utils.summarize( - writer=writer, - global_step=global_step, - images=image_dict, - scalars=scalar_dict) - - if global_step % hps.train.eval_interval == 0: - evaluate(hps, net_g, eval_loader, writer_eval) - utils.save_checkpoint(net_g, optim_g, hps.train.learning_rate, epoch, - os.path.join(hps.model_dir, "G_{}.pth".format(global_step))) - utils.save_checkpoint(net_d, optim_d, hps.train.learning_rate, epoch, - os.path.join(hps.model_dir, "D_{}.pth".format(global_step))) - if net_dur_disc is not None: - utils.save_checkpoint(net_dur_disc, optim_dur_disc, hps.train.learning_rate, epoch, os.path.join(hps.model_dir, "DUR_{}.pth".format(global_step))) - keep_ckpts = getattr(hps.train, 'keep_ckpts', 5) - if keep_ckpts > 0: - utils.clean_checkpoints(path_to_models=hps.model_dir, n_ckpts_to_keep=keep_ckpts, sort_by_time=True) - - - global_step += 1 - - if rank == 0: - logger.info('====> Epoch: {}'.format(epoch)) - - - -def evaluate(hps, generator, eval_loader, writer_eval): - generator.eval() - image_dict = {} - audio_dict = {} - print("Evaluating ...") - with torch.no_grad(): - for batch_idx, (x, x_lengths, spec, spec_lengths, y, y_lengths, speakers, tone, language, bert) in enumerate(eval_loader): - x, x_lengths = x.cuda(), x_lengths.cuda() - spec, spec_lengths = spec.cuda(), spec_lengths.cuda() - y, y_lengths = y.cuda(), y_lengths.cuda() - speakers = speakers.cuda() - bert = bert.cuda() - tone = tone.cuda() - language = language.cuda() - for use_sdp in [True, False]: - y_hat, attn, mask, *_ = generator.module.infer(x, x_lengths, speakers, tone, language, bert, y=spec, max_len=1000, sdp_ratio=0.0 if not use_sdp else 1.0) - y_hat_lengths = mask.sum([1, 2]).long() * hps.data.hop_length - - mel = spec_to_mel_torch( - spec, - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.mel_fmin, - hps.data.mel_fmax) - y_hat_mel = mel_spectrogram_torch( - y_hat.squeeze(1).float(), - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - 
hps.data.hop_length, - hps.data.win_length, - hps.data.mel_fmin, - hps.data.mel_fmax - ) - image_dict.update({ - f"gen/mel_{batch_idx}": utils.plot_spectrogram_to_numpy(y_hat_mel[0].cpu().numpy()) - }) - audio_dict.update({ - f"gen/audio_{batch_idx}_{use_sdp}": y_hat[0, :, :y_hat_lengths[0]] - }) - image_dict.update({f"gt/mel_{batch_idx}": utils.plot_spectrogram_to_numpy(mel[0].cpu().numpy())}) - audio_dict.update({f"gt/audio_{batch_idx}": y[0, :, :y_lengths[0]]}) - - utils.summarize( - writer=writer_eval, - global_step=global_step, - images=image_dict, - audios=audio_dict, - audio_sampling_rate=hps.data.sampling_rate - ) - generator.train() - -if __name__ == "__main__": - main() diff --git a/spaces/XzJosh/TianDou-Bert-VITS2/text/english_bert_mock.py b/spaces/XzJosh/TianDou-Bert-VITS2/text/english_bert_mock.py deleted file mode 100644 index 3b894ced5b6d619a18d6bdd7d7606ba9e6532050..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/TianDou-Bert-VITS2/text/english_bert_mock.py +++ /dev/null @@ -1,5 +0,0 @@ -import torch - - -def get_bert_feature(norm_text, word2ph): - return torch.zeros(1024, sum(word2ph)) diff --git a/spaces/XzJosh/XingTong-Bert-VITS2/text/japanese.py b/spaces/XzJosh/XingTong-Bert-VITS2/text/japanese.py deleted file mode 100644 index ddedafa0c5b7986068dc6c91637a86febc3923a9..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/XingTong-Bert-VITS2/text/japanese.py +++ /dev/null @@ -1,104 +0,0 @@ -# modified from https://github.com/CjangCjengh/vits/blob/main/text/japanese.py -import re -import sys - -import pyopenjtalk - -from text import symbols - -# Regular expression matching Japanese without punctuation marks: -_japanese_characters = re.compile( - r'[A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# Regular expression matching non-Japanese characters or punctuation marks: -_japanese_marks = re.compile( - r'[^A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# List of (symbol, Japanese) pairs for marks: -_symbols_to_japanese = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('%', 'パーセント') -]] - - -# List of (consonant, sokuon) pairs: -_real_sokuon = [(re.compile('%s' % x[0]), x[1]) for x in [ - (r'Q([↑↓]*[kg])', r'k#\1'), - (r'Q([↑↓]*[tdjʧ])', r't#\1'), - (r'Q([↑↓]*[sʃ])', r's\1'), - (r'Q([↑↓]*[pb])', r'p#\1') -]] - -# List of (consonant, hatsuon) pairs: -_real_hatsuon = [(re.compile('%s' % x[0]), x[1]) for x in [ - (r'N([↑↓]*[pbm])', r'm\1'), - (r'N([↑↓]*[ʧʥj])', r'n^\1'), - (r'N([↑↓]*[tdn])', r'n\1'), - (r'N([↑↓]*[kg])', r'ŋ\1') -]] - - - -def post_replace_ph(ph): - rep_map = { - ':': ',', - ';': ',', - ',': ',', - '。': '.', - '!': '!', - '?': '?', - '\n': '.', - "·": ",", - '、': ",", - '...': '…', - 'v': "V" - } - if ph in rep_map.keys(): - ph = rep_map[ph] - if ph in symbols: - return ph - if ph not in symbols: - ph = 'UNK' - return ph - -def symbols_to_japanese(text): - for regex, replacement in _symbols_to_japanese: - text = re.sub(regex, replacement, text) - return text - - -def preprocess_jap(text): - '''Reference https://r9y9.github.io/ttslearn/latest/notebooks/ch10_Recipe-Tacotron.html''' - text = symbols_to_japanese(text) - sentences = re.split(_japanese_marks, text) - marks = re.findall(_japanese_marks, text) - text = [] - for i, sentence in enumerate(sentences): - if re.match(_japanese_characters, sentence): - p = pyopenjtalk.g2p(sentence) - text += p.split(" ") - - if i < len(marks): - text += [marks[i].replace(' ', '')] - return text - -def 
text_normalize(text): - # todo: jap text normalize - return text - -def g2p(norm_text): - phones = preprocess_jap(norm_text) - phones = [post_replace_ph(i) for i in phones] - # todo: implement tones and word2ph - tones = [0 for i in phones] - word2ph = [1 for i in phones] - return phones, tones, word2ph - - -if __name__ == '__main__': - for line in open("../../../Downloads/transcript_utf8.txt").readlines(): - text = line.split(":")[1] - phones, tones, word2ph = g2p(text) - for p in phones: - if p == "z": - print(text, phones) - sys.exit(0) diff --git a/spaces/YONG627/456123/yolov5-code-main/detect.py b/spaces/YONG627/456123/yolov5-code-main/detect.py deleted file mode 100644 index 6cda9fc1a000c31032af6eb5307f6038ceb9ce69..0000000000000000000000000000000000000000 --- a/spaces/YONG627/456123/yolov5-code-main/detect.py +++ /dev/null @@ -1,261 +0,0 @@ -# YOLOv5 🚀 by Ultralytics, GPL-3.0 license -""" -Run YOLOv5 detection inference on images, videos, directories, globs, YouTube, webcam, streams, etc. - -Usage - sources: - $ python detect.py --weights yolov5s.pt --source 0 # webcam - img.jpg # image - vid.mp4 # video - screen # screenshot - path/ # directory - list.txt # list of images - list.streams # list of streams - 'path/*.jpg' # glob - 'https://youtu.be/Zgi9g1ksQHc' # YouTube - 'rtsp://example.com/media.mp4' # RTSP, RTMP, HTTP stream - -Usage - formats: - $ python detect.py --weights yolov5s.pt # PyTorch - yolov5s.torchscript # TorchScript - yolov5s.onnx # ONNX Runtime or OpenCV DNN with --dnn - yolov5s_openvino_model # OpenVINO - yolov5s.engine # TensorRT - yolov5s.mlmodel # CoreML (macOS-only) - yolov5s_saved_model # TensorFlow SavedModel - yolov5s.pb # TensorFlow GraphDef - yolov5s.tflite # TensorFlow Lite - yolov5s_edgetpu.tflite # TensorFlow Edge TPU - yolov5s_paddle_model # PaddlePaddle -""" - -import argparse -import os -import platform -import sys -from pathlib import Path - -import torch - -FILE = Path(__file__).resolve() -ROOT = FILE.parents[0] # YOLOv5 root directory -if str(ROOT) not in sys.path: - sys.path.append(str(ROOT)) # add ROOT to PATH -ROOT = Path(os.path.relpath(ROOT, Path.cwd())) # relative - -from models.common import DetectMultiBackend -from utils.dataloaders import IMG_FORMATS, VID_FORMATS, LoadImages, LoadScreenshots, LoadStreams -from utils.general import (LOGGER, Profile, check_file, check_img_size, check_imshow, check_requirements, colorstr, cv2, - increment_path, non_max_suppression, print_args, scale_boxes, strip_optimizer, xyxy2xywh) -from utils.plots import Annotator, colors, save_one_box -from utils.torch_utils import select_device, smart_inference_mode - - -@smart_inference_mode() -def run( - weights=ROOT / 'yolov5s.pt', # model path or triton URL - source=ROOT / 'data/images', # file/dir/URL/glob/screen/0(webcam) - data=ROOT / 'data/coco128.yaml', # dataset.yaml path - imgsz=(640, 640), # inference size (height, width) - conf_thres=0.25, # confidence threshold - iou_thres=0.45, # NMS IOU threshold - max_det=1000, # maximum detections per image - device='', # cuda device, i.e. 
0 or 0,1,2,3 or cpu - view_img=False, # show results - save_txt=False, # save results to *.txt - save_conf=False, # save confidences in --save-txt labels - save_crop=False, # save cropped prediction boxes - nosave=False, # do not save images/videos - classes=None, # filter by class: --class 0, or --class 0 2 3 - agnostic_nms=False, # class-agnostic NMS - augment=False, # augmented inference - visualize=False, # visualize features - update=False, # update all models - project=ROOT / 'runs/detect', # save results to project/name - name='exp', # save results to project/name - exist_ok=False, # existing project/name ok, do not increment - line_thickness=3, # bounding box thickness (pixels) - hide_labels=False, # hide labels - hide_conf=False, # hide confidences - half=False, # use FP16 half-precision inference - dnn=False, # use OpenCV DNN for ONNX inference - vid_stride=1, # video frame-rate stride -): - source = str(source) - save_img = not nosave and not source.endswith('.txt') # save inference images - is_file = Path(source).suffix[1:] in (IMG_FORMATS + VID_FORMATS) - is_url = source.lower().startswith(('rtsp://', 'rtmp://', 'http://', 'https://')) - webcam = source.isnumeric() or source.endswith('.streams') or (is_url and not is_file) - screenshot = source.lower().startswith('screen') - if is_url and is_file: - source = check_file(source) # download - - # Directories - save_dir = increment_path(Path(project) / name, exist_ok=exist_ok) # increment run - (save_dir / 'labels' if save_txt else save_dir).mkdir(parents=True, exist_ok=True) # make dir - - # Load model - device = select_device(device) - model = DetectMultiBackend(weights, device=device, dnn=dnn, data=data, fp16=half) - stride, names, pt = model.stride, model.names, model.pt - imgsz = check_img_size(imgsz, s=stride) # check image size - - # Dataloader - bs = 1 # batch_size - if webcam: - view_img = check_imshow(warn=True) - dataset = LoadStreams(source, img_size=imgsz, stride=stride, auto=pt, vid_stride=vid_stride) - bs = len(dataset) - elif screenshot: - dataset = LoadScreenshots(source, img_size=imgsz, stride=stride, auto=pt) - else: - dataset = LoadImages(source, img_size=imgsz, stride=stride, auto=pt, vid_stride=vid_stride) - vid_path, vid_writer = [None] * bs, [None] * bs - - # Run inference - model.warmup(imgsz=(1 if pt or model.triton else bs, 3, *imgsz)) # warmup - seen, windows, dt = 0, [], (Profile(), Profile(), Profile()) - for path, im, im0s, vid_cap, s in dataset: - with dt[0]: - im = torch.from_numpy(im).to(model.device) - im = im.half() if model.fp16 else im.float() # uint8 to fp16/32 - im /= 255 # 0 - 255 to 0.0 - 1.0 - if len(im.shape) == 3: - im = im[None] # expand for batch dim - - # Inference - with dt[1]: - visualize = increment_path(save_dir / Path(path).stem, mkdir=True) if visualize else False - pred = model(im, augment=augment, visualize=visualize) - - # NMS - with dt[2]: - pred = non_max_suppression(pred, conf_thres, iou_thres, classes, agnostic_nms, max_det=max_det) - - # Second-stage classifier (optional) - # pred = utils.general.apply_classifier(pred, classifier_model, im, im0s) - - # Process predictions - for i, det in enumerate(pred): # per image - seen += 1 - if webcam: # batch_size >= 1 - p, im0, frame = path[i], im0s[i].copy(), dataset.count - s += f'{i}: ' - else: - p, im0, frame = path, im0s.copy(), getattr(dataset, 'frame', 0) - - p = Path(p) # to Path - save_path = str(save_dir / p.name) # im.jpg - txt_path = str(save_dir / 'labels' / p.stem) + ('' if dataset.mode == 'image' else f'_{frame}') 
# im.txt - s += '%gx%g ' % im.shape[2:] # print string - gn = torch.tensor(im0.shape)[[1, 0, 1, 0]] # normalization gain whwh - imc = im0.copy() if save_crop else im0 # for save_crop - annotator = Annotator(im0, line_width=line_thickness, example=str(names)) - if len(det): - # Rescale boxes from img_size to im0 size - det[:, :4] = scale_boxes(im.shape[2:], det[:, :4], im0.shape).round() - - # Print results - for c in det[:, 5].unique(): - n = (det[:, 5] == c).sum() # detections per class - s += f"{n} {names[int(c)]}{'s' * (n > 1)}, " # add to string - - # Write results - for *xyxy, conf, cls in reversed(det): - if save_txt: # Write to file - xywh = (xyxy2xywh(torch.tensor(xyxy).view(1, 4)) / gn).view(-1).tolist() # normalized xywh - line = (cls, *xywh, conf) if save_conf else (cls, *xywh) # label format - with open(f'{txt_path}.txt', 'a') as f: - f.write(('%g ' * len(line)).rstrip() % line + '\n') - - if save_img or save_crop or view_img: # Add bbox to image - c = int(cls) # integer class - label = None if hide_labels else (names[c] if hide_conf else f'{names[c]} {conf:.2f}') - annotator.box_label(xyxy, label, color=colors(c, True)) - if save_crop: - save_one_box(xyxy, imc, file=save_dir / 'crops' / names[c] / f'{p.stem}.jpg', BGR=True) - - # Stream results - im0 = annotator.result() - if view_img: - if platform.system() == 'Linux' and p not in windows: - windows.append(p) - cv2.namedWindow(str(p), cv2.WINDOW_NORMAL | cv2.WINDOW_KEEPRATIO) # allow window resize (Linux) - cv2.resizeWindow(str(p), im0.shape[1], im0.shape[0]) - cv2.imshow(str(p), im0) - cv2.waitKey(1) # 1 millisecond - - # Save results (image with detections) - if save_img: - if dataset.mode == 'image': - cv2.imwrite(save_path, im0) - else: # 'video' or 'stream' - if vid_path[i] != save_path: # new video - vid_path[i] = save_path - if isinstance(vid_writer[i], cv2.VideoWriter): - vid_writer[i].release() # release previous video writer - if vid_cap: # video - fps = vid_cap.get(cv2.CAP_PROP_FPS) - w = int(vid_cap.get(cv2.CAP_PROP_FRAME_WIDTH)) - h = int(vid_cap.get(cv2.CAP_PROP_FRAME_HEIGHT)) - else: # stream - fps, w, h = 30, im0.shape[1], im0.shape[0] - save_path = str(Path(save_path).with_suffix('.mp4')) # force *.mp4 suffix on results videos - vid_writer[i] = cv2.VideoWriter(save_path, cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h)) - vid_writer[i].write(im0) - - # Print time (inference-only) - LOGGER.info(f"{s}{'' if len(det) else '(no detections), '}{dt[1].dt * 1E3:.1f}ms") - - # Print results - t = tuple(x.t / seen * 1E3 for x in dt) # speeds per image - LOGGER.info(f'Speed: %.1fms pre-process, %.1fms inference, %.1fms NMS per image at shape {(1, 3, *imgsz)}' % t) - if save_txt or save_img: - s = f"\n{len(list(save_dir.glob('labels/*.txt')))} labels saved to {save_dir / 'labels'}" if save_txt else '' - LOGGER.info(f"Results saved to {colorstr('bold', save_dir)}{s}") - if update: - strip_optimizer(weights[0]) # update model (to fix SourceChangeWarning) - - -def parse_opt(): - parser = argparse.ArgumentParser() - parser.add_argument('--weights', nargs='+', type=str, default=ROOT / 'yolov5s.pt', help='model path or triton URL') - parser.add_argument('--source', type=str, default=ROOT / 'data/images', help='file/dir/URL/glob/screen/0(webcam)') - parser.add_argument('--data', type=str, default=ROOT / 'data/coco128.yaml', help='(optional) dataset.yaml path') - parser.add_argument('--imgsz', '--img', '--img-size', nargs='+', type=int, default=[640], help='inference size h,w') - parser.add_argument('--conf-thres', type=float, 
default=0.25, help='confidence threshold') - parser.add_argument('--iou-thres', type=float, default=0.45, help='NMS IoU threshold') - parser.add_argument('--max-det', type=int, default=1000, help='maximum detections per image') - parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu') - parser.add_argument('--view-img', action='store_true', help='show results') - parser.add_argument('--save-txt', action='store_true', help='save results to *.txt') - parser.add_argument('--save-conf', action='store_true', help='save confidences in --save-txt labels') - parser.add_argument('--save-crop', action='store_true', help='save cropped prediction boxes') - parser.add_argument('--nosave', action='store_true', help='do not save images/videos') - parser.add_argument('--classes', nargs='+', type=int, help='filter by class: --classes 0, or --classes 0 2 3') - parser.add_argument('--agnostic-nms', action='store_true', help='class-agnostic NMS') - parser.add_argument('--augment', action='store_true', help='augmented inference') - parser.add_argument('--visualize', action='store_true', help='visualize features') - parser.add_argument('--update', action='store_true', help='update all models') - parser.add_argument('--project', default=ROOT / 'runs/detect', help='save results to project/name') - parser.add_argument('--name', default='exp', help='save results to project/name') - parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment') - parser.add_argument('--line-thickness', default=3, type=int, help='bounding box thickness (pixels)') - parser.add_argument('--hide-labels', default=False, action='store_true', help='hide labels') - parser.add_argument('--hide-conf', default=False, action='store_true', help='hide confidences') - parser.add_argument('--half', action='store_true', help='use FP16 half-precision inference') - parser.add_argument('--dnn', action='store_true', help='use OpenCV DNN for ONNX inference') - parser.add_argument('--vid-stride', type=int, default=1, help='video frame-rate stride') - opt = parser.parse_args() - opt.imgsz *= 2 if len(opt.imgsz) == 1 else 1 # expand - print_args(vars(opt)) - return opt - - -def main(opt): - check_requirements(exclude=('tensorboard', 'thop')) - run(**vars(opt)) - - -if __name__ == '__main__': - opt = parse_opt() - main(opt) diff --git a/spaces/YONG627/456123/yolov5-code-main/utils/loggers/comet/__init__.py b/spaces/YONG627/456123/yolov5-code-main/utils/loggers/comet/__init__.py deleted file mode 100644 index d4599841c9fc4df3e9ad4bf847f8bd13c3a9175d..0000000000000000000000000000000000000000 --- a/spaces/YONG627/456123/yolov5-code-main/utils/loggers/comet/__init__.py +++ /dev/null @@ -1,508 +0,0 @@ -import glob -import json -import logging -import os -import sys -from pathlib import Path - -logger = logging.getLogger(__name__) - -FILE = Path(__file__).resolve() -ROOT = FILE.parents[3] # YOLOv5 root directory -if str(ROOT) not in sys.path: - sys.path.append(str(ROOT)) # add ROOT to PATH - -try: - import comet_ml - - # Project Configuration - config = comet_ml.config.get_config() - COMET_PROJECT_NAME = config.get_string(os.getenv('COMET_PROJECT_NAME'), 'comet.project_name', default='yolov5') -except (ModuleNotFoundError, ImportError): - comet_ml = None - COMET_PROJECT_NAME = None - -import PIL -import torch -import torchvision.transforms as T -import yaml - -from utils.dataloaders import img2label_paths -from utils.general import check_dataset, scale_boxes, xywh2xyxy -from utils.metrics 
import box_iou - -COMET_PREFIX = 'comet://' - -COMET_MODE = os.getenv('COMET_MODE', 'online') - -# Model Saving Settings -COMET_MODEL_NAME = os.getenv('COMET_MODEL_NAME', 'yolov5') - -# Dataset Artifact Settings -COMET_UPLOAD_DATASET = os.getenv('COMET_UPLOAD_DATASET', 'false').lower() == 'true' - -# Evaluation Settings -COMET_LOG_CONFUSION_MATRIX = os.getenv('COMET_LOG_CONFUSION_MATRIX', 'true').lower() == 'true' -COMET_LOG_PREDICTIONS = os.getenv('COMET_LOG_PREDICTIONS', 'true').lower() == 'true' -COMET_MAX_IMAGE_UPLOADS = int(os.getenv('COMET_MAX_IMAGE_UPLOADS', 100)) - -# Confusion Matrix Settings -CONF_THRES = float(os.getenv('CONF_THRES', 0.001)) -IOU_THRES = float(os.getenv('IOU_THRES', 0.6)) - -# Batch Logging Settings -COMET_LOG_BATCH_METRICS = os.getenv('COMET_LOG_BATCH_METRICS', 'false').lower() == 'true' -COMET_BATCH_LOGGING_INTERVAL = os.getenv('COMET_BATCH_LOGGING_INTERVAL', 1) -COMET_PREDICTION_LOGGING_INTERVAL = os.getenv('COMET_PREDICTION_LOGGING_INTERVAL', 1) -COMET_LOG_PER_CLASS_METRICS = os.getenv('COMET_LOG_PER_CLASS_METRICS', 'false').lower() == 'true' - -RANK = int(os.getenv('RANK', -1)) - -to_pil = T.ToPILImage() - - -class CometLogger: - """Log metrics, parameters, source code, models and much more - with Comet - """ - - def __init__(self, opt, hyp, run_id=None, job_type='Training', **experiment_kwargs) -> None: - self.job_type = job_type - self.opt = opt - self.hyp = hyp - - # Comet Flags - self.comet_mode = COMET_MODE - - self.save_model = opt.save_period > -1 - self.model_name = COMET_MODEL_NAME - - # Batch Logging Settings - self.log_batch_metrics = COMET_LOG_BATCH_METRICS - self.comet_log_batch_interval = COMET_BATCH_LOGGING_INTERVAL - - # Dataset Artifact Settings - self.upload_dataset = self.opt.upload_dataset if self.opt.upload_dataset else COMET_UPLOAD_DATASET - self.resume = self.opt.resume - - # Default parameters to pass to Experiment objects - self.default_experiment_kwargs = { - 'log_code': False, - 'log_env_gpu': True, - 'log_env_cpu': True, - 'project_name': COMET_PROJECT_NAME,} - self.default_experiment_kwargs.update(experiment_kwargs) - self.experiment = self._get_experiment(self.comet_mode, run_id) - - self.data_dict = self.check_dataset(self.opt.data) - self.class_names = self.data_dict['names'] - self.num_classes = self.data_dict['nc'] - - self.logged_images_count = 0 - self.max_images = COMET_MAX_IMAGE_UPLOADS - - if run_id is None: - self.experiment.log_other('Created from', 'YOLOv5') - if not isinstance(self.experiment, comet_ml.OfflineExperiment): - workspace, project_name, experiment_id = self.experiment.url.split('/')[-3:] - self.experiment.log_other( - 'Run Path', - f'{workspace}/{project_name}/{experiment_id}', - ) - self.log_parameters(vars(opt)) - self.log_parameters(self.opt.hyp) - self.log_asset_data( - self.opt.hyp, - name='hyperparameters.json', - metadata={'type': 'hyp-config-file'}, - ) - self.log_asset( - f'{self.opt.save_dir}/opt.yaml', - metadata={'type': 'opt-config-file'}, - ) - - self.comet_log_confusion_matrix = COMET_LOG_CONFUSION_MATRIX - - if hasattr(self.opt, 'conf_thres'): - self.conf_thres = self.opt.conf_thres - else: - self.conf_thres = CONF_THRES - if hasattr(self.opt, 'iou_thres'): - self.iou_thres = self.opt.iou_thres - else: - self.iou_thres = IOU_THRES - - self.log_parameters({'val_iou_threshold': self.iou_thres, 'val_conf_threshold': self.conf_thres}) - - self.comet_log_predictions = COMET_LOG_PREDICTIONS - if self.opt.bbox_interval == -1: - self.comet_log_prediction_interval = 1 if self.opt.epochs < 10 else 
self.opt.epochs // 10 - else: - self.comet_log_prediction_interval = self.opt.bbox_interval - - if self.comet_log_predictions: - self.metadata_dict = {} - self.logged_image_names = [] - - self.comet_log_per_class_metrics = COMET_LOG_PER_CLASS_METRICS - - self.experiment.log_others({ - 'comet_mode': COMET_MODE, - 'comet_max_image_uploads': COMET_MAX_IMAGE_UPLOADS, - 'comet_log_per_class_metrics': COMET_LOG_PER_CLASS_METRICS, - 'comet_log_batch_metrics': COMET_LOG_BATCH_METRICS, - 'comet_log_confusion_matrix': COMET_LOG_CONFUSION_MATRIX, - 'comet_model_name': COMET_MODEL_NAME,}) - - # Check if running the Experiment with the Comet Optimizer - if hasattr(self.opt, 'comet_optimizer_id'): - self.experiment.log_other('optimizer_id', self.opt.comet_optimizer_id) - self.experiment.log_other('optimizer_objective', self.opt.comet_optimizer_objective) - self.experiment.log_other('optimizer_metric', self.opt.comet_optimizer_metric) - self.experiment.log_other('optimizer_parameters', json.dumps(self.hyp)) - - def _get_experiment(self, mode, experiment_id=None): - if mode == 'offline': - if experiment_id is not None: - return comet_ml.ExistingOfflineExperiment( - previous_experiment=experiment_id, - **self.default_experiment_kwargs, - ) - - return comet_ml.OfflineExperiment(**self.default_experiment_kwargs,) - - else: - try: - if experiment_id is not None: - return comet_ml.ExistingExperiment( - previous_experiment=experiment_id, - **self.default_experiment_kwargs, - ) - - return comet_ml.Experiment(**self.default_experiment_kwargs) - - except ValueError: - logger.warning('COMET WARNING: ' - 'Comet credentials have not been set. ' - 'Comet will default to offline logging. ' - 'Please set your credentials to enable online logging.') - return self._get_experiment('offline', experiment_id) - - return - - def log_metrics(self, log_dict, **kwargs): - self.experiment.log_metrics(log_dict, **kwargs) - - def log_parameters(self, log_dict, **kwargs): - self.experiment.log_parameters(log_dict, **kwargs) - - def log_asset(self, asset_path, **kwargs): - self.experiment.log_asset(asset_path, **kwargs) - - def log_asset_data(self, asset, **kwargs): - self.experiment.log_asset_data(asset, **kwargs) - - def log_image(self, img, **kwargs): - self.experiment.log_image(img, **kwargs) - - def log_model(self, path, opt, epoch, fitness_score, best_model=False): - if not self.save_model: - return - - model_metadata = { - 'fitness_score': fitness_score[-1], - 'epochs_trained': epoch + 1, - 'save_period': opt.save_period, - 'total_epochs': opt.epochs,} - - model_files = glob.glob(f'{path}/*.pt') - for model_path in model_files: - name = Path(model_path).name - - self.experiment.log_model( - self.model_name, - file_or_folder=model_path, - file_name=name, - metadata=model_metadata, - overwrite=True, - ) - - def check_dataset(self, data_file): - with open(data_file) as f: - data_config = yaml.safe_load(f) - - if data_config['path'].startswith(COMET_PREFIX): - path = data_config['path'].replace(COMET_PREFIX, '') - data_dict = self.download_dataset_artifact(path) - - return data_dict - - self.log_asset(self.opt.data, metadata={'type': 'data-config-file'}) - - return check_dataset(data_file) - - def log_predictions(self, image, labelsn, path, shape, predn): - if self.logged_images_count >= self.max_images: - return - detections = predn[predn[:, 4] > self.conf_thres] - iou = box_iou(labelsn[:, 1:], detections[:, :4]) - mask, _ = torch.where(iou > self.iou_thres) - if len(mask) == 0: - return - - filtered_detections = detections[mask] 
- filtered_labels = labelsn[mask] - - image_id = path.split('/')[-1].split('.')[0] - image_name = f'{image_id}_curr_epoch_{self.experiment.curr_epoch}' - if image_name not in self.logged_image_names: - native_scale_image = PIL.Image.open(path) - self.log_image(native_scale_image, name=image_name) - self.logged_image_names.append(image_name) - - metadata = [] - for cls, *xyxy in filtered_labels.tolist(): - metadata.append({ - 'label': f'{self.class_names[int(cls)]}-gt', - 'score': 100, - 'box': { - 'x': xyxy[0], - 'y': xyxy[1], - 'x2': xyxy[2], - 'y2': xyxy[3]},}) - for *xyxy, conf, cls in filtered_detections.tolist(): - metadata.append({ - 'label': f'{self.class_names[int(cls)]}', - 'score': conf * 100, - 'box': { - 'x': xyxy[0], - 'y': xyxy[1], - 'x2': xyxy[2], - 'y2': xyxy[3]},}) - - self.metadata_dict[image_name] = metadata - self.logged_images_count += 1 - - return - - def preprocess_prediction(self, image, labels, shape, pred): - nl, _ = labels.shape[0], pred.shape[0] - - # Predictions - if self.opt.single_cls: - pred[:, 5] = 0 - - predn = pred.clone() - scale_boxes(image.shape[1:], predn[:, :4], shape[0], shape[1]) - - labelsn = None - if nl: - tbox = xywh2xyxy(labels[:, 1:5]) # target boxes - scale_boxes(image.shape[1:], tbox, shape[0], shape[1]) # native-space labels - labelsn = torch.cat((labels[:, 0:1], tbox), 1) # native-space labels - scale_boxes(image.shape[1:], predn[:, :4], shape[0], shape[1]) # native-space pred - - return predn, labelsn - - def add_assets_to_artifact(self, artifact, path, asset_path, split): - img_paths = sorted(glob.glob(f'{asset_path}/*')) - label_paths = img2label_paths(img_paths) - - for image_file, label_file in zip(img_paths, label_paths): - image_logical_path, label_logical_path = map(lambda x: os.path.relpath(x, path), [image_file, label_file]) - - try: - artifact.add(image_file, logical_path=image_logical_path, metadata={'split': split}) - artifact.add(label_file, logical_path=label_logical_path, metadata={'split': split}) - except ValueError as e: - logger.error('COMET ERROR: Error adding file to Artifact. 
Skipping file.') - logger.error(f'COMET ERROR: {e}') - continue - - return artifact - - def upload_dataset_artifact(self): - dataset_name = self.data_dict.get('dataset_name', 'yolov5-dataset') - path = str((ROOT / Path(self.data_dict['path'])).resolve()) - - metadata = self.data_dict.copy() - for key in ['train', 'val', 'test']: - split_path = metadata.get(key) - if split_path is not None: - metadata[key] = split_path.replace(path, '') - - artifact = comet_ml.Artifact(name=dataset_name, artifact_type='dataset', metadata=metadata) - for key in metadata.keys(): - if key in ['train', 'val', 'test']: - if isinstance(self.upload_dataset, str) and (key != self.upload_dataset): - continue - - asset_path = self.data_dict.get(key) - if asset_path is not None: - artifact = self.add_assets_to_artifact(artifact, path, asset_path, key) - - self.experiment.log_artifact(artifact) - - return - - def download_dataset_artifact(self, artifact_path): - logged_artifact = self.experiment.get_artifact(artifact_path) - artifact_save_dir = str(Path(self.opt.save_dir) / logged_artifact.name) - logged_artifact.download(artifact_save_dir) - - metadata = logged_artifact.metadata - data_dict = metadata.copy() - data_dict['path'] = artifact_save_dir - - metadata_names = metadata.get('names') - if type(metadata_names) == dict: - data_dict['names'] = {int(k): v for k, v in metadata.get('names').items()} - elif type(metadata_names) == list: - data_dict['names'] = {int(k): v for k, v in zip(range(len(metadata_names)), metadata_names)} - else: - raise "Invalid 'names' field in dataset yaml file. Please use a list or dictionary" - - data_dict = self.update_data_paths(data_dict) - return data_dict - - def update_data_paths(self, data_dict): - path = data_dict.get('path', '') - - for split in ['train', 'val', 'test']: - if data_dict.get(split): - split_path = data_dict.get(split) - data_dict[split] = (f'{path}/{split_path}' if isinstance(split, str) else [ - f'{path}/{x}' for x in split_path]) - - return data_dict - - def on_pretrain_routine_end(self, paths): - if self.opt.resume: - return - - for path in paths: - self.log_asset(str(path)) - - if self.upload_dataset: - if not self.resume: - self.upload_dataset_artifact() - - return - - def on_train_start(self): - self.log_parameters(self.hyp) - - def on_train_epoch_start(self): - return - - def on_train_epoch_end(self, epoch): - self.experiment.curr_epoch = epoch - - return - - def on_train_batch_start(self): - return - - def on_train_batch_end(self, log_dict, step): - self.experiment.curr_step = step - if self.log_batch_metrics and (step % self.comet_log_batch_interval == 0): - self.log_metrics(log_dict, step=step) - - return - - def on_train_end(self, files, save_dir, last, best, epoch, results): - if self.comet_log_predictions: - curr_epoch = self.experiment.curr_epoch - self.experiment.log_asset_data(self.metadata_dict, 'image-metadata.json', epoch=curr_epoch) - - for f in files: - self.log_asset(f, metadata={'epoch': epoch}) - self.log_asset(f'{save_dir}/results.csv', metadata={'epoch': epoch}) - - if not self.opt.evolve: - model_path = str(best if best.exists() else last) - name = Path(model_path).name - if self.save_model: - self.experiment.log_model( - self.model_name, - file_or_folder=model_path, - file_name=name, - overwrite=True, - ) - - # Check if running Experiment with Comet Optimizer - if hasattr(self.opt, 'comet_optimizer_id'): - metric = results.get(self.opt.comet_optimizer_metric) - self.experiment.log_other('optimizer_metric_value', metric) - - 
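-        # End the Comet experiment once the final assets and Optimizer metadata have been logged.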
self.finish_run() - - def on_val_start(self): - return - - def on_val_batch_start(self): - return - - def on_val_batch_end(self, batch_i, images, targets, paths, shapes, outputs): - if not (self.comet_log_predictions and ((batch_i + 1) % self.comet_log_prediction_interval == 0)): - return - - for si, pred in enumerate(outputs): - if len(pred) == 0: - continue - - image = images[si] - labels = targets[targets[:, 0] == si, 1:] - shape = shapes[si] - path = paths[si] - predn, labelsn = self.preprocess_prediction(image, labels, shape, pred) - if labelsn is not None: - self.log_predictions(image, labelsn, path, shape, predn) - - return - - def on_val_end(self, nt, tp, fp, p, r, f1, ap, ap50, ap_class, confusion_matrix): - if self.comet_log_per_class_metrics: - if self.num_classes > 1: - for i, c in enumerate(ap_class): - class_name = self.class_names[c] - self.experiment.log_metrics( - { - 'mAP@.5': ap50[i], - 'mAP@.5:.95': ap[i], - 'precision': p[i], - 'recall': r[i], - 'f1': f1[i], - 'true_positives': tp[i], - 'false_positives': fp[i], - 'support': nt[c]}, - prefix=class_name) - - if self.comet_log_confusion_matrix: - epoch = self.experiment.curr_epoch - class_names = list(self.class_names.values()) - class_names.append('background') - num_classes = len(class_names) - - self.experiment.log_confusion_matrix( - matrix=confusion_matrix.matrix, - max_categories=num_classes, - labels=class_names, - epoch=epoch, - column_label='Actual Category', - row_label='Predicted Category', - file_name=f'confusion-matrix-epoch-{epoch}.json', - ) - - def on_fit_epoch_end(self, result, epoch): - self.log_metrics(result, epoch=epoch) - - def on_model_save(self, last, epoch, final_epoch, best_fitness, fi): - if ((epoch + 1) % self.opt.save_period == 0 and not final_epoch) and self.opt.save_period != -1: - self.log_model(last.parent, self.opt, epoch, fi, best_model=best_fitness == fi) - - def on_params_update(self, params): - self.log_parameters(params) - - def finish_run(self): - self.experiment.end() diff --git a/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/configs/new_baselines/mask_rcnn_regnetx_4gf_dds_FPN_400ep_LSJ.py b/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/configs/new_baselines/mask_rcnn_regnetx_4gf_dds_FPN_400ep_LSJ.py deleted file mode 100644 index 8f369a2afedb6c6e69fd52ff9a9a6b1cdf965937..0000000000000000000000000000000000000000 --- a/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/configs/new_baselines/mask_rcnn_regnetx_4gf_dds_FPN_400ep_LSJ.py +++ /dev/null @@ -1,14 +0,0 @@ -from .mask_rcnn_regnetx_4gf_dds_FPN_100ep_LSJ import ( - dataloader, - lr_multiplier, - model, - optimizer, - train, -) - -train.max_iter *= 4 # 100ep -> 400ep - -lr_multiplier.scheduler.milestones = [ - milestone * 4 for milestone in lr_multiplier.scheduler.milestones -] -lr_multiplier.scheduler.num_updates = train.max_iter diff --git a/spaces/YuAnthony/Voice-Recognition/utils/record.py b/spaces/YuAnthony/Voice-Recognition/utils/record.py deleted file mode 100644 index 399ba901c321e24811954ec835d87ed3425c8c3a..0000000000000000000000000000000000000000 --- a/spaces/YuAnthony/Voice-Recognition/utils/record.py +++ /dev/null @@ -1,43 +0,0 @@ -import wave - -import pyaudio - - -class RecordAudio: - def __init__(self): - # 录音参数 - self.chunk = 1024 - self.format = pyaudio.paInt16 - self.channels = 1 - self.rate = 16000 - - # 打开录音 - self.p = pyaudio.PyAudio() - self.stream = self.p.open(format=self.format, - channels=self.channels, - rate=self.rate, - input=True, - 
frames_per_buffer=self.chunk) - - def record(self, output_path="audio/temp.wav", record_seconds=3): - """ - 录音 - :param output_path: 录音保存的路径,后缀名为wav - :param record_seconds: 录音时间,默认3秒 - :return: 录音的文件路径 - """ - i = input("按下回车键开机录音,录音3秒中:") - print("开始录音......") - frames = [] - for i in range(0, int(self.rate / self.chunk * record_seconds)): - data = self.stream.read(self.chunk) - frames.append(data) - - print("录音已结束!") - wf = wave.open(output_path, 'wb') - wf.setnchannels(self.channels) - wf.setsampwidth(self.p.get_sample_size(self.format)) - wf.setframerate(self.rate) - wf.writeframes(b''.join(frames)) - wf.close() - return output_path diff --git a/spaces/Yuliang/ICON/lib/renderer/gl/render2.py b/spaces/Yuliang/ICON/lib/renderer/gl/render2.py deleted file mode 100644 index b7f38fc80ebe6e7a07cedcdd90206c2255172429..0000000000000000000000000000000000000000 --- a/spaces/Yuliang/ICON/lib/renderer/gl/render2.py +++ /dev/null @@ -1,388 +0,0 @@ -''' -MIT License - -Copyright (c) 2019 Shunsuke Saito, Zeng Huang, and Ryota Natsume - -Permission is hereby granted, free of charge, to any person obtaining a copy -of this software and associated documentation files (the "Software"), to deal -in the Software without restriction, including without limitation the rights -to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -copies of the Software, and to permit persons to whom the Software is -furnished to do so, subject to the following conditions: - -The above copyright notice and this permission notice shall be included in all -copies or substantial portions of the Software. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -SOFTWARE. 
-''' -import numpy as np -from OpenGL.GLUT import * -from .framework import * - -_glut_window = None - - -class Render: - def __init__(self, - width=1600, - height=1200, - name='GL Renderer', - program_files=['simple.fs', 'simple.vs'], - color_size=1, - ms_rate=1): - self.width = width - self.height = height - self.name = name - self.display_mode = GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH - self.use_inverse_depth = False - - global _glut_window - if _glut_window is None: - glutInit() - glutInitDisplayMode(self.display_mode) - glutInitWindowSize(self.width, self.height) - glutInitWindowPosition(0, 0) - _glut_window = glutCreateWindow("My Render.") - - # glEnable(GL_DEPTH_CLAMP) - glEnable(GL_DEPTH_TEST) - - glClampColor(GL_CLAMP_READ_COLOR, GL_FALSE) - glClampColor(GL_CLAMP_FRAGMENT_COLOR, GL_FALSE) - glClampColor(GL_CLAMP_VERTEX_COLOR, GL_FALSE) - - # init program - shader_list = [] - - for program_file in program_files: - _, ext = os.path.splitext(program_file) - if ext == '.vs': - shader_list.append(loadShader(GL_VERTEX_SHADER, program_file)) - elif ext == '.fs': - shader_list.append(loadShader(GL_FRAGMENT_SHADER, - program_file)) - elif ext == '.gs': - shader_list.append(loadShader(GL_GEOMETRY_SHADER, - program_file)) - - self.program = createProgram(shader_list) - - for shader in shader_list: - glDeleteShader(shader) - - # Init uniform variables - self.model_mat_unif = glGetUniformLocation(self.program, 'ModelMat') - self.persp_mat_unif = glGetUniformLocation(self.program, 'PerspMat') - - self.vertex_buffer = glGenBuffers(1) - - # Init screen quad program and buffer - self.quad_program, self.quad_buffer = self.init_quad_program() - - # Configure frame buffer - self.frame_buffer = glGenFramebuffers(1) - glBindFramebuffer(GL_FRAMEBUFFER, self.frame_buffer) - - self.intermediate_fbo = None - if ms_rate > 1: - # Configure texture buffer to render to - self.color_buffer = [] - for i in range(color_size): - color_buffer = glGenTextures(1) - multi_sample_rate = ms_rate - glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, color_buffer) - glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, - GL_CLAMP_TO_EDGE) - glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, - GL_CLAMP_TO_EDGE) - glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, - GL_LINEAR) - glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, - GL_LINEAR) - glTexImage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE, - multi_sample_rate, GL_RGBA32F, - self.width, self.height, GL_TRUE) - glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, 0) - glFramebufferTexture2D(GL_FRAMEBUFFER, - GL_COLOR_ATTACHMENT0 + i, - GL_TEXTURE_2D_MULTISAMPLE, color_buffer, - 0) - self.color_buffer.append(color_buffer) - - self.render_buffer = glGenRenderbuffers(1) - glBindRenderbuffer(GL_RENDERBUFFER, self.render_buffer) - glRenderbufferStorageMultisample(GL_RENDERBUFFER, - multi_sample_rate, - GL_DEPTH24_STENCIL8, self.width, - self.height) - glBindRenderbuffer(GL_RENDERBUFFER, 0) - glFramebufferRenderbuffer(GL_FRAMEBUFFER, - GL_DEPTH_STENCIL_ATTACHMENT, - GL_RENDERBUFFER, self.render_buffer) - - attachments = [] - for i in range(color_size): - attachments.append(GL_COLOR_ATTACHMENT0 + i) - glDrawBuffers(color_size, attachments) - glBindFramebuffer(GL_FRAMEBUFFER, 0) - - self.intermediate_fbo = glGenFramebuffers(1) - glBindFramebuffer(GL_FRAMEBUFFER, self.intermediate_fbo) - - self.screen_texture = [] - for i in range(color_size): - screen_texture = glGenTextures(1) - glBindTexture(GL_TEXTURE_2D, screen_texture) - glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, self.width, - self.height, 0, 
GL_RGBA, GL_FLOAT, None) - glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, - GL_LINEAR) - glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, - GL_LINEAR) - glFramebufferTexture2D(GL_FRAMEBUFFER, - GL_COLOR_ATTACHMENT0 + i, GL_TEXTURE_2D, - screen_texture, 0) - self.screen_texture.append(screen_texture) - - glDrawBuffers(color_size, attachments) - glBindFramebuffer(GL_FRAMEBUFFER, 0) - else: - self.color_buffer = [] - for i in range(color_size): - color_buffer = glGenTextures(1) - glBindTexture(GL_TEXTURE_2D, color_buffer) - glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, - GL_CLAMP_TO_EDGE) - glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, - GL_CLAMP_TO_EDGE) - glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, - GL_NEAREST) - glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, - GL_NEAREST) - glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, self.width, - self.height, 0, GL_RGBA, GL_FLOAT, None) - glFramebufferTexture2D(GL_FRAMEBUFFER, - GL_COLOR_ATTACHMENT0 + i, GL_TEXTURE_2D, - color_buffer, 0) - self.color_buffer.append(color_buffer) - - # Configure depth texture map to render to - self.depth_buffer = glGenTextures(1) - glBindTexture(GL_TEXTURE_2D, self.depth_buffer) - glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT) - glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT) - glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST) - glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST) - glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE, GL_INTENSITY) - glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, - GL_COMPARE_R_TO_TEXTURE) - glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL) - glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, self.width, - self.height, 0, GL_DEPTH_COMPONENT, GL_FLOAT, None) - glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, - GL_TEXTURE_2D, self.depth_buffer, 0) - - attachments = [] - for i in range(color_size): - attachments.append(GL_COLOR_ATTACHMENT0 + i) - glDrawBuffers(color_size, attachments) - self.screen_texture = self.color_buffer - - glBindFramebuffer(GL_FRAMEBUFFER, 0) - - # Configure texture buffer if needed - self.render_texture = None - - # NOTE: original render_texture only support one input - # this is tentative member of this issue - self.render_texture_v2 = {} - - # Inner storage for buffer data - self.vertex_data = None - self.vertex_dim = None - self.n_vertices = None - - self.model_view_matrix = None - self.projection_matrix = None - - glutDisplayFunc(self.display) - - def init_quad_program(self): - shader_list = [] - - shader_list.append(loadShader(GL_VERTEX_SHADER, "quad.vs")) - shader_list.append(loadShader(GL_FRAGMENT_SHADER, "quad.fs")) - - the_program = createProgram(shader_list) - - for shader in shader_list: - glDeleteShader(shader) - - # vertex attributes for a quad that fills the entire screen in Normalized Device Coordinates. 
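-        # Each vertex below packs two position doubles followed by two texture-coordinate doubles; the six vertices form the two triangles that cover the screen.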
- # positions # texCoords - quad_vertices = np.array([ - -1.0, 1.0, 0.0, 1.0, -1.0, -1.0, 0.0, 0.0, 1.0, -1.0, 1.0, 0.0, - -1.0, 1.0, 0.0, 1.0, 1.0, -1.0, 1.0, 0.0, 1.0, 1.0, 1.0, 1.0 - ]) - - quad_buffer = glGenBuffers(1) - glBindBuffer(GL_ARRAY_BUFFER, quad_buffer) - glBufferData(GL_ARRAY_BUFFER, quad_vertices, GL_STATIC_DRAW) - - glBindBuffer(GL_ARRAY_BUFFER, 0) - - return the_program, quad_buffer - - def set_mesh(self, vertices, faces): - self.vertex_data = vertices[faces.reshape([-1])] - self.vertex_dim = self.vertex_data.shape[1] - self.n_vertices = self.vertex_data.shape[0] - - glBindBuffer(GL_ARRAY_BUFFER, self.vertex_buffer) - glBufferData(GL_ARRAY_BUFFER, self.vertex_data, GL_STATIC_DRAW) - - glBindBuffer(GL_ARRAY_BUFFER, 0) - - def set_viewpoint(self, projection, model_view): - self.projection_matrix = projection - self.model_view_matrix = model_view - - def draw_init(self): - glBindFramebuffer(GL_FRAMEBUFFER, self.frame_buffer) - glEnable(GL_DEPTH_TEST) - - # glClearColor(0.0, 0.0, 0.0, 0.0) - glClearColor(1.0, 1.0, 1.0, 0.0) # Black background - - if self.use_inverse_depth: - glDepthFunc(GL_GREATER) - glClearDepth(0.0) - else: - glDepthFunc(GL_LESS) - glClearDepth(1.0) - glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT) - - def draw_end(self): - if self.intermediate_fbo is not None: - for i in range(len(self.color_buffer)): - glBindFramebuffer(GL_READ_FRAMEBUFFER, self.frame_buffer) - glReadBuffer(GL_COLOR_ATTACHMENT0 + i) - glBindFramebuffer(GL_DRAW_FRAMEBUFFER, self.intermediate_fbo) - glDrawBuffer(GL_COLOR_ATTACHMENT0 + i) - glBlitFramebuffer(0, 0, self.width, self.height, 0, 0, - self.width, self.height, GL_COLOR_BUFFER_BIT, - GL_NEAREST) - - glBindFramebuffer(GL_FRAMEBUFFER, 0) - glDepthFunc(GL_LESS) - glClearDepth(1.0) - - def draw(self): - self.draw_init() - - glUseProgram(self.program) - glUniformMatrix4fv(self.model_mat_unif, 1, GL_FALSE, - self.model_view_matrix.transpose()) - glUniformMatrix4fv(self.persp_mat_unif, 1, GL_FALSE, - self.projection_matrix.transpose()) - - glBindBuffer(GL_ARRAY_BUFFER, self.vertex_buffer) - - glEnableVertexAttribArray(0) - glVertexAttribPointer(0, self.vertex_dim, GL_DOUBLE, GL_FALSE, 0, None) - - glDrawArrays(GL_TRIANGLES, 0, self.n_vertices) - - glDisableVertexAttribArray(0) - - glBindBuffer(GL_ARRAY_BUFFER, 0) - - glUseProgram(0) - - self.draw_end() - - def get_color(self, color_id=0): - glBindFramebuffer( - GL_FRAMEBUFFER, self.intermediate_fbo - if self.intermediate_fbo is not None else self.frame_buffer) - glReadBuffer(GL_COLOR_ATTACHMENT0 + color_id) - data = glReadPixels(0, - 0, - self.width, - self.height, - GL_RGBA, - GL_FLOAT, - outputType=None) - glBindFramebuffer(GL_FRAMEBUFFER, 0) - rgb = data.reshape(self.height, self.width, -1) - rgb = np.flip(rgb, 0) - return rgb - - def get_z_value(self): - glBindFramebuffer(GL_FRAMEBUFFER, self.frame_buffer) - data = glReadPixels(0, - 0, - self.width, - self.height, - GL_DEPTH_COMPONENT, - GL_FLOAT, - outputType=None) - glBindFramebuffer(GL_FRAMEBUFFER, 0) - z = data.reshape(self.height, self.width) - z = np.flip(z, 0) - return z - - def display(self): - # First we draw a scene. - # Notice the result is stored in the texture buffer. - self.draw() - - # Then we return to the default frame buffer since we will display on the screen. - glBindFramebuffer(GL_FRAMEBUFFER, 0) - - # Do the clean-up. 
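-        # NOTE: the active glClearColor call below clears to white (1.0, 1.0, 1.0, 0.0); the trailing 'Black background' comment is a leftover from the commented-out black variant.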
- # glClearColor(0.0, 0.0, 0.0, 0.0) #Black background - glClearColor(1.0, 1.0, 1.0, 0.0) # Black background - glClear(GL_COLOR_BUFFER_BIT) - - # We draw a rectangle which covers the whole screen. - glUseProgram(self.quad_program) - glBindBuffer(GL_ARRAY_BUFFER, self.quad_buffer) - - size_of_double = 8 - glEnableVertexAttribArray(0) - glVertexAttribPointer(0, 2, GL_DOUBLE, GL_FALSE, 4 * size_of_double, - None) - glEnableVertexAttribArray(1) - glVertexAttribPointer(1, 2, GL_DOUBLE, GL_FALSE, 4 * size_of_double, - c_void_p(2 * size_of_double)) - - glDisable(GL_DEPTH_TEST) - - # The stored texture is then mapped to this rectangle. - # properly assing color buffer texture - glActiveTexture(GL_TEXTURE0) - glBindTexture(GL_TEXTURE_2D, self.screen_texture[0]) - glUniform1i(glGetUniformLocation(self.quad_program, 'screenTexture'), - 0) - - glDrawArrays(GL_TRIANGLES, 0, 6) - - glDisableVertexAttribArray(1) - glDisableVertexAttribArray(0) - - glEnable(GL_DEPTH_TEST) - glBindBuffer(GL_ARRAY_BUFFER, 0) - glUseProgram(0) - - glutSwapBuffers() - glutPostRedisplay() - - def show(self): - glutMainLoop() diff --git a/spaces/aadnk/faster-whisper-webui/src/hooks/whisperProgressHook.py b/spaces/aadnk/faster-whisper-webui/src/hooks/whisperProgressHook.py deleted file mode 100644 index aa09958a05e0b3c54736f7209f8a05a94912752e..0000000000000000000000000000000000000000 --- a/spaces/aadnk/faster-whisper-webui/src/hooks/whisperProgressHook.py +++ /dev/null @@ -1,91 +0,0 @@ -import sys -import threading -from typing import List, Union -import tqdm - -from src.hooks.progressListener import ProgressListener - -class ProgressListenerHandle: - def __init__(self, listener: ProgressListener): - self.listener = listener - - def __enter__(self): - register_thread_local_progress_listener(self.listener) - - def __exit__(self, exc_type, exc_val, exc_tb): - unregister_thread_local_progress_listener(self.listener) - - if exc_type is None: - self.listener.on_finished() - -class _CustomProgressBar(tqdm.tqdm): - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - self._current = self.n # Set the initial value - - def update(self, n): - super().update(n) - # Because the progress bar might be disabled, we need to manually update the progress - self._current += n - - # Inform listeners - listeners = _get_thread_local_listeners() - - for listener in listeners: - listener.on_progress(self._current, self.total) - -_thread_local = threading.local() - -def _get_thread_local_listeners(): - if not hasattr(_thread_local, 'listeners'): - _thread_local.listeners = [] - return _thread_local.listeners - -_hooked = False - -def init_progress_hook(): - global _hooked - - if _hooked: - return - - # Inject into tqdm.tqdm of Whisper, so we can see progress - import whisper.transcribe - transcribe_module = sys.modules['whisper.transcribe'] - transcribe_module.tqdm.tqdm = _CustomProgressBar - _hooked = True - -def register_thread_local_progress_listener(progress_listener: ProgressListener): - # This is a workaround for the fact that the progress bar is not exposed in the API - init_progress_hook() - - listeners = _get_thread_local_listeners() - listeners.append(progress_listener) - -def unregister_thread_local_progress_listener(progress_listener: ProgressListener): - listeners = _get_thread_local_listeners() - - if progress_listener in listeners: - listeners.remove(progress_listener) - -def create_progress_listener_handle(progress_listener: ProgressListener): - return ProgressListenerHandle(progress_listener) - -# Example usage 
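-# (Illustrative smoke test: requires a local Whisper install and the hard-coded audio path below; it registers the printing listener and reports progress while transcribing.)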
-if __name__ == '__main__': - class PrintingProgressListener: - def on_progress(self, current: Union[int, float], total: Union[int, float]): - print(f"Progress: {current}/{total}") - - def on_finished(self): - print("Finished") - - import whisper - model = whisper.load_model("medium") - - with create_progress_listener_handle(PrintingProgressListener()) as listener: - # Set verbose to None to disable the progress bar, as we are using our own - result = model.transcribe("J:\\Dev\\OpenAI\\whisper\\tests\\Noriko\\out.mka", language="Japanese", fp16=False, verbose=None) - print(result) - - print("Done") \ No newline at end of file diff --git a/spaces/abhishek/dreambooth/README.md b/spaces/abhishek/dreambooth/README.md deleted file mode 100644 index bc44ae527ad73e1ba6853201f97963fd08400099..0000000000000000000000000000000000000000 --- a/spaces/abhishek/dreambooth/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Dreambooth AutoTrain -emoji: ⚡ -colorFrom: red -colorTo: pink -sdk: gradio -sdk_version: 3.14.0 -app_file: main.py -pinned: false ---- diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/runner/optimizer/__init__.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/runner/optimizer/__init__.py deleted file mode 100644 index 53c34d0470992cbc374f29681fdd00dc0e57968d..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/runner/optimizer/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .builder import (OPTIMIZER_BUILDERS, OPTIMIZERS, build_optimizer, - build_optimizer_constructor) -from .default_constructor import DefaultOptimizerConstructor - -__all__ = [ - 'OPTIMIZER_BUILDERS', 'OPTIMIZERS', 'DefaultOptimizerConstructor', - 'build_optimizer', 'build_optimizer_constructor' -] diff --git a/spaces/abidlabs/images/app.py b/spaces/abidlabs/images/app.py deleted file mode 100644 index d2782cea00b1bfcd22df7c204d9e52a6baf46ac2..0000000000000000000000000000000000000000 --- a/spaces/abidlabs/images/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/stabilityai/stable-diffusion-2").launch() \ No newline at end of file diff --git a/spaces/abrar-lohia/text-2-character-anim/VQTrans/models/modules.py b/spaces/abrar-lohia/text-2-character-anim/VQTrans/models/modules.py deleted file mode 100644 index 4f06cd98d4f6029bd3df073095cf50498483d54a..0000000000000000000000000000000000000000 --- a/spaces/abrar-lohia/text-2-character-anim/VQTrans/models/modules.py +++ /dev/null @@ -1,109 +0,0 @@ -import torch -import torch.nn as nn -from torch.nn.utils.rnn import pack_padded_sequence - -def init_weight(m): - if isinstance(m, nn.Conv1d) or isinstance(m, nn.Linear) or isinstance(m, nn.ConvTranspose1d): - nn.init.xavier_normal_(m.weight) - # m.bias.data.fill_(0.01) - if m.bias is not None: - nn.init.constant_(m.bias, 0) - - -class MovementConvEncoder(nn.Module): - def __init__(self, input_size, hidden_size, output_size): - super(MovementConvEncoder, self).__init__() - self.main = nn.Sequential( - nn.Conv1d(input_size, hidden_size, 4, 2, 1), - nn.Dropout(0.2, inplace=True), - nn.LeakyReLU(0.2, inplace=True), - nn.Conv1d(hidden_size, output_size, 4, 2, 1), - nn.Dropout(0.2, inplace=True), - nn.LeakyReLU(0.2, inplace=True), - ) - self.out_net = nn.Linear(output_size, output_size) - self.main.apply(init_weight) - self.out_net.apply(init_weight) - - def forward(self, inputs): - inputs = inputs.permute(0, 2, 1) - outputs = self.main(inputs).permute(0, 2, 1) - # 
print(outputs.shape) - return self.out_net(outputs) - - - -class TextEncoderBiGRUCo(nn.Module): - def __init__(self, word_size, pos_size, hidden_size, output_size, device): - super(TextEncoderBiGRUCo, self).__init__() - self.device = device - - self.pos_emb = nn.Linear(pos_size, word_size) - self.input_emb = nn.Linear(word_size, hidden_size) - self.gru = nn.GRU(hidden_size, hidden_size, batch_first=True, bidirectional=True) - self.output_net = nn.Sequential( - nn.Linear(hidden_size * 2, hidden_size), - nn.LayerNorm(hidden_size), - nn.LeakyReLU(0.2, inplace=True), - nn.Linear(hidden_size, output_size) - ) - - self.input_emb.apply(init_weight) - self.pos_emb.apply(init_weight) - self.output_net.apply(init_weight) - self.hidden_size = hidden_size - self.hidden = nn.Parameter(torch.randn((2, 1, self.hidden_size), requires_grad=True)) - - # input(batch_size, seq_len, dim) - def forward(self, word_embs, pos_onehot, cap_lens): - num_samples = word_embs.shape[0] - - pos_embs = self.pos_emb(pos_onehot) - inputs = word_embs + pos_embs - input_embs = self.input_emb(inputs) - hidden = self.hidden.repeat(1, num_samples, 1) - - cap_lens = cap_lens.data.tolist() - emb = pack_padded_sequence(input_embs, cap_lens, batch_first=True) - - gru_seq, gru_last = self.gru(emb, hidden) - - gru_last = torch.cat([gru_last[0], gru_last[1]], dim=-1) - - return self.output_net(gru_last) - - -class MotionEncoderBiGRUCo(nn.Module): - def __init__(self, input_size, hidden_size, output_size, device): - super(MotionEncoderBiGRUCo, self).__init__() - self.device = device - - self.input_emb = nn.Linear(input_size, hidden_size) - self.gru = nn.GRU(hidden_size, hidden_size, batch_first=True, bidirectional=True) - self.output_net = nn.Sequential( - nn.Linear(hidden_size*2, hidden_size), - nn.LayerNorm(hidden_size), - nn.LeakyReLU(0.2, inplace=True), - nn.Linear(hidden_size, output_size) - ) - - self.input_emb.apply(init_weight) - self.output_net.apply(init_weight) - self.hidden_size = hidden_size - self.hidden = nn.Parameter(torch.randn((2, 1, self.hidden_size), requires_grad=True)) - - # input(batch_size, seq_len, dim) - def forward(self, inputs, m_lens): - num_samples = inputs.shape[0] - - input_embs = self.input_emb(inputs) - hidden = self.hidden.repeat(1, num_samples, 1) - - cap_lens = m_lens.data.tolist() - emb = pack_padded_sequence(input_embs, cap_lens, batch_first=True, enforce_sorted=False) - - gru_seq, gru_last = self.gru(emb, hidden) - - gru_last = torch.cat([gru_last[0], gru_last[1]], dim=-1) - - return self.output_net(gru_last) diff --git a/spaces/akhaliq/Music_Source_Separation/scripts/5_inference/musdb18/inference.sh b/spaces/akhaliq/Music_Source_Separation/scripts/5_inference/musdb18/inference.sh deleted file mode 100644 index 21ecd5a30731343ee9b74e181ef4602b528a87d4..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/Music_Source_Separation/scripts/5_inference/musdb18/inference.sh +++ /dev/null @@ -1,17 +0,0 @@ -#!/bin/bash -WORKSPACE=${1:-"./workspaces/bytesep"} # The first argument is workspace directory. - -echo "WORKSPACE=${WORKSPACE}" - -# Users can modify the following config file. 
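-# The YAML file name encodes the separation target and architecture (here: vocals vs. accompaniment with a UNet).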
-TRAIN_CONFIG_YAML="scripts/4_train/musdb18/configs/vocals-accompaniment,unet.yaml" - -CHECKPOINT_PATH="${WORKSPACE}/checkpoints/musdb18/train/config=vocals-accompaniment,unet,gpus=1/step=300000.pth" - -# Inference -CUDA_VISIBLE_DEVICES=0 python3 bytesep/inference.py \ - --config_yaml=$TRAIN_CONFIG_YAML \ - --checkpoint_path=$CHECKPOINT_PATH \ - --audio_path="resources/vocals_accompaniment_10s.mp3" \ - --output_path="sep_results/vocals_accompaniment_10s_sep_vocals.mp3" - \ No newline at end of file diff --git a/spaces/akhaliq/SummerTime/evaluation/base_metric.py b/spaces/akhaliq/SummerTime/evaluation/base_metric.py deleted file mode 100644 index fc6349011a2b7971ba7330e0d28579d9fe5a94fb..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/SummerTime/evaluation/base_metric.py +++ /dev/null @@ -1,27 +0,0 @@ -from typing import List, Tuple, Dict - - -class SummMetric: - metric_name: str = None - range: Tuple[float, float] = None - higher_is_better: bool = None - requires_heavy_compute: bool = None - - def evaluate( - self, - # TODO zhangir: integrate with dataset api - inputs: List[str], - targets: List[str], - keys: List[str], - ) -> Dict[str, float]: - """ - All metrics should have this function. - :input: A list of summaries. - :target: A list of target summaries corresponding to each entry of input. - :keys: Which metrics to return, - e.g, ['rouge_1_f_score', 'rouge_2_f_score'] - :return: A dictionary with keys metrics and values scores. - """ - raise NotImplementedError( - "the base class for metrics shouldn't be instantiated!" - ) diff --git a/spaces/akhaliq/VQMIVC/ParallelWaveGAN/egs/arctic/voc1/local/data_prep.sh b/spaces/akhaliq/VQMIVC/ParallelWaveGAN/egs/arctic/voc1/local/data_prep.sh deleted file mode 100644 index 94d5293e5df8c8bec79e8e2c5f36163d4f02b9bb..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/VQMIVC/ParallelWaveGAN/egs/arctic/voc1/local/data_prep.sh +++ /dev/null @@ -1,113 +0,0 @@ -#!/bin/bash - -# Copyright 2019 Tomoki Hayashi -# MIT License (https://opensource.org/licenses/MIT) - -# shellcheck disable=SC1091 -. ./path.sh || exit 1; - -num_dev=100 -num_eval=100 -train_set="train_nodev" -dev_set="dev" -eval_set="eval" -shuffle=false - -# shellcheck disable=SC1091 -. utils/parse_options.sh || exit 1; - -db_root=$1 -spk=$2 -data_dir=$3 - -# check arguments -if [ $# != 3 ]; then - echo "Usage: $0 " - echo "e.g.: $0 downloads/cms_us_slt_arctic slt data" - echo "" - echo "Options:" - echo " --num_dev: number of development uttreances (default=250)." - echo " --num_eval: number of evaluation uttreances (default=250)." - echo " --train_set: name of train set (default=train_nodev)." - echo " --dev_set: name of dev set (default=dev)." - echo " --eval_set: name of eval set (default=eval)." - echo " --shuffle: whether to perform shuffle in making dev / eval set (default=false)." - exit 1 -fi - -set -euo pipefail - -# check speaker -available_spks=( - "slt" "clb" "bdl" "rms" "jmk" "awb" "ksp" -) -if ! echo "${available_spks[*]}" | grep -q "${spk}"; then - echo "Specified speaker ${spk} is not available." - echo "Available speakers: ${available_spks[*]}" - exit 1 -fi - -[ ! 
-e "${data_dir}/all" ] && mkdir -p "${data_dir}/all" - -# set filenames -scp="${data_dir}/all/wav.scp" -segments="${data_dir}/all/segments" - -# check file existence -[ -e "${scp}" ] && rm "${scp}" -[ -e "${segments}" ] && rm "${segments}" - -# make scp -find "${db_root}" -name "*.wav" -follow | sort | while read -r filename; do - id="${spk}_$(basename "${filename}" | sed -e "s/\.[^\.]*$//g")" - echo "${id} ${filename}" >> "${scp}" -done - -# make segments -find "${db_root}/lab" -name "*.lab" -follow | sort | while read -r filename; do - # get start time - while read -r line; do - phn=$(echo "${line}" | cut -d " " -f 3) - if [ "${phn}" != "pau" ]; then - break - fi - start=$(echo "${line}" | cut -d " " -f 1) - done < <(tail -n +2 "$filename") - # get end time - while read -r line; do - end=$(echo "${line}" | cut -d " " -f 1) - phn=$(echo "${line}" | cut -d " " -f 3) - if [ "${phn}" != "pau" ]; then - break - fi - done < <(tail -n +2 "$filename" | tac) - echo "${spk}_$(basename "${filename}" .lab) ${spk}_$(basename "${filename}" .lab) ${start} ${end}" >> "${segments}" -done - -# check -diff -q <(awk '{print $1}' "${scp}") <(awk '{print $1}' "${segments}") > /dev/null - -# split -num_all=$(wc -l < "${scp}") -num_deveval=$((num_dev + num_eval)) -num_train=$((num_all - num_deveval)) -utils/split_data.sh \ - --num_first "${num_train}" \ - --num_second "${num_deveval}" \ - --shuffle "${shuffle}" \ - "${data_dir}/all" \ - "${data_dir}/${train_set}" \ - "${data_dir}/deveval" -utils/split_data.sh \ - --num_first "${num_dev}" \ - --num_second "${num_eval}" \ - --shuffle "${shuffle}" \ - "${data_dir}/deveval" \ - "${data_dir}/${dev_set}" \ - "${data_dir}/${eval_set}" - -# remove tmp directories -rm -rf "${data_dir}/all" -rm -rf "${data_dir}/deveval" - -echo "Successfully prepared data." diff --git a/spaces/akhaliq/encoder4editing/README.md b/spaces/akhaliq/encoder4editing/README.md deleted file mode 100644 index a41d11b3749f74dfa56cf6f53715c96d7eacf2cc..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/encoder4editing/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: Encoder4editing -emoji: 🌍 -colorFrom: yellow -colorTo: yellow -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. 
diff --git a/spaces/akhaliq/lama/bin/paper_runfiles/update_test_data_stats.sh b/spaces/akhaliq/lama/bin/paper_runfiles/update_test_data_stats.sh deleted file mode 100644 index ff77d586f308202fbd019d8cc4be641f0d6aa1a5..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/lama/bin/paper_runfiles/update_test_data_stats.sh +++ /dev/null @@ -1,30 +0,0 @@ -#!/usr/bin/env bash - -# paths to data are valid for mml7 - -source "$(dirname $0)/env.sh" - -#INDIR="/data/inpainting/paper_data/Places365_val_test/test_large_30k" -# -#for dataset in random_medium_256 random_medium_512 random_thick_256 random_thick_512 random_thin_256 random_thin_512 -#do -# "$BINDIR/calc_dataset_stats.py" "$INDIR/$dataset" "$INDIR/${dataset}_stats2" -#done -# -#"$BINDIR/calc_dataset_stats.py" "/data/inpainting/evalset2" "/data/inpainting/evalset2_stats2" - - -INDIR="/data/inpainting/paper_data/CelebA-HQ_val_test/test" - -for dataset in random_medium_256 random_thick_256 random_thin_256 -do - "$BINDIR/calc_dataset_stats.py" "$INDIR/$dataset" "$INDIR/${dataset}_stats2" -done - - -INDIR="/data/inpainting/paper_data/Paris_StreetView_Dataset_val_256/paris_eval_gt" - -for dataset in random_medium_256 random_thick_256 random_thin_256 -do - "$BINDIR/calc_dataset_stats.py" "$INDIR/$dataset" "$INDIR/${dataset}_stats2" -done \ No newline at end of file diff --git a/spaces/akhaliq/stylegan3_clip/dnnlib/__init__.py b/spaces/akhaliq/stylegan3_clip/dnnlib/__init__.py deleted file mode 100644 index e0a006715176e91a5ed94a5d2362a87d53b4d889..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/stylegan3_clip/dnnlib/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -from .util import EasyDict, make_cache_dir_path diff --git a/spaces/alamin655/replit-3B-inference/README.md b/spaces/alamin655/replit-3B-inference/README.md deleted file mode 100644 index 3ad8ec49f9fa062c3164688992718dab9d6889dd..0000000000000000000000000000000000000000 --- a/spaces/alamin655/replit-3B-inference/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Replit 3B Inference -emoji: 📉 -colorFrom: indigo -colorTo: indigo -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/alex-mindspace/gpt-agents/swarmai/utils/memory/InternalMemoryBase.py b/spaces/alex-mindspace/gpt-agents/swarmai/utils/memory/InternalMemoryBase.py deleted file mode 100644 index 3765f5f696d5eac7fedad05d4891f594d7cccf82..0000000000000000000000000000000000000000 --- a/spaces/alex-mindspace/gpt-agents/swarmai/utils/memory/InternalMemoryBase.py +++ /dev/null @@ -1,25 +0,0 @@ -from abc import ABC, abstractmethod - -class InternalMemoryBase(ABC): - """Abstract base class for internal memory of agents in the swarm. - """ - - def __init__(self, n_entries): - """Initialize the internal memory. In the current architecture the memory always consists of a set of soltuions or evaluations. - During the operation, the agent should retrivie best solutions from it's internal memory based on the score. 
- - Moreover, the project is designed around LLMs for the proof of concepts, so we treat all entry content as a string. - """ - self.n_entries = n_entries - - @abstractmethod - def add_entry(self, score, entry): - """Add an entry to the internal memory. - """ - raise NotImplementedError - - @abstractmethod - def get_top_n(self, n): - """Get the top n entries from the internal memory. - """ - raise NotImplementedError \ No newline at end of file diff --git a/spaces/almakedon/faster-whisper-webui/src/hooks/progressListener.py b/spaces/almakedon/faster-whisper-webui/src/hooks/progressListener.py deleted file mode 100644 index a7852a24e237ae864bbce5f37674e1f7c817a1b3..0000000000000000000000000000000000000000 --- a/spaces/almakedon/faster-whisper-webui/src/hooks/progressListener.py +++ /dev/null @@ -1,8 +0,0 @@ -from typing import Union - -class ProgressListener: - def on_progress(self, current: Union[int, float], total: Union[int, float]): - self.total = total - - def on_finished(self): - pass \ No newline at end of file diff --git a/spaces/alonardo/Career_Companion/README.md b/spaces/alonardo/Career_Companion/README.md deleted file mode 100644 index 62beb37d9cee5b467961be9815b5393cc00c36b1..0000000000000000000000000000000000000000 --- a/spaces/alonardo/Career_Companion/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Career Companion -emoji: 🚀 -colorFrom: pink -colorTo: green -sdk: gradio -sdk_version: 3.39.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/amanatid/Melissa_The_PubMedGPT_with_Voice_and_featuring_answers/sidebar.py b/spaces/amanatid/Melissa_The_PubMedGPT_with_Voice_and_featuring_answers/sidebar.py deleted file mode 100644 index 66724e8129b0f315cefc0cf3082e942c359c398e..0000000000000000000000000000000000000000 --- a/spaces/amanatid/Melissa_The_PubMedGPT_with_Voice_and_featuring_answers/sidebar.py +++ /dev/null @@ -1,38 +0,0 @@ -import streamlit as st - -from faq import faq - - -def set_openai_api_key(api_key: str): - st.session_state["OPENAI_API_KEY"] = api_key - - -def sidebar(): - with st.sidebar: - st.markdown( - "## How to use\n" - "1. Enter your [OpenAI API key](https://platform.openai.com/account/api-keys) below🔑\n" # noqa: E501 - "2. Choose the Medic Topic to dicuss🚩\n" - "3. Load the number of papers you want to investigate. \n" - "4. Choose a criterion.\n" - "5. Wait for the message 'PubMed papers are loaded based on the criteria' to be appeared.\n" - ) - - - - st.markdown("---") - st.markdown("# About") - st.markdown( - "⚕️PubMedGPT allows you to commit a scientific dialogue based on" - " a specific question/criterion and the amount of data that are loaded from" - "[PubMed](https://pubmed.ncbi.nlm.nih.gov/). " - ) - st.markdown( - "This is a work in progress. 
" - "You can contribute to the project on [GitHub](https://github.com/amanatid/ArxivChatBot_StreamlitApp) " - "with your feedback and suggestions💡" - ) - st.markdown("Made by [amanatid](amanatid@gmail.com)") - st.markdown("---") - - faq() diff --git a/spaces/amitjamadagni/qs-benchmarks/app.py b/spaces/amitjamadagni/qs-benchmarks/app.py deleted file mode 100644 index 8de8e080783ea464370dabcab07e9465ac9eae69..0000000000000000000000000000000000000000 --- a/spaces/amitjamadagni/qs-benchmarks/app.py +++ /dev/null @@ -1,6 +0,0 @@ -import os -from subprocess import Popen - -command = ["mercury", "run", f"0.0.0.0:{os.environ.get('PORT', 7860)}"] -worker = Popen(command) -worker.wait() \ No newline at end of file diff --git a/spaces/anon9i9/finetuned_diffusion_test/style.css b/spaces/anon9i9/finetuned_diffusion_test/style.css deleted file mode 100644 index 9bfa78cc983f84693cf7cbab1e3bfd0e0d36c944..0000000000000000000000000000000000000000 --- a/spaces/anon9i9/finetuned_diffusion_test/style.css +++ /dev/null @@ -1,24 +0,0 @@ -.finetuned-diffusion-div div{ - display:inline-flex; - align-items:center; - gap:.8rem; - font-size:1.75rem -} -.finetuned-diffusion-div div h1{ - font-weight:900; - margin-bottom:7px -} -.finetuned-diffusion-div p{ - margin-bottom:10px; - font-size:94% -} -a{ - text-decoration:underline -} -.tabs{ - margin-top:0; - margin-bottom:0 -} -#gallery{ - min-height:20rem -} diff --git a/spaces/aodianyun/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/src/adabins/__init__.py b/spaces/aodianyun/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/src/adabins/__init__.py deleted file mode 100644 index 8b2a0eea190658f294d0a49363ea28543087bdf6..0000000000000000000000000000000000000000 --- a/spaces/aodianyun/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/src/adabins/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .unet_adaptive_bins import UnetAdaptiveBins diff --git a/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/layers/xtts/gpt.py b/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/layers/xtts/gpt.py deleted file mode 100644 index 683104d871a283ba0a7d58fc719fd427c049f034..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/layers/xtts/gpt.py +++ /dev/null @@ -1,617 +0,0 @@ -# ported from: https://github.com/neonbjb/tortoise-tts - -import functools -import math -import random - -import torch -import torch.nn as nn -import torch.nn.functional as F -from transformers import GPT2Config - -from TTS.tts.layers.xtts.gpt_inference import GPT2InferenceModel -from TTS.tts.layers.xtts.latent_encoder import ConditioningEncoder -from TTS.tts.layers.xtts.perceiver_encoder import PerceiverResampler - - -def null_position_embeddings(range, dim): - return torch.zeros((range.shape[0], range.shape[1], dim), device=range.device) - - -class LearnedPositionEmbeddings(nn.Module): - def __init__(self, seq_len, model_dim, init=0.02, relative=False): - super().__init__() - # nn.Embedding - self.emb = torch.nn.Embedding(seq_len, model_dim) - # Initializing this way is standard for GPT-2 - self.emb.weight.data.normal_(mean=0.0, std=init) - self.relative = relative - self.seq_len = seq_len - - def forward(self, x): - sl = x.shape[1] - if self.relative: - start = random.randint(sl, self.seq_len) - sl - return self.emb(torch.arange(start, start + sl, device=x.device)) - else: - return self.emb(torch.arange(0, sl, device=x.device)) - - def get_fixed_embedding(self, ind, dev): - return self.emb(torch.tensor([ind], 
device=dev)).unsqueeze(0) - - -def build_hf_gpt_transformer( - layers, - model_dim, - heads, - max_mel_seq_len, - max_text_seq_len, - max_prompt_len, - checkpointing, -): - """ - GPT-2 implemented by the HuggingFace library. - """ - from transformers import GPT2Config, GPT2Model - - gpt_config = GPT2Config( - vocab_size=256, # Unused. - n_positions=max_mel_seq_len + max_text_seq_len + max_prompt_len, - n_ctx=max_mel_seq_len + max_text_seq_len + max_prompt_len, - n_embd=model_dim, - n_layer=layers, - n_head=heads, - gradient_checkpointing=checkpointing, - use_cache=not checkpointing, - ) - gpt = GPT2Model(gpt_config) - # Override the built in positional embeddings - del gpt.wpe - gpt.wpe = functools.partial(null_position_embeddings, dim=model_dim) - # Built-in token embeddings are unused. - del gpt.wte - - mel_pos_emb = ( - LearnedPositionEmbeddings(max_mel_seq_len, model_dim) - if max_mel_seq_len != -1 - else functools.partial(null_position_embeddings, dim=model_dim) - ) - text_pos_emb = ( - LearnedPositionEmbeddings(max_text_seq_len, model_dim) - if max_mel_seq_len != -1 - else functools.partial(null_position_embeddings, dim=model_dim) - ) - # gpt = torch.compile(gpt, mode="reduce-overhead", fullgraph=True) - return gpt, mel_pos_emb, text_pos_emb, None, None - - -class GPT(nn.Module): - def __init__( - self, - start_text_token=261, - stop_text_token=0, - layers=8, - model_dim=512, - heads=8, - max_text_tokens=120, - max_mel_tokens=250, - max_prompt_tokens=70, - max_conditioning_inputs=1, - code_stride_len=1024, - number_text_tokens=256, - num_audio_tokens=8194, - start_audio_token=8192, - stop_audio_token=8193, - train_solo_embeddings=False, - checkpointing=False, - average_conditioning_embeddings=False, - label_smoothing=0.0, - use_perceiver_resampler=False, - perceiver_cond_length_compression=256, - ): - """ - Args: - - """ - super().__init__() - - self.label_smoothing = label_smoothing - self.number_text_tokens = number_text_tokens - self.start_text_token = start_text_token - self.stop_text_token = stop_text_token - self.num_audio_tokens = num_audio_tokens - self.start_audio_token = start_audio_token - self.stop_audio_token = stop_audio_token - self.start_prompt_token = start_audio_token - self.stop_prompt_token = stop_audio_token - self.layers = layers - self.heads = heads - self.model_dim = model_dim - self.max_conditioning_inputs = max_conditioning_inputs - self.max_mel_tokens = -1 if max_mel_tokens == -1 else max_mel_tokens + 2 + self.max_conditioning_inputs - self.max_text_tokens = -1 if max_text_tokens == -1 else max_text_tokens + 2 - self.max_prompt_tokens = max_prompt_tokens - self.code_stride_len = code_stride_len - self.conditioning_encoder = ConditioningEncoder(80, model_dim, num_attn_heads=heads) - self.conditioning_dropout = nn.Dropout1d(0.1) - self.average_conditioning_embeddings = average_conditioning_embeddings - self.use_perceiver_resampler = use_perceiver_resampler - self.perceiver_cond_length_compression = perceiver_cond_length_compression - - self.text_embedding = nn.Embedding(self.number_text_tokens, model_dim) - self.mel_embedding = nn.Embedding(self.num_audio_tokens, model_dim) - - ( - self.gpt, - self.mel_pos_embedding, - self.text_pos_embedding, - self.mel_layer_pos_embedding, - self.text_layer_pos_embedding, - ) = build_hf_gpt_transformer( - layers, - model_dim, - heads, - self.max_mel_tokens, - self.max_text_tokens, - self.max_prompt_tokens, - checkpointing, - ) - if train_solo_embeddings: - self.mel_solo_embedding = nn.Parameter(torch.randn(1, 1, model_dim) 
* 0.02, requires_grad=True) - self.text_solo_embedding = nn.Parameter(torch.randn(1, 1, model_dim) * 0.02, requires_grad=True) - else: - self.mel_solo_embedding = 0 - self.text_solo_embedding = 0 - - self.final_norm = nn.LayerNorm(model_dim) - self.text_head = nn.Linear(model_dim, self.number_text_tokens) - self.mel_head = nn.Linear(model_dim, self.num_audio_tokens) - - if self.use_perceiver_resampler: - # XTTS v2 - self.conditioning_perceiver = PerceiverResampler( - dim=model_dim, - depth=2, - dim_context=model_dim, - num_latents=32, - dim_head=64, - heads=8, - ff_mult=4, - use_flash_attn=False, - ) - else: - # XTTS v1 - self.prompt_embedding = nn.Embedding(self.num_audio_tokens, model_dim) - self.prompt_pos_embedding = LearnedPositionEmbeddings(24 * 9, model_dim) - - def get_grad_norm_parameter_groups(self): - return { - "conditioning_encoder": list(self.conditioning_encoder.parameters()), - "conditioning_perceiver": list(self.conditioning_perceiver.parameters()) - if self.use_perceiver_resampler - else None, - "gpt": list(self.gpt.parameters()), - "heads": list(self.text_head.parameters()) + list(self.mel_head.parameters()), - } - - def init_gpt_for_inference(self, kv_cache=True, use_deepspeed=False): - seq_length = self.max_prompt_tokens + self.max_mel_tokens + self.max_text_tokens + 1 - gpt_config = GPT2Config( - vocab_size=self.max_mel_tokens, - n_positions=seq_length, - n_ctx=seq_length, - n_embd=self.model_dim, - n_layer=self.layers, - n_head=self.heads, - gradient_checkpointing=False, - use_cache=True, - ) - self.gpt_inference = GPT2InferenceModel( - gpt_config, - self.gpt, - self.mel_pos_embedding, - self.mel_embedding, - self.final_norm, - self.mel_head, - kv_cache=kv_cache, - ) - self.gpt.wte = self.mel_embedding - - if use_deepspeed: - import deepspeed - - self.ds_engine = deepspeed.init_inference( - model=self.gpt_inference.half(), # Transformers models - mp_size=1, # Number of GPU - dtype=torch.float32, # desired data type of output - replace_method="auto", # Lets DS autmatically identify the layer to replace - replace_with_kernel_inject=True, # replace the model with the kernel injector - ) - self.gpt_inference = self.ds_engine.module.eval() - - def set_inputs_and_targets(self, input, start_token, stop_token): - inp = F.pad(input, (1, 0), value=start_token) - tar = F.pad(input, (0, 1), value=stop_token) - return inp, tar - - def set_mel_padding(self, mel_input_tokens, code_lengths): - """ - Given mel tokens that are derived from a padded audio clip and the actual lengths of each batch element in - that audio clip, reformats the tokens with stop_audio_token in place of the zero padding. This is required - preformatting to create a working TTS model. - """ - # Set padding areas within MEL (currently it is coded with the MEL code for ). 
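-        # Everything beyond each clip's actual code length is overwritten with stop_audio_token in the loop below.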
- for b in range(len(code_lengths)): - actual_end = code_lengths[b] - if actual_end < mel_input_tokens.shape[-1]: - mel_input_tokens[b, actual_end:] = self.stop_audio_token - return mel_input_tokens - - def get_logits( - self, - first_inputs, - first_head, - second_inputs=None, - second_head=None, - prompt=None, - get_attns=False, - return_latent=False, - attn_mask_cond=None, - attn_mask_text=None, - attn_mask_mel=None, - ): - if prompt is not None: - offset = prompt.shape[1] - if second_inputs is not None: - emb = torch.cat([prompt, first_inputs, second_inputs], dim=1) - else: - emb = torch.cat([prompt, first_inputs], dim=1) - - # with torch.backends.cuda.sdp_kernel(enable_flash=True, enable_math=False, enable_mem_efficient=False): - attn_mask = None - if attn_mask_text is not None: - attn_mask = torch.cat([attn_mask_text, attn_mask_mel], dim=1) - if prompt is not None: - attn_mask_cond = torch.ones(prompt.shape[0], offset, dtype=torch.bool, device=emb.device) - attn_mask = torch.cat([attn_mask_cond, attn_mask], dim=1) - - gpt_out = self.gpt( - inputs_embeds=emb, - return_dict=True, - output_attentions=get_attns, - attention_mask=attn_mask, - ) - - if get_attns: - return gpt_out.attentions - - enc = gpt_out.last_hidden_state[:, offset:] - enc = self.final_norm(enc) - - if return_latent: - return enc[:, : first_inputs.shape[1]], enc[:, -second_inputs.shape[1] :] - - first_logits = enc[:, : first_inputs.shape[1]] - first_logits = first_head(first_logits) - first_logits = first_logits.permute(0, 2, 1) - if second_inputs is not None: - second_logits = enc[:, -second_inputs.shape[1] :] - second_logits = second_head(second_logits) - second_logits = second_logits.permute(0, 2, 1) - return first_logits, second_logits - else: - return first_logits - - def get_conditioning(self, speech_conditioning_input): - speech_conditioning_input = ( - speech_conditioning_input.unsqueeze(1) - if len(speech_conditioning_input.shape) == 3 - else speech_conditioning_input - ) - conds = [] - for j in range(speech_conditioning_input.shape[1]): - conds.append(self.conditioning_encoder(speech_conditioning_input[:, j])) - conds = torch.stack(conds, dim=1) - conds = conds.mean(dim=1) - return conds - - def get_prompts(self, prompt_codes): - """ - Create a prompt from the mel codes. This is used to condition the model on the mel codes. - Pad the prompt with start and stop mel tokens. 
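-        During training, a fixed 3-second window (72 code frames, at 24 frames per second) is cropped from the prompt codes before the start and stop tokens are added.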
- """ - prompt = prompt_codes - if self.training: - lengths = [] - # Compute the real prompt length based on the first encounter with the token 83 used for padding - for i in range(prompt_codes.shape[0]): - length = 0 - for j in range(prompt_codes.shape[1]): - if prompt_codes[i, j] == 83: - break - else: - length += 1 - lengths.append(length) - - # prompt_len = random.randint(1, 9) # in secs - prompt_len = 3 - prompt_len = prompt_len * 24 # in frames - if prompt_codes.shape[-1] >= prompt_len: - for i in range(prompt_codes.shape[0]): - if lengths[i] < prompt_len: - start = 0 - else: - start = random.randint(0, lengths[i] - prompt_len) - prompt = prompt_codes[:, start : start + prompt_len] - - # add start and stop tokens - prompt = F.pad(prompt, (1, 0), value=self.start_prompt_token) - prompt = F.pad(prompt, (0, 1), value=self.stop_prompt_token) - return prompt - - def get_style_emb(self, cond_input, return_latent=False): - """ - cond_input: (b, 80, s) or (b, 1, 80, s) - conds: (b, 1024, s) - """ - conds = None - if not return_latent: - if cond_input.ndim == 4: - cond_input = cond_input.squeeze(1) - conds = self.conditioning_encoder(cond_input) # (b, d, s) - if self.use_perceiver_resampler: - conds = self.conditioning_perceiver(conds.permute(0, 2, 1)).transpose(1, 2) # (b, d, 32) - else: - # already computed - conds = cond_input.unsqueeze(1) - return conds - - def forward( - self, - text_inputs, - text_lengths, - audio_codes, - wav_lengths, - cond_mels=None, - cond_idxs=None, - cond_lens=None, - cond_latents=None, - return_attentions=False, - return_latent=False, - ): - """ - Forward pass that uses both text and voice in either text conditioning mode or voice conditioning mode - (actuated by `text_first`). - - text_inputs: long tensor, (b,t) - text_lengths: long tensor, (b,) - mel_inputs: long tensor, (b,m) - wav_lengths: long tensor, (b,) - cond_mels: MEL float tensor, (b, 1, 80,s) - cond_idxs: cond start and end indexs, (b, 2) - - If return_attentions is specified, only logits are returned. - If return_latent is specified, loss & logits are not computed or returned. Only the predicted latents are returned. - """ - # ❗ FIXIT - if self.max_conditioning_inputs == 0: - assert cond_mels is None, " ❗ cond_mels is not None, but max_conditioning_inputs == 0" - - max_text_len = text_lengths.max() - code_lengths = torch.ceil(wav_lengths / self.code_stride_len).long() + 3 - - if cond_lens is not None: - if self.use_perceiver_resampler: - cond_lens = cond_lens // self.perceiver_cond_length_compression - else: - cond_lens = cond_lens // self.code_stride_len - - if cond_idxs is not None: - # recompute cond idxs for mel lengths - for idx in range(cond_idxs.size(0)): - if self.use_perceiver_resampler: - cond_idxs[idx] = cond_idxs[idx] // self.perceiver_cond_length_compression - else: - cond_idxs[idx] = cond_idxs[idx] // self.code_stride_len - - # ensure that the cond_mel does not have padding - # if cond_lens is not None and cond_idxs is None: - # min_cond_len = torch.min(cond_lens) - # cond_mels = cond_mels[:, :, :, :min_cond_len] - - # If len(codes) + 3 is larger than maxiumum allowed length, we truncate the codes. 
- max_mel_len = code_lengths.max() - - if max_mel_len > audio_codes.shape[-1]: - audio_codes = F.pad(audio_codes, (0, max_mel_len - audio_codes.shape[-1])) - - silence = True - for idx, l in enumerate(code_lengths): - length = l.item() - while silence: - if audio_codes[idx, length - 1] != 83: - break - length -= 1 - code_lengths[idx] = length - - # 💖 Lovely assertions - assert ( - max_mel_len <= audio_codes.shape[-1] - ), f" ❗ max_mel_len ({max_mel_len}) > audio_codes.shape[-1] ({audio_codes.shape[-1]})" - assert ( - max_text_len <= text_inputs.shape[-1] - ), f" ❗ max_text_len ({max_text_len}) > text_inputs.shape[-1] ({text_inputs.shape[-1]})" - - # Append stop token to text inputs - text_inputs = F.pad(text_inputs[:, :max_text_len], (0, 1), value=self.stop_text_token) - - # Append silence token to mel codes - audio_codes = F.pad(audio_codes[:, :max_mel_len], (0, 1), value=self.stop_audio_token) - - # Pad mel codes with stop_audio_token - audio_codes = self.set_mel_padding(audio_codes, code_lengths) - - # Build input and target tensors - # Prepend start token to inputs and append stop token to targets - text_inputs, text_targets = self.set_inputs_and_targets( - text_inputs, self.start_text_token, self.stop_text_token - ) - audio_codes, mel_targets = self.set_inputs_and_targets( - audio_codes, self.start_audio_token, self.stop_audio_token - ) - - # Set attn_mask - attn_mask_cond = None - attn_mask_text = None - attn_mask_mel = None - if not return_latent: - attn_mask_cond = torch.ones( - cond_mels.shape[0], - cond_mels.shape[-1], - dtype=torch.bool, - device=text_inputs.device, - ) - attn_mask_text = torch.ones( - text_inputs.shape[0], - text_inputs.shape[1], - dtype=torch.bool, - device=text_inputs.device, - ) - attn_mask_mel = torch.ones( - audio_codes.shape[0], - audio_codes.shape[1], - dtype=torch.bool, - device=audio_codes.device, - ) - - if cond_idxs is not None: - # use masking approach - for idx, r in enumerate(cond_idxs): - l = r[1] - r[0] - attn_mask_cond[idx, l:] = 0.0 - elif cond_lens is not None: - for idx, l in enumerate(cond_lens): - attn_mask_cond[idx, l:] = 0.0 - - for idx, l in enumerate(text_lengths): - attn_mask_text[idx, l + 1 :] = 0.0 - - for idx, l in enumerate(code_lengths): - attn_mask_mel[idx, l + 1 :] = 0.0 - - # Compute text embeddings + positional embeddings - text_emb = self.text_embedding(text_inputs) + self.text_pos_embedding(text_inputs) - - # Compute mel embeddings + positional embeddings - mel_emb = self.mel_embedding(audio_codes) + self.mel_pos_embedding(audio_codes) - - # Compute speech conditioning input - if cond_latents is None: - cond_latents = self.get_style_emb(cond_mels).transpose(1, 2) - - # Get logits - sub = -5 # don't ask me why 😄 - if self.training: - sub = -1 - - text_logits, mel_logits = self.get_logits( - text_emb, - self.text_head, - mel_emb, - self.mel_head, - prompt=cond_latents, - get_attns=return_attentions, - return_latent=return_latent, - attn_mask_cond=attn_mask_cond, - attn_mask_text=attn_mask_text, - attn_mask_mel=attn_mask_mel, - ) - if return_latent: - return mel_logits[:, :sub] # sub to prevent bla. 
- - if return_attentions: - return mel_logits - - # Set paddings to -1 to ignore them in loss - for idx, l in enumerate(text_lengths): - text_targets[idx, l + 1 :] = -1 - - for idx, l in enumerate(code_lengths): - mel_targets[idx, l + 1 :] = -1 - - # check if stoptoken is in every row of mel_targets - assert (mel_targets == self.stop_audio_token).sum() >= mel_targets.shape[ - 0 - ], f" ❗ mel_targets does not contain stop token ({self.stop_audio_token}) in every row." - - # ignore the loss for the segment used for conditioning - # coin flip for the segment to be ignored - if cond_idxs is not None: - cond_start = cond_idxs[idx, 0] - cond_end = cond_idxs[idx, 1] - mel_targets[idx, cond_start:cond_end] = -1 - - # Compute losses - loss_text = F.cross_entropy( - text_logits, text_targets.long(), ignore_index=-1, label_smoothing=self.label_smoothing - ) - loss_mel = F.cross_entropy( - mel_logits, mel_targets.long(), ignore_index=-1, label_smoothing=self.label_smoothing - ) - return loss_text.mean(), loss_mel.mean(), mel_logits - - def inference(self, cond_latents, text_inputs, **hf_generate_kwargs): - self.compute_embeddings(cond_latents, text_inputs) - return self.generate(cond_latents, text_inputs, **hf_generate_kwargs) - - def compute_embeddings( - self, - cond_latents, - text_inputs, - ): - text_inputs = F.pad(text_inputs, (0, 1), value=self.stop_text_token) - text_inputs = F.pad(text_inputs, (1, 0), value=self.start_text_token) - emb = self.text_embedding(text_inputs) + self.text_pos_embedding(text_inputs) - emb = torch.cat([cond_latents, emb], dim=1) - self.gpt_inference.store_prefix_emb(emb) - gpt_inputs = torch.full( - ( - emb.shape[0], - emb.shape[1] + 1, # +1 for the start_audio_token - ), - fill_value=1, - dtype=torch.long, - device=text_inputs.device, - ) - gpt_inputs[:, -1] = self.start_audio_token - return gpt_inputs - - def generate( - self, - cond_latents, - text_inputs, - **hf_generate_kwargs, - ): - gpt_inputs = self.compute_embeddings(cond_latents, text_inputs) - gen = self.gpt_inference.generate( - gpt_inputs, - bos_token_id=self.start_audio_token, - pad_token_id=self.stop_audio_token, - eos_token_id=self.stop_audio_token, - max_length=self.max_mel_tokens, - **hf_generate_kwargs, - ) - if "return_dict_in_generate" in hf_generate_kwargs: - return gen.sequences[:, gpt_inputs.shape[1] :], gen - return gen[:, gpt_inputs.shape[1] :] - - def get_generator(self, fake_inputs, **hf_generate_kwargs): - return self.gpt_inference.generate_stream( - fake_inputs, - bos_token_id=self.start_audio_token, - pad_token_id=self.stop_audio_token, - eos_token_id=self.stop_audio_token, - max_length=self.max_mel_tokens, - do_stream=True, - **hf_generate_kwargs, - ) diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Cython/Utility/MemoryView_C.c b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Cython/Utility/MemoryView_C.c deleted file mode 100644 index 0a5d8ee2c2fe1363316c15c5a6dd6483afecb60f..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Cython/Utility/MemoryView_C.c +++ /dev/null @@ -1,945 +0,0 @@ -////////// MemviewSliceStruct.proto ////////// -//@proto_block: utility_code_proto_before_types - -/* memoryview slice struct */ -struct {{memview_struct_name}}; - -typedef struct { - struct {{memview_struct_name}} *memview; - char *data; - Py_ssize_t shape[{{max_dims}}]; - Py_ssize_t strides[{{max_dims}}]; - Py_ssize_t suboffsets[{{max_dims}}]; -} {{memviewslice_name}}; - -// used for "len(memviewslice)" -#define 
__Pyx_MemoryView_Len(m) (m.shape[0]) - - -/////////// Atomics.proto ///////////// -//@proto_block: utility_code_proto_before_types - -#include - -#ifndef CYTHON_ATOMICS - #define CYTHON_ATOMICS 1 -#endif - -#define __pyx_atomic_int_type int -// todo: Portland pgcc, maybe OS X's OSAtomicIncrement32, -// libatomic + autotools-like distutils support? Such a pain... -#if CYTHON_ATOMICS && __GNUC__ >= 4 && (__GNUC_MINOR__ > 1 || \ - (__GNUC_MINOR__ == 1 && __GNUC_PATCHLEVEL >= 2)) && \ - !defined(__i386__) - /* gcc >= 4.1.2 */ - #define __pyx_atomic_incr_aligned(value, lock) __sync_fetch_and_add(value, 1) - #define __pyx_atomic_decr_aligned(value, lock) __sync_fetch_and_sub(value, 1) - - #ifdef __PYX_DEBUG_ATOMICS - #warning "Using GNU atomics" - #endif -#elif CYTHON_ATOMICS && defined(_MSC_VER) && 0 - /* msvc */ - #include - #undef __pyx_atomic_int_type - #define __pyx_atomic_int_type LONG - #define __pyx_atomic_incr_aligned(value, lock) InterlockedIncrement(value) - #define __pyx_atomic_decr_aligned(value, lock) InterlockedDecrement(value) - - #ifdef __PYX_DEBUG_ATOMICS - #pragma message ("Using MSVC atomics") - #endif -#elif CYTHON_ATOMICS && (defined(__ICC) || defined(__INTEL_COMPILER)) && 0 - #define __pyx_atomic_incr_aligned(value, lock) _InterlockedIncrement(value) - #define __pyx_atomic_decr_aligned(value, lock) _InterlockedDecrement(value) - - #ifdef __PYX_DEBUG_ATOMICS - #warning "Using Intel atomics" - #endif -#else - #undef CYTHON_ATOMICS - #define CYTHON_ATOMICS 0 - - #ifdef __PYX_DEBUG_ATOMICS - #warning "Not using atomics" - #endif -#endif - -typedef volatile __pyx_atomic_int_type __pyx_atomic_int; - -#if CYTHON_ATOMICS - #define __pyx_add_acquisition_count(memview) \ - __pyx_atomic_incr_aligned(__pyx_get_slice_count_pointer(memview), memview->lock) - #define __pyx_sub_acquisition_count(memview) \ - __pyx_atomic_decr_aligned(__pyx_get_slice_count_pointer(memview), memview->lock) -#else - #define __pyx_add_acquisition_count(memview) \ - __pyx_add_acquisition_count_locked(__pyx_get_slice_count_pointer(memview), memview->lock) - #define __pyx_sub_acquisition_count(memview) \ - __pyx_sub_acquisition_count_locked(__pyx_get_slice_count_pointer(memview), memview->lock) -#endif - - -/////////////// ObjectToMemviewSlice.proto /////////////// - -static CYTHON_INLINE {{memviewslice_name}} {{funcname}}(PyObject *, int writable_flag); - - -////////// MemviewSliceInit.proto ////////// - -#define __Pyx_BUF_MAX_NDIMS %(BUF_MAX_NDIMS)d - -#define __Pyx_MEMVIEW_DIRECT 1 -#define __Pyx_MEMVIEW_PTR 2 -#define __Pyx_MEMVIEW_FULL 4 -#define __Pyx_MEMVIEW_CONTIG 8 -#define __Pyx_MEMVIEW_STRIDED 16 -#define __Pyx_MEMVIEW_FOLLOW 32 - -#define __Pyx_IS_C_CONTIG 1 -#define __Pyx_IS_F_CONTIG 2 - -static int __Pyx_init_memviewslice( - struct __pyx_memoryview_obj *memview, - int ndim, - __Pyx_memviewslice *memviewslice, - int memview_is_new_reference); - -static CYTHON_INLINE int __pyx_add_acquisition_count_locked( - __pyx_atomic_int *acquisition_count, PyThread_type_lock lock); -static CYTHON_INLINE int __pyx_sub_acquisition_count_locked( - __pyx_atomic_int *acquisition_count, PyThread_type_lock lock); - -#define __pyx_get_slice_count_pointer(memview) (memview->acquisition_count_aligned_p) -#define __pyx_get_slice_count(memview) (*__pyx_get_slice_count_pointer(memview)) -#define __PYX_INC_MEMVIEW(slice, have_gil) __Pyx_INC_MEMVIEW(slice, have_gil, __LINE__) -#define __PYX_XDEC_MEMVIEW(slice, have_gil) __Pyx_XDEC_MEMVIEW(slice, have_gil, __LINE__) -static CYTHON_INLINE void 
__Pyx_INC_MEMVIEW({{memviewslice_name}} *, int, int); -static CYTHON_INLINE void __Pyx_XDEC_MEMVIEW({{memviewslice_name}} *, int, int); - - -/////////////// MemviewSliceIndex.proto /////////////// - -static CYTHON_INLINE char *__pyx_memviewslice_index_full( - const char *bufp, Py_ssize_t idx, Py_ssize_t stride, Py_ssize_t suboffset); - - -/////////////// ObjectToMemviewSlice /////////////// -//@requires: MemviewSliceValidateAndInit - -static CYTHON_INLINE {{memviewslice_name}} {{funcname}}(PyObject *obj, int writable_flag) { - {{memviewslice_name}} result = {{memslice_init}}; - __Pyx_BufFmt_StackElem stack[{{struct_nesting_depth}}]; - int axes_specs[] = { {{axes_specs}} }; - int retcode; - - if (obj == Py_None) { - /* We don't bother to refcount None */ - result.memview = (struct __pyx_memoryview_obj *) Py_None; - return result; - } - - retcode = __Pyx_ValidateAndInit_memviewslice(axes_specs, {{c_or_f_flag}}, - {{buf_flag}} | writable_flag, {{ndim}}, - &{{dtype_typeinfo}}, stack, - &result, obj); - - if (unlikely(retcode == -1)) - goto __pyx_fail; - - return result; -__pyx_fail: - result.memview = NULL; - result.data = NULL; - return result; -} - - -/////////////// MemviewSliceValidateAndInit.proto /////////////// - -static int __Pyx_ValidateAndInit_memviewslice( - int *axes_specs, - int c_or_f_flag, - int buf_flags, - int ndim, - __Pyx_TypeInfo *dtype, - __Pyx_BufFmt_StackElem stack[], - __Pyx_memviewslice *memviewslice, - PyObject *original_obj); - -/////////////// MemviewSliceValidateAndInit /////////////// -//@requires: Buffer.c::TypeInfoCompare -//@requires: Buffer.c::BufferFormatStructs -//@requires: Buffer.c::BufferFormatCheck - -static int -__pyx_check_strides(Py_buffer *buf, int dim, int ndim, int spec) -{ - if (buf->shape[dim] <= 1) - return 1; - - if (buf->strides) { - if (spec & __Pyx_MEMVIEW_CONTIG) { - if (spec & (__Pyx_MEMVIEW_PTR|__Pyx_MEMVIEW_FULL)) { - if (unlikely(buf->strides[dim] != sizeof(void *))) { - PyErr_Format(PyExc_ValueError, - "Buffer is not indirectly contiguous " - "in dimension %d.", dim); - goto fail; - } - } else if (unlikely(buf->strides[dim] != buf->itemsize)) { - PyErr_SetString(PyExc_ValueError, - "Buffer and memoryview are not contiguous " - "in the same dimension."); - goto fail; - } - } - - if (spec & __Pyx_MEMVIEW_FOLLOW) { - Py_ssize_t stride = buf->strides[dim]; - if (stride < 0) - stride = -stride; - if (unlikely(stride < buf->itemsize)) { - PyErr_SetString(PyExc_ValueError, - "Buffer and memoryview are not contiguous " - "in the same dimension."); - goto fail; - } - } - } else { - if (unlikely(spec & __Pyx_MEMVIEW_CONTIG && dim != ndim - 1)) { - PyErr_Format(PyExc_ValueError, - "C-contiguous buffer is not contiguous in " - "dimension %d", dim); - goto fail; - } else if (unlikely(spec & (__Pyx_MEMVIEW_PTR))) { - PyErr_Format(PyExc_ValueError, - "C-contiguous buffer is not indirect in " - "dimension %d", dim); - goto fail; - } else if (unlikely(buf->suboffsets)) { - PyErr_SetString(PyExc_ValueError, - "Buffer exposes suboffsets but no strides"); - goto fail; - } - } - - return 1; -fail: - return 0; -} - -static int -__pyx_check_suboffsets(Py_buffer *buf, int dim, CYTHON_UNUSED int ndim, int spec) -{ - // Todo: without PyBUF_INDIRECT we may not have suboffset information, i.e., the - // ptr may not be set to NULL but may be uninitialized? 
- if (spec & __Pyx_MEMVIEW_DIRECT) { - if (unlikely(buf->suboffsets && buf->suboffsets[dim] >= 0)) { - PyErr_Format(PyExc_ValueError, - "Buffer not compatible with direct access " - "in dimension %d.", dim); - goto fail; - } - } - - if (spec & __Pyx_MEMVIEW_PTR) { - if (unlikely(!buf->suboffsets || (buf->suboffsets[dim] < 0))) { - PyErr_Format(PyExc_ValueError, - "Buffer is not indirectly accessible " - "in dimension %d.", dim); - goto fail; - } - } - - return 1; -fail: - return 0; -} - -static int -__pyx_verify_contig(Py_buffer *buf, int ndim, int c_or_f_flag) -{ - int i; - - if (c_or_f_flag & __Pyx_IS_F_CONTIG) { - Py_ssize_t stride = 1; - for (i = 0; i < ndim; i++) { - if (unlikely(stride * buf->itemsize != buf->strides[i] && buf->shape[i] > 1)) { - PyErr_SetString(PyExc_ValueError, - "Buffer not fortran contiguous."); - goto fail; - } - stride = stride * buf->shape[i]; - } - } else if (c_or_f_flag & __Pyx_IS_C_CONTIG) { - Py_ssize_t stride = 1; - for (i = ndim - 1; i >- 1; i--) { - if (unlikely(stride * buf->itemsize != buf->strides[i] && buf->shape[i] > 1)) { - PyErr_SetString(PyExc_ValueError, - "Buffer not C contiguous."); - goto fail; - } - stride = stride * buf->shape[i]; - } - } - - return 1; -fail: - return 0; -} - -static int __Pyx_ValidateAndInit_memviewslice( - int *axes_specs, - int c_or_f_flag, - int buf_flags, - int ndim, - __Pyx_TypeInfo *dtype, - __Pyx_BufFmt_StackElem stack[], - __Pyx_memviewslice *memviewslice, - PyObject *original_obj) -{ - struct __pyx_memoryview_obj *memview, *new_memview; - __Pyx_RefNannyDeclarations - Py_buffer *buf; - int i, spec = 0, retval = -1; - __Pyx_BufFmt_Context ctx; - int from_memoryview = __pyx_memoryview_check(original_obj); - - __Pyx_RefNannySetupContext("ValidateAndInit_memviewslice", 0); - - if (from_memoryview && __pyx_typeinfo_cmp(dtype, ((struct __pyx_memoryview_obj *) - original_obj)->typeinfo)) { - /* We have a matching dtype, skip format parsing */ - memview = (struct __pyx_memoryview_obj *) original_obj; - new_memview = NULL; - } else { - memview = (struct __pyx_memoryview_obj *) __pyx_memoryview_new( - original_obj, buf_flags, 0, dtype); - new_memview = memview; - if (unlikely(!memview)) - goto fail; - } - - buf = &memview->view; - if (unlikely(buf->ndim != ndim)) { - PyErr_Format(PyExc_ValueError, - "Buffer has wrong number of dimensions (expected %d, got %d)", - ndim, buf->ndim); - goto fail; - } - - if (new_memview) { - __Pyx_BufFmt_Init(&ctx, stack, dtype); - if (unlikely(!__Pyx_BufFmt_CheckString(&ctx, buf->format))) goto fail; - } - - if (unlikely((unsigned) buf->itemsize != dtype->size)) { - PyErr_Format(PyExc_ValueError, - "Item size of buffer (%" CYTHON_FORMAT_SSIZE_T "u byte%s) " - "does not match size of '%s' (%" CYTHON_FORMAT_SSIZE_T "u byte%s)", - buf->itemsize, - (buf->itemsize > 1) ? "s" : "", - dtype->name, - dtype->size, - (dtype->size > 1) ? "s" : ""); - goto fail; - } - - /* Check axes */ - if (buf->len > 0) { - // 0-sized arrays do not undergo these checks since their strides are - // irrelevant and they are always both C- and F-contiguous. 
- for (i = 0; i < ndim; i++) { - spec = axes_specs[i]; - if (unlikely(!__pyx_check_strides(buf, i, ndim, spec))) - goto fail; - if (unlikely(!__pyx_check_suboffsets(buf, i, ndim, spec))) - goto fail; - } - - /* Check contiguity */ - if (unlikely(buf->strides && !__pyx_verify_contig(buf, ndim, c_or_f_flag))) - goto fail; - } - - /* Initialize */ - if (unlikely(__Pyx_init_memviewslice(memview, ndim, memviewslice, - new_memview != NULL) == -1)) { - goto fail; - } - - retval = 0; - goto no_fail; - -fail: - Py_XDECREF(new_memview); - retval = -1; - -no_fail: - __Pyx_RefNannyFinishContext(); - return retval; -} - - -////////// MemviewSliceInit ////////// - -static int -__Pyx_init_memviewslice(struct __pyx_memoryview_obj *memview, - int ndim, - {{memviewslice_name}} *memviewslice, - int memview_is_new_reference) -{ - __Pyx_RefNannyDeclarations - int i, retval=-1; - Py_buffer *buf = &memview->view; - __Pyx_RefNannySetupContext("init_memviewslice", 0); - - if (unlikely(memviewslice->memview || memviewslice->data)) { - PyErr_SetString(PyExc_ValueError, - "memviewslice is already initialized!"); - goto fail; - } - - if (buf->strides) { - for (i = 0; i < ndim; i++) { - memviewslice->strides[i] = buf->strides[i]; - } - } else { - Py_ssize_t stride = buf->itemsize; - for (i = ndim - 1; i >= 0; i--) { - memviewslice->strides[i] = stride; - stride *= buf->shape[i]; - } - } - - for (i = 0; i < ndim; i++) { - memviewslice->shape[i] = buf->shape[i]; - if (buf->suboffsets) { - memviewslice->suboffsets[i] = buf->suboffsets[i]; - } else { - memviewslice->suboffsets[i] = -1; - } - } - - memviewslice->memview = memview; - memviewslice->data = (char *)buf->buf; - if (__pyx_add_acquisition_count(memview) == 0 && !memview_is_new_reference) { - Py_INCREF(memview); - } - retval = 0; - goto no_fail; - -fail: - /* Don't decref, the memoryview may be borrowed. Let the caller do the cleanup */ - /* __Pyx_XDECREF(memviewslice->memview); */ - memviewslice->memview = 0; - memviewslice->data = 0; - retval = -1; -no_fail: - __Pyx_RefNannyFinishContext(); - return retval; -} - -#ifndef Py_NO_RETURN -// available since Py3.3 -#define Py_NO_RETURN -#endif - -static void __pyx_fatalerror(const char *fmt, ...) 
Py_NO_RETURN { - va_list vargs; - char msg[200]; - -#ifdef HAVE_STDARG_PROTOTYPES - va_start(vargs, fmt); -#else - va_start(vargs); -#endif - vsnprintf(msg, 200, fmt, vargs); - va_end(vargs); - - Py_FatalError(msg); -} - -static CYTHON_INLINE int -__pyx_add_acquisition_count_locked(__pyx_atomic_int *acquisition_count, - PyThread_type_lock lock) -{ - int result; - PyThread_acquire_lock(lock, 1); - result = (*acquisition_count)++; - PyThread_release_lock(lock); - return result; -} - -static CYTHON_INLINE int -__pyx_sub_acquisition_count_locked(__pyx_atomic_int *acquisition_count, - PyThread_type_lock lock) -{ - int result; - PyThread_acquire_lock(lock, 1); - result = (*acquisition_count)--; - PyThread_release_lock(lock); - return result; -} - - -static CYTHON_INLINE void -__Pyx_INC_MEMVIEW({{memviewslice_name}} *memslice, int have_gil, int lineno) -{ - int first_time; - struct {{memview_struct_name}} *memview = memslice->memview; - if (unlikely(!memview || (PyObject *) memview == Py_None)) - return; /* allow uninitialized memoryview assignment */ - - if (unlikely(__pyx_get_slice_count(memview) < 0)) - __pyx_fatalerror("Acquisition count is %d (line %d)", - __pyx_get_slice_count(memview), lineno); - - first_time = __pyx_add_acquisition_count(memview) == 0; - - if (unlikely(first_time)) { - if (have_gil) { - Py_INCREF((PyObject *) memview); - } else { - PyGILState_STATE _gilstate = PyGILState_Ensure(); - Py_INCREF((PyObject *) memview); - PyGILState_Release(_gilstate); - } - } -} - -static CYTHON_INLINE void __Pyx_XDEC_MEMVIEW({{memviewslice_name}} *memslice, - int have_gil, int lineno) { - int last_time; - struct {{memview_struct_name}} *memview = memslice->memview; - - if (unlikely(!memview || (PyObject *) memview == Py_None)) { - // we do not ref-count None - memslice->memview = NULL; - return; - } - - if (unlikely(__pyx_get_slice_count(memview) <= 0)) - __pyx_fatalerror("Acquisition count is %d (line %d)", - __pyx_get_slice_count(memview), lineno); - - last_time = __pyx_sub_acquisition_count(memview) == 1; - memslice->data = NULL; - - if (unlikely(last_time)) { - if (have_gil) { - Py_CLEAR(memslice->memview); - } else { - PyGILState_STATE _gilstate = PyGILState_Ensure(); - Py_CLEAR(memslice->memview); - PyGILState_Release(_gilstate); - } - } else { - memslice->memview = NULL; - } -} - - -////////// MemviewSliceCopyTemplate.proto ////////// - -static {{memviewslice_name}} -__pyx_memoryview_copy_new_contig(const __Pyx_memviewslice *from_mvs, - const char *mode, int ndim, - size_t sizeof_dtype, int contig_flag, - int dtype_is_object); - - -////////// MemviewSliceCopyTemplate ////////// - -static {{memviewslice_name}} -__pyx_memoryview_copy_new_contig(const __Pyx_memviewslice *from_mvs, - const char *mode, int ndim, - size_t sizeof_dtype, int contig_flag, - int dtype_is_object) -{ - __Pyx_RefNannyDeclarations - int i; - __Pyx_memviewslice new_mvs = {{memslice_init}}; - struct __pyx_memoryview_obj *from_memview = from_mvs->memview; - Py_buffer *buf = &from_memview->view; - PyObject *shape_tuple = NULL; - PyObject *temp_int = NULL; - struct __pyx_array_obj *array_obj = NULL; - struct __pyx_memoryview_obj *memview_obj = NULL; - - __Pyx_RefNannySetupContext("__pyx_memoryview_copy_new_contig", 0); - - for (i = 0; i < ndim; i++) { - if (unlikely(from_mvs->suboffsets[i] >= 0)) { - PyErr_Format(PyExc_ValueError, "Cannot copy memoryview slice with " - "indirect dimensions (axis %d)", i); - goto fail; - } - } - - shape_tuple = PyTuple_New(ndim); - if (unlikely(!shape_tuple)) { - goto fail; - } - 
__Pyx_GOTREF(shape_tuple); - - - for(i = 0; i < ndim; i++) { - temp_int = PyInt_FromSsize_t(from_mvs->shape[i]); - if(unlikely(!temp_int)) { - goto fail; - } else { - PyTuple_SET_ITEM(shape_tuple, i, temp_int); - temp_int = NULL; - } - } - - array_obj = __pyx_array_new(shape_tuple, sizeof_dtype, buf->format, (char *) mode, NULL); - if (unlikely(!array_obj)) { - goto fail; - } - __Pyx_GOTREF(array_obj); - - memview_obj = (struct __pyx_memoryview_obj *) __pyx_memoryview_new( - (PyObject *) array_obj, contig_flag, - dtype_is_object, - from_mvs->memview->typeinfo); - if (unlikely(!memview_obj)) - goto fail; - - /* initialize new_mvs */ - if (unlikely(__Pyx_init_memviewslice(memview_obj, ndim, &new_mvs, 1) < 0)) - goto fail; - - if (unlikely(__pyx_memoryview_copy_contents(*from_mvs, new_mvs, ndim, ndim, - dtype_is_object) < 0)) - goto fail; - - goto no_fail; - -fail: - __Pyx_XDECREF(new_mvs.memview); - new_mvs.memview = NULL; - new_mvs.data = NULL; -no_fail: - __Pyx_XDECREF(shape_tuple); - __Pyx_XDECREF(temp_int); - __Pyx_XDECREF(array_obj); - __Pyx_RefNannyFinishContext(); - return new_mvs; -} - - -////////// CopyContentsUtility.proto ///////// - -#define {{func_cname}}(slice) \ - __pyx_memoryview_copy_new_contig(&slice, "{{mode}}", {{ndim}}, \ - sizeof({{dtype_decl}}), {{contig_flag}}, \ - {{dtype_is_object}}) - - -////////// OverlappingSlices.proto ////////// - -static int __pyx_slices_overlap({{memviewslice_name}} *slice1, - {{memviewslice_name}} *slice2, - int ndim, size_t itemsize); - - -////////// OverlappingSlices ////////// - -/* Based on numpy's core/src/multiarray/array_assign.c */ - -/* Gets a half-open range [start, end) which contains the array data */ -static void -__pyx_get_array_memory_extents({{memviewslice_name}} *slice, - void **out_start, void **out_end, - int ndim, size_t itemsize) -{ - char *start, *end; - int i; - - start = end = slice->data; - - for (i = 0; i < ndim; i++) { - Py_ssize_t stride = slice->strides[i]; - Py_ssize_t extent = slice->shape[i]; - - if (extent == 0) { - *out_start = *out_end = start; - return; - } else { - if (stride > 0) - end += stride * (extent - 1); - else - start += stride * (extent - 1); - } - } - - /* Return a half-open range */ - *out_start = start; - *out_end = end + itemsize; -} - -/* Returns 1 if the arrays have overlapping data, 0 otherwise */ -static int -__pyx_slices_overlap({{memviewslice_name}} *slice1, - {{memviewslice_name}} *slice2, - int ndim, size_t itemsize) -{ - void *start1, *end1, *start2, *end2; - - __pyx_get_array_memory_extents(slice1, &start1, &end1, ndim, itemsize); - __pyx_get_array_memory_extents(slice2, &start2, &end2, ndim, itemsize); - - return (start1 < end2) && (start2 < end1); -} - - -////////// MemviewSliceCheckContig.proto ////////// - -#define __pyx_memviewslice_is_contig_{{contig_type}}{{ndim}}(slice) \ - __pyx_memviewslice_is_contig(slice, '{{contig_type}}', {{ndim}}) - - -////////// MemviewSliceIsContig.proto ////////// - -static int __pyx_memviewslice_is_contig(const {{memviewslice_name}} mvs, char order, int ndim);/*proto*/ - - -////////// MemviewSliceIsContig ////////// - -static int -__pyx_memviewslice_is_contig(const {{memviewslice_name}} mvs, char order, int ndim) -{ - int i, index, step, start; - Py_ssize_t itemsize = mvs.memview->view.itemsize; - - if (order == 'F') { - step = 1; - start = 0; - } else { - step = -1; - start = ndim - 1; - } - - for (i = 0; i < ndim; i++) { - index = start + step * i; - if (mvs.suboffsets[index] >= 0 || mvs.strides[index] != itemsize) - return 0; - - itemsize *= 
mvs.shape[index]; - } - - return 1; -} - - -/////////////// MemviewSliceIndex /////////////// - -static CYTHON_INLINE char * -__pyx_memviewslice_index_full(const char *bufp, Py_ssize_t idx, - Py_ssize_t stride, Py_ssize_t suboffset) -{ - bufp = bufp + idx * stride; - if (suboffset >= 0) { - bufp = *((char **) bufp) + suboffset; - } - return (char *) bufp; -} - - -/////////////// MemviewDtypeToObject.proto /////////////// - -{{if to_py_function}} -static CYTHON_INLINE PyObject *{{get_function}}(const char *itemp); /* proto */ -{{endif}} - -{{if from_py_function}} -static CYTHON_INLINE int {{set_function}}(const char *itemp, PyObject *obj); /* proto */ -{{endif}} - -/////////////// MemviewDtypeToObject /////////////// - -{{#__pyx_memview__to_object}} - -/* Convert a dtype to or from a Python object */ - -{{if to_py_function}} -static CYTHON_INLINE PyObject *{{get_function}}(const char *itemp) { - return (PyObject *) {{to_py_function}}(*({{dtype}} *) itemp); -} -{{endif}} - -{{if from_py_function}} -static CYTHON_INLINE int {{set_function}}(const char *itemp, PyObject *obj) { - {{dtype}} value = {{from_py_function}}(obj); - if ({{error_condition}}) - return 0; - *({{dtype}} *) itemp = value; - return 1; -} -{{endif}} - - -/////////////// MemviewObjectToObject.proto /////////////// - -/* Function callbacks (for memoryview object) for dtype object */ -static PyObject *{{get_function}}(const char *itemp); /* proto */ -static int {{set_function}}(const char *itemp, PyObject *obj); /* proto */ - - -/////////////// MemviewObjectToObject /////////////// - -static PyObject *{{get_function}}(const char *itemp) { - PyObject *result = *(PyObject **) itemp; - Py_INCREF(result); - return result; -} - -static int {{set_function}}(const char *itemp, PyObject *obj) { - Py_INCREF(obj); - Py_DECREF(*(PyObject **) itemp); - *(PyObject **) itemp = obj; - return 1; -} - -/////////// ToughSlice ////////// - -/* Dimension is indexed with 'start:stop:step' */ - -if (unlikely(__pyx_memoryview_slice_memviewslice( - &{{dst}}, - {{src}}.shape[{{dim}}], {{src}}.strides[{{dim}}], {{src}}.suboffsets[{{dim}}], - {{dim}}, - {{new_ndim}}, - &{{get_suboffset_dim()}}, - {{start}}, - {{stop}}, - {{step}}, - {{int(have_start)}}, - {{int(have_stop)}}, - {{int(have_step)}}, - 1) < 0)) -{ - {{error_goto}} -} - - -////////// SimpleSlice ////////// - -/* Dimension is indexed with ':' only */ - -{{dst}}.shape[{{new_ndim}}] = {{src}}.shape[{{dim}}]; -{{dst}}.strides[{{new_ndim}}] = {{src}}.strides[{{dim}}]; - -{{if access == 'direct'}} - {{dst}}.suboffsets[{{new_ndim}}] = -1; -{{else}} - {{dst}}.suboffsets[{{new_ndim}}] = {{src}}.suboffsets[{{dim}}]; - if ({{src}}.suboffsets[{{dim}}] >= 0) - {{get_suboffset_dim()}} = {{new_ndim}}; -{{endif}} - - -////////// SliceIndex ////////// - -// Dimension is indexed with an integer, we could use the ToughSlice -// approach, but this is faster - -{ - Py_ssize_t __pyx_tmp_idx = {{idx}}; - - {{if wraparound or boundscheck}} - Py_ssize_t __pyx_tmp_shape = {{src}}.shape[{{dim}}]; - {{endif}} - - Py_ssize_t __pyx_tmp_stride = {{src}}.strides[{{dim}}]; - {{if wraparound}} - if (__pyx_tmp_idx < 0) - __pyx_tmp_idx += __pyx_tmp_shape; - {{endif}} - - {{if boundscheck}} - if (unlikely(!__Pyx_is_valid_index(__pyx_tmp_idx, __pyx_tmp_shape))) { - {{if not have_gil}} - #ifdef WITH_THREAD - PyGILState_STATE __pyx_gilstate_save = PyGILState_Ensure(); - #endif - {{endif}} - - PyErr_SetString(PyExc_IndexError, - "Index out of bounds (axis {{dim}})"); - - {{if not have_gil}} - #ifdef WITH_THREAD - 
PyGILState_Release(__pyx_gilstate_save); - #endif - {{endif}} - - {{error_goto}} - } - {{endif}} - - {{if all_dimensions_direct}} - {{dst}}.data += __pyx_tmp_idx * __pyx_tmp_stride; - {{else}} - if ({{get_suboffset_dim()}} < 0) { - {{dst}}.data += __pyx_tmp_idx * __pyx_tmp_stride; - - /* This dimension is the first dimension, or is preceded by */ - /* direct or indirect dimensions that are indexed away. */ - /* Hence suboffset_dim must be less than zero, and we can have */ - /* our data pointer refer to another block by dereferencing. */ - /* slice.data -> B -> C becomes slice.data -> C */ - - {{if indirect}} - { - Py_ssize_t __pyx_tmp_suboffset = {{src}}.suboffsets[{{dim}}]; - - {{if generic}} - if (__pyx_tmp_suboffset >= 0) - {{endif}} - - {{dst}}.data = *((char **) {{dst}}.data) + __pyx_tmp_suboffset; - } - {{endif}} - - } else { - {{dst}}.suboffsets[{{get_suboffset_dim()}}] += __pyx_tmp_idx * __pyx_tmp_stride; - - /* Note: dimension can not be indirect, the compiler will have */ - /* issued an error */ - } - - {{endif}} -} - - -////////// FillStrided1DScalar.proto ////////// - -static void -__pyx_fill_slice_{{dtype_name}}({{type_decl}} *p, Py_ssize_t extent, Py_ssize_t stride, - size_t itemsize, void *itemp); - -////////// FillStrided1DScalar ////////// - -/* Fill a slice with a scalar value. The dimension is direct and strided or contiguous */ -/* This can be used as a callback for the memoryview object to efficienty assign a scalar */ -/* Currently unused */ -static void -__pyx_fill_slice_{{dtype_name}}({{type_decl}} *p, Py_ssize_t extent, Py_ssize_t stride, - size_t itemsize, void *itemp) -{ - Py_ssize_t i; - {{type_decl}} item = *(({{type_decl}} *) itemp); - {{type_decl}} *endp; - - stride /= sizeof({{type_decl}}); - endp = p + stride * extent; - - while (p < endp) { - *p = item; - p += stride; - } -} diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/PIL/ImageDraw.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/PIL/ImageDraw.py deleted file mode 100644 index ff94f0ce3d6de5c2fcae698dba5d01165176f24e..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/PIL/ImageDraw.py +++ /dev/null @@ -1,1058 +0,0 @@ -# -# The Python Imaging Library -# $Id$ -# -# drawing interface operations -# -# History: -# 1996-04-13 fl Created (experimental) -# 1996-08-07 fl Filled polygons, ellipses. -# 1996-08-13 fl Added text support -# 1998-06-28 fl Handle I and F images -# 1998-12-29 fl Added arc; use arc primitive to draw ellipses -# 1999-01-10 fl Added shape stuff (experimental) -# 1999-02-06 fl Added bitmap support -# 1999-02-11 fl Changed all primitives to take options -# 1999-02-20 fl Fixed backwards compatibility -# 2000-10-12 fl Copy on write, when necessary -# 2001-02-18 fl Use default ink for bitmap/text also in fill mode -# 2002-10-24 fl Added support for CSS-style color strings -# 2002-12-10 fl Added experimental support for RGBA-on-RGB drawing -# 2002-12-11 fl Refactored low-level drawing API (work in progress) -# 2004-08-26 fl Made Draw() a factory function, added getdraw() support -# 2004-09-04 fl Added width support to line primitive -# 2004-09-10 fl Added font mode handling -# 2006-06-19 fl Added font bearing support (getmask2) -# -# Copyright (c) 1997-2006 by Secret Labs AB -# Copyright (c) 1996-2006 by Fredrik Lundh -# -# See the README file for information on usage and redistribution. -# - -import math -import numbers -import warnings - -from . 
import Image, ImageColor -from ._deprecate import deprecate - -""" -A simple 2D drawing interface for PIL images. -

-Application code should use the Draw factory, instead of
-instantiating the ImageDraw class directly.
-"""
-
-
-class ImageDraw:
-    font = None
-
-    def __init__(self, im, mode=None):
-        """
-        Create a drawing instance.
-
-        :param im: The image to draw in.
-        :param mode: Optional mode to use for color values. For RGB
-           images, this argument can be RGB or RGBA (to blend the
-           drawing into the image). For all other modes, this argument
-           must be the same as the image mode. If omitted, the mode
-           defaults to the mode of the image.
-        """
-        im.load()
-        if im.readonly:
-            im._copy()  # make it writeable
-        blend = 0
-        if mode is None:
-            mode = im.mode
-        if mode != im.mode:
-            if mode == "RGBA" and im.mode == "RGB":
-                blend = 1
-            else:
-                raise ValueError("mode mismatch")
-        if mode == "P":
-            self.palette = im.palette
-        else:
-            self.palette = None
-        self._image = im
-        self.im = im.im
-        self.draw = Image.core.draw(self.im, blend)
-        self.mode = mode
-        if mode in ("I", "F"):
-            self.ink = self.draw.draw_ink(1)
-        else:
-            self.ink = self.draw.draw_ink(-1)
-        if mode in ("1", "P", "I", "F"):
-            # FIXME: fix Fill2 to properly support matte for I+F images
-            self.fontmode = "1"
-        else:
-            self.fontmode = "L"  # aliasing is okay for other modes
-        self.fill = False
-
-    def getfont(self):
-        """
-        Get the current default font.
-
-        To set the default font for this ImageDraw instance::
-
-            from PIL import ImageDraw, ImageFont
-            draw.font = ImageFont.truetype("Tests/fonts/FreeMono.ttf")
-
-        To set the default font for all future ImageDraw instances::
-
-            from PIL import ImageDraw, ImageFont
-            ImageDraw.ImageDraw.font = ImageFont.truetype("Tests/fonts/FreeMono.ttf")
-
-        If the current default font is ``None``,
-        it is initialized with ``ImageFont.load_default()``.
-
-        :returns: An image font."""
-        if not self.font:
-            # FIXME: should add a font repository
-            from . 
import ImageFont - - self.font = ImageFont.load_default() - return self.font - - def _getink(self, ink, fill=None): - if ink is None and fill is None: - if self.fill: - fill = self.ink - else: - ink = self.ink - else: - if ink is not None: - if isinstance(ink, str): - ink = ImageColor.getcolor(ink, self.mode) - if self.palette and not isinstance(ink, numbers.Number): - ink = self.palette.getcolor(ink, self._image) - ink = self.draw.draw_ink(ink) - if fill is not None: - if isinstance(fill, str): - fill = ImageColor.getcolor(fill, self.mode) - if self.palette and not isinstance(fill, numbers.Number): - fill = self.palette.getcolor(fill, self._image) - fill = self.draw.draw_ink(fill) - return ink, fill - - def arc(self, xy, start, end, fill=None, width=1): - """Draw an arc.""" - ink, fill = self._getink(fill) - if ink is not None: - self.draw.draw_arc(xy, start, end, ink, width) - - def bitmap(self, xy, bitmap, fill=None): - """Draw a bitmap.""" - bitmap.load() - ink, fill = self._getink(fill) - if ink is None: - ink = fill - if ink is not None: - self.draw.draw_bitmap(xy, bitmap.im, ink) - - def chord(self, xy, start, end, fill=None, outline=None, width=1): - """Draw a chord.""" - ink, fill = self._getink(outline, fill) - if fill is not None: - self.draw.draw_chord(xy, start, end, fill, 1) - if ink is not None and ink != fill and width != 0: - self.draw.draw_chord(xy, start, end, ink, 0, width) - - def ellipse(self, xy, fill=None, outline=None, width=1): - """Draw an ellipse.""" - ink, fill = self._getink(outline, fill) - if fill is not None: - self.draw.draw_ellipse(xy, fill, 1) - if ink is not None and ink != fill and width != 0: - self.draw.draw_ellipse(xy, ink, 0, width) - - def line(self, xy, fill=None, width=0, joint=None): - """Draw a line, or a connected sequence of line segments.""" - ink = self._getink(fill)[0] - if ink is not None: - self.draw.draw_lines(xy, ink, width) - if joint == "curve" and width > 4: - if not isinstance(xy[0], (list, tuple)): - xy = [tuple(xy[i : i + 2]) for i in range(0, len(xy), 2)] - for i in range(1, len(xy) - 1): - point = xy[i] - angles = [ - math.degrees(math.atan2(end[0] - start[0], start[1] - end[1])) - % 360 - for start, end in ((xy[i - 1], point), (point, xy[i + 1])) - ] - if angles[0] == angles[1]: - # This is a straight line, so no joint is required - continue - - def coord_at_angle(coord, angle): - x, y = coord - angle -= 90 - distance = width / 2 - 1 - return tuple( - p + (math.floor(p_d) if p_d > 0 else math.ceil(p_d)) - for p, p_d in ( - (x, distance * math.cos(math.radians(angle))), - (y, distance * math.sin(math.radians(angle))), - ) - ) - - flipped = ( - angles[1] > angles[0] and angles[1] - 180 > angles[0] - ) or (angles[1] < angles[0] and angles[1] + 180 > angles[0]) - coords = [ - (point[0] - width / 2 + 1, point[1] - width / 2 + 1), - (point[0] + width / 2 - 1, point[1] + width / 2 - 1), - ] - if flipped: - start, end = (angles[1] + 90, angles[0] + 90) - else: - start, end = (angles[0] - 90, angles[1] - 90) - self.pieslice(coords, start - 90, end - 90, fill) - - if width > 8: - # Cover potential gaps between the line and the joint - if flipped: - gap_coords = [ - coord_at_angle(point, angles[0] + 90), - point, - coord_at_angle(point, angles[1] + 90), - ] - else: - gap_coords = [ - coord_at_angle(point, angles[0] - 90), - point, - coord_at_angle(point, angles[1] - 90), - ] - self.line(gap_coords, fill, width=3) - - def shape(self, shape, fill=None, outline=None): - """(Experimental) Draw a shape.""" - shape.close() - ink, fill = 
self._getink(outline, fill) - if fill is not None: - self.draw.draw_outline(shape, fill, 1) - if ink is not None and ink != fill: - self.draw.draw_outline(shape, ink, 0) - - def pieslice(self, xy, start, end, fill=None, outline=None, width=1): - """Draw a pieslice.""" - ink, fill = self._getink(outline, fill) - if fill is not None: - self.draw.draw_pieslice(xy, start, end, fill, 1) - if ink is not None and ink != fill and width != 0: - self.draw.draw_pieslice(xy, start, end, ink, 0, width) - - def point(self, xy, fill=None): - """Draw one or more individual pixels.""" - ink, fill = self._getink(fill) - if ink is not None: - self.draw.draw_points(xy, ink) - - def polygon(self, xy, fill=None, outline=None, width=1): - """Draw a polygon.""" - ink, fill = self._getink(outline, fill) - if fill is not None: - self.draw.draw_polygon(xy, fill, 1) - if ink is not None and ink != fill and width != 0: - if width == 1: - self.draw.draw_polygon(xy, ink, 0, width) - else: - # To avoid expanding the polygon outwards, - # use the fill as a mask - mask = Image.new("1", self.im.size) - mask_ink = self._getink(1)[0] - - fill_im = mask.copy() - draw = Draw(fill_im) - draw.draw.draw_polygon(xy, mask_ink, 1) - - ink_im = mask.copy() - draw = Draw(ink_im) - width = width * 2 - 1 - draw.draw.draw_polygon(xy, mask_ink, 0, width) - - mask.paste(ink_im, mask=fill_im) - - im = Image.new(self.mode, self.im.size) - draw = Draw(im) - draw.draw.draw_polygon(xy, ink, 0, width) - self.im.paste(im.im, (0, 0) + im.size, mask.im) - - def regular_polygon( - self, bounding_circle, n_sides, rotation=0, fill=None, outline=None - ): - """Draw a regular polygon.""" - xy = _compute_regular_polygon_vertices(bounding_circle, n_sides, rotation) - self.polygon(xy, fill, outline) - - def rectangle(self, xy, fill=None, outline=None, width=1): - """Draw a rectangle.""" - ink, fill = self._getink(outline, fill) - if fill is not None: - self.draw.draw_rectangle(xy, fill, 1) - if ink is not None and ink != fill and width != 0: - self.draw.draw_rectangle(xy, ink, 0, width) - - def rounded_rectangle(self, xy, radius=0, fill=None, outline=None, width=1): - """Draw a rounded rectangle.""" - if isinstance(xy[0], (list, tuple)): - (x0, y0), (x1, y1) = xy - else: - x0, y0, x1, y1 = xy - - d = radius * 2 - - full_x = d >= x1 - x0 - if full_x: - # The two left and two right corners are joined - d = x1 - x0 - full_y = d >= y1 - y0 - if full_y: - # The two top and two bottom corners are joined - d = y1 - y0 - if full_x and full_y: - # If all corners are joined, that is a circle - return self.ellipse(xy, fill, outline, width) - - if d == 0: - # If the corners have no curve, that is a rectangle - return self.rectangle(xy, fill, outline, width) - - r = d // 2 - ink, fill = self._getink(outline, fill) - - def draw_corners(pieslice): - if full_x: - # Draw top and bottom halves - parts = ( - ((x0, y0, x0 + d, y0 + d), 180, 360), - ((x0, y1 - d, x0 + d, y1), 0, 180), - ) - elif full_y: - # Draw left and right halves - parts = ( - ((x0, y0, x0 + d, y0 + d), 90, 270), - ((x1 - d, y0, x1, y0 + d), 270, 90), - ) - else: - # Draw four separate corners - parts = ( - ((x1 - d, y0, x1, y0 + d), 270, 360), - ((x1 - d, y1 - d, x1, y1), 0, 90), - ((x0, y1 - d, x0 + d, y1), 90, 180), - ((x0, y0, x0 + d, y0 + d), 180, 270), - ) - for part in parts: - if pieslice: - self.draw.draw_pieslice(*(part + (fill, 1))) - else: - self.draw.draw_arc(*(part + (ink, width))) - - if fill is not None: - draw_corners(True) - - if full_x: - self.draw.draw_rectangle((x0, y0 + r + 1, x1, y1 - 
r - 1), fill, 1) - else: - self.draw.draw_rectangle((x0 + r + 1, y0, x1 - r - 1, y1), fill, 1) - if not full_x and not full_y: - self.draw.draw_rectangle((x0, y0 + r + 1, x0 + r, y1 - r - 1), fill, 1) - self.draw.draw_rectangle((x1 - r, y0 + r + 1, x1, y1 - r - 1), fill, 1) - if ink is not None and ink != fill and width != 0: - draw_corners(False) - - if not full_x: - self.draw.draw_rectangle( - (x0 + r + 1, y0, x1 - r - 1, y0 + width - 1), ink, 1 - ) - self.draw.draw_rectangle( - (x0 + r + 1, y1 - width + 1, x1 - r - 1, y1), ink, 1 - ) - if not full_y: - self.draw.draw_rectangle( - (x0, y0 + r + 1, x0 + width - 1, y1 - r - 1), ink, 1 - ) - self.draw.draw_rectangle( - (x1 - width + 1, y0 + r + 1, x1, y1 - r - 1), ink, 1 - ) - - def _multiline_check(self, text): - """Draw text.""" - split_character = "\n" if isinstance(text, str) else b"\n" - - return split_character in text - - def _multiline_split(self, text): - split_character = "\n" if isinstance(text, str) else b"\n" - - return text.split(split_character) - - def _multiline_spacing(self, font, spacing, stroke_width): - # this can be replaced with self.textbbox(...)[3] when textsize is removed - with warnings.catch_warnings(): - warnings.filterwarnings("ignore", category=DeprecationWarning) - return ( - self.textsize( - "A", - font=font, - stroke_width=stroke_width, - )[1] - + spacing - ) - - def text( - self, - xy, - text, - fill=None, - font=None, - anchor=None, - spacing=4, - align="left", - direction=None, - features=None, - language=None, - stroke_width=0, - stroke_fill=None, - embedded_color=False, - *args, - **kwargs, - ): - if self._multiline_check(text): - return self.multiline_text( - xy, - text, - fill, - font, - anchor, - spacing, - align, - direction, - features, - language, - stroke_width, - stroke_fill, - embedded_color, - ) - - if embedded_color and self.mode not in ("RGB", "RGBA"): - raise ValueError("Embedded color supported only in RGB and RGBA modes") - - if font is None: - font = self.getfont() - - def getink(fill): - ink, fill = self._getink(fill) - if ink is None: - return fill - return ink - - def draw_text(ink, stroke_width=0, stroke_offset=None): - mode = self.fontmode - if stroke_width == 0 and embedded_color: - mode = "RGBA" - coord = xy - try: - mask, offset = font.getmask2( - text, - mode, - direction=direction, - features=features, - language=language, - stroke_width=stroke_width, - anchor=anchor, - ink=ink, - *args, - **kwargs, - ) - coord = coord[0] + offset[0], coord[1] + offset[1] - except AttributeError: - try: - mask = font.getmask( - text, - mode, - direction, - features, - language, - stroke_width, - anchor, - ink, - *args, - **kwargs, - ) - except TypeError: - mask = font.getmask(text) - if stroke_offset: - coord = coord[0] + stroke_offset[0], coord[1] + stroke_offset[1] - if mode == "RGBA": - # font.getmask2(mode="RGBA") returns color in RGB bands and mask in A - # extract mask and set text alpha - color, mask = mask, mask.getband(3) - color.fillband(3, (ink >> 24) & 0xFF) - x, y = (int(c) for c in coord) - self.im.paste(color, (x, y, x + mask.size[0], y + mask.size[1]), mask) - else: - self.draw.draw_bitmap(coord, mask, ink) - - ink = getink(fill) - if ink is not None: - stroke_ink = None - if stroke_width: - stroke_ink = getink(stroke_fill) if stroke_fill is not None else ink - - if stroke_ink is not None: - # Draw stroked text - draw_text(stroke_ink, stroke_width) - - # Draw normal text - draw_text(ink, 0) - else: - # Only draw normal text - draw_text(ink) - - def multiline_text( - self, - xy, 
- text, - fill=None, - font=None, - anchor=None, - spacing=4, - align="left", - direction=None, - features=None, - language=None, - stroke_width=0, - stroke_fill=None, - embedded_color=False, - ): - if direction == "ttb": - raise ValueError("ttb direction is unsupported for multiline text") - - if anchor is None: - anchor = "la" - elif len(anchor) != 2: - raise ValueError("anchor must be a 2 character string") - elif anchor[1] in "tb": - raise ValueError("anchor not supported for multiline text") - - widths = [] - max_width = 0 - lines = self._multiline_split(text) - line_spacing = self._multiline_spacing(font, spacing, stroke_width) - for line in lines: - line_width = self.textlength( - line, font, direction=direction, features=features, language=language - ) - widths.append(line_width) - max_width = max(max_width, line_width) - - top = xy[1] - if anchor[1] == "m": - top -= (len(lines) - 1) * line_spacing / 2.0 - elif anchor[1] == "d": - top -= (len(lines) - 1) * line_spacing - - for idx, line in enumerate(lines): - left = xy[0] - width_difference = max_width - widths[idx] - - # first align left by anchor - if anchor[0] == "m": - left -= width_difference / 2.0 - elif anchor[0] == "r": - left -= width_difference - - # then align by align parameter - if align == "left": - pass - elif align == "center": - left += width_difference / 2.0 - elif align == "right": - left += width_difference - else: - raise ValueError('align must be "left", "center" or "right"') - - self.text( - (left, top), - line, - fill, - font, - anchor, - direction=direction, - features=features, - language=language, - stroke_width=stroke_width, - stroke_fill=stroke_fill, - embedded_color=embedded_color, - ) - top += line_spacing - - def textsize( - self, - text, - font=None, - spacing=4, - direction=None, - features=None, - language=None, - stroke_width=0, - ): - """Get the size of a given string, in pixels.""" - deprecate("textsize", 10, "textbbox or textlength") - if self._multiline_check(text): - with warnings.catch_warnings(): - warnings.filterwarnings("ignore", category=DeprecationWarning) - return self.multiline_textsize( - text, - font, - spacing, - direction, - features, - language, - stroke_width, - ) - - if font is None: - font = self.getfont() - with warnings.catch_warnings(): - warnings.filterwarnings("ignore", category=DeprecationWarning) - return font.getsize( - text, - direction, - features, - language, - stroke_width, - ) - - def multiline_textsize( - self, - text, - font=None, - spacing=4, - direction=None, - features=None, - language=None, - stroke_width=0, - ): - deprecate("multiline_textsize", 10, "multiline_textbbox") - max_width = 0 - lines = self._multiline_split(text) - line_spacing = self._multiline_spacing(font, spacing, stroke_width) - with warnings.catch_warnings(): - warnings.filterwarnings("ignore", category=DeprecationWarning) - for line in lines: - line_width, line_height = self.textsize( - line, - font, - spacing, - direction, - features, - language, - stroke_width, - ) - max_width = max(max_width, line_width) - return max_width, len(lines) * line_spacing - spacing - - def textlength( - self, - text, - font=None, - direction=None, - features=None, - language=None, - embedded_color=False, - ): - """Get the length of a given string, in pixels with 1/64 precision.""" - if self._multiline_check(text): - raise ValueError("can't measure length of multiline text") - if embedded_color and self.mode not in ("RGB", "RGBA"): - raise ValueError("Embedded color supported only in RGB and RGBA modes") - - 
if font is None: - font = self.getfont() - mode = "RGBA" if embedded_color else self.fontmode - try: - return font.getlength(text, mode, direction, features, language) - except AttributeError: - deprecate("textlength support for fonts without getlength", 10) - with warnings.catch_warnings(): - warnings.filterwarnings("ignore", category=DeprecationWarning) - size = self.textsize( - text, - font, - direction=direction, - features=features, - language=language, - ) - if direction == "ttb": - return size[1] - return size[0] - - def textbbox( - self, - xy, - text, - font=None, - anchor=None, - spacing=4, - align="left", - direction=None, - features=None, - language=None, - stroke_width=0, - embedded_color=False, - ): - """Get the bounding box of a given string, in pixels.""" - if embedded_color and self.mode not in ("RGB", "RGBA"): - raise ValueError("Embedded color supported only in RGB and RGBA modes") - - if self._multiline_check(text): - return self.multiline_textbbox( - xy, - text, - font, - anchor, - spacing, - align, - direction, - features, - language, - stroke_width, - embedded_color, - ) - - if font is None: - font = self.getfont() - mode = "RGBA" if embedded_color else self.fontmode - bbox = font.getbbox( - text, mode, direction, features, language, stroke_width, anchor - ) - return bbox[0] + xy[0], bbox[1] + xy[1], bbox[2] + xy[0], bbox[3] + xy[1] - - def multiline_textbbox( - self, - xy, - text, - font=None, - anchor=None, - spacing=4, - align="left", - direction=None, - features=None, - language=None, - stroke_width=0, - embedded_color=False, - ): - if direction == "ttb": - raise ValueError("ttb direction is unsupported for multiline text") - - if anchor is None: - anchor = "la" - elif len(anchor) != 2: - raise ValueError("anchor must be a 2 character string") - elif anchor[1] in "tb": - raise ValueError("anchor not supported for multiline text") - - widths = [] - max_width = 0 - lines = self._multiline_split(text) - line_spacing = self._multiline_spacing(font, spacing, stroke_width) - for line in lines: - line_width = self.textlength( - line, - font, - direction=direction, - features=features, - language=language, - embedded_color=embedded_color, - ) - widths.append(line_width) - max_width = max(max_width, line_width) - - top = xy[1] - if anchor[1] == "m": - top -= (len(lines) - 1) * line_spacing / 2.0 - elif anchor[1] == "d": - top -= (len(lines) - 1) * line_spacing - - bbox = None - - for idx, line in enumerate(lines): - left = xy[0] - width_difference = max_width - widths[idx] - - # first align left by anchor - if anchor[0] == "m": - left -= width_difference / 2.0 - elif anchor[0] == "r": - left -= width_difference - - # then align by align parameter - if align == "left": - pass - elif align == "center": - left += width_difference / 2.0 - elif align == "right": - left += width_difference - else: - raise ValueError('align must be "left", "center" or "right"') - - bbox_line = self.textbbox( - (left, top), - line, - font, - anchor, - direction=direction, - features=features, - language=language, - stroke_width=stroke_width, - embedded_color=embedded_color, - ) - if bbox is None: - bbox = bbox_line - else: - bbox = ( - min(bbox[0], bbox_line[0]), - min(bbox[1], bbox_line[1]), - max(bbox[2], bbox_line[2]), - max(bbox[3], bbox_line[3]), - ) - - top += line_spacing - - if bbox is None: - return xy[0], xy[1], xy[0], xy[1] - return bbox - - -def Draw(im, mode=None): - """ - A simple 2D drawing interface for PIL images. - - :param im: The image to draw in. 
- :param mode: Optional mode to use for color values. For RGB - images, this argument can be RGB or RGBA (to blend the - drawing into the image). For all other modes, this argument - must be the same as the image mode. If omitted, the mode - defaults to the mode of the image. - """ - try: - return im.getdraw(mode) - except AttributeError: - return ImageDraw(im, mode) - - -# experimental access to the outline API -try: - Outline = Image.core.outline -except AttributeError: - Outline = None - - -def getdraw(im=None, hints=None): - """ - (Experimental) A more advanced 2D drawing interface for PIL images, - based on the WCK interface. - - :param im: The image to draw in. - :param hints: An optional list of hints. - :returns: A (drawing context, drawing resource factory) tuple. - """ - # FIXME: this needs more work! - # FIXME: come up with a better 'hints' scheme. - handler = None - if not hints or "nicest" in hints: - try: - from . import _imagingagg as handler - except ImportError: - pass - if handler is None: - from . import ImageDraw2 as handler - if im: - im = handler.Draw(im) - return im, handler - - -def floodfill(image, xy, value, border=None, thresh=0): - """ - (experimental) Fills a bounded region with a given color. - - :param image: Target image. - :param xy: Seed position (a 2-item coordinate tuple). See - :ref:`coordinate-system`. - :param value: Fill color. - :param border: Optional border value. If given, the region consists of - pixels with a color different from the border color. If not given, - the region consists of pixels having the same color as the seed - pixel. - :param thresh: Optional threshold value which specifies a maximum - tolerable difference of a pixel value from the 'background' in - order for it to be replaced. Useful for filling regions of - non-homogeneous, but similar, colors. - """ - # based on an implementation by Eric S. Raymond - # amended by yo1995 @20180806 - pixel = image.load() - x, y = xy - try: - background = pixel[x, y] - if _color_diff(value, background) <= thresh: - return # seed point already has fill color - pixel[x, y] = value - except (ValueError, IndexError): - return # seed point outside image - edge = {(x, y)} - # use a set to keep record of current and previous edge pixels - # to reduce memory consumption - full_edge = set() - while edge: - new_edge = set() - for (x, y) in edge: # 4 adjacent method - for (s, t) in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)): - # If already processed, or if a coordinate is negative, skip - if (s, t) in full_edge or s < 0 or t < 0: - continue - try: - p = pixel[s, t] - except (ValueError, IndexError): - pass - else: - full_edge.add((s, t)) - if border is None: - fill = _color_diff(p, background) <= thresh - else: - fill = p != value and p != border - if fill: - pixel[s, t] = value - new_edge.add((s, t)) - full_edge = edge # discard pixels processed - edge = new_edge - - -def _compute_regular_polygon_vertices(bounding_circle, n_sides, rotation): - """ - Generate a list of vertices for a 2D regular polygon. - - :param bounding_circle: The bounding circle is a tuple defined - by a point and radius. The polygon is inscribed in this circle. - (e.g. ``bounding_circle=(x, y, r)`` or ``((x, y), r)``) - :param n_sides: Number of sides - (e.g. ``n_sides=3`` for a triangle, ``6`` for a hexagon) - :param rotation: Apply an arbitrary rotation to the polygon - (e.g. ``rotation=90``, applies a 90 degree rotation) - :return: List of regular polygon vertices - (e.g. 
``[(25, 50), (50, 50), (50, 25), (25, 25)]``) - - How are the vertices computed? - 1. Compute the following variables - - theta: Angle between the apothem & the nearest polygon vertex - - side_length: Length of each polygon edge - - centroid: Center of bounding circle (1st, 2nd elements of bounding_circle) - - polygon_radius: Polygon radius (last element of bounding_circle) - - angles: Location of each polygon vertex in polar grid - (e.g. A square with 0 degree rotation => [225.0, 315.0, 45.0, 135.0]) - - 2. For each angle in angles, get the polygon vertex at that angle - The vertex is computed using the equation below. - X= xcos(φ) + ysin(φ) - Y= −xsin(φ) + ycos(φ) - - Note: - φ = angle in degrees - x = 0 - y = polygon_radius - - The formula above assumes rotation around the origin. - In our case, we are rotating around the centroid. - To account for this, we use the formula below - X = xcos(φ) + ysin(φ) + centroid_x - Y = −xsin(φ) + ycos(φ) + centroid_y - """ - # 1. Error Handling - # 1.1 Check `n_sides` has an appropriate value - if not isinstance(n_sides, int): - raise TypeError("n_sides should be an int") - if n_sides < 3: - raise ValueError("n_sides should be an int > 2") - - # 1.2 Check `bounding_circle` has an appropriate value - if not isinstance(bounding_circle, (list, tuple)): - raise TypeError("bounding_circle should be a tuple") - - if len(bounding_circle) == 3: - *centroid, polygon_radius = bounding_circle - elif len(bounding_circle) == 2: - centroid, polygon_radius = bounding_circle - else: - raise ValueError( - "bounding_circle should contain 2D coordinates " - "and a radius (e.g. (x, y, r) or ((x, y), r) )" - ) - - if not all(isinstance(i, (int, float)) for i in (*centroid, polygon_radius)): - raise ValueError("bounding_circle should only contain numeric data") - - if not len(centroid) == 2: - raise ValueError( - "bounding_circle centre should contain 2D coordinates (e.g. (x, y))" - ) - - if polygon_radius <= 0: - raise ValueError("bounding_circle radius should be > 0") - - # 1.3 Check `rotation` has an appropriate value - if not isinstance(rotation, (int, float)): - raise ValueError("rotation should be an int or float") - - # 2. Define Helper Functions - def _apply_rotation(point, degrees, centroid): - return ( - round( - point[0] * math.cos(math.radians(360 - degrees)) - - point[1] * math.sin(math.radians(360 - degrees)) - + centroid[0], - 2, - ), - round( - point[1] * math.cos(math.radians(360 - degrees)) - + point[0] * math.sin(math.radians(360 - degrees)) - + centroid[1], - 2, - ), - ) - - def _compute_polygon_vertex(centroid, polygon_radius, angle): - start_point = [polygon_radius, 0] - return _apply_rotation(start_point, angle, centroid) - - def _get_angles(n_sides, rotation): - angles = [] - degrees = 360 / n_sides - # Start with the bottom left polygon vertex - current_angle = (270 - 0.5 * degrees) + rotation - for _ in range(0, n_sides): - angles.append(current_angle) - current_angle += degrees - if current_angle > 360: - current_angle -= 360 - return angles - - # 3. Variable Declarations - angles = _get_angles(n_sides, rotation) - - # 4. Compute Vertices - return [ - _compute_polygon_vertex(centroid, polygon_radius, angle) for angle in angles - ] - - -def _color_diff(color1, color2): - """ - Uses 1-norm distance to calculate difference between two values. 
- """ - if isinstance(color2, tuple): - return sum(abs(color1[i] - color2[i]) for i in range(0, len(color2))) - else: - return abs(color1 - color2) diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/absl/command_name.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/absl/command_name.py deleted file mode 100644 index 19964937d1762f067d6e045f46d6bd2e679c7227..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/absl/command_name.py +++ /dev/null @@ -1,63 +0,0 @@ -# Copyright 2017 The Abseil Authors. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -"""A tiny stand alone library to change the kernel process name on Linux.""" - -import os -import sys - -# This library must be kept small and stand alone. It is used by small things -# that require no extension modules. - - -def make_process_name_useful(): - """Sets the process name to something better than 'python' if possible.""" - set_kernel_process_name(os.path.basename(sys.argv[0])) - - -def set_kernel_process_name(name): - """Changes the Kernel's /proc/self/status process name on Linux. - - The kernel name is NOT what will be shown by the ps or top command. - It is a 15 character string stored in the kernel's process table that - is included in the kernel log when a process is OOM killed. - The first 15 bytes of name are used. Non-ASCII unicode is replaced with '?'. - - Does nothing if /proc/self/comm cannot be written or prctl() fails. - - Args: - name: bytes|unicode, the Linux kernel's command name to set. - """ - if not isinstance(name, bytes): - name = name.encode('ascii', 'replace') - try: - # This is preferred to using ctypes to try and call prctl() when possible. - with open('/proc/self/comm', 'wb') as proc_comm: - proc_comm.write(name[:15]) - except EnvironmentError: - try: - import ctypes - except ImportError: - return # No ctypes. - try: - libc = ctypes.CDLL('libc.so.6') - except EnvironmentError: - return # No libc.so.6. - pr_set_name = ctypes.c_ulong(15) # linux/prctl.h PR_SET_NAME value. - zero = ctypes.c_ulong(0) - try: - libc.prctl(pr_set_name, name, zero, zero, zero) - # Ignore the prctl return value. Nothing we can do if it errored. - except AttributeError: - return # No prctl. diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/examples/adaptive_span/adaptive_span_attention.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/examples/adaptive_span/adaptive_span_attention.py deleted file mode 100644 index 07f757bb8e1a8a67b1124175ee338c8735aa8d65..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/examples/adaptive_span/adaptive_span_attention.py +++ /dev/null @@ -1,160 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
-import math - -import torch -import torch.nn as nn -import torch.nn.functional as F - - -class AdaptiveMask(nn.Module): - """Soft masking function for adaptive size. - It masks out the last K values of an input. The masking value - goes from 1 to 0 gradually, so K can be learned with - back-propagation. - Args: - max_size: maximum size (i.e. input dimension) - ramp_size: size of the ramp going from 0 to 1 - init_val: initial size proportion not to be masked out - shape: learn multiple sizes independent of each other - """ - - def __init__(self, max_size, ramp_size, init_val=0, shape=(1,)): - nn.Module.__init__(self) - self._max_size = max_size - self._ramp_size = ramp_size - self.current_val = nn.Parameter(torch.zeros(*shape) + init_val) - mask_template = torch.linspace(1 - max_size, 0, steps=max_size) - self.register_buffer("mask_template", mask_template) - - def forward(self, x): - mask = self.mask_template.float() + self.current_val.float() * self._max_size - mask = mask / self._ramp_size + 1 - mask = mask.clamp(0, 1) - if x.size(-1) < self._max_size: - # the input could have been trimmed beforehand to save computation - mask = mask.narrow(-1, self._max_size - x.size(-1), x.size(-1)) - x = (x * mask).type_as(x) - return x - - def get_current_max_size(self, include_ramp=True): - current_size = math.ceil(self.current_val.max().item() * self._max_size) - if include_ramp: - current_size += self._ramp_size - current_size = max(0, min(self._max_size, current_size)) - return current_size - - def get_current_avg_size(self, include_ramp=True): - current_size = math.ceil( - self.current_val.float().mean().item() * self._max_size - ) - if include_ramp: - current_size += self._ramp_size - current_size = max(0, min(self._max_size, current_size)) - return current_size - - def clamp_param(self): - """this need to be called after each update""" - self.current_val.data.clamp_(0, 1) - - -class AdaptiveSpan(nn.Module): - """Adaptive attention span for Transformerself. - This module learns an attention span length from data for each - self-attention head. 
- Args: - attn_span: maximum attention span - adapt_span_loss: loss coefficient for the span length - adapt_span_ramp: length of the masking ramp - adapt_span_init: initial size ratio - adapt_span_cache: adapt cache size to reduce memory usage - """ - - def __init__( - self, - attn_span, - adapt_span_ramp, - adapt_span_init, - n_head, - adapt_span_layer, - **kargs - ): - nn.Module.__init__(self) - self._max_span = attn_span - self._n_head = n_head - self._adapt_span_layer = adapt_span_layer - if self._adapt_span_layer: - self._mask = AdaptiveMask( - max_size=self._max_span, - ramp_size=adapt_span_ramp, - init_val=adapt_span_init, - ) - else: - self._mask = AdaptiveMask( - max_size=self._max_span, - ramp_size=adapt_span_ramp, - init_val=adapt_span_init, - shape=(n_head, 1, 1), - ) - - def forward(self, attn, normalize=True): - """mask attention with the right span""" - # batch and head dimensions are merged together, so separate them first - self.clamp_param() - if self._adapt_span_layer: - attn = self._mask(attn) - else: - B = attn.size(0) # batch size - M = attn.size(1) # block size - attn = attn.reshape(B // self._n_head, self._n_head, M, -1) - attn = self._mask(attn) - attn = attn.view(B, M, -1) - return attn - - def get_trim_len(self): - """how much of memory can be trimmed to reduce computation""" - L = self._max_span - trim_len = min(L - 1, L - self._mask.get_current_max_size()) - # too fine granularity might be bad for the memory management - trim_len = math.floor(trim_len / 64) * 64 - return trim_len - - def trim_memory(self, query, key, value, key_pe): - """trim out unnecessary memory beforehand to reduce computation""" - trim_len = self.get_trim_len() - cache_size = key.size(1) - query.size(1) - trim_len_cache = trim_len - (self._max_span - cache_size) - if trim_len_cache > 0: - key = key[:, trim_len_cache:, :] - value = value[:, trim_len_cache:, :] - elif trim_len_cache < 0: - # cache is too short! this happens when validation resumes - # after a lot of updates. 
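# (the F.pad calls below left-pad the time dimension of the cached keys and
#  values with zeros so they regain the expected cache length)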
- key = F.pad(key, [0, 0, -trim_len_cache, 0]) - value = F.pad(value, [0, 0, -trim_len_cache, 0]) - if trim_len > 0: - if key_pe is not None: - key_pe = key_pe[:, :, trim_len:] - return key, value, key_pe - - def get_cache_size(self): - """determine how long the cache should be""" - trim_len = self.get_trim_len() - # give a buffer of 64 steps since a span might increase - # in future updates - return min(self._max_span, self._max_span - trim_len + 64) - - def get_loss(self): - """a loss term for regularizing the span length""" - return self._max_span * self._mask.current_val.float().mean() - - def get_current_max_span(self): - return self._mask.get_current_max_size() - - def get_current_avg_span(self): - return self._mask.get_current_avg_size() - - def clamp_param(self): - self._mask.clamp_param() diff --git a/spaces/attention-refocusing/Attention-refocusing/gligen/ldm/modules/x_transformer.py b/spaces/attention-refocusing/Attention-refocusing/gligen/ldm/modules/x_transformer.py deleted file mode 100644 index 5fc15bf9cfe0111a910e7de33d04ffdec3877576..0000000000000000000000000000000000000000 --- a/spaces/attention-refocusing/Attention-refocusing/gligen/ldm/modules/x_transformer.py +++ /dev/null @@ -1,641 +0,0 @@ -"""shout-out to https://github.com/lucidrains/x-transformers/tree/main/x_transformers""" -import torch -from torch import nn, einsum -import torch.nn.functional as F -from functools import partial -from inspect import isfunction -from collections import namedtuple -from einops import rearrange, repeat, reduce - -# constants - -DEFAULT_DIM_HEAD = 64 - -Intermediates = namedtuple('Intermediates', [ - 'pre_softmax_attn', - 'post_softmax_attn' -]) - -LayerIntermediates = namedtuple('Intermediates', [ - 'hiddens', - 'attn_intermediates' -]) - - -class AbsolutePositionalEmbedding(nn.Module): - def __init__(self, dim, max_seq_len): - super().__init__() - self.emb = nn.Embedding(max_seq_len, dim) - self.init_() - - def init_(self): - nn.init.normal_(self.emb.weight, std=0.02) - - def forward(self, x): - n = torch.arange(x.shape[1], device=x.device) - return self.emb(n)[None, :, :] - - -class FixedPositionalEmbedding(nn.Module): - def __init__(self, dim): - super().__init__() - inv_freq = 1. 
/ (10000 ** (torch.arange(0, dim, 2).float() / dim)) - self.register_buffer('inv_freq', inv_freq) - - def forward(self, x, seq_dim=1, offset=0): - t = torch.arange(x.shape[seq_dim], device=x.device).type_as(self.inv_freq) + offset - sinusoid_inp = torch.einsum('i , j -> i j', t, self.inv_freq) - emb = torch.cat((sinusoid_inp.sin(), sinusoid_inp.cos()), dim=-1) - return emb[None, :, :] - - -# helpers - -def exists(val): - return val is not None - - -def default(val, d): - if exists(val): - return val - return d() if isfunction(d) else d - - -def always(val): - def inner(*args, **kwargs): - return val - return inner - - -def not_equals(val): - def inner(x): - return x != val - return inner - - -def equals(val): - def inner(x): - return x == val - return inner - - -def max_neg_value(tensor): - return -torch.finfo(tensor.dtype).max - - -# keyword argument helpers - -def pick_and_pop(keys, d): - values = list(map(lambda key: d.pop(key), keys)) - return dict(zip(keys, values)) - - -def group_dict_by_key(cond, d): - return_val = [dict(), dict()] - for key in d.keys(): - match = bool(cond(key)) - ind = int(not match) - return_val[ind][key] = d[key] - return (*return_val,) - - -def string_begins_with(prefix, str): - return str.startswith(prefix) - - -def group_by_key_prefix(prefix, d): - return group_dict_by_key(partial(string_begins_with, prefix), d) - - -def groupby_prefix_and_trim(prefix, d): - kwargs_with_prefix, kwargs = group_dict_by_key(partial(string_begins_with, prefix), d) - kwargs_without_prefix = dict(map(lambda x: (x[0][len(prefix):], x[1]), tuple(kwargs_with_prefix.items()))) - return kwargs_without_prefix, kwargs - - -# classes -class Scale(nn.Module): - def __init__(self, value, fn): - super().__init__() - self.value = value - self.fn = fn - - def forward(self, x, **kwargs): - x, *rest = self.fn(x, **kwargs) - return (x * self.value, *rest) - - -class Rezero(nn.Module): - def __init__(self, fn): - super().__init__() - self.fn = fn - self.g = nn.Parameter(torch.zeros(1)) - - def forward(self, x, **kwargs): - x, *rest = self.fn(x, **kwargs) - return (x * self.g, *rest) - - -class ScaleNorm(nn.Module): - def __init__(self, dim, eps=1e-5): - super().__init__() - self.scale = dim ** -0.5 - self.eps = eps - self.g = nn.Parameter(torch.ones(1)) - - def forward(self, x): - norm = torch.norm(x, dim=-1, keepdim=True) * self.scale - return x / norm.clamp(min=self.eps) * self.g - - -class RMSNorm(nn.Module): - def __init__(self, dim, eps=1e-8): - super().__init__() - self.scale = dim ** -0.5 - self.eps = eps - self.g = nn.Parameter(torch.ones(dim)) - - def forward(self, x): - norm = torch.norm(x, dim=-1, keepdim=True) * self.scale - return x / norm.clamp(min=self.eps) * self.g - - -class Residual(nn.Module): - def forward(self, x, residual): - return x + residual - - -class GRUGating(nn.Module): - def __init__(self, dim): - super().__init__() - self.gru = nn.GRUCell(dim, dim) - - def forward(self, x, residual): - gated_output = self.gru( - rearrange(x, 'b n d -> (b n) d'), - rearrange(residual, 'b n d -> (b n) d') - ) - - return gated_output.reshape_as(x) - - -# feedforward - -class GEGLU(nn.Module): - def __init__(self, dim_in, dim_out): - super().__init__() - self.proj = nn.Linear(dim_in, dim_out * 2) - - def forward(self, x): - x, gate = self.proj(x).chunk(2, dim=-1) - return x * F.gelu(gate) - - -class FeedForward(nn.Module): - def __init__(self, dim, dim_out=None, mult=4, glu=False, dropout=0.): - super().__init__() - inner_dim = int(dim * mult) - dim_out = default(dim_out, dim) - 
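# When glu=True, project_in below is a GEGLU: a single Linear produces
# 2 * inner_dim features, half of which gate the other half through GELU;
# otherwise it is a plain Linear followed by GELU.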
project_in = nn.Sequential( - nn.Linear(dim, inner_dim), - nn.GELU() - ) if not glu else GEGLU(dim, inner_dim) - - self.net = nn.Sequential( - project_in, - nn.Dropout(dropout), - nn.Linear(inner_dim, dim_out) - ) - - def forward(self, x): - return self.net(x) - - -# attention. -class Attention(nn.Module): - def __init__( - self, - dim, - dim_head=DEFAULT_DIM_HEAD, - heads=8, - causal=False, - mask=None, - talking_heads=False, - sparse_topk=None, - use_entmax15=False, - num_mem_kv=0, - dropout=0., - on_attn=False - ): - super().__init__() - if use_entmax15: - raise NotImplementedError("Check out entmax activation instead of softmax activation!") - self.scale = dim_head ** -0.5 - self.heads = heads - self.causal = causal - self.mask = mask - - inner_dim = dim_head * heads - - self.to_q = nn.Linear(dim, inner_dim, bias=False) - self.to_k = nn.Linear(dim, inner_dim, bias=False) - self.to_v = nn.Linear(dim, inner_dim, bias=False) - self.dropout = nn.Dropout(dropout) - - # talking heads - self.talking_heads = talking_heads - if talking_heads: - self.pre_softmax_proj = nn.Parameter(torch.randn(heads, heads)) - self.post_softmax_proj = nn.Parameter(torch.randn(heads, heads)) - - # explicit topk sparse attention - self.sparse_topk = sparse_topk - - # entmax - #self.attn_fn = entmax15 if use_entmax15 else F.softmax - self.attn_fn = F.softmax - - # add memory key / values - self.num_mem_kv = num_mem_kv - if num_mem_kv > 0: - self.mem_k = nn.Parameter(torch.randn(heads, num_mem_kv, dim_head)) - self.mem_v = nn.Parameter(torch.randn(heads, num_mem_kv, dim_head)) - - # attention on attention - self.attn_on_attn = on_attn - self.to_out = nn.Sequential(nn.Linear(inner_dim, dim * 2), nn.GLU()) if on_attn else nn.Linear(inner_dim, dim) - - def forward( - self, - x, - context=None, - mask=None, - context_mask=None, - rel_pos=None, - sinusoidal_emb=None, - prev_attn=None, - mem=None - ): - b, n, _, h, talking_heads, device = *x.shape, self.heads, self.talking_heads, x.device - kv_input = default(context, x) - - q_input = x - k_input = kv_input - v_input = kv_input - - if exists(mem): - k_input = torch.cat((mem, k_input), dim=-2) - v_input = torch.cat((mem, v_input), dim=-2) - - if exists(sinusoidal_emb): - # in shortformer, the query would start at a position offset depending on the past cached memory - offset = k_input.shape[-2] - q_input.shape[-2] - q_input = q_input + sinusoidal_emb(q_input, offset=offset) - k_input = k_input + sinusoidal_emb(k_input) - - q = self.to_q(q_input) - k = self.to_k(k_input) - v = self.to_v(v_input) - - q, k, v = map(lambda t: rearrange(t, 'b n (h d) -> b h n d', h=h), (q, k, v)) - - input_mask = None - if any(map(exists, (mask, context_mask))): - q_mask = default(mask, lambda: torch.ones((b, n), device=device).bool()) - k_mask = q_mask if not exists(context) else context_mask - k_mask = default(k_mask, lambda: torch.ones((b, k.shape[-2]), device=device).bool()) - q_mask = rearrange(q_mask, 'b i -> b () i ()') - k_mask = rearrange(k_mask, 'b j -> b () () j') - input_mask = q_mask * k_mask - - if self.num_mem_kv > 0: - mem_k, mem_v = map(lambda t: repeat(t, 'h n d -> b h n d', b=b), (self.mem_k, self.mem_v)) - k = torch.cat((mem_k, k), dim=-2) - v = torch.cat((mem_v, v), dim=-2) - if exists(input_mask): - input_mask = F.pad(input_mask, (self.num_mem_kv, 0), value=True) - - dots = einsum('b h i d, b h j d -> b h i j', q, k) * self.scale - mask_value = max_neg_value(dots) - - if exists(prev_attn): - dots = dots + prev_attn - - pre_softmax_attn = dots - - if talking_heads: - dots = 
einsum('b h i j, h k -> b k i j', dots, self.pre_softmax_proj).contiguous() - - if exists(rel_pos): - dots = rel_pos(dots) - - if exists(input_mask): - dots.masked_fill_(~input_mask, mask_value) - del input_mask - - if self.causal: - i, j = dots.shape[-2:] - r = torch.arange(i, device=device) - mask = rearrange(r, 'i -> () () i ()') < rearrange(r, 'j -> () () () j') - mask = F.pad(mask, (j - i, 0), value=False) - dots.masked_fill_(mask, mask_value) - del mask - - if exists(self.sparse_topk) and self.sparse_topk < dots.shape[-1]: - top, _ = dots.topk(self.sparse_topk, dim=-1) - vk = top[..., -1].unsqueeze(-1).expand_as(dots) - mask = dots < vk - dots.masked_fill_(mask, mask_value) - del mask - - attn = self.attn_fn(dots, dim=-1) - post_softmax_attn = attn - - attn = self.dropout(attn) - - if talking_heads: - attn = einsum('b h i j, h k -> b k i j', attn, self.post_softmax_proj).contiguous() - - out = einsum('b h i j, b h j d -> b h i d', attn, v) - out = rearrange(out, 'b h n d -> b n (h d)') - - intermediates = Intermediates( - pre_softmax_attn=pre_softmax_attn, - post_softmax_attn=post_softmax_attn - ) - - return self.to_out(out), intermediates - - -class AttentionLayers(nn.Module): - def __init__( - self, - dim, - depth, - heads=8, - causal=False, - cross_attend=False, - only_cross=False, - use_scalenorm=False, - use_rmsnorm=False, - use_rezero=False, - rel_pos_num_buckets=32, - rel_pos_max_distance=128, - position_infused_attn=False, - custom_layers=None, - sandwich_coef=None, - par_ratio=None, - residual_attn=False, - cross_residual_attn=False, - macaron=False, - pre_norm=True, - gate_residual=False, - **kwargs - ): - super().__init__() - ff_kwargs, kwargs = groupby_prefix_and_trim('ff_', kwargs) - attn_kwargs, _ = groupby_prefix_and_trim('attn_', kwargs) - - dim_head = attn_kwargs.get('dim_head', DEFAULT_DIM_HEAD) - - self.dim = dim - self.depth = depth - self.layers = nn.ModuleList([]) - - self.has_pos_emb = position_infused_attn - self.pia_pos_emb = FixedPositionalEmbedding(dim) if position_infused_attn else None - self.rotary_pos_emb = always(None) - - assert rel_pos_num_buckets <= rel_pos_max_distance, 'number of relative position buckets must be less than the relative position max distance' - self.rel_pos = None - - self.pre_norm = pre_norm - - self.residual_attn = residual_attn - self.cross_residual_attn = cross_residual_attn - - norm_class = ScaleNorm if use_scalenorm else nn.LayerNorm - norm_class = RMSNorm if use_rmsnorm else norm_class - norm_fn = partial(norm_class, dim) - - norm_fn = nn.Identity if use_rezero else norm_fn - branch_fn = Rezero if use_rezero else None - - if cross_attend and not only_cross: - default_block = ('a', 'c', 'f') - elif cross_attend and only_cross: - default_block = ('c', 'f') - else: - default_block = ('a', 'f') - - if macaron: - default_block = ('f',) + default_block - - if exists(custom_layers): - layer_types = custom_layers - elif exists(par_ratio): - par_depth = depth * len(default_block) - assert 1 < par_ratio <= par_depth, 'par ratio out of range' - default_block = tuple(filter(not_equals('f'), default_block)) - par_attn = par_depth // par_ratio - depth_cut = par_depth * 2 // 3 # 2 / 3 attention layer cutoff suggested by PAR paper - par_width = (depth_cut + depth_cut // par_attn) // par_attn - assert len(default_block) <= par_width, 'default block is too large for par_ratio' - par_block = default_block + ('f',) * (par_width - len(default_block)) - par_head = par_block * par_attn - layer_types = par_head + ('f',) * (par_depth - 
len(par_head)) - elif exists(sandwich_coef): - assert sandwich_coef > 0 and sandwich_coef <= depth, 'sandwich coefficient should be less than the depth' - layer_types = ('a',) * sandwich_coef + default_block * (depth - sandwich_coef) + ('f',) * sandwich_coef - else: - layer_types = default_block * depth - - self.layer_types = layer_types - self.num_attn_layers = len(list(filter(equals('a'), layer_types))) - - for layer_type in self.layer_types: - if layer_type == 'a': - layer = Attention(dim, heads=heads, causal=causal, **attn_kwargs) - elif layer_type == 'c': - layer = Attention(dim, heads=heads, **attn_kwargs) - elif layer_type == 'f': - layer = FeedForward(dim, **ff_kwargs) - layer = layer if not macaron else Scale(0.5, layer) - else: - raise Exception(f'invalid layer type {layer_type}') - - if isinstance(layer, Attention) and exists(branch_fn): - layer = branch_fn(layer) - - if gate_residual: - residual_fn = GRUGating(dim) - else: - residual_fn = Residual() - - self.layers.append(nn.ModuleList([ - norm_fn(), - layer, - residual_fn - ])) - - def forward( - self, - x, - context=None, - mask=None, - context_mask=None, - mems=None, - return_hiddens=False - ): - hiddens = [] - intermediates = [] - prev_attn = None - prev_cross_attn = None - - mems = mems.copy() if exists(mems) else [None] * self.num_attn_layers - - for ind, (layer_type, (norm, block, residual_fn)) in enumerate(zip(self.layer_types, self.layers)): - is_last = ind == (len(self.layers) - 1) - - if layer_type == 'a': - hiddens.append(x) - layer_mem = mems.pop(0) - - residual = x - - if self.pre_norm: - x = norm(x) - - if layer_type == 'a': - out, inter = block(x, mask=mask, sinusoidal_emb=self.pia_pos_emb, rel_pos=self.rel_pos, - prev_attn=prev_attn, mem=layer_mem) - elif layer_type == 'c': - out, inter = block(x, context=context, mask=mask, context_mask=context_mask, prev_attn=prev_cross_attn) - elif layer_type == 'f': - out = block(x) - - x = residual_fn(out, residual) - - if layer_type in ('a', 'c'): - intermediates.append(inter) - - if layer_type == 'a' and self.residual_attn: - prev_attn = inter.pre_softmax_attn - elif layer_type == 'c' and self.cross_residual_attn: - prev_cross_attn = inter.pre_softmax_attn - - if not self.pre_norm and not is_last: - x = norm(x) - - if return_hiddens: - intermediates = LayerIntermediates( - hiddens=hiddens, - attn_intermediates=intermediates - ) - - return x, intermediates - - return x - - -class Encoder(AttentionLayers): - def __init__(self, **kwargs): - assert 'causal' not in kwargs, 'cannot set causality on encoder' - super().__init__(causal=False, **kwargs) - - - -class TransformerWrapper(nn.Module): - def __init__( - self, - *, - num_tokens, - max_seq_len, - attn_layers, - emb_dim=None, - max_mem_len=0., - emb_dropout=0., - num_memory_tokens=None, - tie_embedding=False, - use_pos_emb=True - ): - super().__init__() - assert isinstance(attn_layers, AttentionLayers), 'attention layers must be one of Encoder or Decoder' - - dim = attn_layers.dim - emb_dim = default(emb_dim, dim) - - self.max_seq_len = max_seq_len - self.max_mem_len = max_mem_len - self.num_tokens = num_tokens - - self.token_emb = nn.Embedding(num_tokens, emb_dim) - self.pos_emb = AbsolutePositionalEmbedding(emb_dim, max_seq_len) if ( - use_pos_emb and not attn_layers.has_pos_emb) else always(0) - self.emb_dropout = nn.Dropout(emb_dropout) - - self.project_emb = nn.Linear(emb_dim, dim) if emb_dim != dim else nn.Identity() - self.attn_layers = attn_layers - self.norm = nn.LayerNorm(dim) - - self.init_() - - self.to_logits 
= nn.Linear(dim, num_tokens) if not tie_embedding else lambda t: t @ self.token_emb.weight.t() - - # memory tokens (like [cls]) from Memory Transformers paper - num_memory_tokens = default(num_memory_tokens, 0) - self.num_memory_tokens = num_memory_tokens - if num_memory_tokens > 0: - self.memory_tokens = nn.Parameter(torch.randn(num_memory_tokens, dim)) - - # let funnel encoder know number of memory tokens, if specified - if hasattr(attn_layers, 'num_memory_tokens'): - attn_layers.num_memory_tokens = num_memory_tokens - - def init_(self): - nn.init.normal_(self.token_emb.weight, std=0.02) - - def forward( - self, - x, - return_embeddings=False, - mask=None, - return_mems=False, - return_attn=False, - mems=None, - **kwargs - ): - b, n, device, num_mem = *x.shape, x.device, self.num_memory_tokens - x = self.token_emb(x) - x += self.pos_emb(x) - x = self.emb_dropout(x) - - x = self.project_emb(x) - - if num_mem > 0: - mem = repeat(self.memory_tokens, 'n d -> b n d', b=b) - x = torch.cat((mem, x), dim=1) - - # auto-handle masking after appending memory tokens - if exists(mask): - mask = F.pad(mask, (num_mem, 0), value=True) - - x, intermediates = self.attn_layers(x, mask=mask, mems=mems, return_hiddens=True, **kwargs) - x = self.norm(x) - - mem, x = x[:, :num_mem], x[:, num_mem:] - - out = self.to_logits(x) if not return_embeddings else x - - if return_mems: - hiddens = intermediates.hiddens - new_mems = list(map(lambda pair: torch.cat(pair, dim=-2), zip(mems, hiddens))) if exists(mems) else hiddens - new_mems = list(map(lambda t: t[..., -self.max_mem_len:, :].detach(), new_mems)) - return out, new_mems - - if return_attn: - attn_maps = list(map(lambda t: t.post_softmax_attn, intermediates.attn_intermediates)) - return out, attn_maps - - return out - diff --git a/spaces/aukaru/claude-wangy/Dockerfile b/spaces/aukaru/claude-wangy/Dockerfile deleted file mode 100644 index 4cb0ce42128d9a2ad33a395883f5e5455a38c707..0000000000000000000000000000000000000000 --- a/spaces/aukaru/claude-wangy/Dockerfile +++ /dev/null @@ -1,11 +0,0 @@ -FROM node:18-bullseye-slim -RUN apt-get update && \ - apt-get install -y git -RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app -WORKDIR /app -RUN npm install -COPY Dockerfile greeting.md* .env* ./ -RUN npm run build -EXPOSE 7860 -ENV NODE_ENV=production -CMD [ "npm", "start" ] \ No newline at end of file diff --git a/spaces/awacke1/CarePlanQnAWithContext/README.md b/spaces/awacke1/CarePlanQnAWithContext/README.md deleted file mode 100644 index 4c7eb51035c0d5921f9b732c968ea74c86fd4463..0000000000000000000000000000000000000000 --- a/spaces/awacke1/CarePlanQnAWithContext/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: 🧠AskMeAnythingCareAndPrompts -emoji: 🔬🩺Experimental⚕️🧬 -colorFrom: purple -colorTo: pink -sdk: gradio -sdk_version: 3.1.1 -app_file: app.py -pinned: false -license: mit ---- \ No newline at end of file diff --git a/spaces/awen666/web-ui/_next/static/chunks/780.bcee5bed561e6909.js b/spaces/awen666/web-ui/_next/static/chunks/780.bcee5bed561e6909.js deleted file mode 100644 index 65c87a5248eb904ea378deecf62425ac6b8c546e..0000000000000000000000000000000000000000 --- a/spaces/awen666/web-ui/_next/static/chunks/780.bcee5bed561e6909.js +++ /dev/null @@ -1,260 +0,0 @@ -(self.webpackChunk_N_E=self.webpackChunk_N_E||[]).push([[780],{33747:function(e,t,n){"use strict";n.d(t,{YF:function(){return p},x7:function(){return l}});var r=n(21828),o=n(41778),i=n(86006),a=n(8431);let 
l=e=>({name:"arrow",options:e,fn(t){let{element:n,padding:o}="function"==typeof e?e(t):e;if(n&&({}).hasOwnProperty.call(n,"current")){if(null!=n.current)return(0,r.x7)({element:n.current,padding:o}).fn(t)}else if(n)return(0,r.x7)({element:n,padding:o}).fn(t);return{}}});var s="undefined"!=typeof document?i.useLayoutEffect:i.useEffect;function c(e,t){let n,r,o;if(e===t)return!0;if(typeof e!=typeof t)return!1;if("function"==typeof e&&e.toString()===t.toString())return!0;if(e&&t&&"object"==typeof e){if(Array.isArray(e)){if((n=e.length)!=t.length)return!1;for(r=n;0!=r--;)if(!c(e[r],t[r]))return!1;return!0}if((n=(o=Object.keys(e)).length)!==Object.keys(t).length)return!1;for(r=n;0!=r--;)if(!({}).hasOwnProperty.call(t,o[r]))return!1;for(r=n;0!=r--;){let n=o[r];if(("_owner"!==n||!e.$$typeof)&&!c(e[n],t[n]))return!1}return!0}return e!=e&&t!=t}function u(e){if("undefined"==typeof window)return 1;let t=e.ownerDocument.defaultView||window;return t.devicePixelRatio||1}function f(e,t){let n=u(e);return Math.round(t*n)/n}function d(e){let t=i.useRef(e);return s(()=>{t.current=e}),t}function p(e){void 0===e&&(e={});let{placement:t="bottom",strategy:n="absolute",middleware:r=[],platform:l,elements:{reference:p,floating:h}={},transform:g=!0,whileElementsMounted:m,open:b}=e,[v,y]=i.useState({x:0,y:0,strategy:n,placement:t,middlewareData:{},isPositioned:!1}),[x,w]=i.useState(r);c(x,r)||w(r);let[E,S]=i.useState(null),[k,_]=i.useState(null),O=i.useCallback(e=>{e!=R.current&&(R.current=e,S(e))},[S]),C=i.useCallback(e=>{e!==T.current&&(T.current=e,_(e))},[_]),A=p||E,N=h||k,R=i.useRef(null),T=i.useRef(null),P=i.useRef(v),M=d(m),j=d(l),L=i.useCallback(()=>{if(!R.current||!T.current)return;let e={placement:t,strategy:n,middleware:x};j.current&&(e.platform=j.current),(0,o.oo)(R.current,T.current,e).then(e=>{let t={...e,isPositioned:!0};I.current&&!c(P.current,t)&&(P.current=t,a.flushSync(()=>{y(t)}))})},[x,t,n,j]);s(()=>{!1===b&&P.current.isPositioned&&(P.current.isPositioned=!1,y(e=>({...e,isPositioned:!1})))},[b]);let I=i.useRef(!1);s(()=>(I.current=!0,()=>{I.current=!1}),[]),s(()=>{if(A&&(R.current=A),N&&(T.current=N),A&&N){if(M.current)return M.current(A,N,L);L()}},[A,N,L,M]);let D=i.useMemo(()=>({reference:R,floating:T,setReference:O,setFloating:C}),[O,C]),F=i.useMemo(()=>({reference:A,floating:N}),[A,N]),B=i.useMemo(()=>{let e={position:n,left:0,top:0};if(!F.floating)return e;let t=f(F.floating,v.x),r=f(F.floating,v.y);return g?{...e,transform:"translate("+t+"px, "+r+"px)",...u(F.floating)>=1.5&&{willChange:"transform"}}:{position:n,left:t,top:r}},[n,g,F.floating,v.x,v.y]);return i.useMemo(()=>({...v,update:L,refs:D,elements:F,floatingStyles:B}),[v,L,D,F,B])}},52134:function(e,t,n){"use strict";let r;n.d(t,{wD:function(){return eg},vs:function(){return ev},bQ:function(){return eC},YF:function(){return eA},NI:function(){return eR},JA:function(){return ey},c0:function(){return eZ},qs:function(){return eq}});var o=n(41778),i=n(33747),a=n(86006),l=n.t(a,2),s=n(472),c='input:not([inert]),select:not([inert]),textarea:not([inert]),a[href]:not([inert]),button:not([inert]),[tabindex]:not(slot):not([inert]),audio[controls]:not([inert]),video[controls]:not([inert]),[contenteditable]:not([contenteditable="false"]):not([inert]),details>summary:first-of-type:not([inert]),details:not([inert])',u="undefined"==typeof Element,f=u?function(){}:Element.prototype.matches||Element.prototype.msMatchesSelector||Element.prototype.webkitMatchesSelector,d=!u&&Element.prototype.getRootNode?function(e){var t;return null==e?void 
0:null===(t=e.getRootNode)||void 0===t?void 0:t.call(e)}:function(e){return null==e?void 0:e.ownerDocument},p=function e(t,n){void 0===n&&(n=!0);var r,o=null==t?void 0:null===(r=t.getAttribute)||void 0===r?void 0:r.call(t,"inert");return""===o||"true"===o||n&&t&&e(t.parentNode)},h=function(e){var t,n=null==e?void 0:null===(t=e.getAttribute)||void 0===t?void 0:t.call(e,"contenteditable");return""===n||"true"===n},g=function(e,t,n){if(p(e))return[];var r=Array.prototype.slice.apply(e.querySelectorAll(c));return t&&f.call(e,c)&&r.unshift(e),r=r.filter(n)},m=function e(t,n,r){for(var o=[],i=Array.from(t);i.length;){var a=i.shift();if(!p(a,!1)){if("SLOT"===a.tagName){var l=a.assignedElements(),s=e(l.length?l:a.children,!0,r);r.flatten?o.push.apply(o,s):o.push({scopeParent:a,candidates:s})}else{f.call(a,c)&&r.filter(a)&&(n||!t.includes(a))&&o.push(a);var u=a.shadowRoot||"function"==typeof r.getShadowRoot&&r.getShadowRoot(a),d=!p(u,!1)&&(!r.shadowRootFilter||r.shadowRootFilter(a));if(u&&d){var h=e(!0===u?a.children:u.children,!0,r);r.flatten?o.push.apply(o,h):o.push({scopeParent:a,candidates:h})}else i.unshift.apply(i,a.children)}}}return o},b=function(e){return!isNaN(parseInt(e.getAttribute("tabindex"),10))},v=function(e){if(!e)throw Error("No node provided");return e.tabIndex<0&&(/^(AUDIO|VIDEO|DETAILS)$/.test(e.tagName)||h(e))&&!b(e)?0:e.tabIndex},y=function(e,t){var n=v(e);return n<0&&t&&!b(e)?0:n},x=function(e,t){return e.tabIndex===t.tabIndex?e.documentOrder-t.documentOrder:e.tabIndex-t.tabIndex},w=function(e){return"INPUT"===e.tagName},E=function(e,t){for(var n=0;nsummary:first-of-type")?e.parentElement:e;if(f.call(o,"details:not([open]) *"))return!0;if(n&&"full"!==n&&"legacy-full"!==n){if("non-zero-area"===n)return _(e)}else{if("function"==typeof r){for(var i=e;e;){var a=e.parentElement,l=d(e);if(a&&!a.shadowRoot&&!0===r(a))return _(e);e=e.assignedSlot?e.assignedSlot:a||l===e.ownerDocument?a:l.host}e=i}if(k(e))return!e.getClientRects().length;if("legacy-full"!==n)return!0}return!1},C=function(e){if(/^(INPUT|BUTTON|SELECT|TEXTAREA)$/.test(e.tagName))for(var t=e.parentElement;t;){if("FIELDSET"===t.tagName&&t.disabled){for(var n=0;nv(t))&&(r=e,!((o=t).disabled||p(o)||w(o)&&"hidden"===o.type||O(o,r)||"DETAILS"===o.tagName&&Array.prototype.slice.apply(o.children).some(function(e){return"SUMMARY"===e.tagName})||C(o)))},N=function(e){var t=parseInt(e.getAttribute("tabindex"),10);return!!isNaN(t)||t>=0},R=function e(t){var n=[],r=[];return t.forEach(function(t,o){var i=!!t.scopeParent,a=i?t.scopeParent:t,l=y(a,i),s=i?e(t.candidates):a;0===l?i?n.push.apply(n,s):n.push(a):r.push({documentOrder:o,tabIndex:l,item:t,isScope:i,content:s})}),r.sort(x).reduce(function(e,t){return t.isScope?e.push.apply(e,t.content):e.push(t.content),e},[]).concat(n)},T=function(e,t){return R((t=t||{}).getShadowRoot?m([e],t.includeContainer,{filter:A.bind(null,t),flatten:!1,getShadowRoot:t.getShadowRoot,shadowRootFilter:N}):g(e,t.includeContainer,A.bind(null,t)))};function P(){return(P=Object.assign?Object.assign.bind():function(e){for(var t=1;t"floating-ui-"+L++,D=l["useId".toString()],F=D||function(){let[e,t]=a.useState(()=>j?I():void 0);return M(()=>{null==e&&t(I())},[]),a.useEffect(()=>{j||(j=!0)},[]),e},B=a.createContext(null),z=a.createContext(null),$=()=>{var e;return(null==(e=a.useContext(B))?void 0:e.id)||null},U=()=>a.useContext(z);function H(e){return(null==e?void 0:e.ownerDocument)||document}function Z(){let e=navigator.userAgentData;return null!=e&&e.platform?e.platform:navigator.platform}function q(e){return 
H(e).defaultView||window}function V(e){return!!e&&(e instanceof Element||e instanceof q(e).Element)}function W(e){return!!e&&(e instanceof HTMLElement||e instanceof q(e).HTMLElement)}function G(e){if(0===e.mozInputSource&&e.isTrusted)return!0;let t=/Android/i;return(t.test(Z())||t.test(function(){let e=navigator.userAgentData;return e&&Array.isArray(e.brands)?e.brands.map(e=>{let{brand:t,version:n}=e;return t+"/"+n}).join(" "):navigator.userAgent}()))&&e.pointerType?"click"===e.type&&1===e.buttons:0===e.detail&&!e.pointerType}function K(e){return 0===e.width&&0===e.height||1===e.width&&1===e.height&&0===e.pressure&&0===e.detail&&"mouse"!==e.pointerType||e.width<1&&e.height<1&&0===e.pressure&&0===e.detail}function Y(){return/apple/i.test(navigator.vendor)}function X(e,t){if(!e||!t)return!1;let n=t.getRootNode&&t.getRootNode();if(e.contains(t))return!0;if(n&&function(e){if("undefined"==typeof ShadowRoot)return!1;let t=q(e).ShadowRoot;return e instanceof t||e instanceof ShadowRoot}(n)){let n=t;for(;n;){if(e===n)return!0;n=n.parentNode||n.host}}return!1}function J(e){return"data-floating-ui-"+e}function Q(e){let t=(0,a.useRef)(e);return M(()=>{t.current=e}),t}function ee(e){let t=e.activeElement;for(;(null==(n=t)?void 0:null==(r=n.shadowRoot)?void 0:r.activeElement)!=null;){var n,r;t=t.shadowRoot.activeElement}return t}let et=0;function en(e,t){void 0===t&&(t={});let{preventScroll:n=!1,cancelPrevious:r=!0,sync:o=!1}=t;r&&cancelAnimationFrame(et);let i=()=>null==e?void 0:e.focus({preventScroll:n});o?i():et=requestAnimationFrame(i)}function er(e,t){let n=e.filter(e=>{var n;return e.parentId===t&&(null==(n=e.context)?void 0:n.open)}),r=n;for(;r.length;)r=e.filter(e=>{var t;return null==(t=r)?void 0:t.some(t=>{var n;return e.parentId===t.id&&(null==(n=e.context)?void 0:n.open)})}),n=n.concat(r);return n}function eo(e){return"composedPath"in e?e.composedPath()[0]:e.target}function ei(e){e.preventDefault(),e.stopPropagation()}let ea=()=>({getShadowRoot:!0,displayCheck:"function"==typeof ResizeObserver&&ResizeObserver.toString().includes("[native code]")?"full":"none"});function el(e,t){let n=T(e,ea());"prev"===t&&n.reverse();let r=n.indexOf(ee(H(e)));return n.slice(r+1)[0]}function es(e,t){let n=t||e.currentTarget,r=e.relatedTarget;return!r||!X(n,r)}let ec={border:0,clip:"rect(0 0 0 0)",height:"1px",margin:"-1px",overflow:"hidden",padding:0,position:"fixed",whiteSpace:"nowrap",width:"1px",top:0,left:0};function eu(e){"Tab"===e.key&&(e.target,clearTimeout(r))}let ef=a.forwardRef(function(e,t){let[n,r]=a.useState();M(()=>(Y()&&r("button"),document.addEventListener("keydown",eu),()=>{document.removeEventListener("keydown",eu)}),[]);let o={ref:t,tabIndex:0,role:n,"aria-hidden":!n||void 0,[J("focus-guard")]:"",style:ec};return a.createElement("span",P({},e,o))}),ed=a.createContext(null),ep=()=>a.useContext(ed),eh=a.forwardRef(function(e,t){return a.createElement("button",P({},e,{type:"button",ref:t,tabIndex:-1,style:ec}))});function eg(e){var t;let{context:n,children:r,disabled:o=!1,order:i=["content"],guards:l=!0,initialFocus:c=0,returnFocus:u=!0,modal:f=!0,visuallyHiddenDismiss:d=!1,closeOnFocusOut:p=!0}=e,{open:h,refs:g,nodeId:m,onOpenChange:b,events:v,dataRef:y,elements:{domReference:x,floating:w}}=n,E=!(0,s.J_)()||l,S=Q(i),k=Q(c),_=Q(u),O=U(),C=ep(),A="number"==typeof 
c&&c<0,N=a.useRef(null),R=a.useRef(null),P=a.useRef(!1),j=a.useRef(null),L=a.useRef(!1),I=null!=C,D=x&&"combobox"===x.getAttribute("role")&&W(t=x)&&t.matches("input:not([type='hidden']):not([disabled]),[contenteditable]:not([contenteditable='false']),textarea:not([disabled])"),F=a.useCallback(function(e){return void 0===e&&(e=w),e?T(e,ea()):[]},[w]),B=a.useCallback(e=>{let t=F(e);return S.current.map(e=>x&&"reference"===e?x:w&&"floating"===e?w:t).filter(Boolean).flat()},[x,w,S,F]);function z(e){return!o&&d&&f?a.createElement(eh,{ref:"start"===e?N:R,onClick:e=>b(!1,e.nativeEvent)},"string"==typeof d?d:"Dismiss"):null}a.useEffect(()=>{if(o||!f)return;function e(e){if("Tab"===e.key){X(w,ee(H(w)))&&0===F().length&&!D&&ei(e);let t=B(),n=eo(e);"reference"===S.current[0]&&n===x&&(ei(e),e.shiftKey?en(t[t.length-1]):en(t[1])),"floating"===S.current[1]&&n===w&&e.shiftKey&&(ei(e),en(t[0]))}}let t=H(w);return t.addEventListener("keydown",e),()=>{t.removeEventListener("keydown",e)}},[o,x,w,f,S,g,D,F,B]),a.useEffect(()=>{if(!o&&p&&w&&W(x))return x.addEventListener("focusout",t),x.addEventListener("pointerdown",e),f||w.addEventListener("focusout",t),()=>{x.removeEventListener("focusout",t),x.removeEventListener("pointerdown",e),f||w.removeEventListener("focusout",t)};function e(){L.current=!0,setTimeout(()=>{L.current=!1})}function t(e){let t=e.relatedTarget;queueMicrotask(()=>{let n=!(X(x,t)||X(w,t)||X(t,w)||X(null==C?void 0:C.portalNode,t)||null!=t&&t.hasAttribute(J("focus-guard"))||O&&(er(O.nodesRef.current,m).find(e=>{var n,r;return X(null==(n=e.context)?void 0:n.elements.floating,t)||X(null==(r=e.context)?void 0:r.elements.domReference,t)})||(function(e,t){var n;let r=[],o=null==(n=e.find(e=>e.id===t))?void 0:n.parentId;for(;o;){let t=e.find(e=>e.id===o);o=null==t?void 0:t.parentId,t&&(r=r.concat(t))}return r})(O.nodesRef.current,m).find(e=>{var n,r;return(null==(n=e.context)?void 0:n.elements.floating)===t||(null==(r=e.context)?void 0:r.elements.domReference)===t})));t&&n&&!L.current&&t!==j.current&&(P.current=!0,b(!1,e))})}},[o,x,w,f,m,O,C,b,p]),a.useEffect(()=>{var e;if(o)return;let t=Array.from((null==C?void 0:null==(e=C.portalNode)?void 0:e.querySelectorAll("["+J("portal")+"]"))||[]);if(w&&f){let e=[w,...t,N.current,R.current].filter(e=>null!=e),n=E?s.Ry:s.cJ,r=n(S.current.includes("reference")||D?e.concat(x||[]):e,void 0,J("inert"));return()=>{r()}}},[o,x,w,f,S,C,D,E]),M(()=>{if(o||!w)return;let e=H(w),t=ee(e);queueMicrotask(()=>{let e=B(w),n=k.current,r=("number"==typeof n?e[n]:n.current)||w,o=X(w,t);A||o||!h||en(r,{preventScroll:r===w})})},[o,h,w,A,B,k]),M(()=>{if(o||!w)return;let e=!1,t=H(w),n=ee(t),r=y.current;function i(t){if("escapeKey"===t.type&&g.domReference.current&&(j.current=g.domReference.current),["referencePress","escapeKey"].includes(t.type))return;let n=t.data.returnFocus;"object"==typeof n?(P.current=!1,e=n.preventScroll):P.current=!n}return j.current=n,v.on("dismiss",i),()=>{v.off("dismiss",i);let n=ee(t),o=X(w,n)||O&&er(O.nodesRef.current,m).some(e=>{var t;return X(null==(t=e.context)?void 0:t.elements.floating,n)})||r.openEvent&&["click","mousedown"].includes(r.openEvent.type);o&&g.domReference.current&&(j.current=g.domReference.current),_.current&&W(j.current)&&!P.current&&en(j.current,{cancelPrevious:!1,preventScroll:e})}},[o,w,_,y,g,v,O,m]),M(()=>{if(!o&&C)return C.setFocusManagerState({...n,modal:f,closeOnFocusOut:p,open:h}),()=>{C.setFocusManagerState(null)}},[o,C,f,h,p,n]),M(()=>{if(!o&&w&&"function"==typeof MutationObserver){let e=()=>{let 
e=w.getAttribute("tabindex");S.current.includes("floating")||ee(H(w))!==g.domReference.current&&0===F().length?"0"!==e&&w.setAttribute("tabindex","0"):"-1"!==e&&w.setAttribute("tabindex","-1")};e();let t=new MutationObserver(e);return t.observe(w,{childList:!0,subtree:!0,attributes:!0}),()=>{t.disconnect()}}},[o,w,g,S,F]);let $=!o&&E&&!D&&(I||f);return a.createElement(a.Fragment,null,$&&a.createElement(ef,{"data-type":"inside",ref:null==C?void 0:C.beforeInsideRef,onFocus:e=>{if(f){let e=B();en("reference"===i[0]?e[0]:e[e.length-1])}else if(null!=C&&C.preserveTabOrder&&C.portalNode){if(P.current=!1,es(e,C.portalNode)){let e=el(document.body,"next")||x;null==e||e.focus()}else{var t;null==(t=C.beforeOutsideRef.current)||t.focus()}}}}),!D&&z("start"),r,z("end"),$&&a.createElement(ef,{"data-type":"inside",ref:null==C?void 0:C.afterInsideRef,onFocus:e=>{if(f)en(B()[0]);else if(null!=C&&C.preserveTabOrder&&C.portalNode){if(p&&(P.current=!0),es(e,C.portalNode)){let e=el(document.body,"prev")||x;null==e||e.focus()}else{var t;null==(t=C.afterOutsideRef.current)||t.focus()}}}}))}function em(e,t){let n=e.compareDocumentPosition(t);return n&Node.DOCUMENT_POSITION_FOLLOWING||n&Node.DOCUMENT_POSITION_CONTAINED_BY?-1:n&Node.DOCUMENT_POSITION_PRECEDING||n&Node.DOCUMENT_POSITION_CONTAINS?1:0}let eb=a.createContext({register:()=>{},unregister:()=>{},map:new Map,elementsRef:{current:[]}});function ev(e){let{children:t,elementsRef:n,labelsRef:r}=e,[o,i]=a.useState(()=>new Map),l=a.useCallback(e=>{i(t=>new Map(t).set(e,null))},[]),s=a.useCallback(e=>{i(t=>{let n=new Map(t);return n.delete(e),n})},[]);return M(()=>{let e=new Map(o),t=Array.from(e.keys()).sort(em);t.forEach((t,n)=>{e.set(t,n)}),!function(e,t){if(e.size!==t.size)return!1;for(let[n,r]of e.entries())if(r!==t.get(n))return!1;return!0}(o,e)&&i(e)},[o]),a.createElement(eb.Provider,{value:a.useMemo(()=>({register:l,unregister:s,map:o,elementsRef:n,labelsRef:r}),[l,s,o,n,r])},t)}function ey(e){let{label:t}=void 0===e?{}:e,[n,r]=a.useState(null),o=a.useRef(null),{register:i,unregister:l,map:s,elementsRef:c,labelsRef:u}=a.useContext(eb),f=a.useCallback(e=>{if(o.current=e,null!==n&&(c.current[n]=e,u)){var r;let o=void 0!==t;u.current[n]=o?t:null!=(r=null==e?void 0:e.textContent)?r:null}},[n,c,u,t]);return M(()=>{let e=o.current;if(e)return i(e),()=>{l(e)}},[i,l]),M(()=>{let e=o.current?s.get(o.current):null;null!=e&&r(e)},[s]),a.useMemo(()=>({ref:f,index:null==n?-1:n}),[n,f])}let ex=l["useInsertionEffect".toString()],ew=ex||(e=>e());function eE(e){let t=a.useRef(()=>{});return ew(()=>{t.current=e}),a.useCallback(function(){for(var e=arguments.length,n=Array(e),r=0;r{var t,n;return{escapeKeyBubbles:"boolean"==typeof e?e:null!=(t=null==e?void 0:e.escapeKey)&&t,outsidePressBubbles:"boolean"==typeof e?e:null==(n=null==e?void 0:e.outsidePress)||n}};function eC(e,t){void 0===t&&(t={});let{open:n,onOpenChange:r,events:i,nodeId:l,elements:{reference:s,domReference:c,floating:u},dataRef:f}=e,{enabled:d=!0,escapeKey:p=!0,outsidePress:h=!0,outsidePressEvent:g="pointerdown",referencePress:m=!1,referencePressEvent:b="pointerdown",ancestorScroll:v=!1,bubbles:y}=t,x=U(),w=null!=$(),E=eE("function"==typeof h?h:()=>!1),S="function"==typeof h?E:h,k=a.useRef(!1),{escapeKeyBubbles:_,outsidePressBubbles:O}=eO(y),C=eE(e=>{if(!n||!d||!p||"Escape"!==e.key)return;let t=x?er(x.nodesRef.current,l):[];if(!_&&(e.stopPropagation(),t.length>0)){let e=!0;if(t.forEach(t=>{var 
n;if(null!=(n=t.context)&&n.open&&!t.context.dataRef.current.__escapeKeyBubbles){e=!1;return}}),!e)return}i.emit("dismiss",{type:"escapeKey",data:{returnFocus:{preventScroll:!1}}}),r(!1,"nativeEvent"in e?e.nativeEvent:e)}),A=eE(e=>{let t=k.current;if(k.current=!1,t||"function"==typeof S&&!S(e))return;let n=eo(e);if(W(n)&&u){let t=n.clientWidth>0&&n.scrollWidth>n.clientWidth,r=n.clientHeight>0&&n.scrollHeight>n.clientHeight,o=r&&e.offsetX>n.clientWidth;if(r){let t="rtl"===q(u).getComputedStyle(n).direction;t&&(o=e.offsetX<=n.offsetWidth-n.clientWidth)}if(o||t&&e.offsetY>n.clientHeight)return}let o=x&&er(x.nodesRef.current,l).some(t=>{var n;return eS(e,null==(n=t.context)?void 0:n.elements.floating)});if(eS(e,u)||eS(e,c)||o)return;let a=x?er(x.nodesRef.current,l):[];if(a.length>0){let e=!0;if(a.forEach(t=>{var n;if(null!=(n=t.context)&&n.open&&!t.context.dataRef.current.__outsidePressBubbles){e=!1;return}}),!e)return}i.emit("dismiss",{type:"outsidePress",data:{returnFocus:w?{preventScroll:!0}:G(e)||K(e)}}),r(!1,e)});return a.useEffect(()=>{if(!n||!d)return;function e(e){r(!1,e)}f.current.__escapeKeyBubbles=_,f.current.__outsidePressBubbles=O;let t=H(u);p&&t.addEventListener("keydown",C),S&&t.addEventListener(g,A);let i=[];return v&&(V(c)&&(i=(0,o.Kx)(c)),V(u)&&(i=i.concat((0,o.Kx)(u))),!V(s)&&s&&s.contextElement&&(i=i.concat((0,o.Kx)(s.contextElement)))),(i=i.filter(e=>{var n;return e!==(null==(n=t.defaultView)?void 0:n.visualViewport)})).forEach(t=>{t.addEventListener("scroll",e,{passive:!0})}),()=>{p&&t.removeEventListener("keydown",C),S&&t.removeEventListener(g,A),i.forEach(t=>{t.removeEventListener("scroll",e)})}},[f,u,c,s,p,S,g,n,r,v,d,_,O,C,A]),a.useEffect(()=>{k.current=!1},[S,g]),a.useMemo(()=>d?{reference:{onKeyDown:C,[ek[b]]:e=>{m&&(i.emit("dismiss",{type:"referencePress",data:{returnFocus:!1}}),r(!1,e.nativeEvent))}},floating:{onKeyDown:C,[e_[g]]:()=>{k.current=!0}}}:{},[d,i,m,g,b,r,C])}function eA(e){var t;void 0===e&&(e={});let{open:n=!1,onOpenChange:r,nodeId:o}=e,[l,s]=a.useState(null),c=(null==(t=e.elements)?void 0:t.reference)||l,u=(0,i.YF)(e),f=U(),d=eE((e,t)=>{e&&(h.current.openEvent=t),null==r||r(e,t)}),p=a.useRef(null),h=a.useRef({}),g=a.useState(()=>(function(){let e=new Map;return{emit(t,n){var r;null==(r=e.get(t))||r.forEach(e=>e(n))},on(t,n){e.set(t,[...e.get(t)||[],n])},off(t,n){var r;e.set(t,(null==(r=e.get(t))?void 0:r.filter(e=>e!==n))||[])}}})())[0],m=F(),b=a.useCallback(e=>{let t=V(e)?{getBoundingClientRect:()=>e.getBoundingClientRect(),contextElement:e}:e;u.refs.setReference(t)},[u.refs]),v=a.useCallback(e=>{(V(e)||null===e)&&(p.current=e,s(e)),(V(u.refs.reference.current)||null===u.refs.reference.current||null!==e&&!V(e))&&u.refs.setReference(e)},[u.refs]),y=a.useMemo(()=>({...u.refs,setReference:v,setPositionReference:b,domReference:p}),[u.refs,v,b]),x=a.useMemo(()=>({...u.elements,domReference:c}),[u.elements,c]),w=a.useMemo(()=>({...u,refs:y,elements:x,dataRef:h,nodeId:o,floatingId:m,events:g,open:n,onOpenChange:d}),[u,o,m,g,n,d,y,x]);return M(()=>{let e=null==f?void 0:f.nodesRef.current.find(e=>e.id===o);e&&(e.context=w)}),a.useMemo(()=>({...u,context:w,refs:y,elements:x}),[u,y,x,w])}function eN(e,t,n){let r=new Map;return{..."floating"===n&&{tabIndex:-1},...e,...t.map(e=>e?e[n]:null).concat(e).reduce((e,t)=>(t&&Object.entries(t).forEach(t=>{let[n,o]=t;if(0===n.indexOf("on")){if(r.has(n)||r.set(n,[]),"function"==typeof o){var i;null==(i=r.get(n))||i.push(o),e[n]=function(){for(var e,t=arguments.length,o=Array(t),i=0;ie(...o)).find(e=>void 0!==e)}}}else 
e[n]=o}),e),{})}}function eR(e){void 0===e&&(e=[]);let t=e,n=a.useCallback(t=>eN(t,e,"reference"),t),r=a.useCallback(t=>eN(t,e,"floating"),t),o=a.useCallback(t=>eN(t,e,"item"),e.map(e=>null==e?void 0:e.item));return a.useMemo(()=>({getReferenceProps:n,getFloatingProps:r,getItemProps:o}),[n,r,o])}let eT=!1,eP="ArrowUp",eM="ArrowDown",ej="ArrowLeft",eL="ArrowRight";function eI(e,t,n){return Math.floor(e/t)!==n}function eD(e,t){return t<0||t>=e.current.length}function eF(e,t){let{startingIndex:n=-1,decrement:r=!1,disabledIndices:o,amount:i=1}=void 0===t?{}:t,a=e.current,l=n;do{var s,c;l+=r?-i:i}while(l>=0&&l<=a.length-1&&(o?o.includes(l):null==a[l]||(null==(s=a[l])?void 0:s.hasAttribute("disabled"))||(null==(c=a[l])?void 0:c.getAttribute("aria-disabled"))==="true"));return l}function eB(e,t,n){switch(e){case"vertical":return t;case"horizontal":return n;default:return t||n}}function ez(e,t){return eB(t,e===eP||e===eM,e===ej||e===eL)}function e$(e,t,n){return eB(t,e===eM,n?e===ej:e===eL)||"Enter"===e||" "==e||""===e}function eU(e,t){return eF(e,{disabledIndices:t})}function eH(e,t){return eF(e,{decrement:!0,startingIndex:e.current.length,disabledIndices:t})}function eZ(e,t){let{open:n,onOpenChange:r,refs:o,elements:{domReference:i,floating:l}}=e,{listRef:s,activeIndex:c,onNavigate:u=()=>{},enabled:f=!0,selectedIndex:d=null,allowEscape:p=!1,loop:h=!1,nested:g=!1,rtl:m=!1,virtual:b=!1,focusItemOnOpen:v="auto",focusItemOnHover:y=!0,openOnArrowKeyDown:x=!0,disabledIndices:w,orientation:E="vertical",cols:S=1,scrollItemIntoView:k=!0}=t,_=$(),O=U(),C=eE(u),A=a.useRef(v),N=a.useRef(null!=d?d:-1),R=a.useRef(null),T=a.useRef(!0),P=a.useRef(C),j=a.useRef(!!l),L=a.useRef(!1),I=a.useRef(!1),D=Q(w),F=Q(n),B=Q(k),[z,q]=a.useState(),V=eE(function(e,t,n){void 0===n&&(n=!1);let r=e.current[t.current];r&&(b?q(r.id):en(r,{preventScroll:!0,sync:!!(Z().toLowerCase().startsWith("mac")&&!navigator.maxTouchPoints&&Y())&&(eT||L.current)}),requestAnimationFrame(()=>{let e=B.current,t=e&&r&&(n||!T.current);t&&(null==r.scrollIntoView||r.scrollIntoView("boolean"==typeof e?{block:"nearest",inline:"nearest"}:e))}))});M(()=>{document.createElement("div").focus({get preventScroll(){return eT=!0,!1}})},[]),M(()=>{f&&(n&&l?A.current&&null!=d&&(I.current=!0,C(d)):j.current&&(N.current=-1,P.current(null)))},[f,n,l,d,C]),M(()=>{if(f&&n&&l){if(null==c){if(L.current=!1,null==d&&(j.current&&(N.current=-1,V(s,N)),!j.current&&A.current&&(null!=R.current||!0===A.current&&null==R.current))){let e=0,t=()=>{if(null==s.current[0]){if(e<2){let n=e?requestAnimationFrame:queueMicrotask;n(t)}e++}else N.current=null==R.current||e$(R.current,E,m)||g?eU(s,D.current):eH(s,D.current),R.current=null,C(N.current)};t()}}else eD(s,c)||(N.current=c,V(s,N,I.current),I.current=!1)}},[f,n,l,c,d,g,s,E,m,C,V,D]),M(()=>{if(f&&j.current&&!l&&O){var e,t;let n=O.nodesRef.current,r=null==(e=n.find(e=>e.id===_))?void 0:null==(t=e.context)?void 0:t.elements.floating,o=ee(H(l)),i=n.some(e=>e.context&&X(e.context.elements.floating,o));r&&!i&&r.focus({preventScroll:!0})}},[f,l,O,_]),M(()=>{P.current=C,j.current=!!l}),M(()=>{n||(R.current=null)},[n]);let J=null!=c,et=a.useMemo(()=>{function e(e){if(!n)return;let t=s.current.indexOf(e);-1!==t&&C(t)}let t={onFocus(t){let{currentTarget:n}=t;e(n)},onClick:e=>{let{currentTarget:t}=e;return t.focus({preventScroll:!0})},...y&&{onMouseMove(t){let{currentTarget:n}=t;e(n)},onPointerLeave(e){let{pointerType:t}=e;T.current&&"touch"!==t&&(N.current=-1,V(s,N),C(null),b||en(o.floating.current,{preventScroll:!0}))}}};return 
t},[n,o,V,y,s,C,b]);return a.useMemo(()=>{if(!f)return{};let e=D.current;function t(t){var a;if(T.current=!1,L.current=!0,!F.current&&t.currentTarget===o.floating.current)return;if(g&&(a=t.key,eB(E,m?a===eL:a===ej,a===eP))){ei(t),r(!1,t.nativeEvent),W(i)&&i.focus();return}let l=N.current,c=eU(s,e),u=eH(s,e);if("Home"===t.key&&(ei(t),N.current=c,C(N.current)),"End"===t.key&&(ei(t),N.current=u,C(N.current)),S>1){let n=N.current;if(t.key===eP){if(ei(t),-1===n)N.current=u;else if(N.current=eF(s,{startingIndex:n,amount:S,decrement:!0,disabledIndices:e}),h&&(n-Se?r:r-S}eD(s,N.current)&&(N.current=n),C(N.current)}if(t.key===eM&&(ei(t),-1===n?N.current=c:(N.current=eF(s,{startingIndex:n,amount:S,disabledIndices:e}),h&&n+S>u&&(N.current=eF(s,{startingIndex:n%S-S,amount:S,disabledIndices:e}))),eD(s,N.current)&&(N.current=n),C(N.current)),"both"===E){let r=Math.floor(n/S);t.key===eL&&(ei(t),n%S!=S-1?(N.current=eF(s,{startingIndex:n,disabledIndices:e}),h&&eI(N.current,S,r)&&(N.current=eF(s,{startingIndex:n-n%S-1,disabledIndices:e}))):h&&(N.current=eF(s,{startingIndex:n-n%S-1,disabledIndices:e})),eI(N.current,S,r)&&(N.current=n)),t.key===ej&&(ei(t),n%S!=0?(N.current=eF(s,{startingIndex:n,disabledIndices:e,decrement:!0}),h&&eI(N.current,S,r)&&(N.current=eF(s,{startingIndex:n+(S-n%S),decrement:!0,disabledIndices:e}))):h&&(N.current=eF(s,{startingIndex:n+(S-n%S),decrement:!0,disabledIndices:e})),eI(N.current,S,r)&&(N.current=n));let o=Math.floor(u/S)===r;eD(s,N.current)&&(h&&o?N.current=t.key===ej?u:eF(s,{startingIndex:n-n%S-1,disabledIndices:e}):N.current=n),C(N.current);return}}if(ez(t.key,E)){if(ei(t),n&&!b&&ee(t.currentTarget.ownerDocument)===t.currentTarget){N.current=e$(t.key,E,m)?c:u,C(N.current);return}e$(t.key,E,m)?h?N.current=l>=u?p&&l!==s.current.length?-1:c:eF(s,{startingIndex:l,disabledIndices:e}):N.current=Math.min(u,eF(s,{startingIndex:l,disabledIndices:e})):h?N.current=l<=c?p&&-1!==l?s.current.length:u:eF(s,{startingIndex:l,decrement:!0,disabledIndices:e}):N.current=Math.max(c,eF(s,{startingIndex:l,decrement:!0,disabledIndices:e})),eD(s,N.current)?C(null):C(N.current)}}function a(e){"auto"===v&&G(e.nativeEvent)&&(A.current=!0)}let l=b&&n&&J&&{"aria-activedescendant":z};return{reference:{...l,onKeyDown(o){var i;T.current=!1;let a=0===o.key.indexOf("Arrow");if(b&&n)return t(o);if(!n&&!x&&a)return;let l=a||"Enter"===o.key||""===o.key.trim(),c=ez(o.key,E),u=(i=o.key,eB(E,m?i===ej:i===eL,i===eM));if(l&&(R.current=g&&c?null:o.key),g){u&&(ei(o),n?(N.current=eU(s,e),C(N.current)):r(!0,o.nativeEvent));return}c&&(null!=d&&(N.current=d),ei(o),!n&&x?r(!0,o.nativeEvent):t(o),n&&C(N.current))},onFocus(){n&&C(null)},onPointerDown:function(e){A.current=v,"auto"===v&&K(e.nativeEvent)&&(A.current=!0)},onMouseDown:a,onClick:a},floating:{"aria-orientation":"both"===E?void 0:E,...l,onKeyDown:t,onPointerMove(){T.current=!0}},item:et}},[i,o,z,D,F,s,f,E,m,b,n,J,g,d,x,p,S,h,v,C,r,et])}function eq(e,t){void 0===t&&(t={});let{open:n,floatingId:r}=e,{enabled:o=!0,role:i="dialog"}=t,l=F();return a.useMemo(()=>{let e={id:r,role:i};return o?"tooltip"===i?{reference:{"aria-describedby":n?r:void 0},floating:e}:{reference:{"aria-expanded":n?"true":"false","aria-haspopup":"alertdialog"===i?"dialog":i,"aria-controls":n?r:void 0,..."listbox"===i&&{role:"combobox"},..."menu"===i&&{id:l}},floating:{...e,..."menu"===i&&{"aria-labelledby":l}}}:{}},[o,i,n,r,l])}},29872:function(e,t,n){"use strict";var r,o=Object.assign||function(e){for(var t=1;t=0)&&Object.prototype.hasOwnProperty.call(e,r)&&(n[r]=e[r]);return 
n}(e,["fill","width","height","style"]);return i.default.createElement("svg",o({viewBox:"0 0 24 24",style:o({fill:void 0===t?"currentColor":t,width:void 0===n?24:n,height:void 0===r?24:r},l)},s),i.default.createElement("path",{d:"M21,7L9,19L3.5,13.5L4.91,12.09L9,16.17L19.59,5.59L21,7Z"}))}},42684:function(e,t,n){"use strict";var r,o=Object.assign||function(e){for(var t=1;t=0)&&Object.prototype.hasOwnProperty.call(e,r)&&(n[r]=e[r]);return n}(e,["fill","width","height","style"]);return i.default.createElement("svg",o({viewBox:"0 0 24 24",style:o({fill:void 0===t?"currentColor":t,width:void 0===n?24:n,height:void 0===r?24:r},l)},s),i.default.createElement("path",{d:"M12,18.17L8.83,15L7.42,16.41L12,21L16.59,16.41L15.17,15M12,5.83L15.17,9L16.58,7.59L12,3L7.41,7.59L8.83,9L12,5.83Z"}))}},16329:function(e,t,n){!function(e,t,n){"use strict";var r=function(e){if(e&&e.__esModule)return e;var t=Object.create(null);return e&&Object.keys(e).forEach(function(n){if("default"!==n){var r=Object.getOwnPropertyDescriptor(e,n);Object.defineProperty(t,n,r.get?r:{enumerable:!0,get:function(){return e[n]}})}}),t.default=e,Object.freeze(t)}(t);function o(){return(o=Object.assign?Object.assign.bind():function(e){for(var t=1;t{this.listeners.add(e);let t=this.options?.onSubscribe?.(e,this);return()=>{this.listeners.delete(e),t?.()}};setState=e=>{let t=this.state;this.state=this.options?.updateFn?this.options.updateFn(t)(e):e(t),this.state!==t&&(this.options?.onUpdate?.(this.state,t),this.queue.push(()=>{this.listeners.forEach(e=>e(this.state,t))}),this.#e())};#e=()=>{this.batching||(this.queue.forEach(e=>e()),this.queue=[])};batch=e=>{this.batching=!0,e(),this.batching=!1,this.#e()}}function l(e,t){if(Object.is(e,t))return!0;if("object"!=typeof e||null===e||"object"!=typeof t||null===t)return!1;let n=Object.keys(e);if(n.length!==Object.keys(t).length)return!1;for(let r=0;r(e.preventDefault(),e.returnValue=""),f=()=>{removeEventListener(c,u,{capture:!0})};function d(e){let t=e.getLocation(),n=()=>{},r=new Set,o=[],i=[],a=()=>{if(o.length)o[0]?.(a,()=>{o=[],f()});else{for(;i.length;)i.shift()?.();s()}},l=e=>{i.push(e),a()},s=()=>{t=e.getLocation(),r.forEach(e=>e())};return{get location(){return t},listen:t=>(0===r.size&&(n=e.listener(s)),r.add(t),()=>{r.delete(t),0===r.size&&n()}),push:(t,n)=>{l(()=>{e.pushState(t,n)})},replace:(t,n)=>{l(()=>{e.replaceState(t,n)})},go:t=>{l(()=>{e.go(t)})},back:()=>{l(()=>{e.back()})},forward:()=>{l(()=>{e.forward()})},createHref:t=>e.createHref(t),block:e=>(o.push(e),1===o.length&&addEventListener(c,u,{capture:!0}),()=>{(o=o.filter(t=>t!==e)).length||f()})}}function p(e){let t=e?.getHref??(()=>`${window.location.pathname}${window.location.hash}${window.location.search}`),n=e?.createHref??(e=>e);return d({getLocation:()=>g(t(),history.state),listener:e=>(window.addEventListener(s,e),()=>{window.removeEventListener(s,e)}),pushState:(e,t)=>{window.history.pushState({...t,key:m()},"",n(e))},replaceState:(e,t)=>{window.history.replaceState({...t,key:m()},"",n(e))},back:()=>window.history.back(),forward:()=>window.history.forward(),go:e=>window.history.go(e),createHref:e=>n(e)})}function h(e={initialEntries:["/"]}){let t=e.initialEntries,n=e.initialIndex??t.length-1,r={};return d({getLocation:()=>g(t[n],r),listener:()=>()=>{},pushState:(e,o)=>{r={...o,key:m()},t.push(e),n++},replaceState:(e,o)=>{r={...o,key:m()},t[n]=e},back:()=>{n--},forward:()=>{n=Math.min(n+1,t.length-1)},go:e=>window.history.go(e),createHref:e=>e})}function g(e,t){let 
n=e.indexOf("#"),r=e.indexOf("?");return{href:e,pathname:e.substring(0,n>0?r>0?Math.min(n,r):n:r>0?r:e.length),hash:n>-1?e.substring(n,r):"",search:r>-1?e.substring(r):"",state:t}}function m(){return(Math.random()+1).toString(36).substring(7)}function b(e){return e[e.length-1]}function v(e,t){return"function"==typeof e?e(t):e}function y(e,t){return t.reduce((t,n)=>(t[n]=e[n],t),{})}function x(e,t){if(e===t)return e;let n=Array.isArray(e)&&Array.isArray(t);if(n||w(e)&&w(t)){let r=n?e.length:Object.keys(e).length,o=n?t:Object.keys(t),i=o.length,a=n?[]:{},l=0;for(let r=0;r!S(e[n],t[n])):!(!Array.isArray(e)||!Array.isArray(t))&&e.length===t.length&&e.every((e,n)=>S(e,t[n])))}function k(e){return _(e.filter(Boolean).join("/"))}function _(e){return e.replace(/\/{2,}/g,"/")}function O(e){return"/"===e?e:e.replace(/^\/{1,}/,"")}function C(e){return"/"===e?e:e.replace(/\/{1,}$/,"")}function A(e){return C(O(e))}function N(e,t,n){t=t.replace(RegExp(`^${e}`),"/"),n=n.replace(RegExp(`^${e}`),"/");let r=R(t),o=R(n);return o.forEach((e,t)=>{if("/"===e.value)t?t===o.length-1&&r.push(e):r=[e];else if(".."===e.value)r.length>1&&"/"===b(r)?.value&&r.pop(),r.pop();else{if("."===e.value)return;r.push(e)}}),_(k([e,...r.map(e=>e.value)]))}function R(e){if(!e)return[];let t=[];if("/"===(e=_(e)).slice(0,1)&&(e=e.substring(1),t.push({type:"pathname",value:"/"})),!e)return t;let n=e.split("/").filter(Boolean);return t.push(...n.map(e=>"$"===e||"*"===e?{type:"wildcard",value:e}:"$"===e.charAt(0)?{type:"param",value:e}:{type:"pathname",value:e})),"/"===e.slice(-1)&&(e=e.substring(1),t.push({type:"pathname",value:"/"})),t}function T(e,t,n){return k(R(e).map(e=>["$","*"].includes(e.value)&&!n?"":"param"===e.type?t[e.value.substring(1)]??"":e.value))}function P(e,t,n){let r=M(e,t,n);if(!n.to||r)return r??{}}function M(e,t,n){if(!t.startsWith(e))return;let r=R(t="/"!=e?t.substring(e.length):t),o=R(`${n.to??"$"}`);"/"===b(r)?.value&&r.pop();let i={};return(()=>{for(let e=0;ee.value)),!0);if("pathname"===a.type){if("/"===a.value&&!t?.value)return!0;if(t){if(n.caseSensitive){if(a.value!==t.value)return!1}else if(a.value.toLowerCase()!==t.value.toLowerCase())return!1}}if(!t)return!1;if("param"===a.type){if("/"===t?.value)return!1;"$"!==t.value.charAt(0)&&(i[a.value.substring(1)]=t.value)}}if(l&&!s)return!!n.fuzzy}return!0})()?i:void 0}function j(e,t){var n,r,o,i="";for(n in e)if(void 0!==(o=e[n])){if(Array.isArray(o))for(r=0;r{this.originalIndex=e.originalIndex,this.router=e.router;let t=this.options,n=!t?.path&&!t?.id;this.parentRoute=this.options?.getParentRoute?.(),n?this.path=D:i(this.parentRoute);let r=n?D:t.path;r&&"/"!==r&&(r=A(r));let o=t?.id||r,a=n?D:k([this.parentRoute.id===D?"":this.parentRoute.id,o]);r===D&&(r="/"),a!==D&&(a=k(["/",a]));let l=a===D?"/":C(k([this.parentRoute.fullPath,r]));this.path=r,this.id=a,this.fullPath=l};addChildren=e=>(this.children=e,this)}class B extends F{constructor(e){super(e)}static withRouterContext=()=>e=>new B(e)}let z=U(JSON.parse),$=H(JSON.stringify);function U(e){return t=>{"?"===t.substring(0,1)&&(t=t.substring(1));let n=I(t);for(let t in n){let r=n[t];if("string"==typeof r)try{n[t]=e(r)}catch(e){}}return n}}function H(e){return t=>{Object.keys(t={...t}).forEach(n=>{let r=t[n];if(void 0===r||void 0===r)delete t[n];else if(r&&"object"==typeof r&&null!==r)try{t[n]=e(r)}catch(e){}});let n=j(t).toString();return n?`?${n}`:""}}let Z=async({router:e,routeMatch:t})=>{let n=e.buildNext({to:".",search:e=>({...e??{},__data:{matchId:t.id}})}),r=await 
fetch(n.href,{method:"GET",signal:t.abortController.signal});if(r.ok)return r.json();throw Error("Failed to fetch match data")};class q{#t;startedLoadingAt=Date.now();resolveNavigation=()=>{};constructor(e){this.options={defaultPreloadDelay:50,context:void 0,...e,stringifySearch:e?.stringifySearch??$,parseSearch:e?.parseSearch??z,fetchServerDataFn:e?.fetchServerDataFn??Z},this.__store=new a(W(),{onUpdate:e=>{this.state=e}}),this.state=this.__store.state,this.basepath="",this.update(e),this.options.Router?.(this);let t=this.buildNext({hash:!0,fromCurrent:!0,search:!0,state:!0});this.state.latestLocation.href!==t.href&&this.#n({...t,replace:!0})}reset=()=>{this.__store.setState(e=>Object.assign(e,W()))};mount=()=>(V||this.state.currentMatches.length||this.safeLoad(),()=>{});update=e=>{if(Object.assign(this.options,e),!this.history||this.options.history&&this.options.history!==this.history){this.#t&&this.#t(),this.history=this.options.history??(V?h():p());let e=this.#r();this.__store.setState(t=>({...t,latestLocation:e,currentLocation:e})),this.#t=this.history.listen(()=>{this.safeLoad({next:this.#r(this.state.latestLocation)})})}let{basepath:t,routeTree:n}=this.options;return this.basepath=`/${A(t??"")??""}`,n&&(this.routesById={},this.routeTree=this.#o(n)),this};buildNext=e=>{let t=this.#i(e),n=this.matchRoutes(t.pathname);return this.#i({...e,__matches:n})};cancelMatches=()=>{[...this.state.currentMatches,...this.state.pendingMatches||[]].forEach(e=>{e.cancel()})};safeLoad=e=>{this.load(e).catch(e=>{console.warn(e),i(!1)})};load=async e=>{let t,n=Date.now(),r=n;if(this.startedLoadingAt=r,this.cancelMatches(),this.__store.batch(()=>{e?.next&&this.__store.setState(t=>({...t,latestLocation:e.next})),t=this.matchRoutes(this.state.latestLocation.pathname,{strictParseParams:!0}),this.__store.setState(e=>({...e,status:"pending",pendingMatches:t,pendingLocation:this.state.latestLocation}))}),await this.loadMatches(t,this.state.pendingLocation),this.startedLoadingAt!==r)return this.navigationPromise;let o=this.state.currentMatches,i=[],a=[];o.forEach(e=>{t.find(t=>t.id===e.id)?a.push(e):i.push(e)});let l=t.filter(e=>!o.find(t=>t.id===e.id));n=Date.now(),i.forEach(e=>{e.__onExit?.({params:e.params,search:e.state.routeSearch}),"error"===e.state.status&&this.__store.setState(e=>({...e,status:"idle",error:void 0}))}),a.forEach(e=>{e.route.options.onTransition?.({params:e.params,search:e.state.routeSearch})}),l.forEach(e=>{e.__onExit=e.route.options.onLoaded?.({params:e.params,search:e.state.search})});let s=this.state.currentLocation;this.__store.setState(e=>({...e,status:"idle",currentLocation:this.state.latestLocation,currentMatches:t,pendingLocation:void 0,pendingMatches:void 0})),t.forEach(e=>{e.__commit()}),s.href!==this.state.currentLocation.href&&this.options.onRouteChange?.(),this.resolveNavigation()};getRoute=e=>{let t=this.routesById[e];return i(t),t};loadRoute=async(e=this.state.latestLocation)=>{let t=this.buildNext(e),n=this.matchRoutes(t.pathname,{strictParseParams:!0});return await this.loadMatches(n,t),n};preloadRoute=async(e=this.state.latestLocation)=>{let t=this.buildNext(e),n=this.matchRoutes(t.pathname,{strictParseParams:!0});return await this.loadMatches(n,t,{preload:!0}),n};matchRoutes=(e,t)=>{let n=[];if(!this.routeTree)return n;let r=[...this.state.currentMatches,...this.state.pendingMatches??[]],o=async i=>{let a=b(n)?.params??{},l=this.options.filterRoutes?.(i)??i,s=[],c=(n,r)=>(r.some(r=>{let o=r.children;if(!r.path&&o?.length)return c([...s,r],o);let 
i=!("/"===r.path&&!o?.length),l=P(this.basepath,e,{to:r.fullPath,fuzzy:i,caseSensitive:r.options.caseSensitive??this.options.caseSensitive});if(l){let e;try{e=r.options.parseParams?.(l)??l}catch(e){if(t?.strictParseParams)throw e}a={...a,...e}}return l&&(s=[...n,r]),!!s.length}),!!s.length);if(c([],l),!s.length)return;s.forEach(e=>{let t=T(e.path,a),o=T(e.id,a,!0),i=r.find(e=>e.id===o)||new Y(this,e,{id:o,params:a,pathname:k([this.basepath,t])});n.push(i)});let u=b(s).children;u?.length&&o(u)};return o([this.routeTree]),n};loadMatches=async(e,t,n)=>{let r;try{await Promise.all(e.map(async(e,t)=>{try{await e.route.options.beforeLoad?.({router:this,match:e})}catch(o){if(G(o))throw o;r=r??t;let n=e.route.options.onBeforeLoadError??e.route.options.onError;try{n?.(o)}catch(t){if(G(t))throw t;return void e.__store.setState(e=>({...e,error:t,status:"error",updatedAt:Date.now()}))}e.__store.setState(e=>({...e,error:o,status:"error",updatedAt:Date.now()}))}}))}catch(e){if(G(e))return void(n?.preload||this.navigate(e));throw e}let o=e.slice(0,r),i=o.map(async(e,r)=>{let i=o[r-1];e.__load({preload:n?.preload,location:t,parentMatch:i}),await e.__loadPromise,i&&await i.__loadPromise});await Promise.all(i)};reload=()=>{this.navigate({fromCurrent:!0,replace:!0,search:!0})};resolvePath=(e,t)=>N(this.basepath,e,_(t));navigate=async({from:e,to:t="",search:n,hash:r,replace:o,params:a})=>{let l;let s=String(t),c=void 0===e?e:String(e);try{new URL(`${s}`),l=!0}catch(e){}return i(!l),this.#n({from:c,to:s,search:n,hash:r,replace:o,params:a})};matchRoute=(e,t)=>{e={...e,to:e.to?this.resolvePath(e.from??"",e.to):void 0};let n=this.buildNext(e),r=t?.pending?this.state.pendingLocation:this.state.currentLocation;if(!r)return!1;let o=P(this.basepath,r.pathname,{...t,to:n.pathname});return!!o&&(t?.includeSearch??1?!!S(r.search,n.search)&&o:o)};buildLink=({from:e,to:t=".",search:n,params:r,hash:o,target:i,replace:a,activeOptions:l,preload:s,preloadDelay:c,disabled:u})=>{try{return new URL(`${t}`),{type:"external",href:t}}catch(e){}let f={from:e,to:t,search:n,params:r,hash:o,replace:a},d=this.buildNext(f);s=s??this.options.defaultPreload;let p=c??this.options.defaultPreloadDelay??0,h=this.state.currentLocation.pathname.split("/"),g=d.pathname.split("/").every((e,t)=>e===h[t]),m=l?.exact?this.state.currentLocation.pathname===d.pathname:g,b=!l?.includeHash||this.state.currentLocation.hash===d.hash,v=!(l?.includeSearch??1)||S(this.state.currentLocation.search,d.search);return{type:"internal",next:d,handleFocus:e=>{s&&this.preloadRoute(f).catch(e=>{console.warn(e),console.warn("Error preloading route! ☝️")})},handleClick:e=>{u||e.metaKey||e.altKey||e.ctrlKey||e.shiftKey||e.defaultPrevented||i&&"_self"!==i||0!==e.button||(e.preventDefault(),this.#n(f))},handleEnter:e=>{let t=e.target||{};if(s){if(t.preloadTimeout)return;t.preloadTimeout=setTimeout(()=>{t.preloadTimeout=null,this.preloadRoute(f).catch(e=>{console.warn(e),console.warn("Error preloading route! ☝️")})},p)}},handleLeave:e=>{let t=e.target||{};t.preloadTimeout&&(clearTimeout(t.preloadTimeout),t.preloadTimeout=null)},handleTouchStart:e=>{this.preloadRoute(f).catch(e=>{console.warn(e),console.warn("Error preloading route! 
☝️")})},isActive:m&&b&&v,disabled:u}};dehydrate=()=>({state:{...y(this.state,["latestLocation","currentLocation","status","lastUpdated"]),currentMatches:this.state.currentMatches.map(e=>({id:e.id,state:{status:e.state.status}}))}});hydrate=e=>{this.__store.setState(t=>{let n=this.matchRoutes(e.state.latestLocation.pathname,{strictParseParams:!0});return n.forEach((t,n)=>{let r=e.state.currentMatches[n];i(r&&r.id===t.id),t.__store.setState(e=>({...e,...r.state}))}),{...t,...e.state,currentMatches:n}})};#o=e=>{let t=(e,n)=>{e.forEach((e,n)=>{e.init({originalIndex:n,router:this}),i(!this.routesById[e.id],String(e.id)),this.routesById[e.id]=e;let r=e.children;r?.length&&(t(r),e.children=r.map((e,t)=>{let n=R(O(_(e.path??"/")));for(;n.length>1&&"/"===n[0]?.value;)n.shift();let r=0;return n.forEach((e,t)=>{let n=1;for(;t--;)n*=.001;"pathname"===e.type&&"/"!==e.value?r+=1*n:"param"===e.type?r+=2*n:"wildcard"===e.type&&(r+=3*n)}),{child:e,parsed:n,index:t,score:r}}).sort((e,t)=>e.score!==t.score?e.score-t.score:e.index-t.index).map(e=>e.child))})};t([e]);let n=(e,t)=>{e.forEach(e=>{e.isRoot?i(!t):i(!t||e.parentRoute===t,(e.path,e.parentRoute?.id,t?.id)),e.children&&n(e.children,e)})};return n([e],void 0),e};#r=e=>{let{pathname:t,search:n,hash:r,state:o}=this.history.location,i=this.options.parseSearch(n);return{pathname:t,searchStr:n,search:x(e?.search,i),hash:r.split("#").reverse()[0]??"",href:`${t}${n}${r}`,state:o,key:o?.key||"__init__"}};#i=(e={})=>{e.fromCurrent=e.fromCurrent??""===e.to;let t=e.fromCurrent?this.state.latestLocation.pathname:e.from??this.state.latestLocation.pathname,n=N(this.basepath??"/",t,`${e.to??""}`),r={...b(this.matchRoutes(this.state.latestLocation.pathname,{strictParseParams:!0}))?.params},o=!0===(e.params??!0)?r:v(e.params,r);o&&e.__matches?.map(e=>e.route.options.stringifyParams).filter(Boolean).forEach(e=>{o={...o,...e(o)}}),n=T(n,o??{});let i=e.__matches?.map(e=>e.route.options.preSearchFilters??[]).flat().filter(Boolean)??[],a=e.__matches?.map(e=>e.route.options.postSearchFilters??[]).flat().filter(Boolean)??[],l=i?.length?i?.reduce((e,t)=>t(e),this.state.latestLocation.search):this.state.latestLocation.search,s=!0===e.search?l:e.search?v(e.search,l)??{}:i?.length?l:{},c=a?.length?a.reduce((e,t)=>t(e),s):s,u=x(this.state.latestLocation.search,c),f=this.options.stringifySearch(u),d=!0===e.hash?this.state.latestLocation.hash:v(e.hash,this.state.latestLocation.hash);return d=d?`#${d}`:"",{pathname:n,search:u,searchStr:f,state:!0===e.state?this.state.latestLocation.state:v(e.state,this.state.latestLocation.state),hash:d,href:this.history.createHref(`${n}${f}${d}`),key:e.key}};#n=async e=>{let t=this.buildNext(e),n=""+Date.now()+Math.random();this.navigateTimeout&&clearTimeout(this.navigateTimeout);let r="replace";e.replace||(r="push"),this.state.latestLocation.href!==t.href||t.key||(r="replace");let o=`${t.pathname}${t.searchStr}${t.hash?`${t.hash}`:""}`;return this.history["push"===r?"push":"replace"](o,{id:n,...t.state}),this.navigationPromise=new Promise(e=>{let t=this.resolveNavigation;this.resolveNavigation=()=>{t(),e()}})}}let V="undefined"==typeof window||!window.document.createElement;function W(){return{status:"idle",latestLocation:null,currentLocation:null,currentMatches:[],lastUpdated:Date.now()}}function G(e){return!!e?.isRedirect}let K=["component","errorComponent","pendingComponent"];class Y{abortController=new AbortController;constructor(e,t,n){Object.assign(this,{route:t,router:e,id:n.id,pathname:n.pathname,params:n.params,__store:new 
a({updatedAt:0,routeSearch:{},search:{},status:"idle"},{onUpdate:e=>{this.state=e}})}),this.state=this.__store.state,K.map(async e=>{let t=this.route.options[e];"function"!=typeof this[e]&&(this[e]=t)}),"idle"!==this.state.status||this.#a()||this.__store.setState(e=>({...e,status:"success"}))}#a=()=>!(!this.route.options.onLoad&&!K.some(e=>this.route.options[e]?.preload));__commit=()=>{let{routeSearch:e,search:t,context:n,routeContext:r}=this.#l({location:this.router.state.currentLocation});this.context=n,this.routeContext=r,this.__store.setState(n=>({...n,routeSearch:x(n.routeSearch,e),search:x(n.search,t)}))};cancel=()=>{this.abortController?.abort()};#s=e=>{let t=this.parentMatch?this.parentMatch.#s(e):{search:e.location.search,routeSearch:e.location.search};try{let e=("object"==typeof this.route.options.validateSearch?this.route.options.validateSearch.parse:this.route.options.validateSearch)?.(t.search)??{};return{routeSearch:e,search:{...t.search,...e}}}catch(t){if(G(t))throw t;(this.route.options.onValidateSearchError??this.route.options.onError)?.(t);let e=Error("Invalid search params found",{cause:t});throw e.code="INVALID_SEARCH_PARAMS",e}};#l=e=>{let{search:t,routeSearch:n}=this.#s(e);try{let e=this.route.options.getContext?.({parentContext:this.parentMatch?.routeContext??{},context:this.parentMatch?.context??this.router?.options.context??{},params:this.params,search:t})||{};return{routeSearch:n,search:t,context:{...this.parentMatch?.context??this.router?.options.context,...e},routeContext:e}}catch(e){throw this.route.options.onError?.(e),e}};__load=async e=>{let t;this.parentMatch=e.parentMatch;try{t=this.#l(e)}catch(t){return G(t)?void(e?.preload||this.router.navigate(t)):void this.__store.setState(e=>({...e,status:"error",error:t}))}let{routeSearch:n,search:r,context:o,routeContext:i}=t;if("pending"!==this.state.status)return this.__loadPromise=Promise.resolve().then(async()=>{let t;let a=""+Date.now()+Math.random();this.#c=a,"idle"===this.state.status&&this.__store.setState(e=>({...e,status:"pending"}));let l=(async()=>{await Promise.all(K.map(async e=>{let t=this.route.options[e];this[e]?.preload&&(this[e]=await this.router.options.loadComponent(t))}))})(),s=Promise.resolve().then(()=>{if(this.route.options.onLoad)return this.route.options.onLoad({params:this.params,routeSearch:n,search:r,signal:this.abortController.signal,preload:!!e?.preload,routeContext:i,context:o})});try{if(await Promise.all([l,s]),t=a!==this.#c?this.__loadPromise:void 0)return await t;this.__store.setState(e=>({...e,error:void 0,status:"success",updatedAt:Date.now()}))}catch(n){if(G(n))return void(e?.preload||this.router.navigate(n));let t=this.route.options.onLoadError??this.route.options.onError;try{t?.(n)}catch(t){return G(t)?void(e?.preload||this.router.navigate(t)):void this.__store.setState(e=>({...e,error:t,status:"error",updatedAt:Date.now()}))}this.__store.setState(e=>({...e,error:n,status:"error",updatedAt:Date.now()}))}finally{delete this.__loadPromise}}),this.__loadPromise};#c=""}/** - * react-store - * - * Copyright (c) TanStack - * - * This source code is licensed under the MIT license found in the - * LICENSE.md file in the root directory of this source tree. 
- * - * @license MIT - */function X(e,t=e=>e,r){return n.useSyncExternalStoreWithSelector(e.subscribe,()=>e.state,()=>e.state,t,r?l:void 0)}function J(e){let t=en(),{type:n,children:o,target:i,activeProps:a=()=>({className:"active"}),inactiveProps:l=()=>({}),activeOptions:s,disabled:c,hash:u,search:f,params:d,to:p=".",preload:h,preloadDelay:g,replace:m,style:b,className:y,onClick:x,onFocus:w,onMouseEnter:E,onMouseLeave:S,onTouchStart:k,..._}=e,O=t.buildLink(e);if("external"===O.type){let{href:e}=O;return{href:e}}let{handleClick:C,handleFocus:A,handleEnter:N,handleLeave:R,handleTouchStart:T,isActive:P,next:M}=O,j=e=>t=>{t.persist&&t.persist(),e.filter(Boolean).forEach(e=>{t.defaultPrevented||e(t)})},L=P?v(a,{})??{}:{},I=P?{}:v(l,{})??{};return{...L,...I,..._,href:c?void 0:M.href,onClick:j([x,e=>{r.startTransition?r.startTransition(()=>{C(e)}):C(e)}]),onFocus:j([w,A]),onMouseEnter:j([E,N]),onMouseLeave:j([S,R]),onTouchStart:j([k,T]),target:i,style:{...b,...L.style,...I.style},className:[y,L.className,I.className].filter(Boolean).join(" ")||void 0,...c?{role:"link","aria-disabled":!0}:void 0,"data-status":P?"active":void 0}}let Q=r.forwardRef((e,t)=>{let n=J(e);return r.createElement("a",o({ref:t},n,{children:"function"==typeof e.children?e.children({isActive:"active"===n["data-status"]}):e.children}))}),ee=r.createContext(null),et=r.createContext(null);function en(){let e=r.useContext(et);return X(e.router.__store),e.router}function er(e,t){let n=en();return X(n.__store,e,t),n}function eo(){return r.useContext(ee)}function ei(e){let t=en(),n=eo()[0],r=e?.from?t.state.currentMatches.find(t=>t.route.id===e?.from):n;return i(r,e?.from&&e.from),(e?.strict??1)&&i(n.route.id==r?.route.id,(r?.route.id,n.route.id,r?.route.id,r?.route.id)),X(r.__store,t=>e?.track?.(r)??r,e?.shallow),r}function ea(){let e=en();return r.useCallback(t=>{let{pending:n,caseSensitive:r,...o}=t;return e.matchRoute(o,{pending:n,caseSensitive:r})},[])}function el(){let e=eo().slice(1),t=e[0];return t?r.createElement(es,{matches:e,match:t}):null}function es({matches:e,match:t}){let n=en();X(t.__store,e=>[e.status,e.error],!0);let o=r.useCallback(()=>null,[]),i=t.pendingComponent??n.options.defaultPendingComponent??o,a=t.errorComponent??n.options.defaultErrorComponent,l=t.route.options.wrapInSuspense??1?r.Suspense:eu,s=a?ef:eu;return r.createElement(ee.Provider,{value:e},r.createElement(l,{fallback:r.createElement(i,null)},r.createElement(s,{key:t.route.id,errorComponent:a,onCatch:()=>{t.id}},r.createElement(ec,{match:t}))))}function ec(e){let t=en();if("error"===e.match.state.status)throw e.match.state.error;if("success"===e.match.state.status)return r.createElement(e.match.component??t.options.defaultComponent??el);if("pending"===e.match.state.status)throw e.match.__loadPromise;i(!1)}function eu(e){return r.createElement(r.Fragment,null,e.children)}class ef extends r.Component{state={error:!1,info:void 0};componentDidCatch(e,t){this.props.onCatch(e,t),console.error(e),this.setState({error:e,info:t})}render(){return r.createElement(ed,o({},this.props,{errorState:this.state,reset:()=>this.setState({})}))}}function ed(e){let[t,n]=r.useState(e.errorState),o=en(),i=e.errorComponent??ep,a=r.useRef("");return r.useEffect(()=>{t&&o.state.currentLocation.key!==a.current&&n({}),a.current=o.state.currentLocation.key},[t,o.state.currentLocation.key]),r.useEffect(()=>{e.errorState.error&&n(e.errorState)},[e.errorState.error]),e.errorState.error&&t.error?r.createElement(i,t):e.children}function ep({error:e}){return 
r.createElement("div",{style:{padding:".5rem",maxWidth:"100%"}},r.createElement("strong",{style:{fontSize:"1.2rem"}},"Something went wrong!"),r.createElement("div",{style:{height:".5rem"}}),r.createElement("div",null,r.createElement("pre",{style:{fontSize:".7em",border:"1px solid red",borderRadius:".25rem",padding:".5rem",color:"red",overflow:"auto"}},e.message?r.createElement("code",null,e.message):null)))}function eh(e,t=!0){let n=er();r.useEffect(()=>{if(!t)return;let r=n.history.block((t,n)=>{window.confirm(e)?(r(),t()):n()});return r})}e.Block=function({message:e,condition:t,children:n}){return eh(e,t),n??null},e.ErrorComponent=ep,e.Link=Q,e.MatchRoute=function(e){let t=ea()(e);return t?"function"==typeof e.children?e.children(t):t?e.children:null:null},e.Navigate=function(e){let t=en();return r.useLayoutEffect(()=>{t.navigate(e)},[]),null},e.Outlet=el,e.ReactRouter=class extends q{constructor(e){super({...e,loadComponent:async e=>(e.preload&&await e.preload(),e)})}},e.RootRoute=B,e.Route=F,e.RouteMatch=Y,e.Router=q,e.RouterProvider=function({router:e,...t}){e.update(t);let n=X(e.__store,e=>e.currentMatches);return r.useEffect(e.mount,[e]),r.createElement(et.Provider,{value:{router:e}},r.createElement(ee.Provider,{value:[void 0,...n]},r.createElement(ef,{errorComponent:ep,onCatch:()=>{}},r.createElement(el,null))))},e.cleanPath=_,e.createBrowserHistory=p,e.createHashHistory=function(){return p({getHref:()=>window.location.hash.substring(1),createHref:e=>`#${e}`})},e.createMemoryHistory=h,e.decode=I,e.defaultFetchServerDataFn=Z,e.defaultParseSearch=z,e.defaultStringifySearch=$,e.encode=j,e.functionalUpdate=v,e.interpolatePath=T,e.invariant=i,e.isPlainObject=w,e.isRedirect=G,e.joinPaths=k,e.last=b,e.lazy=function(e){let t=r.lazy(e);return t.preload=async()=>{await e()},t},e.matchByPath=M,e.matchPathname=P,e.matchesContext=ee,e.parsePathname=R,e.parseSearchWith=U,e.partialDeepEqual=S,e.pick=y,e.redirect=function(e){return e.isRedirect=!0,e},e.replaceEqualDeep=x,e.resolvePath=N,e.rootRouteId=D,e.routerContext=et,e.stringifySearchWith=H,e.trimPath=A,e.trimPathLeft=O,e.trimPathRight=C,e.useBlocker=eh,e.useLinkProps=J,e.useMatch=ei,e.useMatchRoute=ea,e.useMatches=eo,e.useNavigate=function(e){let t=en();return r.useCallback(n=>t.navigate({...e,...n}),[])},e.useParams=function(e){let t=en();return X(t.__store,t=>{let n=b(t.currentMatches)?.params;return e?.track?.(n)??n},!0),b(t.state.currentMatches)?.params},e.useRoute=function(e){let t=en().getRoute(e);return i(t),t},e.useRouter=er,e.useRouterContext=en,e.useSearch=function(e){let{track:t,...n}=e,r=ei(n);return X(r.__store,t=>e?.track?.(t.search)??t.search,!0),r.state.search},e.useStore=X,e.warning=function(e,t){},Object.defineProperty(e,"__esModule",{value:!0})}(t,n(86006),n(97737))},472:function(e,t,n){"use strict";n.d(t,{J_:function(){return d},Ry:function(){return u},cJ:function(){return p}});var r=function(e){return"undefined"==typeof document?null:(Array.isArray(e)?e[0]:e).ownerDocument.body},o=new WeakMap,i=new WeakMap,a={},l=0,s=function(e){return e&&(e.host||s(e.parentNode))},c=function(e,t,n,r){var c=(Array.isArray(e)?e:[e]).map(function(e){if(t.contains(e))return e;var n=s(e);return n&&t.contains(n)?n:(console.error("aria-hidden",e,"in not contained inside",t,". 
Doing nothing"),null)}).filter(function(e){return!!e});a[n]||(a[n]=new WeakMap);var u=a[n],f=[],d=new Set,p=new Set(c),h=function(e){!e||d.has(e)||(d.add(e),h(e.parentNode))};c.forEach(h);var g=function(e){!e||p.has(e)||Array.prototype.forEach.call(e.children,function(e){if(d.has(e))g(e);else{var t=e.getAttribute(r),a=null!==t&&"false"!==t,l=(o.get(e)||0)+1,s=(u.get(e)||0)+1;o.set(e,l),u.set(e,s),f.push(e),1===l&&a&&i.set(e,!0),1===s&&e.setAttribute(n,"true"),a||e.setAttribute(r,"true")}})};return g(t),d.clear(),l++,function(){f.forEach(function(e){var t=o.get(e)-1,a=u.get(e)-1;o.set(e,t),u.set(e,a),t||(i.has(e)||e.removeAttribute(r),i.delete(e)),a||e.removeAttribute(n)}),--l||(o=new WeakMap,o=new WeakMap,i=new WeakMap,a={})}},u=function(e,t,n){void 0===n&&(n="data-aria-hidden");var o=Array.from(Array.isArray(e)?e:[e]),i=t||r(e);return i?(o.push.apply(o,Array.from(i.querySelectorAll("[aria-live]"))),c(o,i,n,"aria-hidden")):function(){return null}},f=function(e,t,n){void 0===n&&(n="data-inert-ed");var o=t||r(e);return o?c(e,o,n,"inert"):function(){return null}},d=function(){return"undefined"!=typeof HTMLElement&&HTMLElement.prototype.hasOwnProperty("inert")},p=function(e,t,n){return void 0===n&&(n="data-suppressed"),(d()?f:u)(e,t,n)}},8683:function(e,t){var n;/*! - Copyright (c) 2018 Jed Watson. - Licensed under the MIT License (MIT), see - http://jedwatson.github.io/classnames -*/!function(){"use strict";var r={}.hasOwnProperty;function o(){for(var e=[],t=0;t=0;)(c=e(r,o,i,a,p+1,s+1))>h&&(p===l?c*=1:t.test(r.charAt(p-1))?(c*=.9,(f=r.slice(l,p-1).match(n))&&l>0&&(c*=Math.pow(.999,f.length))):t.test(r.slice(l,p-1))?(c*=0,l>0&&(c*=Math.pow(.999,p-l))):(c*=.3,l>0&&(c*=Math.pow(.999,p-l))),r.charAt(p)!==o.charAt(s)&&(c*=.9999)),c<.1&&i.charAt(p-1)===a.charAt(s+1)&&i.charAt(p-1)!==a.charAt(s)&&.1*(u=e(r,o,i,a,p+1,s+2))>c&&(c=.1*u),c>h&&(h=c),p=i.indexOf(d,p+1);return h}(e,r,e.toLowerCase(),r.toLowerCase(),0,0)}},27652:function(e,t,n){"use strict";var r=n(49494),o={"text/plain":"Text","text/html":"Url",default:"Text"};e.exports=function(e,t){var n,i,a,l,s,c,u,f,d=!1;t||(t={}),a=t.debug||!1;try{if(s=r(),c=document.createRange(),u=document.getSelection(),(f=document.createElement("span")).textContent=e,f.ariaHidden="true",f.style.all="unset",f.style.position="fixed",f.style.top=0,f.style.clip="rect(0, 0, 0, 0)",f.style.whiteSpace="pre",f.style.webkitUserSelect="text",f.style.MozUserSelect="text",f.style.msUserSelect="text",f.style.userSelect="text",f.addEventListener("copy",function(n){if(n.stopPropagation(),t.format){if(n.preventDefault(),void 0===n.clipboardData){a&&console.warn("unable to use e.clipboardData"),a&&console.warn("trying IE specific stuff"),window.clipboardData.clearData();var r=o[t.format]||o.default;window.clipboardData.setData(r,e)}else n.clipboardData.clearData(),n.clipboardData.setData(t.format,e)}t.onCopy&&(n.preventDefault(),t.onCopy(n.clipboardData))}),document.body.appendChild(f),c.selectNodeContents(f),u.addRange(c),!document.execCommand("copy"))throw Error("copy command was unsuccessful");d=!0}catch(r){a&&console.error("unable to copy using execCommand: ",r),a&&console.warn("trying IE specific stuff");try{window.clipboardData.setData(t.format||"text",e),t.onCopy&&t.onCopy(window.clipboardData),d=!0}catch(r){a&&console.error("unable to copy using clipboardData: ",r),a&&console.error("falling back to prompt"),n="message"in t?t.message:"Copy to clipboard: #{key}, Enter",i=(/mac os 
x/i.test(navigator.userAgent)?"⌘":"Ctrl")+"+C",l=n.replace(/#{\s*key\s*}/g,i),window.prompt(l,e)}}finally{u&&("function"==typeof u.removeRange?u.removeRange(c):u.removeAllRanges()),f&&document.body.removeChild(f),s()}return d}},19867:function(e,t,n){var r=n(34142);e.exports=r},44433:function(e,t,n){var r=n(7);e.exports=r},64519:function(e,t,n){var r=n(65050);e.exports=r},71008:function(e,t,n){var r=n(97434);e.exports=r},77685:function(e,t,n){var r=n(94531);e.exports=r},85344:function(e,t,n){var r=n(2608);e.exports=r},10986:function(e,t,n){var r=n(15587);n(47708),n(20551),n(87118),e.exports=r},30073:function(e,t,n){var r=n(51036);e.exports=r},51486:function(e,t,n){var r=n(43948);e.exports=r},30810:function(e,t,n){n(14560),n(99298);var r=n(1131);e.exports=r.Array.from},11750:function(e,t,n){n(11815);var r=n(1131);e.exports=r.Array.isArray},24378:function(e,t,n){n(23902);var r=n(12018);e.exports=r("Array").concat},29900:function(e,t,n){n(92642);var r=n(12018);e.exports=r("Array").filter},79107:function(e,t,n){n(56931);var r=n(12018);e.exports=r("Array").forEach},1753:function(e,t,n){n(9266);var r=n(12018);e.exports=r("Array").indexOf},65785:function(e,t,n){n(91343);var r=n(12018);e.exports=r("Array").push},68403:function(e,t,n){n(77920);var r=n(12018);e.exports=r("Array").slice},28285:function(e,t,n){n(48174);var r=n(12018);e.exports=r("Array").splice},13217:function(e,t,n){n(78944);var r=n(1131);e.exports=r.Date.now},90642:function(e,t,n){n(78312),n(14560);var r=n(89329);e.exports=r},80197:function(e,t,n){var r=n(49477),o=n(24378),i=Array.prototype;e.exports=function(e){var t=e.concat;return e===i||r(i,e)&&t===i.concat?o:t}},65874:function(e,t,n){var r=n(49477),o=n(29900),i=Array.prototype;e.exports=function(e){var t=e.filter;return e===i||r(i,e)&&t===i.filter?o:t}},45774:function(e,t,n){var r=n(49477),o=n(1753),i=Array.prototype;e.exports=function(e){var t=e.indexOf;return e===i||r(i,e)&&t===i.indexOf?o:t}},21151:function(e,t,n){var r=n(49477),o=n(65785),i=Array.prototype;e.exports=function(e){var t=e.push;return e===i||r(i,e)&&t===i.push?o:t}},58616:function(e,t,n){var r=n(49477),o=n(68403),i=Array.prototype;e.exports=function(e){var t=e.slice;return e===i||r(i,e)&&t===i.slice?o:t}},8231:function(e,t,n){var r=n(49477),o=n(28285),i=Array.prototype;e.exports=function(e){var t=e.splice;return e===i||r(i,e)&&t===i.splice?o:t}},36347:function(e,t,n){n(86461);var r=n(1131);e.exports=r.Math.sign},22030:function(e,t,n){n(58857);var r=n(1131).Object,o=e.exports=function(e,t){return r.defineProperties(e,t)};r.defineProperties.sham&&(o.sham=!0)},73304:function(e,t,n){n(86819);var r=n(1131).Object,o=e.exports=function(e,t,n){return r.defineProperty(e,t,n)};r.defineProperty.sham&&(o.sham=!0)},8768:function(e,t,n){n(51005);var r=n(1131).Object,o=e.exports=function(e,t){return r.getOwnPropertyDescriptor(e,t)};r.getOwnPropertyDescriptor.sham&&(o.sham=!0)},18312:function(e,t,n){n(72269);var r=n(1131);e.exports=r.Object.getOwnPropertyDescriptors},84715:function(e,t,n){n(60613);var r=n(1131);e.exports=r.Object.getOwnPropertySymbols},23197:function(e,t,n){n(46568);var r=n(1131);e.exports=r.Object.keys},26643:function(e,t,n){n(23902),n(8094),n(60613),n(83292),n(37375),n(46014),n(74639),n(66612),n(81790),n(49092),n(46176),n(15821),n(72926),n(77517),n(2978),n(22828),n(74598),n(7681),n(2675),n(50186);var r=n(1131);e.exports=r.Symbol},93872:function(e,t,n){n(78312),n(8094),n(14560),n(66612);var r=n(38197);e.exports=r.f("iterator")},20610:function(e,t,n){n(33345),n(2978);var 
r=n(38197);e.exports=r.f("toPrimitive")},35413:function(e,t,n){var r=n(19867);e.exports=r},82685:function(e,t,n){var r=n(44433);e.exports=r},83161:function(e,t,n){var r=n(64519);e.exports=r},99889:function(e,t,n){var r=n(71008);e.exports=r},24245:function(e,t,n){var r=n(77685);e.exports=r},23094:function(e,t,n){var r=n(85344);e.exports=r},3708:function(e,t,n){var r=n(10986);n(64473),n(43555),n(15894),n(47051),n(95359),n(26782),n(68316),n(64869),n(99244),n(95284),e.exports=r},22992:function(e,t,n){var r=n(30073);e.exports=r},22663:function(e,t,n){var r=n(51486);e.exports=r},21846:function(e,t,n){var r=n(75628),o=n(99525),i=TypeError;e.exports=function(e){if(r(e))return e;throw i(o(e)+" is not a function")}},77722:function(e,t,n){var r=n(75628),o=String,i=TypeError;e.exports=function(e){if("object"==typeof e||r(e))return e;throw i("Can't set "+o(e)+" as a prototype")}},74418:function(e){e.exports=function(){}},4152:function(e,t,n){var r=n(90545),o=String,i=TypeError;e.exports=function(e){if(r(e))return e;throw i(o(e)+" is not an object")}},34069:function(e,t,n){"use strict";var r=n(48390).forEach,o=n(84992)("forEach");e.exports=o?[].forEach:function(e){return r(this,e,arguments.length>1?arguments[1]:void 0)}},97525:function(e,t,n){"use strict";var r=n(87477),o=n(24871),i=n(97826),a=n(24180),l=n(51094),s=n(72521),c=n(42149),u=n(14965),f=n(73080),d=n(89329),p=Array;e.exports=function(e){var t,n,h,g,m,b,v=i(e),y=s(this),x=arguments.length,w=x>1?arguments[1]:void 0,E=void 0!==w;E&&(w=r(w,x>2?arguments[2]:void 0));var S=d(v),k=0;if(S&&!(this===p&&l(S)))for(m=(g=f(v,S)).next,n=y?new this:[];!(h=o(m,g)).done;k++)b=E?a(g,w,[h.value,k],!0):h.value,u(n,k,b);else for(t=c(v),n=y?new this(t):p(t);t>k;k++)b=E?w(v[k],k):v[k],u(n,k,b);return n.length=k,n}},34597:function(e,t,n){var r=n(69146),o=n(55950),i=n(42149),a=function(e){return function(t,n,a){var l,s=r(t),c=i(s),u=o(a,c);if(e&&n!=n){for(;c>u;)if((l=s[u++])!=l)return!0}else for(;c>u;u++)if((e||u in s)&&s[u]===n)return e||u||0;return!e&&-1}};e.exports={includes:a(!0),indexOf:a(!1)}},48390:function(e,t,n){var r=n(87477),o=n(60254),i=n(72911),a=n(97826),l=n(42149),s=n(62128),c=o([].push),u=function(e){var t=1==e,n=2==e,o=3==e,u=4==e,f=6==e,d=7==e,p=5==e||f;return function(h,g,m,b){for(var v,y,x=a(h),w=i(x),E=r(g,m),S=l(w),k=0,_=b||s,O=t?_(h,S):n||d?_(h,0):void 0;S>k;k++)if((p||k in w)&&(y=E(v=w[k],k,x),e)){if(t)O[k]=y;else if(y)switch(e){case 3:return!0;case 5:return v;case 6:return k;case 2:c(O,v)}else switch(e){case 4:return!1;case 7:c(O,v)}}return f?-1:o||u?u:O}};e.exports={forEach:u(0),map:u(1),filter:u(2),some:u(3),every:u(4),find:u(5),findIndex:u(6),filterReject:u(7)}},55893:function(e,t,n){var r=n(29720),o=n(45216),i=n(34750),a=o("species");e.exports=function(e){return i>=51||!r(function(){var t=[];return(t.constructor={})[a]=function(){return{foo:1}},1!==t[e](Boolean).foo})}},84992:function(e,t,n){"use strict";var r=n(29720);e.exports=function(e,t){var n=[][e];return!!n&&r(function(){n.call(null,t||function(){return 1},1)})}},83695:function(e,t,n){"use strict";var r=n(83383),o=n(4063),i=TypeError,a=Object.getOwnPropertyDescriptor,l=r&&!function(){if(void 0!==this)return!0;try{Object.defineProperty([],"length",{writable:!1}).length=1}catch(e){return e instanceof TypeError}}();e.exports=l?function(e,t){if(o(e)&&!a(e,"length").writable)throw i("Cannot set read only .length");return e.length=t}:function(e,t){return e.length=t}},89086:function(e,t,n){var r=n(55950),o=n(42149),i=n(14965),a=Array,l=Math.max;e.exports=function(e,t,n){for(var 
s=o(e),c=r(t,s),u=r(void 0===n?s:n,s),f=a(l(u-c,0)),d=0;c9007199254740991)throw t("Maximum allowed index exceeded");return e}},68166:function(e){e.exports={CSSRuleList:0,CSSStyleDeclaration:0,CSSValueList:0,ClientRectList:0,DOMRectList:0,DOMStringList:0,DOMTokenList:1,DataTransferItemList:0,FileList:0,HTMLAllCollection:0,HTMLCollection:0,HTMLFormElement:0,HTMLSelectElement:0,MediaList:0,MimeTypeArray:0,NamedNodeMap:0,NodeList:1,PaintRequestList:0,Plugin:0,PluginArray:0,SVGLengthList:0,SVGNumberList:0,SVGPathSegList:0,SVGPointList:0,SVGStringList:0,SVGTransformList:0,SourceBufferList:0,StyleSheetList:0,TextTrackCueList:0,TextTrackList:0,TouchList:0}},53207:function(e){e.exports="function"==typeof Bun&&Bun&&"string"==typeof Bun.version},47362:function(e){e.exports="undefined"!=typeof navigator&&String(navigator.userAgent)||""},34750:function(e,t,n){var r,o,i=n(32604),a=n(47362),l=i.process,s=i.Deno,c=l&&l.versions||s&&s.version,u=c&&c.v8;u&&(o=(r=u.split("."))[0]>0&&r[0]<4?1:+(r[0]+r[1])),!o&&a&&(!(r=a.match(/Edge\/(\d+)/))||r[1]>=74)&&(r=a.match(/Chrome\/(\d+)/))&&(o=+r[1]),e.exports=o},12018:function(e,t,n){var r=n(1131);e.exports=function(e){return r[e+"Prototype"]}},59528:function(e){e.exports=["constructor","hasOwnProperty","isPrototypeOf","propertyIsEnumerable","toLocaleString","toString","valueOf"]},67001:function(e,t,n){"use strict";var r=n(32604),o=n(62863),i=n(31793),a=n(75628),l=n(6052).f,s=n(4817),c=n(1131),u=n(87477),f=n(7172),d=n(2177),p=function(e){var t=function(n,r,i){if(this instanceof t){switch(arguments.length){case 0:return new e;case 1:return new e(n);case 2:return new e(n,r)}return new e(n,r,i)}return o(e,this,arguments)};return t.prototype=e.prototype,t};e.exports=function(e,t){var n,o,h,g,m,b,v,y,x,w=e.target,E=e.global,S=e.stat,k=e.proto,_=E?r:S?r[w]:(r[w]||{}).prototype,O=E?c:c[w]||f(c,w,{})[w],C=O.prototype;for(g in t)o=!(n=s(E?g:w+(S?".":"#")+g,e.forced))&&_&&d(_,g),b=O[g],o&&(v=e.dontCallGetSet?(x=l(_,g))&&x.value:_[g]),m=o&&v?v:t[g],(!o||typeof b!=typeof m)&&(y=e.bind&&o?u(m,r):e.wrap&&o?p(m):k&&a(m)?i(m):m,(e.sham||m&&m.sham||b&&b.sham)&&f(y,"sham",!0),f(O,g,y),k&&(d(c,h=w+"Prototype")||f(c,h,{}),f(c[h],g,m),e.real&&C&&(n||!C[g])&&f(C,g,m)))}},29720:function(e){e.exports=function(e){try{return!!e()}catch(e){return!0}}},62863:function(e,t,n){var r=n(46391),o=Function.prototype,i=o.apply,a=o.call;e.exports="object"==typeof Reflect&&Reflect.apply||(r?a.bind(i):function(){return a.apply(i,arguments)})},87477:function(e,t,n){var r=n(31793),o=n(21846),i=n(46391),a=r(r.bind);e.exports=function(e,t){return o(e),void 0===t?e:i?a(e,t):function(){return e.apply(t,arguments)}}},46391:function(e,t,n){var r=n(29720);e.exports=!r(function(){var e=(function(){}).bind();return"function"!=typeof e||e.hasOwnProperty("prototype")})},24871:function(e,t,n){var r=n(46391),o=Function.prototype.call;e.exports=r?o.bind(o):function(){return o.apply(o,arguments)}},79752:function(e,t,n){var r=n(83383),o=n(2177),i=Function.prototype,a=r&&Object.getOwnPropertyDescriptor,l=o(i,"name"),s=l&&(!r||r&&a(i,"name").configurable);e.exports={EXISTS:l,PROPER:l&&"something"===(function(){}).name,CONFIGURABLE:s}},70145:function(e,t,n){var r=n(60254),o=n(21846);e.exports=function(e,t,n){try{return r(o(Object.getOwnPropertyDescriptor(e,t)[n]))}catch(e){}}},31793:function(e,t,n){var r=n(79307),o=n(60254);e.exports=function(e){if("Function"===r(e))return o(e)}},60254:function(e,t,n){var r=n(46391),o=Function.prototype,i=o.call,a=r&&o.bind.bind(i,i);e.exports=r?a:function(e){return function(){return 
i.apply(e,arguments)}}},80875:function(e,t,n){var r=n(1131),o=n(32604),i=n(75628),a=function(e){return i(e)?e:void 0};e.exports=function(e,t){return arguments.length<2?a(r[e])||a(o[e]):r[e]&&r[e][t]||o[e]&&o[e][t]}},89329:function(e,t,n){var r=n(95980),o=n(61024),i=n(45139),a=n(76577),l=n(45216)("iterator");e.exports=function(e){if(!i(e))return o(e,l)||o(e,"@@iterator")||a[r(e)]}},73080:function(e,t,n){var r=n(24871),o=n(21846),i=n(4152),a=n(99525),l=n(89329),s=TypeError;e.exports=function(e,t){var n=arguments.length<2?l(e):t;if(o(n))return i(r(n,e));throw s(a(e)+" is not iterable")}},96438:function(e,t,n){var r=n(60254),o=n(4063),i=n(75628),a=n(79307),l=n(9755),s=r([].push);e.exports=function(e){if(i(e))return e;if(o(e)){for(var t=e.length,n=[],r=0;r0?n:t)(r)}},38051:function(e,t,n){var r,o=n(4152),i=n(57685),a=n(59528),l=n(72291),s=n(25681),c=n(25053),u=n(99502),f="prototype",d="script",p=u("IE_PROTO"),h=function(){},g=function(e){return"<"+d+">"+e+""},m=function(e){e.write(g("")),e.close();var t=e.parentWindow.Object;return e=null,t},b=function(){var e,t=c("iframe");return t.style.display="none",s.appendChild(t),t.src=String("java"+d+":"),(e=t.contentWindow.document).open(),e.write(g("document.F=Object")),e.close(),e.F},v=function(){try{r=new ActiveXObject("htmlfile")}catch(e){}v="undefined"!=typeof document?document.domain&&r?m(r):b():m(r);for(var e=a.length;e--;)delete v[f][a[e]];return v()};l[p]=!0,e.exports=Object.create||function(e,t){var n;return null!==e?(h[f]=o(e),n=new h,h[f]=null,n[p]=e):n=v(),void 0===t?n:i.f(n,t)}},57685:function(e,t,n){var r=n(83383),o=n(19594),i=n(1237),a=n(4152),l=n(69146),s=n(14844);t.f=r&&!o?Object.defineProperties:function(e,t){a(e);for(var n,r=l(t),o=s(t),c=o.length,u=0;c>u;)i.f(e,n=o[u++],r[n]);return e}},1237:function(e,t,n){var r=n(83383),o=n(24343),i=n(19594),a=n(4152),l=n(24581),s=TypeError,c=Object.defineProperty,u=Object.getOwnPropertyDescriptor,f="enumerable",d="configurable",p="writable";t.f=r?i?function(e,t,n){if(a(e),t=l(t),a(n),"function"==typeof e&&"prototype"===t&&"value"in n&&p in n&&!n[p]){var r=u(e,t);r&&r[p]&&(e[t]=n.value,n={configurable:d in n?n[d]:r[d],enumerable:f in n?n[f]:r[f],writable:!1})}return c(e,t,n)}:c:function(e,t,n){if(a(e),t=l(t),a(n),o)try{return c(e,t,n)}catch(e){}if("get"in n||"set"in n)throw s("Accessors not supported");return"value"in n&&(e[t]=n.value),e}},6052:function(e,t,n){var r=n(83383),o=n(24871),i=n(59954),a=n(88863),l=n(69146),s=n(24581),c=n(2177),u=n(24343),f=Object.getOwnPropertyDescriptor;t.f=r?f:function(e,t){if(e=l(e),t=s(t),u)try{return f(e,t)}catch(e){}if(c(e,t))return a(!o(i.f,e,t),e[t])}},85801:function(e,t,n){var r=n(79307),o=n(69146),i=n(62627).f,a=n(89086),l="object"==typeof window&&window&&Object.getOwnPropertyNames?Object.getOwnPropertyNames(window):[],s=function(e){try{return i(e)}catch(e){return a(l)}};e.exports.f=function(e){return l&&"Window"==r(e)?s(e):i(o(e))}},62627:function(e,t,n){var r=n(74052),o=n(59528).concat("length","prototype");t.f=Object.getOwnPropertyNames||function(e){return r(e,o)}},86846:function(e,t){t.f=Object.getOwnPropertySymbols},45983:function(e,t,n){var r=n(2177),o=n(75628),i=n(97826),a=n(99502),l=n(27632),s=a("IE_PROTO"),c=Object,u=c.prototype;e.exports=l?c.getPrototypeOf:function(e){var t=i(e);if(r(t,s))return t[s];var n=t.constructor;return o(n)&&t instanceof n?n.prototype:t instanceof c?u:null}},49477:function(e,t,n){var r=n(60254);e.exports=r({}.isPrototypeOf)},74052:function(e,t,n){var 
r=n(60254),o=n(2177),i=n(69146),a=n(34597).indexOf,l=n(72291),s=r([].push);e.exports=function(e,t){var n,r=i(e),c=0,u=[];for(n in r)!o(l,n)&&o(r,n)&&s(u,n);for(;t.length>c;)o(r,n=t[c++])&&(~a(u,n)||s(u,n));return u}},14844:function(e,t,n){var r=n(74052),o=n(59528);e.exports=Object.keys||function(e){return r(e,o)}},59954:function(e,t){"use strict";var n={}.propertyIsEnumerable,r=Object.getOwnPropertyDescriptor,o=r&&!n.call({1:2},1);t.f=o?function(e){var t=r(this,e);return!!t&&t.enumerable}:n},23122:function(e,t,n){var r=n(70145),o=n(4152),i=n(77722);e.exports=Object.setPrototypeOf||("__proto__"in{}?function(){var e,t=!1,n={};try{(e=r(Object.prototype,"__proto__","set"))(n,[]),t=n instanceof Array}catch(e){}return function(n,r){return o(n),i(r),t?e(n,r):n.__proto__=r,n}}():void 0)},67018:function(e,t,n){"use strict";var r=n(51134),o=n(95980);e.exports=r?({}).toString:function(){return"[object "+o(this)+"]"}},64156:function(e,t,n){var r=n(24871),o=n(75628),i=n(90545),a=TypeError;e.exports=function(e,t){var n,l;if("string"===t&&o(n=e.toString)&&!i(l=r(n,e))||o(n=e.valueOf)&&!i(l=r(n,e))||"string"!==t&&o(n=e.toString)&&!i(l=r(n,e)))return l;throw a("Can't convert object to primitive value")}},57259:function(e,t,n){var r=n(80875),o=n(60254),i=n(62627),a=n(86846),l=n(4152),s=o([].concat);e.exports=r("Reflect","ownKeys")||function(e){var t=i.f(l(e)),n=a.f;return n?s(t,n(e)):t}},1131:function(e){e.exports={}},65896:function(e,t,n){var r=n(45139),o=TypeError;e.exports=function(e){if(r(e))throw o("Can't call method on "+e);return e}},61432:function(e,t,n){"use strict";var r,o=n(32604),i=n(62863),a=n(75628),l=n(53207),s=n(47362),c=n(95236),u=n(69248),f=o.Function,d=/MSIE .\./.test(s)||l&&((r=o.Bun.version.split(".")).length<3||0==r[0]&&(r[1]<3||3==r[1]&&0==r[2]));e.exports=function(e,t){var n=t?2:1;return d?function(r,o){var l=u(arguments.length,1)>n,s=a(r)?r:f(r),d=l?c(arguments,n):[],p=l?function(){i(s,this,d)}:s;return t?e(p,o):e(p)}:e}},795:function(e,t,n){var r=n(51134),o=n(1237).f,i=n(7172),a=n(2177),l=n(67018),s=n(45216)("toStringTag");e.exports=function(e,t,n,c){if(e){var u=n?e:e.prototype;a(u,s)||o(u,s,{configurable:!0,value:t}),c&&!r&&i(u,"toString",l)}}},99502:function(e,t,n){var r=n(28818),o=n(45357),i=r("keys");e.exports=function(e){return i[e]||(i[e]=o(e))}},59090:function(e,t,n){var r=n(32604),o=n(99827),i="__core-js_shared__",a=r[i]||o(i,{});e.exports=a},28818:function(e,t,n){var r=n(60411),o=n(59090);(e.exports=function(e,t){return o[e]||(o[e]=void 0!==t?t:{})})("versions",[]).push({version:"3.31.1",mode:r?"pure":"global",copyright:"\xa9 2014-2023 Denis Pushkarev (zloirock.ru)",license:"https://github.com/zloirock/core-js/blob/v3.31.1/LICENSE",source:"https://github.com/zloirock/core-js"})},66905:function(e,t,n){var r=n(60254),o=n(54354),i=n(9755),a=n(65896),l=r("".charAt),s=r("".charCodeAt),c=r("".slice),u=function(e){return function(t,n){var r,u,f=i(a(t)),d=o(n),p=f.length;return d<0||d>=p?e?"":void 0:(r=s(f,d))<55296||r>56319||d+1===p||(u=s(f,d+1))<56320||u>57343?e?l(f,d):r:e?c(f,d,d+2):(r-55296<<10)+(u-56320)+65536}};e.exports={codeAt:u(!1),charAt:u(!0)}},42112:function(e,t,n){var r=n(34750),o=n(29720),i=n(32604).String;e.exports=!!Object.getOwnPropertySymbols&&!o(function(){var e=Symbol();return!i(e)||!(Object(e) instanceof Symbol)||!Symbol.sham&&r&&r<41})},71607:function(e,t,n){var r=n(24871),o=n(80875),i=n(45216),a=n(14423);e.exports=function(){var e=o("Symbol"),t=e&&e.prototype,n=t&&t.valueOf,l=i("toPrimitive");t&&!t[l]&&a(t,l,function(e){return 
r(n,this)},{arity:1})}},96889:function(e,t,n){var r=n(80875),o=n(60254),i=r("Symbol"),a=i.keyFor,l=o(i.prototype.valueOf);e.exports=i.isRegisteredSymbol||function(e){try{return void 0!==a(l(e))}catch(e){return!1}}},88822:function(e,t,n){for(var r=n(28818),o=n(80875),i=n(60254),a=n(42617),l=n(45216),s=o("Symbol"),c=s.isWellKnownSymbol,u=o("Object","getOwnPropertyNames"),f=i(s.prototype.valueOf),d=r("wks"),p=0,h=u(s),g=h.length;p0?o(r(e),9007199254740991):0}},97826:function(e,t,n){var r=n(65896),o=Object;e.exports=function(e){return o(r(e))}},477:function(e,t,n){var r=n(24871),o=n(90545),i=n(42617),a=n(61024),l=n(64156),s=n(45216),c=TypeError,u=s("toPrimitive");e.exports=function(e,t){if(!o(e)||i(e))return e;var n,s=a(e,u);if(s){if(void 0===t&&(t="default"),!o(n=r(s,e,t))||i(n))return n;throw c("Can't convert object to primitive value")}return void 0===t&&(t="number"),l(e,t)}},24581:function(e,t,n){var r=n(477),o=n(42617);e.exports=function(e){var t=r(e,"string");return o(t)?t:t+""}},51134:function(e,t,n){var r=n(45216)("toStringTag"),o={};o[r]="z",e.exports="[object z]"===String(o)},9755:function(e,t,n){var r=n(95980),o=String;e.exports=function(e){if("Symbol"===r(e))throw TypeError("Cannot convert a Symbol value to a string");return o(e)}},99525:function(e){var t=String;e.exports=function(e){try{return t(e)}catch(e){return"Object"}}},45357:function(e,t,n){var r=n(60254),o=0,i=Math.random(),a=r(1..toString);e.exports=function(e){return"Symbol("+(void 0===e?"":e)+")_"+a(++o+i,36)}},58371:function(e,t,n){var r=n(42112);e.exports=r&&!Symbol.sham&&"symbol"==typeof Symbol.iterator},19594:function(e,t,n){var r=n(83383),o=n(29720);e.exports=r&&o(function(){return 42!=Object.defineProperty(function(){},"prototype",{value:42,writable:!1}).prototype})},69248:function(e){var t=TypeError;e.exports=function(e,n){if(e=51||!o(function(){var e=[];return e[g]=!1,e.concat()[0]!==e}),b=function(e){if(!a(e))return!1;var t=e[g];return void 0!==t?!!t:i(e)};r({target:"Array",proto:!0,arity:1,forced:!m||!d("concat")},{concat:function(e){var t,n,r,o,i,a=l(this),d=f(a,0),p=0;for(t=-1,r=arguments.length;t1?arguments[1]:void 0)}})},56931:function(e,t,n){"use strict";var r=n(67001),o=n(34069);r({target:"Array",proto:!0,forced:[].forEach!=o},{forEach:o})},99298:function(e,t,n){var r=n(67001),o=n(97525);r({target:"Array",stat:!0,forced:!n(81985)(function(e){Array.from(e)})},{from:o})},9266:function(e,t,n){"use strict";var r=n(67001),o=n(31793),i=n(34597).indexOf,a=n(84992),l=o([].indexOf),s=!!l&&1/l([1],1,-0)<0;r({target:"Array",proto:!0,forced:s||!a("indexOf")},{indexOf:function(e){var t=arguments.length>1?arguments[1]:void 0;return s?l(this,e,t)||0:i(this,e,t)}})},11815:function(e,t,n){n(67001)({target:"Array",stat:!0},{isArray:n(4063)})},78312:function(e,t,n){"use strict";var r=n(69146),o=n(74418),i=n(76577),a=n(64535),l=n(1237).f,s=n(25149),c=n(96398),u=n(60411),f=n(83383),d="Array Iterator",p=a.set,h=a.getterFor(d);e.exports=s(Array,"Array",function(e,t){p(this,{type:d,target:r(e),index:0,kind:t})},function(){var e=h(this),t=e.target,n=e.kind,r=e.index++;return!t||r>=t.length?(e.target=void 0,c(void 0,!0)):"keys"==n?c(r,!1):"values"==n?c(t[r],!1):c([r,t[r]],!1)},"values");var g=i.Arguments=i.Array;if(o("keys"),o("values"),o("entries"),!u&&f&&"values"!==g.name)try{l(g,"name",{value:"values"})}catch(e){}},91343:function(e,t,n){"use strict";var r=n(67001),o=n(97826),i=n(42149),a=n(83695),l=n(6439);r({target:"Array",proto:!0,arity:1,forced:n(29720)(function(){return 
4294967297!==[].push.call({length:4294967296},1)})||!function(){try{Object.defineProperty([],"length",{writable:!1}).push()}catch(e){return e instanceof TypeError}}()},{push:function(e){var t=o(this),n=i(t),r=arguments.length;l(n+r);for(var s=0;sx-r+n;m--)d(y,m-1)}else if(n>r)for(m=x-r;m>w;m--)b=m+r-1,v=m+n-1,b in y?y[v]=y[b]:d(y,v);for(m=0;mf;)void 0!==(n=o(r,t=c[f++]))&&s(u,t,n);return u}})},27112:function(e,t,n){var r=n(67001),o=n(42112),i=n(29720),a=n(86846),l=n(97826);r({target:"Object",stat:!0,forced:!o||i(function(){a.f(1)})},{getOwnPropertySymbols:function(e){var t=a.f;return t?t(l(e)):[]}})},46568:function(e,t,n){var r=n(67001),o=n(97826),i=n(14844);r({target:"Object",stat:!0,forced:n(29720)(function(){i(1)})},{keys:function(e){return i(o(e))}})},8094:function(){},50186:function(){},14560:function(e,t,n){"use strict";var r=n(66905).charAt,o=n(9755),i=n(64535),a=n(25149),l=n(96398),s="String Iterator",c=i.set,u=i.getterFor(s);a(String,"String",function(e){c(this,{type:s,string:o(e),index:0})},function(){var e,t=u(this),n=t.string,o=t.index;return o>=n.length?l(void 0,!0):(e=r(n,o),t.index+=e.length,l(e,!1))})},83292:function(e,t,n){n(28547)("asyncIterator")},27892:function(e,t,n){"use strict";var r=n(67001),o=n(32604),i=n(24871),a=n(60254),l=n(60411),s=n(83383),c=n(42112),u=n(29720),f=n(2177),d=n(49477),p=n(4152),h=n(69146),g=n(24581),m=n(9755),b=n(88863),v=n(38051),y=n(14844),x=n(62627),w=n(85801),E=n(86846),S=n(6052),k=n(1237),_=n(57685),O=n(59954),C=n(14423),A=n(70866),N=n(28818),R=n(99502),T=n(72291),P=n(45357),M=n(45216),j=n(38197),L=n(28547),I=n(71607),D=n(795),F=n(64535),B=n(48390).forEach,z=R("hidden"),$="Symbol",U="prototype",H=F.set,Z=F.getterFor($),q=Object[U],V=o.Symbol,W=V&&V[U],G=o.TypeError,K=o.QObject,Y=S.f,X=k.f,J=w.f,Q=O.f,ee=a([].push),et=N("symbols"),en=N("op-symbols"),er=N("wks"),eo=!K||!K[U]||!K[U].findChild,ei=s&&u(function(){return 7!=v(X({},"a",{get:function(){return X(this,"a",{value:7}).a}})).a})?function(e,t,n){var r=Y(q,t);r&&delete q[t],X(e,t,n),r&&e!==q&&X(q,t,r)}:X,ea=function(e,t){var n=et[e]=v(W);return H(n,{type:$,tag:e,description:t}),s||(n.description=t),n},el=function(e,t,n){e===q&&el(en,t,n),p(e);var r=g(t);return(p(n),f(et,r))?(n.enumerable?(f(e,z)&&e[z][r]&&(e[z][r]=!1),n=v(n,{enumerable:b(0,!1)})):(f(e,z)||X(e,z,b(1,{})),e[z][r]=!0),ei(e,r,n)):X(e,r,n)},es=function(e,t){p(e);var n=h(t),r=y(n).concat(ed(n));return B(r,function(t){(!s||i(ec,n,t))&&el(e,t,n[t])}),e},ec=function(e){var t=g(e),n=i(Q,this,t);return(!(this===q&&f(et,t))||!!f(en,t))&&(!(n||!f(this,t)||!f(et,t)||f(this,z)&&this[z][t])||n)},eu=function(e,t){var n=h(e),r=g(t);if(!(n===q&&f(et,r))||f(en,r)){var o=Y(n,r);return o&&f(et,r)&&!(f(n,z)&&n[z][r])&&(o.enumerable=!0),o}},ef=function(e){var t=J(h(e)),n=[];return B(t,function(e){f(et,e)||f(T,e)||ee(n,e)}),n},ed=function(e){var t=e===q,n=J(t?en:h(e)),r=[];return B(n,function(e){f(et,e)&&(!t||f(q,e))&&ee(r,et[e])}),r};c||(C(W=(V=function(){if(d(W,this))throw G("Symbol is not a constructor");var e=arguments.length&&void 0!==arguments[0]?m(arguments[0]):void 0,t=P(e),n=function(e){this===q&&i(n,en,e),f(this,z)&&f(this[z],t)&&(this[z][t]=!1),ei(this,t,b(1,e))};return s&&eo&&ei(q,t,{configurable:!0,set:n}),ea(t,e)})[U],"toString",function(){return Z(this).tag}),C(V,"withoutSetter",function(e){return ea(P(e),e)}),O.f=ec,k.f=el,_.f=es,S.f=eu,x.f=w.f=ef,E.f=ed,j.f=function(e){return ea(M(e),e)},s&&(A(W,"description",{configurable:!0,get:function(){return 
Z(this).description}}),l||C(q,"propertyIsEnumerable",ec,{unsafe:!0}))),r({global:!0,constructor:!0,wrap:!0,forced:!c,sham:!c},{Symbol:V}),B(y(er),function(e){L(e)}),r({target:$,stat:!0,forced:!c},{useSetter:function(){eo=!0},useSimple:function(){eo=!1}}),r({target:"Object",stat:!0,forced:!c,sham:!s},{create:function(e,t){return void 0===t?v(e):es(v(e),t)},defineProperty:el,defineProperties:es,getOwnPropertyDescriptor:eu}),r({target:"Object",stat:!0,forced:!c},{getOwnPropertyNames:ef}),I(),D(V,$),T[z]=!0},37375:function(){},89367:function(e,t,n){var r=n(67001),o=n(80875),i=n(2177),a=n(9755),l=n(28818),s=n(34601),c=l("string-to-symbol-registry"),u=l("symbol-to-string-registry");r({target:"Symbol",stat:!0,forced:!s},{for:function(e){var t=a(e);if(i(c,t))return c[t];var n=o("Symbol")(t);return c[t]=n,u[n]=t,n}})},46014:function(e,t,n){n(28547)("hasInstance")},74639:function(e,t,n){n(28547)("isConcatSpreadable")},66612:function(e,t,n){n(28547)("iterator")},60613:function(e,t,n){n(27892),n(89367),n(71574),n(32148),n(27112)},71574:function(e,t,n){var r=n(67001),o=n(2177),i=n(42617),a=n(99525),l=n(28818),s=n(34601),c=l("symbol-to-string-registry");r({target:"Symbol",stat:!0,forced:!s},{keyFor:function(e){if(!i(e))throw TypeError(a(e)+" is not a symbol");if(o(c,e))return c[e]}})},49092:function(e,t,n){n(28547)("matchAll")},81790:function(e,t,n){n(28547)("match")},46176:function(e,t,n){n(28547)("replace")},15821:function(e,t,n){n(28547)("search")},72926:function(e,t,n){n(28547)("species")},77517:function(e,t,n){n(28547)("split")},2978:function(e,t,n){var r=n(28547),o=n(71607);r("toPrimitive"),o()},22828:function(e,t,n){var r=n(80875),o=n(28547),i=n(795);o("toStringTag"),i(r("Symbol"),"Symbol")},74598:function(e,t,n){n(28547)("unscopables")},47708:function(e,t,n){var r=n(45216),o=n(1237).f,i=r("metadata"),a=Function.prototype;void 0===a[i]&&o(a,i,{value:null})},64473:function(e,t,n){n(28547)("asyncDispose")},20551:function(e,t,n){n(28547)("dispose")},43555:function(e,t,n){n(67001)({target:"Symbol",stat:!0},{isRegisteredSymbol:n(96889)})},26782:function(e,t,n){n(67001)({target:"Symbol",stat:!0,name:"isRegisteredSymbol"},{isRegistered:n(96889)})},15894:function(e,t,n){n(67001)({target:"Symbol",stat:!0,forced:!0},{isWellKnownSymbol:n(88822)})},68316:function(e,t,n){n(67001)({target:"Symbol",stat:!0,name:"isWellKnownSymbol",forced:!0},{isWellKnown:n(88822)})},47051:function(e,t,n){n(28547)("matcher")},64869:function(e,t,n){n(28547)("metadataKey")},87118:function(e,t,n){n(28547)("metadata")},95359:function(e,t,n){n(28547)("observable")},99244:function(e,t,n){n(28547)("patternMatch")},95284:function(e,t,n){n(28547)("replaceAll")},4583:function(e,t,n){n(78312);var r=n(68166),o=n(32604),i=n(95980),a=n(7172),l=n(76577),s=n(45216)("toStringTag");for(var c in r){var u=o[c],f=u&&u.prototype;f&&i(f)!==s&&a(f,s,c),l[c]=l.Array}},23929:function(e,t,n){var r=n(67001),o=n(32604),i=n(61432)(o.setInterval,!0);r({global:!0,bind:!0,forced:o.setInterval!==i},{setInterval:i})},31768:function(e,t,n){var r=n(67001),o=n(32604),i=n(61432)(o.setTimeout,!0);r({global:!0,bind:!0,forced:o.setTimeout!==i},{setTimeout:i})},16078:function(e,t,n){n(23929),n(31768)},34142:function(e,t,n){var r=n(30810);e.exports=r},7:function(e,t,n){var r=n(11750);e.exports=r},40981:function(e,t,n){var r=n(79107);e.exports=r},84699:function(e,t,n){var r=n(13217);e.exports=r},65050:function(e,t,n){var r=n(90642);n(4583),e.exports=r},69194:function(e,t,n){var r=n(80197);e.exports=r},59960:function(e,t,n){var 
r=n(65874);e.exports=r},20792:function(e,t,n){n(4583);var r=n(95980),o=n(2177),i=n(49477),a=n(40981),l=Array.prototype,s={DOMTokenList:!0,NodeList:!0};e.exports=function(e){var t=e.forEach;return e===l||i(l,e)&&t===l.forEach||o(s,r(e))?a:t}},45956:function(e,t,n){var r=n(45774);e.exports=r},97434:function(e,t,n){var r=n(21151);e.exports=r},94531:function(e,t,n){var r=n(58616);e.exports=r},16474:function(e,t,n){var r=n(8231);e.exports=r},43631:function(e,t,n){var r=n(36347);e.exports=r},25166:function(e,t,n){var r=n(22030);e.exports=r},2608:function(e,t,n){var r=n(73304);e.exports=r},13782:function(e,t,n){var r=n(8768);e.exports=r},28436:function(e,t,n){var r=n(18312);e.exports=r},58542:function(e,t,n){var r=n(84715);e.exports=r},20736:function(e,t,n){var r=n(23197);e.exports=r},66013:function(e,t,n){n(16078);var r=n(1131);e.exports=r.setInterval},51126:function(e,t,n){n(16078);var r=n(1131);e.exports=r.setTimeout},15587:function(e,t,n){var r=n(26643);n(4583),e.exports=r},51036:function(e,t,n){var r=n(93872);n(4583),e.exports=r},43948:function(e,t,n){var r=n(20610);e.exports=r},5370:function(e,t,n){var r=n(65170),o=n(72386);e.exports=function(e){if(r(e))return e;throw TypeError(o(e)+" is not a function")}},88507:function(e,t,n){"use strict";var r=n(46159).charAt;e.exports=function(e,t,n){return t+(n?r(e,t).length:1)}},24601:function(e,t,n){var r=n(86157);e.exports=function(e){if(r(e))return e;throw TypeError(String(e)+" is not an object")}},55122:function(e,t,n){var r=n(83798),o=n(38791),i=n(93584),a=function(e){return function(t,n,a){var l,s=r(t),c=i(s),u=o(a,c);if(e&&n!=n){for(;c>u;)if((l=s[u++])!=l)return!0}else for(;c>u;u++)if((e||u in s)&&s[u]===n)return e||u||0;return!e&&-1}};e.exports={includes:a(!0),indexOf:a(!1)}},51746:function(e){var t={}.toString;e.exports=function(e){return t.call(e).slice(8,-1)}},63658:function(e,t,n){var r=n(38823),o=n(65170),i=n(51746),a=n(26739)("toStringTag"),l="Arguments"==i(function(){return arguments}()),s=function(e,t){try{return e[t]}catch(e){}};e.exports=r?i:function(e){var t,n,r;return void 0===e?"Undefined":null===e?"Null":"string"==typeof(n=s(t=Object(e),a))?n:l?i(t):"Object"==(r=i(t))&&o(t.callee)?"Arguments":r}},48565:function(e,t,n){var r=n(36984),o=n(15105),i=n(47604),a=n(69128);e.exports=function(e,t){for(var n=o(t),l=a.f,s=i.f,c=0;c=74)&&(r=a.match(/Chrome\/(\d+)/))&&(o=r[1]),e.exports=o&&+o},41780:function(e){e.exports=["constructor","hasOwnProperty","isPrototypeOf","propertyIsEnumerable","toLocaleString","toString","valueOf"]},54271:function(e,t,n){var r=n(92727),o=n(47604).f,i=n(30430),a=n(81980),l=n(4039),s=n(48565),c=n(95160);e.exports=function(e,t){var n,u,f,d,p,h=e.target,g=e.global,m=e.stat;if(n=g?r:m?r[h]||l(h,{}):(r[h]||{}).prototype)for(u in t){if(d=t[u],f=e.noTargetGet?(p=o(n,u))&&p.value:n[u],!c(g?u:h+(m?".":"#")+u,e.forced)&&void 0!==f){if(typeof d==typeof f)continue;s(d,f)}(e.sham||f&&f.sham)&&i(d,"sham",!0),a(n,u,d,e)}}},61531:function(e){e.exports=function(e){try{return!!e()}catch(e){return!0}}},49069:function(e,t,n){"use strict";n(8914);var r=n(81980),o=n(88467),i=n(61531),a=n(26739),l=n(30430),s=a("species"),c=RegExp.prototype;e.exports=function(e,t,n,u){var f=a(e),d=!i(function(){var t={};return t[f]=function(){return 7},7!=""[e](t)}),p=d&&!i(function(){var t=!1,n=/a/;return"split"===e&&((n={}).constructor={},n.constructor[s]=function(){return n},n.flags="",n[f]=/./[f]),n.exec=function(){return t=!0,null},n[f](""),!t});if(!d||!p||n){var h=/./[f],g=t(f,""[e],function(e,t,n,r,i){var a=t.exec;return 
a===o||a===c.exec?d&&!i?{done:!0,value:h.call(t,n,r)}:{done:!0,value:e.call(n,t,r)}:{done:!1}});r(String.prototype,e,g[0]),r(c,f,g[1])}u&&l(c[f],"sham",!0)}},15112:function(e,t,n){var r=n(56667),o=n(36984),i=Function.prototype,a=r&&Object.getOwnPropertyDescriptor,l=o(i,"name"),s=l&&(!r||r&&a(i,"name").configurable);e.exports={EXISTS:l,PROPER:l&&"something"===(function(){}).name,CONFIGURABLE:s}},99604:function(e,t,n){var r=n(92727),o=n(65170);e.exports=function(e,t){var n;return arguments.length<2?o(n=r[e])?n:void 0:r[e]&&r[e][t]}},92567:function(e,t,n){var r=n(5370);e.exports=function(e,t){var n=e[t];return null==n?void 0:r(n)}},9562:function(e,t,n){var r=n(47322),o=Math.floor,i="".replace,a=/\$([$&'`]|\d{1,2}|<[^>]*>)/g,l=/\$([$&'`]|\d{1,2})/g;e.exports=function(e,t,n,s,c,u){var f=n+e.length,d=s.length,p=l;return void 0!==c&&(c=r(c),p=a),i.call(u,p,function(r,i){var a;switch(i.charAt(0)){case"$":return"$";case"&":return e;case"`":return t.slice(0,n);case"'":return t.slice(f);case"<":a=c[i.slice(1,-1)];break;default:var l=+i;if(0===l)return r;if(l>d){var u=o(l/10);if(0===u)return r;if(u<=d)return void 0===s[u-1]?i.charAt(1):s[u-1]+i.charAt(1);return r}a=s[l-1]}return void 0===a?"":a})}},92727:function(e,t,n){var r=function(e){return e&&e.Math==Math&&e};e.exports=r("object"==typeof globalThis&&globalThis)||r("object"==typeof window&&window)||r("object"==typeof self&&self)||r("object"==typeof n.g&&n.g)||function(){return this}()||Function("return this")()},36984:function(e,t,n){var r=n(47322),o={}.hasOwnProperty;e.exports=Object.hasOwn||function(e,t){return o.call(r(e),t)}},90090:function(e){e.exports={}},66294:function(e,t,n){var r=n(99604);e.exports=r("document","documentElement")},50066:function(e,t,n){var r=n(56667),o=n(61531),i=n(88506);e.exports=!r&&!o(function(){return 7!=Object.defineProperty(i("div"),"a",{get:function(){return 7}}).a})},29554:function(e,t,n){var r=n(61531),o=n(51746),i="".split;e.exports=r(function(){return!Object("z").propertyIsEnumerable(0)})?function(e){return"String"==o(e)?i.call(e,""):Object(e)}:Object},12319:function(e,t,n){var r=n(65170),o=n(41679),i=Function.toString;r(o.inspectSource)||(o.inspectSource=function(e){return i.call(e)}),e.exports=o.inspectSource},32784:function(e,t,n){var r,o,i,a=n(74073),l=n(92727),s=n(86157),c=n(30430),u=n(36984),f=n(41679),d=n(28182),p=n(90090),h="Object already initialized",g=l.WeakMap;if(a||f.state){var m=f.state||(f.state=new g),b=m.get,v=m.has,y=m.set;r=function(e,t){if(v.call(m,e))throw TypeError(h);return t.facade=e,y.call(m,e,t),t},o=function(e){return b.call(m,e)||{}},i=function(e){return v.call(m,e)}}else{var x=d("state");p[x]=!0,r=function(e,t){if(u(e,x))throw TypeError(h);return t.facade=e,c(e,x,t),t},o=function(e){return u(e,x)?e[x]:{}},i=function(e){return u(e,x)}}e.exports={set:r,get:o,has:i,enforce:function(e){return i(e)?o(e):r(e,{})},getterFor:function(e){return function(t){var n;if(!s(t)||(n=o(t)).type!==e)throw TypeError("Incompatible receiver, "+e+" required");return n}}}},65170:function(e){e.exports=function(e){return"function"==typeof e}},95160:function(e,t,n){var r=n(61531),o=n(65170),i=/#|\.prototype\./,a=function(e,t){var n=s[l(e)];return n==u||n!=c&&(o(t)?r(t):!!t)},l=a.normalize=function(e){return String(e).replace(i,".").toLowerCase()},s=a.data={},c=a.NATIVE="N",u=a.POLYFILL="P";e.exports=a},86157:function(e,t,n){var r=n(65170);e.exports=function(e){return"object"==typeof e?null!==e:r(e)}},38277:function(e){e.exports=!1},66290:function(e,t,n){var 
r=n(65170),o=n(99604),i=n(78451);e.exports=i?function(e){return"symbol"==typeof e}:function(e){var t=o("Symbol");return r(t)&&Object(e) instanceof t}},93584:function(e,t,n){var r=n(44446);e.exports=function(e){return r(e.length)}},26200:function(e,t,n){var r=n(28583),o=n(61531);e.exports=!!Object.getOwnPropertySymbols&&!o(function(){var e=Symbol();return!String(e)||!(Object(e) instanceof Symbol)||!Symbol.sham&&r&&r<41})},74073:function(e,t,n){var r=n(92727),o=n(65170),i=n(12319),a=r.WeakMap;e.exports=o(a)&&/native code/.test(i(a))},65581:function(e,t,n){var r,o=n(24601),i=n(28587),a=n(41780),l=n(90090),s=n(66294),c=n(88506),u=n(28182),f="prototype",d="script",p=u("IE_PROTO"),h=function(){},g=function(e){return"<"+d+">"+e+""},m=function(e){e.write(g("")),e.close();var t=e.parentWindow.Object;return e=null,t},b=function(){var e,t=c("iframe");return t.style.display="none",s.appendChild(t),t.src=String("java"+d+":"),(e=t.contentWindow.document).open(),e.write(g("document.F=Object")),e.close(),e.F},v=function(){try{r=new ActiveXObject("htmlfile")}catch(e){}v="undefined"!=typeof document?document.domain&&r?m(r):b():m(r);for(var e=a.length;e--;)delete v[f][a[e]];return v()};l[p]=!0,e.exports=Object.create||function(e,t){var n;return null!==e?(h[f]=o(e),n=new h,h[f]=null,n[p]=e):n=v(),void 0===t?n:i(n,t)}},28587:function(e,t,n){var r=n(56667),o=n(69128),i=n(24601),a=n(63835);e.exports=r?Object.defineProperties:function(e,t){i(e);for(var n,r=a(t),l=r.length,s=0;l>s;)o.f(e,n=r[s++],t[n]);return e}},69128:function(e,t,n){var r=n(56667),o=n(50066),i=n(24601),a=n(87892),l=Object.defineProperty;t.f=r?l:function(e,t,n){if(i(e),t=a(t),i(n),o)try{return l(e,t,n)}catch(e){}if("get"in n||"set"in n)throw TypeError("Accessors not supported");return"value"in n&&(e[t]=n.value),e}},47604:function(e,t,n){var r=n(56667),o=n(66681),i=n(49173),a=n(83798),l=n(87892),s=n(36984),c=n(50066),u=Object.getOwnPropertyDescriptor;t.f=r?u:function(e,t){if(e=a(e),t=l(t),c)try{return u(e,t)}catch(e){}if(s(e,t))return i(!o.f.call(e,t),e[t])}},93572:function(e,t,n){var r=n(87535),o=n(41780).concat("length","prototype");t.f=Object.getOwnPropertyNames||function(e){return r(e,o)}},8831:function(e,t){t.f=Object.getOwnPropertySymbols},87535:function(e,t,n){var r=n(36984),o=n(83798),i=n(55122).indexOf,a=n(90090);e.exports=function(e,t){var n,l=o(e),s=0,c=[];for(n in l)!r(a,n)&&r(l,n)&&c.push(n);for(;t.length>s;)r(l,n=t[s++])&&(~i(c,n)||c.push(n));return c}},63835:function(e,t,n){var r=n(87535),o=n(41780);e.exports=Object.keys||function(e){return r(e,o)}},66681:function(e,t){"use strict";var n={}.propertyIsEnumerable,r=Object.getOwnPropertyDescriptor,o=r&&!n.call({1:2},1);t.f=o?function(e){var t=r(this,e);return!!t&&t.enumerable}:n},3211:function(e,t,n){"use strict";var r=n(38823),o=n(63658);e.exports=r?({}).toString:function(){return"[object "+o(this)+"]"}},87450:function(e,t,n){var r=n(65170),o=n(86157);e.exports=function(e,t){var n,i;if("string"===t&&r(n=e.toString)&&!o(i=n.call(e))||r(n=e.valueOf)&&!o(i=n.call(e))||"string"!==t&&r(n=e.toString)&&!o(i=n.call(e)))return i;throw TypeError("Can't convert object to primitive value")}},15105:function(e,t,n){var r=n(99604),o=n(93572),i=n(8831),a=n(24601);e.exports=r("Reflect","ownKeys")||function(e){var t=o.f(a(e)),n=i.f;return n?t.concat(n(e)):t}},81980:function(e,t,n){var r=n(92727),o=n(65170),i=n(36984),a=n(30430),l=n(4039),s=n(12319),c=n(32784),u=n(15112).CONFIGURABLE,f=c.get,d=c.enforce,p=String(String).split("String");(e.exports=function(e,t,n,s){var 
c,f=!!s&&!!s.unsafe,h=!!s&&!!s.enumerable,g=!!s&&!!s.noTargetGet,m=s&&void 0!==s.name?s.name:t;if(o(n)&&("Symbol("===String(m).slice(0,7)&&(m="["+String(m).replace(/^Symbol\(([^)]*)\)/,"$1")+"]"),(!i(n,"name")||u&&n.name!==m)&&a(n,"name",m),(c=d(n)).source||(c.source=p.join("string"==typeof m?m:""))),e===r){h?e[t]=n:l(t,n);return}f?!g&&e[t]&&(h=!0):delete e[t],h?e[t]=n:a(e,t,n)})(Function.prototype,"toString",function(){return o(this)&&f(this).source||s(this)})},49583:function(e,t,n){var r=n(24601),o=n(65170),i=n(51746),a=n(88467);e.exports=function(e,t){var n=e.exec;if(o(n)){var l=n.call(e,t);return null!==l&&r(l),l}if("RegExp"===i(e))return a.call(e,t);throw TypeError("RegExp#exec called on incompatible receiver")}},88467:function(e,t,n){"use strict";var r,o,i=n(93542),a=n(54181),l=n(51591),s=n(25396),c=n(65581),u=n(32784).get,f=n(80155),d=n(4023),p=RegExp.prototype.exec,h=s("native-string-replace",String.prototype.replace),g=p,m=(r=/a/,o=/b*/g,p.call(r,"a"),p.call(o,"a"),0!==r.lastIndex||0!==o.lastIndex),b=l.UNSUPPORTED_Y||l.BROKEN_CARET,v=void 0!==/()??/.exec("")[1];(m||v||b||f||d)&&(g=function(e){var t,n,r,o,l,s,f,d=u(this),y=i(e),x=d.raw;if(x)return x.lastIndex=this.lastIndex,t=g.call(x,y),this.lastIndex=x.lastIndex,t;var w=d.groups,E=b&&this.sticky,S=a.call(this),k=this.source,_=0,O=y;if(E&&(-1===(S=S.replace("y","")).indexOf("g")&&(S+="g"),O=y.slice(this.lastIndex),this.lastIndex>0&&(!this.multiline||this.multiline&&"\n"!==y.charAt(this.lastIndex-1))&&(k="(?: "+k+")",O=" "+O,_++),n=RegExp("^(?:"+k+")",S)),v&&(n=RegExp("^"+k+"$(?!\\s)",S)),m&&(r=this.lastIndex),o=p.call(E?n:this,O),E?o?(o.input=o.input.slice(_),o[0]=o[0].slice(_),o.index=this.lastIndex,this.lastIndex+=o[0].length):this.lastIndex=0:m&&o&&(this.lastIndex=this.global?o.index+o[0].length:r),v&&o&&o.length>1&&h.call(o[0],n,function(){for(l=1;lb)","g");return"b"!==e.exec("b").groups.a||"bc"!=="b".replace(e,"$c")})},96884:function(e){e.exports=function(e){if(void 0==e)throw TypeError("Can't call method on "+e);return e}},4039:function(e,t,n){var r=n(92727);e.exports=function(e,t){try{Object.defineProperty(r,e,{value:t,configurable:!0,writable:!0})}catch(n){r[e]=t}return t}},28182:function(e,t,n){var r=n(25396),o=n(68176),i=r("keys");e.exports=function(e){return i[e]||(i[e]=o(e))}},41679:function(e,t,n){var r=n(92727),o=n(4039),i="__core-js_shared__",a=r[i]||o(i,{});e.exports=a},25396:function(e,t,n){var r=n(38277),o=n(41679);(e.exports=function(e,t){return o[e]||(o[e]=void 0!==t?t:{})})("versions",[]).push({version:"3.18.3",mode:r?"pure":"global",copyright:"\xa9 2021 Denis Pushkarev (zloirock.ru)"})},46159:function(e,t,n){var r=n(48946),o=n(93542),i=n(96884),a=function(e){return function(t,n){var a,l,s=o(i(t)),c=r(n),u=s.length;return c<0||c>=u?e?"":void 0:(a=s.charCodeAt(c))<55296||a>56319||c+1===u||(l=s.charCodeAt(c+1))<56320||l>57343?e?s.charAt(c):a:e?s.slice(c,c+2):(a-55296<<10)+(l-56320)+65536}};e.exports={codeAt:a(!1),charAt:a(!0)}},38791:function(e,t,n){var r=n(48946),o=Math.max,i=Math.min;e.exports=function(e,t){var n=r(e);return n<0?o(n+t,0):i(n,t)}},83798:function(e,t,n){var r=n(29554),o=n(96884);e.exports=function(e){return r(o(e))}},48946:function(e){var t=Math.ceil,n=Math.floor;e.exports=function(e){var r=+e;return r!=r||0===r?0:(r>0?n:t)(r)}},44446:function(e,t,n){var r=n(48946),o=Math.min;e.exports=function(e){return e>0?o(r(e),9007199254740991):0}},47322:function(e,t,n){var r=n(96884);e.exports=function(e){return Object(r(e))}},67256:function(e,t,n){var 
r=n(86157),o=n(66290),i=n(92567),a=n(87450),l=n(26739)("toPrimitive");e.exports=function(e,t){if(!r(e)||o(e))return e;var n,s=i(e,l);if(s){if(void 0===t&&(t="default"),!r(n=s.call(e,t))||o(n))return n;throw TypeError("Can't convert object to primitive value")}return void 0===t&&(t="number"),a(e,t)}},87892:function(e,t,n){var r=n(67256),o=n(66290);e.exports=function(e){var t=r(e,"string");return o(t)?t:String(t)}},38823:function(e,t,n){var r=n(26739)("toStringTag"),o={};o[r]="z",e.exports="[object z]"===String(o)},93542:function(e,t,n){var r=n(63658);e.exports=function(e){if("Symbol"===r(e))throw TypeError("Cannot convert a Symbol value to a string");return String(e)}},72386:function(e){e.exports=function(e){try{return String(e)}catch(e){return"Object"}}},68176:function(e){var t=0,n=Math.random();e.exports=function(e){return"Symbol("+String(void 0===e?"":e)+")_"+(++t+n).toString(36)}},78451:function(e,t,n){var r=n(26200);e.exports=r&&!Symbol.sham&&"symbol"==typeof Symbol.iterator},26739:function(e,t,n){var r=n(92727),o=n(25396),i=n(36984),a=n(68176),l=n(26200),s=n(78451),c=o("wks"),u=r.Symbol,f=s?u:u&&u.withoutSetter||a;e.exports=function(e){return i(c,e)&&(l||"string"==typeof c[e])||(l&&i(u,e)?c[e]=u[e]:c[e]=f("Symbol."+e)),c[e]}},74009:function(e,t,n){var r=n(81980),o=Date.prototype,i="Invalid Date",a="toString",l=o[a],s=o.getTime;String(new Date(NaN))!=i&&r(o,a,function(){var e=s.call(this);return e==e?l.call(this):i})},95761:function(e,t,n){var r=n(56667),o=n(15112).EXISTS,i=n(69128).f,a=Function.prototype,l=a.toString,s=/^\s*function ([^ (]*)/;r&&!o&&i(a,"name",{configurable:!0,get:function(){try{return l.call(this).match(s)[1]}catch(e){return""}}})},93935:function(e,t,n){var r=n(38823),o=n(81980),i=n(3211);r||o(Object.prototype,"toString",i,{unsafe:!0})},8914:function(e,t,n){"use strict";var r=n(54271),o=n(88467);r({target:"RegExp",proto:!0,forced:/./.exec!==o},{exec:o})},8016:function(e,t,n){"use strict";var r=n(15112).PROPER,o=n(81980),i=n(24601),a=n(93542),l=n(61531),s=n(54181),c="toString",u=RegExp.prototype,f=u[c],d=l(function(){return"/a/b"!=f.call({source:"a",flags:"b"})}),p=r&&f.name!=c;(d||p)&&o(RegExp.prototype,c,function(){var e=i(this),t=a(e.source),n=e.flags;return"/"+t+"/"+a(void 0===n&&e instanceof RegExp&&!("flags"in u)?s.call(e):n)},{unsafe:!0})},45684:function(e,t,n){"use strict";var r=n(49069),o=n(61531),i=n(24601),a=n(65170),l=n(48946),s=n(44446),c=n(93542),u=n(96884),f=n(88507),d=n(92567),p=n(9562),h=n(49583),g=n(26739)("replace"),m=Math.max,b=Math.min,v="$0"==="a".replace(/./,"$0"),y=!!/./[g]&&""===/./[g]("a","$0");r("replace",function(e,t,n){var r=y?"$":"$0";return[function(e,n){var r=u(this),o=void 0==e?void 0:d(e,g);return o?o.call(e,r,n):t.call(c(r),e,n)},function(e,o){var u=i(this),d=c(e);if("string"==typeof o&&-1===o.indexOf(r)&&-1===o.indexOf("$<")){var g=n(t,u,d,o);if(g.done)return g.value}var v=a(o);v||(o=c(o));var y=u.global;if(y){var x=u.unicode;u.lastIndex=0}for(var w=[];;){var E=h(u,d);if(null===E||(w.push(E),!y))break;""===c(E[0])&&(u.lastIndex=f(d,s(u.lastIndex),x))}for(var S="",k=0,_=0;_=k&&(S+=d.slice(k,A)+M,k=A+C.length)}return S+d.slice(k)}]},!!o(function(){var e=/./;return e.exec=function(){var e=[];return e.groups={a:"7"},e},"7"!=="".replace(e,"$")})||!v||y)},68274:function(e,t,n){let r;var o=n(52040);t.formatArgs=function(t){if(t[0]=(this.useColors?"%c":"")+this.namespace+(this.useColors?" 
%c":" ")+t[0]+(this.useColors?"%c ":" ")+"+"+e.exports.humanize(this.diff),!this.useColors)return;let n="color: "+this.color;t.splice(1,0,n,"color: inherit");let r=0,o=0;t[0].replace(/%[a-zA-Z%]/g,e=>{"%%"!==e&&(r++,"%c"===e&&(o=r))}),t.splice(o,0,n)},t.save=function(e){try{e?t.storage.setItem("debug",e):t.storage.removeItem("debug")}catch(e){}},t.load=function(){let e;try{e=t.storage.getItem("debug")}catch(e){}return!e&&void 0!==o&&"env"in o&&(e=o.env.DEBUG),e},t.useColors=function(){return"undefined"!=typeof window&&!!window.process&&("renderer"===window.process.type||!!window.process.__nwjs)||!("undefined"!=typeof navigator&&navigator.userAgent&&navigator.userAgent.toLowerCase().match(/(edge|trident)\/(\d+)/))&&("undefined"!=typeof document&&document.documentElement&&document.documentElement.style&&document.documentElement.style.WebkitAppearance||"undefined"!=typeof window&&window.console&&(window.console.firebug||window.console.exception&&window.console.table)||"undefined"!=typeof navigator&&navigator.userAgent&&navigator.userAgent.toLowerCase().match(/firefox\/(\d+)/)&&parseInt(RegExp.$1,10)>=31||"undefined"!=typeof navigator&&navigator.userAgent&&navigator.userAgent.toLowerCase().match(/applewebkit\/(\d+)/))},t.storage=function(){try{return localStorage}catch(e){}}(),t.destroy=(r=!1,()=>{r||(r=!0,console.warn("Instance method `debug.destroy()` is deprecated and no longer does anything. It will be removed in the next major version of `debug`."))}),t.colors=["#0000CC","#0000FF","#0033CC","#0033FF","#0066CC","#0066FF","#0099CC","#0099FF","#00CC00","#00CC33","#00CC66","#00CC99","#00CCCC","#00CCFF","#3300CC","#3300FF","#3333CC","#3333FF","#3366CC","#3366FF","#3399CC","#3399FF","#33CC00","#33CC33","#33CC66","#33CC99","#33CCCC","#33CCFF","#6600CC","#6600FF","#6633CC","#6633FF","#66CC00","#66CC33","#9900CC","#9900FF","#9933CC","#9933FF","#99CC00","#99CC33","#CC0000","#CC0033","#CC0066","#CC0099","#CC00CC","#CC00FF","#CC3300","#CC3333","#CC3366","#CC3399","#CC33CC","#CC33FF","#CC6600","#CC6633","#CC9900","#CC9933","#CCCC00","#CCCC33","#FF0000","#FF0033","#FF0066","#FF0099","#FF00CC","#FF00FF","#FF3300","#FF3333","#FF3366","#FF3399","#FF33CC","#FF33FF","#FF6600","#FF6633","#FF9900","#FF9933","#FFCC00","#FFCC33"],t.log=console.debug||console.log||(()=>{}),e.exports=n(31765)(t);let{formatters:i}=e.exports;i.j=function(e){try{return JSON.stringify(e)}catch(e){return"[UnexpectedJSONParseError]: "+e.message}}},31765:function(e,t,n){e.exports=function(e){function t(e){let n,o,i;let a=null;function l(...e){if(!l.enabled)return;let r=Number(new Date),o=r-(n||r);l.diff=o,l.prev=n,l.curr=r,n=r,e[0]=t.coerce(e[0]),"string"!=typeof e[0]&&e.unshift("%O");let i=0;e[0]=e[0].replace(/%([a-zA-Z%])/g,(n,r)=>{if("%%"===n)return"%";i++;let o=t.formatters[r];if("function"==typeof o){let t=e[i];n=o.call(l,t),e.splice(i,1),i--}return n}),t.formatArgs.call(l,e);let a=l.log||t.log;a.apply(l,e)}return l.namespace=e,l.useColors=t.useColors(),l.color=t.selectColor(e),l.extend=r,l.destroy=t.destroy,Object.defineProperty(l,"enabled",{enumerable:!0,configurable:!1,get:()=>null!==a?a:(o!==t.namespaces&&(o=t.namespaces,i=t.enabled(e)),i),set:e=>{a=e}}),"function"==typeof t.init&&t.init(l),l}function r(e,n){let r=t(this.namespace+(void 0===n?":":n)+e);return r.log=this.log,r}function o(e){return e.toString().substring(2,e.toString().length-2).replace(/\.\*\?$/,"*")}return t.debug=t,t.default=t,t.coerce=function(e){return e instanceof Error?e.stack||e.message:e},t.disable=function(){let 
e=[...t.names.map(o),...t.skips.map(o).map(e=>"-"+e)].join(",");return t.enable(""),e},t.enable=function(e){let n;t.save(e),t.namespaces=e,t.names=[],t.skips=[];let r=("string"==typeof e?e:"").split(/[\s,]+/),o=r.length;for(n=0;n{t[n]=e[n]}),t.names=[],t.skips=[],t.formatters={},t.selectColor=function(e){let n=0;for(let t=0;t0?parseInt(n):null}(),t){case"b":c+=parseInt(d(),10).toString(2);break;case"c":"string"==typeof(n=d())||n instanceof String?c+=n:c+=String.fromCharCode(parseInt(n,10));break;case"d":c+=parseInt(d(),10);break;case"f":r=String(parseFloat(d()).toFixed(o||6)),c+=f?r:r.replace(/^0/,"");break;case"j":c+=JSON.stringify(d());break;case"o":c+="0"+parseInt(d(),10).toString(8);break;case"s":c+=d();break;case"x":c+="0x"+parseInt(d(),10).toString(16);break;case"X":c+="0x"+parseInt(d(),10).toString(16).toUpperCase();break;default:c+=t}else"%"===t?u=!0:c+=t;return c}(t=e.exports=n).format=n,t.vsprintf=function(e,t){return n.apply(null,[e].concat(t))},"undefined"!=typeof console&&"function"==typeof console.log&&(t.printf=function(){console.log(n.apply(null,arguments))})}()},10184:function(e,t,n){"use strict";function r(e){return Array.isArray?Array.isArray(e):"[object Array]"===u(e)}n.d(t,{Z:function(){return q}});let o=1/0;function i(e){return"string"==typeof e}function a(e){return"number"==typeof e}function l(e){return"object"==typeof e}function s(e){return null!=e}function c(e){return!e.trim().length}function u(e){return null==e?void 0===e?"[object Undefined]":"[object Null]":Object.prototype.toString.call(e)}let f=e=>`Invalid value for key ${e}`,d=e=>`Pattern length exceeds max of ${e}.`,p=e=>`Missing ${e} property in key`,h=e=>`Property 'weight' in key '${e}' must be a positive integer`,g=Object.prototype.hasOwnProperty;class m{constructor(e){this._keys=[],this._keyMap={};let t=0;e.forEach(e=>{let n=b(e);t+=n.weight,this._keys.push(n),this._keyMap[n.id]=n,t+=n.weight}),this._keys.forEach(e=>{e.weight/=t})}get(e){return this._keyMap[e]}keys(){return this._keys}toJSON(){return JSON.stringify(this._keys)}}function b(e){let t=null,n=null,o=null,a=1,l=null;if(i(e)||r(e))o=e,t=v(e),n=y(e);else{if(!g.call(e,"name"))throw Error(p("name"));let r=e.name;if(o=r,g.call(e,"weight")&&(a=e.weight)<=0)throw Error(h(r));t=v(r),n=y(r),l=e.getFn}return{path:t,id:n,weight:a,src:o,getFn:l}}function v(e){return r(e)?e:e.split(".")}function y(e){return r(e)?e.join("."):e}var x={isCaseSensitive:!1,includeScore:!1,keys:[],shouldSort:!0,sortFn:(e,t)=>e.score===t.score?e.idx{if(s(e)){if(t[d]){var p,h;let g=t[d],m=e[g];if(s(m)){if(d===t.length-1&&(i(m)||a(m)||!0===(p=m)||!1===p||l(h=p)&&null!==h&&"[object Boolean]"==u(p)))n.push(null==m?"":function(e){if("string"==typeof e)return e;let t=e+"";return"0"==t&&1/e==-o?"-0":t}(m));else if(r(m)){c=!0;for(let e=0,n=m.length;e{this._keysMap[e.id]=t})}create(){!this.isCreated&&this.docs.length&&(this.isCreated=!0,i(this.docs[0])?this.docs.forEach((e,t)=>{this._addString(e,t)}):this.docs.forEach((e,t)=>{this._addObject(e,t)}),this.norm.clear())}add(e){let t=this.size();i(e)?this._addString(e,t):this._addObject(e,t)}removeAt(e){this.records.splice(e,1);for(let t=e,n=this.size();t{let a=t.getFn?t.getFn(e):this.getFn(e,t.path);if(s(a)){if(r(a)){let e=[],t=[{nestedArrIndex:-1,value:a}];for(;t.length;){let{nestedArrIndex:n,value:o}=t.pop();if(s(o)){if(i(o)&&!c(o)){let t={v:o,i:n,n:this.norm.get(o)};e.push(t)}else r(o)&&o.forEach((e,n)=>{t.push({nestedArrIndex:n,value:e})})}}n.$[o]=e}else if(i(a)&&!c(a)){let 
e={v:a,n:this.norm.get(a)};n.$[o]=e}}}),this.records.push(n)}toJSON(){return{keys:this.keys,records:this.records}}}function S(e,t,{getFn:n=x.getFn,fieldNormWeight:r=x.fieldNormWeight}={}){let o=new E({getFn:n,fieldNormWeight:r});return o.setKeys(e.map(b)),o.setSources(t),o.create(),o}function k(e,{errors:t=0,currentLocation:n=0,expectedLocation:r=0,distance:o=x.distance,ignoreLocation:i=x.ignoreLocation}={}){let a=t/e.length;if(i)return a;let l=Math.abs(r-n);return o?a+l/o:l?1:a}class _{constructor(e,{location:t=x.location,threshold:n=x.threshold,distance:r=x.distance,includeMatches:o=x.includeMatches,findAllMatches:i=x.findAllMatches,minMatchCharLength:a=x.minMatchCharLength,isCaseSensitive:l=x.isCaseSensitive,ignoreLocation:s=x.ignoreLocation}={}){if(this.options={location:t,threshold:n,distance:r,includeMatches:o,findAllMatches:i,minMatchCharLength:a,isCaseSensitive:l,ignoreLocation:s},this.pattern=l?e:e.toLowerCase(),this.chunks=[],!this.pattern.length)return;let c=(e,t)=>{this.chunks.push({pattern:e,alphabet:function(e){let t={};for(let n=0,r=e.length;n32){let e=0,t=u%32,n=u-t;for(;e{let{isMatch:g,score:m,indices:b}=function(e,t,n,{location:r=x.location,distance:o=x.distance,threshold:i=x.threshold,findAllMatches:a=x.findAllMatches,minMatchCharLength:l=x.minMatchCharLength,includeMatches:s=x.includeMatches,ignoreLocation:c=x.ignoreLocation}={}){let u;if(t.length>32)throw Error(d(32));let f=t.length,p=e.length,h=Math.max(0,Math.min(r,p)),g=i,m=h,b=l>1||s,v=b?Array(p):[];for(;(u=e.indexOf(t,m))>-1;)if(g=Math.min(k(t,{currentLocation:u,expectedLocation:h,distance:o,ignoreLocation:c}),g),m=u+f,b){let e=0;for(;e=s;i-=1){let a=i-1,l=n[e.charAt(a)];if(b&&(v[a]=+!!l),d[i]=(d[i+1]<<1|1)&l,r&&(d[i]|=(y[i+1]|y[i])<<1|1|y[i+1]),d[i]&S&&(w=k(t,{errors:r,currentLocation:a,expectedLocation:h,distance:o,ignoreLocation:c}))<=g){if(g=w,(m=a)<=h)break;s=Math.max(1,2*h-m)}}let x=k(t,{errors:r+1,currentLocation:h,expectedLocation:h,distance:o,ignoreLocation:c});if(x>g)break;y=d}let _={isMatch:m>=0,score:Math.max(.001,w)};if(b){let e=function(e=[],t=x.minMatchCharLength){let n=[],r=-1,o=-1,i=0;for(let a=e.length;i=t&&n.push([r,o]),r=-1)}return e[i-1]&&i-r>=t&&n.push([r,i-1]),n}(v,l);e.length?s&&(_.indices=e):_.isMatch=!1}return _}(e,t,p,{location:r+h,distance:o,threshold:i,findAllMatches:a,minMatchCharLength:l,includeMatches:n,ignoreLocation:s});g&&(f=!0),u+=m,g&&b&&(c=[...c,...b])});let p={isMatch:f,score:f?u/this.chunks.length:1};return f&&n&&(p.indices=c),p}}class O{constructor(e){this.pattern=e}static isMultiMatch(e){return C(e,this.multiRegex)}static isSingleMatch(e){return C(e,this.singleRegex)}search(){}}function C(e,t){let n=e.match(t);return n?n[1]:null}class A extends O{constructor(e,{location:t=x.location,threshold:n=x.threshold,distance:r=x.distance,includeMatches:o=x.includeMatches,findAllMatches:i=x.findAllMatches,minMatchCharLength:a=x.minMatchCharLength,isCaseSensitive:l=x.isCaseSensitive,ignoreLocation:s=x.ignoreLocation}={}){super(e),this._bitapSearch=new _(e,{location:t,threshold:n,distance:r,includeMatches:o,findAllMatches:i,minMatchCharLength:a,isCaseSensitive:l,ignoreLocation:s})}static get type(){return"fuzzy"}static get multiRegex(){return/^"(.*)"$/}static get singleRegex(){return/^(.*)$/}search(e){return this._bitapSearch.searchIn(e)}}class N extends O{constructor(e){super(e)}static get type(){return"include"}static get multiRegex(){return/^'"(.*)"$/}static get singleRegex(){return/^'(.*)$/}search(e){let 
t,n=0,r=[],o=this.pattern.length;for(;(t=e.indexOf(this.pattern,n))>-1;)n=t+o,r.push([t,n-1]);let i=!!r.length;return{isMatch:i,score:i?0:1,indices:r}}}let R=[class extends O{constructor(e){super(e)}static get type(){return"exact"}static get multiRegex(){return/^="(.*)"$/}static get singleRegex(){return/^=(.*)$/}search(e){let t=e===this.pattern;return{isMatch:t,score:t?0:1,indices:[0,this.pattern.length-1]}}},N,class extends O{constructor(e){super(e)}static get type(){return"prefix-exact"}static get multiRegex(){return/^\^"(.*)"$/}static get singleRegex(){return/^\^(.*)$/}search(e){let t=e.startsWith(this.pattern);return{isMatch:t,score:t?0:1,indices:[0,this.pattern.length-1]}}},class extends O{constructor(e){super(e)}static get type(){return"inverse-prefix-exact"}static get multiRegex(){return/^!\^"(.*)"$/}static get singleRegex(){return/^!\^(.*)$/}search(e){let t=!e.startsWith(this.pattern);return{isMatch:t,score:t?0:1,indices:[0,e.length-1]}}},class extends O{constructor(e){super(e)}static get type(){return"inverse-suffix-exact"}static get multiRegex(){return/^!"(.*)"\$$/}static get singleRegex(){return/^!(.*)\$$/}search(e){let t=!e.endsWith(this.pattern);return{isMatch:t,score:t?0:1,indices:[0,e.length-1]}}},class extends O{constructor(e){super(e)}static get type(){return"suffix-exact"}static get multiRegex(){return/^"(.*)"\$$/}static get singleRegex(){return/^(.*)\$$/}search(e){let t=e.endsWith(this.pattern);return{isMatch:t,score:t?0:1,indices:[e.length-this.pattern.length,e.length-1]}}},class extends O{constructor(e){super(e)}static get type(){return"inverse-exact"}static get multiRegex(){return/^!"(.*)"$/}static get singleRegex(){return/^!(.*)$/}search(e){let t=e.indexOf(this.pattern),n=-1===t;return{isMatch:n,score:n?0:1,indices:[0,e.length-1]}}},A],T=R.length,P=/ +(?=(?:[^\"]*\"[^\"]*\")*[^\"]*$)/,M=new Set([A.type,N.type]),j=[];function L(e,t){for(let n=0,r=j.length;n!!(e[I.AND]||e[I.OR]),B=e=>!!e[D.PATH],z=e=>!r(e)&&l(e)&&!F(e),$=e=>({[I.AND]:Object.keys(e).map(t=>({[t]:e[t]}))});function U(e,t,{auto:n=!0}={}){let o=e=>{let a=Object.keys(e),l=B(e);if(!l&&a.length>1&&!F(e))return o($(e));if(z(e)){let r=l?e[D.PATH]:a[0],o=l?e[D.PATTERN]:e[r];if(!i(o))throw Error(f(r));let s={keyId:y(r),pattern:o};return n&&(s.searcher=L(o,t)),s}let s={children:[],operator:a[0]};return a.forEach(t=>{let n=e[t];r(n)&&n.forEach(e=>{s.children.push(o(e))})}),s};return F(e)||(e=$(e)),o(e)}function H(e,t){let n=e.matches;t.matches=[],s(n)&&n.forEach(e=>{if(!s(e.indices)||!e.indices.length)return;let{indices:n,value:r}=e,o={indices:n,value:r};e.key&&(o.key=e.key.src),e.idx>-1&&(o.refIndex=e.idx),t.matches.push(o)})}function Z(e,t){t.score=e.score}class q{constructor(e,t={},n){this.options={...x,...t},this.options.useExtendedSearch,this._keyStore=new m(this.options.keys),this.setCollection(e,n)}setCollection(e,t){if(this._docs=e,t&&!(t instanceof E))throw Error("Incorrect 'index' type");this._myIndex=t||S(this.options.keys,this._docs,{getFn:this.options.getFn,fieldNormWeight:this.options.fieldNormWeight})}add(e){s(e)&&(this._docs.push(e),this._myIndex.add(e))}remove(e=()=>!1){let t=[];for(let n=0,r=this._docs.length;n{let n=1;e.matches.forEach(({key:e,norm:r,score:o})=>{let i=e?e.weight:null;n*=Math.pow(0===o&&i?Number.EPSILON:o,(i||1)*(t?1:r))}),e.score=n})}(c,{ignoreFieldNorm:s}),o&&c.sort(l),a(t)&&t>-1&&(c=c.slice(0,t)),function(e,t,{includeMatches:n=x.includeMatches,includeScore:r=x.includeScore}={}){let o=[];return n&&o.push(H),r&&o.push(Z),e.map(e=>{let{idx:n}=e,r={item:t[n],refIndex:n};return 
o.length&&o.forEach(t=>{t(e,r)}),r})}(c,this._docs,{includeMatches:n,includeScore:r})}_searchStringList(e){let t=L(e,this.options),{records:n}=this._myIndex,r=[];return n.forEach(({v:e,i:n,n:o})=>{if(!s(e))return;let{isMatch:i,score:a,indices:l}=t.searchIn(e);i&&r.push({item:e,idx:n,matches:[{score:a,value:e,norm:o,indices:l}]})}),r}_searchLogical(e){let t=U(e,this.options),n=(e,t,r)=>{if(!e.children){let{keyId:n,searcher:o}=e,i=this._findMatches({key:this._keyStore.get(n),value:this._myIndex.getValueForItemAtKeyId(t,n),searcher:o});return i&&i.length?[{idx:r,item:t,matches:i}]:[]}let o=[];for(let i=0,a=e.children.length;i{if(s(e)){let a=n(t,e,r);a.length&&(o[r]||(o[r]={idx:r,item:e,matches:[]},i.push(o[r])),a.forEach(({matches:e})=>{o[r].matches.push(...e)}))}}),i}_searchObjectList(e){let t=L(e,this.options),{keys:n,records:r}=this._myIndex,o=[];return r.forEach(({$:e,i:r})=>{if(!s(e))return;let i=[];n.forEach((n,r)=>{i.push(...this._findMatches({key:n,value:e[r],searcher:t}))}),i.length&&o.push({idx:r,item:e,matches:i})}),o}_findMatches({key:e,value:t,searcher:n}){if(!s(t))return[];let o=[];if(r(t))t.forEach(({v:t,i:r,n:i})=>{if(!s(t))return;let{isMatch:a,score:l,indices:c}=n.searchIn(t);a&&o.push({score:l,key:e,value:t,idx:r,norm:i,indices:c})});else{let{v:r,n:i}=t,{isMatch:a,score:l,indices:s}=n.searchIn(r);a&&o.push({score:l,key:e,value:r,norm:i,indices:s})}return o}}q.version="6.6.2",q.createIndex=S,q.parseIndex=function(e,{getFn:t=x.getFn,fieldNormWeight:n=x.fieldNormWeight}={}){let{keys:r,records:o}=e,i=new E({getFn:t,fieldNormWeight:n});return i.setKeys(r),i.setIndexRecords(o),i},q.config=x,q.parseQuery=U,function(...e){j.push(...e)}(class{constructor(e,{isCaseSensitive:t=x.isCaseSensitive,includeMatches:n=x.includeMatches,minMatchCharLength:r=x.minMatchCharLength,ignoreLocation:o=x.ignoreLocation,findAllMatches:i=x.findAllMatches,location:a=x.location,threshold:l=x.threshold,distance:s=x.distance}={}){this.query=null,this.options={isCaseSensitive:t,includeMatches:n,minMatchCharLength:r,findAllMatches:i,ignoreLocation:o,location:a,threshold:l,distance:s},this.pattern=t?e:e.toLowerCase(),this.query=function(e,t={}){return e.split("|").map(e=>{let n=e.trim().split(P).filter(e=>e&&!!e.trim()),r=[];for(let e=0,o=n.length;eMath.random().toString(36).substring(2);let f=new i.default({rules:{emphasis:{filter:["br"],replacement:()=>"\n"}}}),d=e=>Array.isArray(e)?d(e.at(-1)):e?.value||e,p=(e,t,n)=>{let r=e.find(e=>"button"===e.type&&"Submit"===e.props.value)?.id,o=t.findIndex(e=>e.targets?.includes?.(r));return -1===o?t.findIndex((e={})=>e.inputs?.length&&e.outputs?.length&&e.backend_fn&&e.trigger===n):o},h=(e,t)=>{let n=e.find(e=>"button"===e.type)?.id;return n?t.findIndex(e=>e.targets?.includes?.(n)):-1};t.GradioChatBot=class{options;history=[];session_hash;instance_map;constructor(e="0"){if("string"==typeof e?this.options={url:e}:this.options=e,(0,l.default)(this.options.endpoint||this.options.url,"endpoint and url must specify one of them"),!isNaN(this.options.url)){let e=parseInt(this.options.url,10);(0,l.default)(e{let{components:r,dependencies:o}=t,i=o[e],a=i?.inputs.map(e=>this.instance_map[e].props.value);u("fnIndex",e);let s=n?0:i?.inputs.indexOf(i?.targets?.[0]);return s<0&&(s=i?.inputs.findIndex(e=>r?.find(t=>e===t.id&&("textbox"===t.type||t.example_input)))),(0,l.default)(s>-1,"Cannot find the input box"),u("inputIndex",s),[a,s]};html2Markdown(e){return e=this.options.parseHtml?f.turndown(e||""):e,e?.replace?.(/�/g,"").trim()}async 
reset(){this.history=[],this.instance_map=null,this.session_hash=(0,t.generateHash)()}async chat(e,t){return(0,l.default)(e,"input can't be empty!"),new Promise(async(n,o)=>{try{let{endpoint:i,fnIndex:a,args:c=[],hf_token:f}=this.options,d=await (0,s.client)(i,{session_hash:this.session_hash,hf_token:f,normalise_files:!0}),{components:g,dependencies:m}=d.config,b=this.instance_map;b||(b=g.reduce((e,t)=>(e[t.id]=t,e),{}),this.instance_map=b),(a=a??p(g,m,"submit"))<0&&(a=Math.max(h(g,m),p(g,m,"click"))),(0,l.default)(-1!==a,"Failed to parse this space, you may need to specify the fnIndex manually!");let[v,y]=this.parseInputs(a,d.config);c?.length||(c=v);let x=this.options.inputIndex??y;x>-1&&(c[x]=e),u("args",a,JSON.stringify(c));let w=[],E=-1,S=[],k=/^'([^]+)'$/,_=new Map,O=(e,n)=>{let o=m[n].outputs;e?.forEach((e,n)=>{let i=b[o[n]];if(i.props.value_is_output=!0,"object"==typeof e&&null!==e&&"update"===e.__type__)for(let[t,n]of Object.entries(e))"__type__"!==t&&(i.props[t]=n);else if(i.props.value=e,r.env.DEBUG&&u("value",i.type,JSON.stringify(e)),"chatbot"===i.type&&e){this.history=e.slice(-this.options.historySize),i.props.value=this.history;let n=e?.at(-1)?.at(-1);t?.onMessage?.(this.html2Markdown(n))}})},C=async(e,r=null,i=null)=>{let a=m[e],l=w[e];if(S=S.filter(({fn_index:t})=>t!==e),a.cancels&&await Promise.all(a.cancels.map(async e=>{let t=_.get(e);return t?.cancel(),t})),"pending"===l||"generating"===l)return;let s={fn_index:e,data:r||a.inputs.map(e=>b[e].props.value),event_data:a.collects_event_data?i:null},c=()=>{let r=d.submit(s.fn_index,s.data,s.event_data).on("data",({data:e,fn_index:t})=>{O(e,t)}).on("status",({fn_index:e,...i})=>{if(w[e]=i.stage,u("status",i.stage),"complete"===i.stage){let t=!0;if(m.map(async(n,r)=>{n.trigger_after===e&&(t=!1,C(r))}),r.destroy(),t){let e=this.history?.at(-1)?.at(-1);n(this.html2Markdown(e))}}if("error"===i.stage){if(i.message){let t=i.message.replace(k,(e,t)=>t);S=[{type:"error",message:t,id:++E,fn_index:e},...S]}m.map(async(t,n)=>{t.trigger_after!==e||t.trigger_only_on_success||C(n)}),t?.onError?.(i.message||"error"),o(i.message||"error"),r.destroy()}});_.set(e,r)};a.frontend_fn?a.frontend_fn(s.data.concat(a.outputs.map(e=>b[e].props.value))).then(t=>{a.backend_fn?(s.data=t,c()):O(t,e)}):a.backend_fn&&c()};C(a,c)}catch(e){o(e)}})}}},23067:function(e,t,n){"use strict";var r=n(91083).Buffer,o=this&&this.__importDefault||function(e){return e&&e.__esModule?e:{default:e}};Object.defineProperty(t,"__esModule",{value:!0}),t.walk_and_store_blobs=t.handle_blob=t.client=t.duplicate=t.upload_files=t.post_data=void 0;let i=o(n(16218)),a=n(62961),l="Connection errored out.";async function s(e,t,n){let r={"Content-Type":"application/json"};n&&(r.Authorization=`Bearer ${n}`);try{var o=await fetch(e,{method:"POST",body:JSON.stringify(t),headers:r})}catch(e){return[{error:l},500]}let i=await o.json();return[i,o.status]}async function c(e,t,n){let r={};n&&(r.Authorization=`Bearer ${n}`);let o=new FormData;t.forEach(e=>{o.append("files",e)});try{var i=await fetch(`${e}/upload`,{method:"POST",body:o,headers:r})}catch(e){return{error:l}}let a=await i.json();return{files:a}}async function u(e,t){let{hf_token:n,private:r,hardware:o,timeout:i}=t;if(o&&!a.hardware_types.includes(o))throw Error(`Invalid hardware type provided. 
Valid types are: ${a.hardware_types.map(e=>`"${e}"`).join(",")}.`);let l={Authorization:`Bearer ${n}`},s=(await (await fetch("https://huggingface.co/api/whoami-v2",{headers:l})).json()).name,c=e.split("/")[1],u={repository:`${s}/${c}`};r&&(u.private=!0);try{let r=await fetch(`https://huggingface.co/api/spaces/${e}/duplicate`,{method:"POST",headers:{"Content-Type":"application/json",...l},body:JSON.stringify(u)});if(409===r.status)return f(`${s}/${c}`,t);{let l;let u=await r.json();o||(l=await (0,a.get_space_hardware)(e,n));let d=o||l||"cpu-basic";return await (0,a.set_space_hardware)(`${s}/${c}`,d,n),await (0,a.set_space_timeout)(`${s}/${c}`,i||300,n),f(u.url,t)}}catch(e){throw Error(e)}}async function f(e,t={normalise_files:!0,session_hash:Math.random().toString(36).substring(2)}){return new Promise(async n=>{let r,o;let{status_callback:s,hf_token:c,normalise_files:u,session_hash:f}=t,b={predict:function(e,t,n){let r=!1,o=!1;return new Promise((i,a)=>{let l=R(e,t,n);l.on("data",e=>{r=!0,o&&l.destroy(),i(e)}).on("status",e=>{"error"===e.stage&&a(e),"complete"===e.stage&&r&&l.destroy(),"complete"===e.stage&&(o=!0)})})},submit:R,view_api:T},v=u??!0,{ws_protocol:w,http_protocol:E,host:S,space_id:k}=await (0,a.process_endpoint)(e,c),_={},O={},C=!1;async function A(e){r=e,O=(0,a.map_names_to_ids)(e?.dependencies||[]);try{o=await T(r)}catch(e){console.error(`Could not get api details: ${e.message}`)}return{config:r,...b}}async function N(e){if(s&&s(e),"running"===e.status)try{r=await y(`${E}//${S}`,c);let e=await A(r);n(e)}catch(e){s&&s({status:"error",message:"Could not load this space.",load_status:"error",detail:"NOT_FOUND"})}}c&&k&&(C=await g(k,c));try{r=await y(`${E}//${S}`,c);let e=await A(r);n(e)}catch(e){console.log("e",e),k?x(k,a.RE_SPACE_NAME.test(k)?"space_name":"subdomain",N):s&&s({status:"error",message:"Could not load this space.",load_status:"error",detail:"NOT_FOUND"})}function R(e,t,n){let a,s,u,p;if("number"==typeof e)a=e,s=o.unnamed_endpoints[a];else{let t=e.replace(/^\//,"");a=O[t],s=o.named_endpoints[e.trim()]}if("number"!=typeof a)throw Error("There is no endpoint matching that name of fn_index matching that number.");let h="number"==typeof e?"/predict":e,g=!1,b={};function y(e){let t=b[e.type]||[];t?.forEach(t=>t(e))}function x(e,t){let n=b[e]||[];return b[e]=n,n?.push(t),{on:x,off:k,cancel:A,destroy:N}}function k(e,t){let n=b[e]||[];return n=n?.filter(e=>e!==t),b[e]=n,{on:x,off:k,cancel:A,destroy:N}}async function A(){let e={stage:"complete",queue:!1,time:new Date};g=e,y({...e,type:"status",endpoint:h,fn_index:a}),p&&0===p.readyState?p.addEventListener("open",()=>{p.close()}):p.close();try{await fetch(`${E}//${S+r.path}/reset`,{headers:{"Content-Type":"application/json"},method:"POST",body:JSON.stringify({fn_index:a,session_hash:f})})}catch(e){console.warn("The `/reset` endpoint could not be called. 
Subsequent endpoint results may be unreliable.")}}function N(){for(let e in b)(b[e]||[]).forEach(t=>{k(e,t)})}return m(`${E}//${S+r.path}`,t,s,c).then(e=>{u={data:e||[],event_data:n,fn_index:a};{y({type:"status",stage:"pending",queue:!0,endpoint:h,fn_index:a,time:new Date});let e=new URL(`${w}://${S}${r.path} - /queue/join`);C&&e.searchParams.set("__sign",C),(p=new WebSocket(e)).onclose=e=>{e.wasClean||y({type:"status",stage:"error",message:l,queue:!0,endpoint:h,fn_index:a,time:new Date})},p.onmessage=function(e){let t=JSON.parse(e.data),{type:n,status:o,data:i}=function(e,t){switch(e?.msg){case"send_data":return{type:"data"};case"send_hash":return{type:"hash"};case"queue_full":return{type:"update",status:{queue:!0,message:"This application is too busy. Keep trying!",stage:"error",code:e.code,success:e.success}};case"estimation":return{type:"update",status:{queue:!0,stage:t||"pending",code:e.code,size:e.queue_size,position:e.rank,eta:e.rank_eta,success:e.success}};case"progress":return{type:"update",status:{queue:!0,stage:"pending",code:e.code,progress_data:e.progress_data,success:e.success}};case"process_generating":return{type:"generating",status:{queue:!0,message:e.success?null:e.output.error,stage:e.success?"generating":"error",code:e.code,progress_data:e.progress_data,eta:e.average_duration},data:e.success?e.output:null};case"process_completed":if("error"in e.output)return{type:"update",status:{queue:!0,message:e.output.error,stage:"error",code:e.code,success:e.success}};return{type:"complete",status:{queue:!0,message:e.success?void 0:e.output.error,stage:e.success?"complete":"error",code:e.code,progress_data:e.progress_data,eta:e.output.average_duration},data:e.success?e.output:null};case"process_starts":return{type:"update",status:{queue:!0,stage:"pending",code:e.code,size:e.rank,position:0,success:e.success}}}return{type:"none",status:{stage:"error",queue:!0}}}(t,_[a]);if("update"===n&&o&&!g)y({type:"status",endpoint:h,fn_index:a,time:new Date,...o}),"error"===o.stage&&p.close();else if("hash"===n){p.send(JSON.stringify({fn_index:a,session_hash:f}));return}else"data"===n?p.send(JSON.stringify({...u,session_hash:f})):"complete"===n?g=o:"generating"===n&&y({type:"status",time:new Date,...o,stage:o?.stage,queue:!0,endpoint:h,fn_index:a});i&&(y({type:"data",time:new Date,data:v?function(e,t,n,r){return e.map((e,o)=>t?.returns?.[o]?.component==="File"?d(e,n,r):t?.returns?.[o]?.component==="Gallery"?e.map(e=>Array.isArray(e)?[d(e[0],n,r),e[1]]:[d(e,n,r),null]):e&&"object"==typeof e&&e.is_file?d(e,n,r):e)}(i.data,s,r.root,r.root_url):i.data,endpoint:h,fn_index:a}),g&&(y({type:"status",time:new Date,...g,stage:o?.stage,queue:!0,endpoint:h,fn_index:a}),p.close()))},0>(0,i.default)(r.version||"2.0.0","3.6")&&p.addEventListener("open",()=>p.send(JSON.stringify({hash:f})))}}),{on:x,off:k,cancel:A,destroy:N}}async function T(e){let t;if(o)return o;let n={"Content-Type":"application/json"};if(c&&(n.Authorization=`Bearer ${c}`),!(t=0>(0,i.default)(e.version||"2.0.0","3.30")?await fetch("https://gradio-space-api-fetcher-v2.hf.space/api",{method:"POST",body:JSON.stringify({serialize:!1,config:JSON.stringify(e)}),headers:n}):await fetch(`${e.root}/info`,{headers:n})).ok)throw Error(l);let r=await t.json();"api"in r&&(r=r.api),r.named_endpoints["/predict"]&&!r.unnamed_endpoints["0"]&&(r.unnamed_endpoints[0]=r.named_endpoints["/predict"]);let a=function(e,t,n){let r={named_endpoints:{},unnamed_endpoints:{}};for(let o in e){let i=e[o];for(let e in i){let 
a=t.dependencies[e]?e:n[e.replace("/","")],l=i[e];r[o][e]={},r[o][e].parameters={},r[o][e].returns={},r[o][e].type=t.dependencies[a].types,r[o][e].parameters=l.parameters.map(({label:e,component:t,type:n,serializer:r})=>({label:e,component:t,type:p(n,t,r,"parameter"),description:h(n,r)})),r[o][e].returns=l.returns.map(({label:e,component:t,type:n,serializer:r})=>({label:e,component:t,type:p(n,t,r,"return"),description:h(n,r)}))}}return r}(r,e,O);return a}})}function d(e,t,n){if(null==e)return null;if("string"==typeof e)return{name:"file_data",data:e};if(Array.isArray(e)){let r=[];for(let o of e)null===o?r.push(null):r.push(d(o,t,n));return r}return e.is_file&&(n?e.data="/proxy="+n+"/file="+e.name:e.data=t+"/file="+e.name),e}function p(e,t,n,r){switch(e.type){case"string":return"string";case"boolean":return"boolean";case"number":return"number"}return"JSONSerializable"===n||"StringSerializable"===n?"any":"ListStringSerializable"===n?"string[]":"Image"===t?"parameter"===r?"Blob | File | Buffer":"string":"FileSerializable"===n?e?.type==="array"?"parameter"===r?"(Blob | File | Buffer)[]":"{ name: string; data: string; size?: number; is_file?: boolean; orig_name?: string}[]":"parameter"===r?"Blob | File | Buffer":"{ name: string; data: string; size?: number; is_file?: boolean; orig_name?: string}":"GallerySerializable"===n?"parameter"===r?"[(Blob | File | Buffer), (string | null)][]":"[{ name: string; data: string; size?: number; is_file?: boolean; orig_name?: string}, (string | null))][]":void 0}function h(e,t){return"GallerySerializable"===t?"array of [file, label] tuples":"ListStringSerializable"===t?"array of strings":"FileSerializable"===t?"array of files or single file":e.description}async function g(e,t){try{let n=await fetch(`https://huggingface.co/api/spaces/${e}/jwt`,{headers:{Authorization:`Bearer ${t}`}}),r=(await n.json()).token;return r||!1}catch(e){return console.error(e),!1}}async function m(e,t,n,r){let o=await v(t,void 0,[],!0,n);return Promise.all(o.map(async({path:t,blob:n,data:o,type:i})=>{if(!n)return{path:t,base64:o,type:i};{let o=(await c(e,[n],r)).files[0];return{path:t,file_url:o,type:i}}})).then(e=>(e.forEach(({path:e,file_url:n,base64:r,type:o})=>{if(r)b(t,r,e);else if("Gallery"===o)b(t,n,e);else if(n){let r={is_file:!0,name:`${n}`,data:null};b(t,r,e)}}),t))}function b(e,t,n){for(;n.length>1;)e=e[n.shift()];e[n.shift()]=t}async function v(e,t,n=[],o=!1,i){if(Array.isArray(e)){let r=[];return await Promise.all(e.map(async(a,l)=>{let s=n.slice();s.push(l);let c=await v(e[l],o?i?.parameters[l]?.component||void 0:t,s,!1,i);r=r.concat(c)})),r}if(globalThis.Buffer&&e instanceof globalThis.Buffer){let r="Image"===t;return[{path:n,blob:!r&&new Blob([e]),data:!!r&&`${e.toString("base64")}`,type:t}]}if(e instanceof Blob||"undefined"!=typeof window&&e instanceof File){if("Image"!==t)return[{path:n,blob:e,type:t}];{let o;if("undefined"!=typeof window)o=await new Promise((t,n)=>{let r=new FileReader;r.onloadend=()=>t(r.result),r.readAsDataURL(e)});else{let t=await e.arrayBuffer();o=r.from(t).toString("base64")}return[{path:n,data:o,type:t}]}}{if("object"!=typeof e)return[];let t=[];for(let r in e)if(e.hasOwnProperty(r)){let o=n.slice();o.push(r),t=t.concat(await v(e[r],void 0,o,!1,i))}return t}}async function y(e,t){let n={};if(t&&(n.Authorization=`Bearer ${t}`),"undefined"!=typeof window&&window.gradio_config&&"http://localhost:9876"!==location.origin){let t=window.gradio_config.root,n=window.gradio_config;return n.root=e+n.root,{...n,path:t}}if(e){let t=await 
fetch(`${e}/config`,{headers:n});if(200===t.status){let n=await t.json();return n.path=n.path??"",n.root=e,n}throw Error("Could not get config.")}throw Error("No config or app endpoint found")}async function x(e,t,n){let r,o,i="subdomain"===t?`https://huggingface.co/api/spaces/by-subdomain/${e}`:`https://huggingface.co/api/spaces/${e}`;try{if(o=(r=await fetch(i)).status,200!==o)throw Error();r=await r.json()}catch(e){n({status:"error",load_status:"error",message:"Could not get space status",detail:"NOT_FOUND"});return}if(!r||200!==o)return;let{runtime:{stage:l},id:s}=r;switch(l){case"STOPPED":case"SLEEPING":n({status:"sleeping",load_status:"pending",message:"Space is asleep. Waking it up...",detail:l}),setTimeout(()=>{x(e,t,n)},1e3);break;case"RUNNING":case"RUNNING_BUILDING":n({status:"running",load_status:"complete",message:"",detail:l});break;case"BUILDING":n({status:"building",load_status:"pending",message:"Space is building...",detail:l}),setTimeout(()=>{x(e,t,n)},1e3);break;default:n({status:"space_error",load_status:"error",message:"This space is experiencing an issue.",detail:l,discussions_enabled:await (0,a.discussions_enabled)(s)})}}t.post_data=s,t.upload_files=c,t.duplicate=u,t.client=f,t.handle_blob=m,t.walk_and_store_blobs=v},42794:function(e,t,n){"use strict";var r=this&&this.__createBinding||(Object.create?function(e,t,n,r){void 0===r&&(r=n);var o=Object.getOwnPropertyDescriptor(t,n);(!o||("get"in o?!t.__esModule:o.writable||o.configurable))&&(o={enumerable:!0,get:function(){return t[n]}}),Object.defineProperty(e,r,o)}:function(e,t,n,r){void 0===r&&(r=n),e[r]=t[n]}),o=this&&this.__exportStar||function(e,t){for(var n in e)"default"===n||Object.prototype.hasOwnProperty.call(t,n)||r(t,e,n)};Object.defineProperty(t,"__esModule",{value:!0}),t.duplicate=t.upload_files=t.post_data=t.client=void 0;var i=n(23067);Object.defineProperty(t,"client",{enumerable:!0,get:function(){return i.client}}),Object.defineProperty(t,"post_data",{enumerable:!0,get:function(){return i.post_data}}),Object.defineProperty(t,"upload_files",{enumerable:!0,get:function(){return i.upload_files}}),Object.defineProperty(t,"duplicate",{enumerable:!0,get:function(){return i.duplicate}}),o(n(60276),t)},62961:function(e,t){"use strict";function n(e){if(e.startsWith("http")){let{protocol:t,host:n}=new URL(e);return n.endsWith("hf.space")?{ws_protocol:"wss",host:n,http_protocol:t}:{ws_protocol:"https:"===t?"wss":"ws",http_protocol:t,host:n}}return{ws_protocol:"wss",http_protocol:"https:",host:e}}async function r(e,r){let o={};r&&(o.Authorization=`Bearer ${r}`);let i=e.trim();if(t.RE_SPACE_NAME.test(i))try{let t=await fetch(`https://huggingface.co/api/spaces/${i}/host`,{headers:o});if(200!==t.status)throw Error("Space metadata could not be loaded.");let r=(await t.json()).host;return{space_id:e,...n(r)}}catch(e){throw Error("Space metadata could not be loaded."+e.message)}if(t.RE_SPACE_DOMAIN.test(i)){let{ws_protocol:e,http_protocol:t,host:r}=n(i);return{space_id:r.replace(".hf.space",""),ws_protocol:e,http_protocol:t,host:r}}if(t.MD_SPACE_DOMAIN.test(i)){let e=new URL(i);return{space_id:!1,ws_protocol:"wss",http_protocol:"https:",host:`${e.host}${e.pathname}`}}return{space_id:!1,...n(i)}}Object.defineProperty(t,"__esModule",{value:!0}),t.hardware_types=t.set_space_timeout=t.set_space_hardware=t.get_space_hardware=t.discussions_enabled=t.map_names_to_ids=t.process_endpoint=t.MD_SPACE_DOMAIN=t.RE_SPACE_DOMAIN=t.RE_SPACE_NAME=t.determine_protocol=void 
0,t.determine_protocol=n,t.RE_SPACE_NAME=/^[^\/]*\/[^\/]*$/,t.RE_SPACE_DOMAIN=/.*hf\.space\/{0,1}$/,t.MD_SPACE_DOMAIN=/^https:\/\/modelscope\.cn\//,t.process_endpoint=r,t.map_names_to_ids=function(e){let t={};return e.forEach(({api_name:e},n)=>{e&&(t[e]=n)}),t};let o=/^(?=[^]*\b[dD]iscussions{0,1}\b)(?=[^]*\b[dD]isabled\b)[^]*$/;async function i(e){try{let t=await fetch(`https://huggingface.co/api/spaces/${e}/discussions`,{method:"HEAD"}),n=t.headers.get("x-error-message");if(n&&o.test(n))return!1;return!0}catch(e){return!1}}async function a(e,t){let n={};t&&(n.Authorization=`Bearer ${t}`);try{let t=await fetch(`https://huggingface.co/api/spaces/${e}/runtime`,{headers:n});if(200!==t.status)throw Error("Space hardware could not be obtained.");let{hardware:r}=await t.json();return r}catch(e){throw Error(e.message)}}async function l(e,t,n){let r={};n&&(r.Authorization=`Bearer ${n}`);try{let n=await fetch(`https://huggingface.co/api/spaces/${e}/hardware`,{headers:r,body:JSON.stringify(t)});if(200!==n.status)throw Error("Space hardware could not be set. Please ensure the space hardware provided is valid and that a Hugging Face token is passed in.");let{hardware:o}=await n.json();return o}catch(e){throw Error(e.message)}}async function s(e,t,n){let r={};n&&(r.Authorization=`Bearer ${n}`);try{let n=await fetch(`https://huggingface.co/api/spaces/${e}/hardware`,{headers:r,body:JSON.stringify({seconds:t})});if(200!==n.status)throw Error("Space hardware could not be set. Please ensure the space hardware provided is valid and that a Hugging Face token is passed in.");let{hardware:o}=await n.json();return o}catch(e){throw Error(e.message)}}t.discussions_enabled=i,t.get_space_hardware=a,t.set_space_hardware=l,t.set_space_timeout=s,t.hardware_types=["cpu-basic","cpu-upgrade","t4-small","t4-medium","a10g-small","a10g-large","a100-large"]},24645:function(e){var t=/\/\*[^*]*\*+([^/*][^*]*\*+)*\//g,n=/\n/g,r=/^\s*/,o=/^(\*?[-#/*\\\w]+(\[[0-9a-z_-]+\])?)\s*/,i=/^:\s*/,a=/^((?:'(?:\\'|.)*?'|"(?:\\"|.)*?"|\([^)]*?\)|[^};])+)/,l=/^[;\s]*/,s=/^\s+|\s+$/g;function c(e){return e?e.replace(s,""):""}e.exports=function(e,s){if("string"!=typeof e)throw TypeError("First argument must be a string");if(!e)return[];s=s||{};var u=1,f=1;function d(e){var t=e.match(n);t&&(u+=t.length);var r=e.lastIndexOf("\n");f=~r?e.length-r:f+e.length}function p(){var e={line:u,column:f};return function(t){return t.position=new h(e),b(r),t}}function h(e){this.start=e,this.end={line:u,column:f},this.source=s.source}h.prototype.content=e;var g=[];function m(t){var n=Error(s.source+":"+u+":"+f+": "+t);if(n.reason=t,n.filename=s.source,n.line=u,n.column=f,n.source=e,s.silent)g.push(n);else throw n}function b(t){var n=t.exec(e);if(n){var r=n[0];return d(r),e=e.slice(r.length),n}}function v(e){var t;for(e=e||[];t=y();)!1!==t&&e.push(t);return e}function y(){var t=p();if("/"==e.charAt(0)&&"*"==e.charAt(1)){for(var n=2;""!=e.charAt(n)&&("*"!=e.charAt(n)||"/"!=e.charAt(n+1));)++n;if(n+=2,""===e.charAt(n-1))return m("End of comment missing");var r=e.slice(2,n-2);return f+=2,d(r),e=e.slice(n),f+=2,t({type:"comment",comment:r})}}return b(r),function(){var e,n=[];for(v(n);e=function(){var e=p(),n=b(o);if(n){if(y(),!b(i))return m("property missing ':'");var r=b(a),s=e({type:"declaration",property:c(n[0].replace(t,"")),value:r?c(r[0].replace(t,"")):""});return b(l),s}}();)!1!==e&&(n.push(e),v(n));return n}()}},65192:function(e,t,n){"use strict";function r(e){for(var 
t=arguments.length,n=Array(t>1?t-1:0),r=1;r3?t.i-4:t.i:Array.isArray(e)?1:u(e)?2:f(e)?3:0}function s(e,t){return 2===l(e)?e.has(t):Object.prototype.hasOwnProperty.call(e,t)}function c(e,t,n){var r=l(e);2===r?e.set(t,n):3===r?e.add(n):e[t]=n}function u(e){return I&&e instanceof Map}function f(e){return D&&e instanceof Set}function d(e){return e.o||e.t}function p(e){if(Array.isArray(e))return Array.prototype.slice.call(e);var t=Z(e);delete t[$];for(var n=H(t),r=0;r1&&(e.set=e.add=e.clear=e.delete=g),Object.freeze(e),t&&a(e,function(e,t){return h(t,!0)},!0)),e}function g(){r(2)}function m(e){return null==e||"object"!=typeof e||Object.isFrozen(e)}function b(e){var t=q[e];return t||r(18,e),t}function v(e,t){t&&(b("Patches"),e.u=[],e.s=[],e.v=t)}function y(e){x(e),e.p.forEach(E),e.p=null}function x(e){e===j&&(j=e.l)}function w(e){return j={p:[],l:j,h:e,m:!0,_:0}}function E(e){var t=e[$];0===t.i||1===t.i?t.j():t.g=!0}function S(e,t){t._=t.p.length;var n=t.p[0],o=void 0!==e&&e!==n;return t.h.O||b("ES5").S(t,e,o),o?(n[$].P&&(y(t),r(4)),i(e)&&(e=k(t,e),t.l||O(t,e)),t.u&&b("Patches").M(n[$].t,e,t.u,t.s)):e=k(t,n,[]),y(t),t.u&&t.v(t.u,t.s),e!==B?e:void 0}function k(e,t,n){if(m(t))return t;var r=t[$];if(!r)return a(t,function(o,i){return _(e,r,t,o,i,n)},!0),t;if(r.A!==e)return t;if(!r.P)return O(e,r.t,!0),r.t;if(!r.I){r.I=!0,r.A._--;var o=4===r.i||5===r.i?r.o=p(r.k):r.o,i=o,l=!1;3===r.i&&(i=new Set(o),o.clear(),l=!0),a(i,function(t,i){return _(e,r,o,t,i,n,l)}),O(e,o,!1),n&&e.u&&b("Patches").N(r,n,e.u,e.s)}return r.o}function _(e,t,n,r,a,l,u){if(o(a)){var f=k(e,a,l&&t&&3!==t.i&&!s(t.R,r)?l.concat(r):void 0);if(c(n,r,f),!o(f))return;e.m=!1}else u&&n.add(a);if(i(a)&&!m(a)){if(!e.h.D&&e._<1)return;k(e,a),t&&t.A.l||O(e,a)}}function O(e,t,n){void 0===n&&(n=!1),!e.l&&e.h.D&&e.m&&h(t,n)}function C(e,t){var n=e[$];return(n?d(n):e)[t]}function A(e,t){if(t in e)for(var n=Object.getPrototypeOf(e);n;){var r=Object.getOwnPropertyDescriptor(n,t);if(r)return r;n=Object.getPrototypeOf(n)}}function N(e){e.P||(e.P=!0,e.l&&N(e.l))}function R(e){e.o||(e.o=p(e.t))}function T(e,t,n){var r,o,i,a,l,s,c,d=u(t)?b("MapSet").F(t,n):f(t)?b("MapSet").T(t,n):e.O?(i=o={i:(r=Array.isArray(t))?1:0,A:n?n.A:j,P:!1,I:!1,R:{},l:n,t:t,k:null,o:null,j:null,C:!1},a=V,r&&(i=[o],a=W),s=(l=Proxy.revocable(i,a)).revoke,c=l.proxy,o.k=c,o.j=s,c):b("ES5").J(t,n);return(n?n.A:j).p.push(d),d}function P(e,t){switch(t){case 2:return new Map(e);case 3:return Array.from(e)}return p(e)}n.d(t,{sn:function(){return X}});var M,j,L="undefined"!=typeof Symbol&&"symbol"==typeof Symbol("x"),I="undefined"!=typeof Map,D="undefined"!=typeof Set,F="undefined"!=typeof Proxy&&void 0!==Proxy.revocable&&"undefined"!=typeof Reflect,B=L?Symbol.for("immer-nothing"):((M={})["immer-nothing"]=!0,M),z=L?Symbol.for("immer-draftable"):"__$immer_draftable",$=L?Symbol.for("immer-state"):"__$immer_state",U=""+Object.prototype.constructor,H="undefined"!=typeof Reflect&&Reflect.ownKeys?Reflect.ownKeys:void 0!==Object.getOwnPropertySymbols?function(e){return Object.getOwnPropertyNames(e).concat(Object.getOwnPropertySymbols(e))}:Object.getOwnPropertyNames,Z=Object.getOwnPropertyDescriptors||function(e){var t={};return H(e).forEach(function(n){t[n]=Object.getOwnPropertyDescriptor(e,n)}),t},q={},V={get:function(e,t){if(t===$)return e;var n,r,o=d(e);if(!s(o,t))return(r=A(o,t))?"value"in r?r.value:null===(n=r.get)||void 0===n?void 0:n.call(e.k):void 0;var a=o[t];return e.I||!i(a)?a:a===C(e.t,t)?(R(e),e.o[t]=T(e.A.h,a,e)):a},has:function(e,t){return t in d(e)},ownKeys:function(e){return 
Reflect.ownKeys(d(e))},set:function(e,t,n){var r=A(d(e),t);if(null==r?void 0:r.set)return r.set.call(e.k,n),!0;if(!e.P){var o=C(d(e),t),i=null==o?void 0:o[$];if(i&&i.t===n)return e.o[t]=n,e.R[t]=!1,!0;if((n===o?0!==n||1/n==1/o:n!=n&&o!=o)&&(void 0!==n||s(e.t,t)))return!0;R(e),N(e)}return e.o[t]===n&&(void 0!==n||t in e.o)||Number.isNaN(n)&&Number.isNaN(e.o[t])||(e.o[t]=n,e.R[t]=!0),!0},deleteProperty:function(e,t){return void 0!==C(e.t,t)||t in e.t?(e.R[t]=!1,R(e),N(e)):delete e.R[t],e.o&&delete e.o[t],!0},getOwnPropertyDescriptor:function(e,t){var n=d(e),r=Reflect.getOwnPropertyDescriptor(n,t);return r?{writable:!0,configurable:1!==e.i||"length"!==t,enumerable:r.enumerable,value:n[t]}:r},defineProperty:function(){r(11)},getPrototypeOf:function(e){return Object.getPrototypeOf(e.t)},setPrototypeOf:function(){r(12)}},W={};a(V,function(e,t){W[e]=function(){return arguments[0]=arguments[0][0],t.apply(this,arguments)}}),W.deleteProperty=function(e,t){return W.set.call(this,e,t,void 0)},W.set=function(e,t,n){return V.set.call(this,e[0],t,n,e[0])};var G=new(function(){function e(e){var t=this;this.O=F,this.D=!0,this.produce=function(e,n,o){if("function"==typeof e&&"function"!=typeof n){var a,l=n;return n=e,function(e){var r=this;void 0===e&&(e=l);for(var o=arguments.length,i=Array(o>1?o-1:0),a=1;a1?r-1:0),i=1;i=0;n--){var n,r=t[n];if(0===r.path.length&&"replace"===r.op){e=r.value;break}}n>-1&&(t=t.slice(n+1));var i=b("Patches").$;return o(e)?i(e,t):this.produce(e,function(e){return i(e,t)})},e}()),K=G.produce;G.produceWithPatches.bind(G),G.setAutoFreeze.bind(G),G.setUseProxies.bind(G),G.applyPatches.bind(G),G.createDraft.bind(G),G.finishDraft.bind(G);var Y=n(48115);function X(e){let t=(0,Y.cn)(e,(e,n,r)=>n(t,K(e(t),"function"==typeof r?r:()=>r)));return t}n(86006),new WeakMap},79922:function(e,t,n){var r=n(21671)(n(41314),"DataView");e.exports=r},7845:function(e,t,n){var r=n(44338),o=n(74779),i=n(28231),a=n(14798),l=n(90926);function s(e){var t=-1,n=null==e?0:e.length;for(this.clear();++tu))return!1;var d=s.get(e),p=s.get(t);if(d&&p)return d==t&&p==e;var h=-1,g=!0,m=2&n?new r:void 0;for(s.set(e,t),s.set(t,e);++h-1&&e%1==0&&e-1}},13332:function(e,t,n){var r=n(53457);e.exports=function(e,t){var n=this.__data__,o=r(n,e);return o<0?(++this.size,n.push([e,t])):n[o][1]=t,this}},63596:function(e,t,n){var r=n(7845),o=n(25214),i=n(357);e.exports=function(){this.size=0,this.__data__={hash:new r,map:new(i||o),string:new r}}},62353:function(e,t,n){var r=n(87225);e.exports=function(e){var t=r(this,e).delete(e);return this.size-=t?1:0,t}},89659:function(e,t,n){var r=n(87225);e.exports=function(e){return r(this,e).get(e)}},2730:function(e,t,n){var r=n(87225);e.exports=function(e){return r(this,e).has(e)}},2752:function(e,t,n){var r=n(87225);e.exports=function(e,t){var n=r(this,e),o=n.size;return n.set(e,t),this.size+=n.size==o?0:1,this}},56395:function(e){e.exports=function(e){var t=-1,n=Array(e.size);return e.forEach(function(e,r){n[++t]=[r,e]}),n}},45211:function(e){e.exports=function(e,t){return function(n){return null!=n&&n[e]===t&&(void 0!==t||e in Object(n))}}},18757:function(e,t,n){var r=n(85679);e.exports=function(e){var t=r(e,function(e){return 500===n.size&&n.clear(),e}),n=t.cache;return t}},98851:function(e,t,n){var r=n(21671)(Object,"create");e.exports=r},27978:function(e,t,n){var r=n(4605)(Object.keys,Object);e.exports=r},46348:function(e){e.exports=function(e){var t=[];if(null!=e)for(var n in Object(e))t.push(n);return t}},78084:function(e,t,n){e=n.nmd(e);var 
r=n(99499),o=t&&!t.nodeType&&t,i=o&&e&&!e.nodeType&&e,a=i&&i.exports===o&&r.process,l=function(){try{var e=i&&i.require&&i.require("util").types;if(e)return e;return a&&a.binding&&a.binding("util")}catch(e){}}();e.exports=l},59774:function(e){var t=Object.prototype.toString;e.exports=function(e){return t.call(e)}},4605:function(e){e.exports=function(e,t){return function(n){return e(t(n))}}},41314:function(e,t,n){var r=n(99499),o="object"==typeof self&&self&&self.Object===Object&&self,i=r||o||Function("return this")();e.exports=i},70954:function(e){e.exports=function(e){return this.__data__.set(e,"__lodash_hash_undefined__"),this}},56352:function(e){e.exports=function(e){return this.__data__.has(e)}},6789:function(e){e.exports=function(e){var t=-1,n=Array(e.size);return e.forEach(function(e){n[++t]=e}),n}},85846:function(e,t,n){var r=n(25214);e.exports=function(){this.__data__=new r,this.size=0}},47918:function(e){e.exports=function(e){var t=this.__data__,n=t.delete(e);return this.size=t.size,n}},51816:function(e){e.exports=function(e){return this.__data__.get(e)}},3373:function(e){e.exports=function(e){return this.__data__.has(e)}},14715:function(e,t,n){var r=n(25214),o=n(357),i=n(97794);e.exports=function(e,t){var n=this.__data__;if(n instanceof r){var a=n.__data__;if(!o||a.length<199)return a.push([e,t]),this.size=++n.size,this;n=this.__data__=new i(a)}return n.set(e,t),this.size=n.size,this}},52588:function(e,t,n){var r=n(18757),o=/[^.[\]]+|\[(?:(-?\d+(?:\.\d+)?)|(["'])((?:(?!\2)[^\\]|\\.)*?)\2)\]|(?=(?:\.|\[\])(?:\.|\[\]|$))/g,i=/\\(\\)?/g,a=r(function(e){var t=[];return 46===e.charCodeAt(0)&&t.push(""),e.replace(o,function(e,n,r,o){t.push(r?o.replace(i,"$1"):n||e)}),t});e.exports=a},87912:function(e,t,n){var r=n(50246),o=1/0;e.exports=function(e){if("string"==typeof e||r(e))return e;var t=e+"";return"0"==t&&1/e==-o?"-0":t}},77425:function(e){var t=Function.prototype.toString;e.exports=function(e){if(null!=e){try{return t.call(e)}catch(e){}try{return e+""}catch(e){}}return""}},48797:function(e,t,n){var r=n(33130);e.exports=function(e){return r(e,5)}},98895:function(e){e.exports=function(e,t){return e===t||e!=e&&t!=t}},17766:function(e,t,n){var r=n(23699),o=n(54434);e.exports=function(e,t){return e&&r(e,o(t))}},53671:function(e,t,n){var r=n(86271);e.exports=function(e,t,n){var o=null==e?void 0:r(e,t);return void 0===o?n:o}},87191:function(e,t,n){var r=n(91790),o=n(36015);e.exports=function(e,t){return null!=e&&o(e,t,r)}},14032:function(e){e.exports=function(e){return e}},20628:function(e,t,n){var r=n(73274),o=n(60655),i=Object.prototype,a=i.hasOwnProperty,l=i.propertyIsEnumerable,s=r(function(){return arguments}())?r:function(e){return o(e)&&a.call(e,"callee")&&!l.call(e,"callee")};e.exports=s},3642:function(e){var t=Array.isArray;e.exports=t},96717:function(e,t,n){var r=n(84547),o=n(78890);e.exports=function(e){return null!=e&&o(e.length)&&!r(e)}},49681:function(e,t,n){e=n.nmd(e);var r=n(41314),o=n(74367),i=t&&!t.nodeType&&t,a=i&&e&&!e.nodeType&&e,l=a&&a.exports===i?r.Buffer:void 0,s=l?l.isBuffer:void 0;e.exports=s||o},84547:function(e,t,n){var r=n(48276),o=n(74331);e.exports=function(e){if(!o(e))return!1;var t=r(e);return"[object Function]"==t||"[object GeneratorFunction]"==t||"[object AsyncFunction]"==t||"[object Proxy]"==t}},78890:function(e){e.exports=function(e){return"number"==typeof e&&e>-1&&e%1==0&&e<=9007199254740991}},8905:function(e,t,n){var r=n(87235),o=n(86080),i=n(78084),a=i&&i.isMap,l=a?o(a):r;e.exports=l},74331:function(e){e.exports=function(e){var t=typeof e;return 
null!=e&&("object"==t||"function"==t)}},60655:function(e){e.exports=function(e){return null!=e&&"object"==typeof e}},54477:function(e,t,n){var r=n(48276),o=n(27271),i=n(60655),a=Object.prototype,l=Function.prototype.toString,s=a.hasOwnProperty,c=l.call(Object);e.exports=function(e){if(!i(e)||"[object Object]"!=r(e))return!1;var t=o(e);if(null===t)return!0;var n=s.call(t,"constructor")&&t.constructor;return"function"==typeof n&&n instanceof n&&l.call(n)==c}},90911:function(e,t,n){var r=n(58651),o=n(86080),i=n(78084),a=i&&i.isSet,l=a?o(a):r;e.exports=l},782:function(e,t,n){var r=n(48276),o=n(3642),i=n(60655);e.exports=function(e){return"string"==typeof e||!o(e)&&i(e)&&"[object String]"==r(e)}},50246:function(e,t,n){var r=n(48276),o=n(60655);e.exports=function(e){return"symbol"==typeof e||o(e)&&"[object Symbol]"==r(e)}},97095:function(e,t,n){var r=n(59972),o=n(86080),i=n(78084),a=i&&i.isTypedArray,l=a?o(a):r;e.exports=l},28287:function(e,t,n){var r=n(86164),o=n(60922),i=n(96717);e.exports=function(e){return i(e)?r(e):o(e)}},76183:function(e,t,n){var r=n(86164),o=n(52449),i=n(96717);e.exports=function(e){return i(e)?r(e,!0):o(e)}},77636:function(e,t,n){var r=n(52908),o=n(23393),i=n(22525),a=n(3642);e.exports=function(e,t){return(a(e)?r:i)(e,o(t,3))}},85679:function(e,t,n){var r=n(97794);function o(e,t){if("function"!=typeof e||null!=t&&"function"!=typeof t)throw TypeError("Expected a function");var n=function(){var r=arguments,o=t?t.apply(this,r):r[0],i=n.cache;if(i.has(o))return i.get(o);var a=e.apply(this,r);return n.cache=i.set(o,a)||i,a};return n.cache=new(o.Cache||r),n}o.Cache=r,e.exports=o},78626:function(e,t,n){var r=n(31661),o=n(30452),i=n(78128),a=n(87912);e.exports=function(e){return i(e)?r(a(e)):o(e)}},6403:function(e){e.exports=function(){return[]}},74367:function(e){e.exports=function(){return!1}},51299:function(e,t,n){var r=n(84778);e.exports=function(e){return null==e?"":r(e)}},61750:function(e,t,n){"use strict";n.d(t,{Z:function(){return a}});var r=n(86006),o={xmlns:"http://www.w3.org/2000/svg",width:24,height:24,viewBox:"0 0 24 24",fill:"none",stroke:"currentColor",strokeWidth:2,strokeLinecap:"round",strokeLinejoin:"round"};let i=e=>e.replace(/([a-z0-9])([A-Z])/g,"$1-$2").toLowerCase(),a=(e,t)=>{let n=(0,r.forwardRef)(({color:n="currentColor",size:a=24,strokeWidth:l=2,absoluteStrokeWidth:s,children:c,...u},f)=>(0,r.createElement)("svg",{ref:f,...o,width:a,height:a,stroke:n,strokeWidth:s?24*Number(l)/Number(a):l,className:`lucide lucide-${i(e)}`,...u},[...t.map(([e,t])=>(0,r.createElement)(e,t)),...(Array.isArray(c)?c:[c])||[]]));return n.displayName=`${e}`,n}},87594:function(e,t,n){"use strict";n.d(t,{Z:function(){return o}});var r=n(61750);let o=(0,r.Z)("Search",[["circle",{cx:"11",cy:"11",r:"8",key:"4ej97u"}],["path",{d:"m21 21-4.3-4.3",key:"1qie3q"}]])},18178:function(e,t,n){"use strict";n.d(t,{Z:function(){return o}});var r=n(61750);let o=(0,r.Z)("X",[["path",{d:"M18 6 6 18",key:"1bl5f8"}],["path",{d:"m6 6 12 12",key:"d8bk6v"}]])},28352:function(e){var t="undefined"!=typeof window?window:self;e.exports=t.crypto||t.msCrypto},89586:function(e,t,n){e.exports=function(e){if(!e)return Math.random;var t=new Uint32Array(1);return function(){return e.getRandomValues(t)[0]/4294967296}}(n(28352))},27410:function(e){function t(e,t,n,r){return Math.round(e/n)+" "+r+(t>=1.5*n?"s":"")}e.exports=function(e,n){n=n||{};var r,o,i=typeof e;if("string"===i&&e.length>0)return function(e){if(!((e=String(e)).length>100)){var t=/^(-?(?:\d+)?\.?\d+) 
*(milliseconds?|msecs?|ms|seconds?|secs?|s|minutes?|mins?|m|hours?|hrs?|h|days?|d|weeks?|w|years?|yrs?|y)?$/i.exec(e);if(t){var n=parseFloat(t[1]);switch((t[2]||"ms").toLowerCase()){case"years":case"year":case"yrs":case"yr":case"y":return 315576e5*n;case"weeks":case"week":case"w":return 6048e5*n;case"days":case"day":case"d":return 864e5*n;case"hours":case"hour":case"hrs":case"hr":case"h":return 36e5*n;case"minutes":case"minute":case"mins":case"min":case"m":return 6e4*n;case"seconds":case"second":case"secs":case"sec":case"s":return 1e3*n;case"milliseconds":case"millisecond":case"msecs":case"msec":case"ms":return n;default:return}}}}(e);if("number"===i&&isFinite(e))return n.long?(r=Math.abs(e))>=864e5?t(e,r,864e5,"day"):r>=36e5?t(e,r,36e5,"hour"):r>=6e4?t(e,r,6e4,"minute"):r>=1e3?t(e,r,1e3,"second"):e+" ms":(o=Math.abs(e))>=864e5?Math.round(e/864e5)+"d":o>=36e5?Math.round(e/36e5)+"h":o>=6e4?Math.round(e/6e4)+"m":o>=1e3?Math.round(e/1e3)+"s":e+"ms";throw Error("val is not a non-empty string or a valid number. val="+JSON.stringify(e))}},52040:function(e,t,n){"use strict";var r,o;e.exports=(null==(r=n.g.process)?void 0:r.env)&&"object"==typeof(null==(o=n.g.process)?void 0:o.env)?n.g.process:n(66003)},73029:function(e,t,n){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"Image",{enumerable:!0,get:function(){return g}});let r=n(26927),o=n(25909),i=o._(n(86006)),a=r._(n(86174)),l=n(80529),s=n(17302),c=n(23442);n(46731);let u=r._(n(47235)),f={deviceSizes:[640,750,828,1080,1200,1920,2048,3840],imageSizes:[16,32,48,64,96,128,256,384],path:"/_next/image",loader:"default",dangerouslyAllowSVG:!1,unoptimized:!1};function d(e,t,n,r,o,i){let a=null==e?void 0:e.src;if(!e||e["data-loaded-src"]===a)return;e["data-loaded-src"]=a;let l="decode"in e?e.decode():Promise.resolve();l.catch(()=>{}).then(()=>{if(e.parentElement&&e.isConnected){if("blur"===t&&o(!0),null==n?void 0:n.current){let t=new Event("load");Object.defineProperty(t,"target",{writable:!1,value:e});let r=!1,o=!1;n.current({...t,nativeEvent:t,currentTarget:e,target:e,isDefaultPrevented:()=>r,isPropagationStopped:()=>o,persist:()=>{},preventDefault:()=>{r=!0,t.preventDefault()},stopPropagation:()=>{o=!0,t.stopPropagation()}})}(null==r?void 0:r.current)&&r.current(e)}})}function p(e){let[t,n]=i.version.split("."),r=parseInt(t,10),o=parseInt(n,10);return r>18||18===r&&o>=3?{fetchPriority:e}:{fetchpriority:e}}let h=(0,i.forwardRef)((e,t)=>{let{src:n,srcSet:r,sizes:o,height:a,width:l,decoding:s,className:c,style:u,fetchPriority:f,placeholder:h,loading:g,unoptimized:m,fill:b,onLoadRef:v,onLoadingCompleteRef:y,setBlurComplete:x,setShowAltText:w,onLoad:E,onError:S,...k}=e;return i.default.createElement("img",{...k,...p(f),loading:g,width:l,height:a,decoding:s,"data-nimg":b?"fill":"1",className:c,style:u,sizes:o,srcSet:r,src:n,ref:(0,i.useCallback)(e=>{t&&("function"==typeof t?t(e):"object"==typeof t&&(t.current=e)),e&&(S&&(e.src=e.src),e.complete&&d(e,h,v,y,x,m))},[n,h,v,y,x,S,m,t]),onLoad:e=>{let t=e.currentTarget;d(t,h,v,y,x,m)},onError:e=>{w(!0),"blur"===h&&x(!0),S&&S(e)}})}),g=(0,i.forwardRef)((e,t)=>{let n=(0,i.useContext)(c.ImageConfigContext),r=(0,i.useMemo)(()=>{let e=f||n||s.imageConfigDefault,t=[...e.deviceSizes,...e.imageSizes].sort((e,t)=>e-t),r=e.deviceSizes.sort((e,t)=>e-t);return{...e,allSizes:t,deviceSizes:r}},[n]),{onLoad:o,onLoadingComplete:d}=e,g=(0,i.useRef)(o);(0,i.useEffect)(()=>{g.current=o},[o]);let 
m=(0,i.useRef)(d);(0,i.useEffect)(()=>{m.current=d},[d]);let[b,v]=(0,i.useState)(!1),[y,x]=(0,i.useState)(!1),{props:w,meta:E}=(0,l.getImgProps)(e,{defaultLoader:u.default,imgConf:r,blurComplete:b,showAltText:y});return i.default.createElement(i.default.Fragment,null,i.default.createElement(h,{...w,unoptimized:E.unoptimized,placeholder:E.placeholder,fill:E.fill,onLoadRef:g,onLoadingCompleteRef:m,setBlurComplete:v,setShowAltText:x,ref:t}),E.priority?i.default.createElement(a.default,null,i.default.createElement("link",{key:"__nimg-"+w.src+w.srcSet+w.sizes,rel:"preload",as:"image",href:w.srcSet?void 0:w.src,imageSrcSet:w.srcSet,imageSizes:w.sizes,crossOrigin:w.crossOrigin,referrerPolicy:w.referrerPolicy,...p(w.fetchPriority)})):null)});("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},14620:function(e,t,n){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"AmpStateContext",{enumerable:!0,get:function(){return i}});let r=n(26927),o=r._(n(86006)),i=o.default.createContext({})},40353:function(e,t){"use strict";function n(e){let{ampFirst:t=!1,hybrid:n=!1,hasQuery:r=!1}=void 0===e?{}:e;return t||n&&r}Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"isInAmpMode",{enumerable:!0,get:function(){return n}})},80529:function(e,t,n){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"getImgProps",{enumerable:!0,get:function(){return l}}),n(46731);let r=n(16542),o=n(17302);function i(e){return void 0!==e.default}function a(e){return void 0===e?e:"number"==typeof e?Number.isFinite(e)?e:NaN:"string"==typeof e&&/^[0-9]+$/.test(e)?parseInt(e,10):NaN}function l(e,t){var n;let l,s,c,{src:u,sizes:f,unoptimized:d=!1,priority:p=!1,loading:h,className:g,quality:m,width:b,height:v,fill:y=!1,style:x,onLoad:w,onLoadingComplete:E,placeholder:S="empty",blurDataURL:k,fetchPriority:_,layout:O,objectFit:C,objectPosition:A,lazyBoundary:N,lazyRoot:R,...T}=e,{imgConf:P,showAltText:M,blurComplete:j,defaultLoader:L}=t,I=P||o.imageConfigDefault;if("allSizes"in I)l=I;else{let e=[...I.deviceSizes,...I.imageSizes].sort((e,t)=>e-t),t=I.deviceSizes.sort((e,t)=>e-t);l={...I,allSizes:e,deviceSizes:t}}let D=T.loader||L;delete T.loader,delete T.srcSet;let F="__next_img_default"in D;if(F){if("custom"===l.loader)throw Error('Image with src "'+u+'" is missing "loader" prop.\nRead more: https://nextjs.org/docs/messages/next-image-missing-loader')}else{let e=D;D=t=>{let{config:n,...r}=t;return e(r)}}if(O){"fill"===O&&(y=!0);let e={intrinsic:{maxWidth:"100%",height:"auto"},responsive:{width:"100%",height:"auto"}}[O];e&&(x={...x,...e});let t={responsive:"100vw",fill:"100vw"}[O];t&&!f&&(f=t)}let B="",z=a(b),$=a(v);if("object"==typeof(n=u)&&(i(n)||void 0!==n.src)){let e=i(u)?u.default:u;if(!e.src)throw Error("An object should only be passed to the image component src parameter if it comes from a static image import. It must include src. Received "+JSON.stringify(e));if(!e.height||!e.width)throw Error("An object should only be passed to the image component src parameter if it comes from a static image import. It must include height and width. 
Received "+JSON.stringify(e));if(s=e.blurWidth,c=e.blurHeight,k=k||e.blurDataURL,B=e.src,!y){if(z||$){if(z&&!$){let t=z/e.width;$=Math.round(e.height*t)}else if(!z&&$){let t=$/e.height;z=Math.round(e.width*t)}}else z=e.width,$=e.height}}let U=!p&&("lazy"===h||void 0===h);(!(u="string"==typeof u?u:B)||u.startsWith("data:")||u.startsWith("blob:"))&&(d=!0,U=!1),l.unoptimized&&(d=!0),F&&u.endsWith(".svg")&&!l.dangerouslyAllowSVG&&(d=!0),p&&(_="high");let H=a(m),Z=Object.assign(y?{position:"absolute",height:"100%",width:"100%",left:0,top:0,right:0,bottom:0,objectFit:C,objectPosition:A}:{},M?{}:{color:"transparent"},x),q="blur"===S&&k&&!j?{backgroundSize:Z.objectFit||"cover",backgroundPosition:Z.objectPosition||"50% 50%",backgroundRepeat:"no-repeat",backgroundImage:'url("data:image/svg+xml;charset=utf-8,'+(0,r.getImageBlurSvg)({widthInt:z,heightInt:$,blurWidth:s,blurHeight:c,blurDataURL:k,objectFit:Z.objectFit})+'")'}:{},V=function(e){let{config:t,src:n,unoptimized:r,width:o,quality:i,sizes:a,loader:l}=e;if(r)return{src:n,srcSet:void 0,sizes:void 0};let{widths:s,kind:c}=function(e,t,n){let{deviceSizes:r,allSizes:o}=e;if(n){let e=/(^|\s)(1?\d?\d)vw/g,t=[];for(let r;r=e.exec(n);r)t.push(parseInt(r[2]));if(t.length){let e=.01*Math.min(...t);return{widths:o.filter(t=>t>=r[0]*e),kind:"w"}}return{widths:o,kind:"w"}}if("number"!=typeof t)return{widths:r,kind:"w"};let i=[...new Set([t,2*t].map(e=>o.find(t=>t>=e)||o[o.length-1]))];return{widths:i,kind:"x"}}(t,o,a),u=s.length-1;return{sizes:a||"w"!==c?a:"100vw",srcSet:s.map((e,r)=>l({config:t,src:n,quality:i,width:e})+" "+("w"===c?e:r+1)+c).join(", "),src:l({config:t,src:n,quality:i,width:s[u]})}}({config:l,src:u,unoptimized:d,width:z,quality:H,sizes:f,loader:D}),W={...T,loading:U?"lazy":h,fetchPriority:_,width:z,height:$,decoding:"async",className:g,style:{...Z,...q},sizes:V.sizes,srcSet:V.srcSet,src:V.src},G={unoptimized:d,priority:p,placeholder:S,fill:y};return{props:W,meta:G}}},86174:function(e,t,n){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),function(e,t){for(var n in t)Object.defineProperty(e,n,{enumerable:!0,get:t[n]})}(t,{defaultHead:function(){return u},default:function(){return h}});let r=n(26927),o=n(25909),i=o._(n(86006)),a=r._(n(20255)),l=n(14620),s=n(27268),c=n(40353);function u(e){void 0===e&&(e=!1);let t=[i.default.createElement("meta",{charSet:"utf-8"})];return e||t.push(i.default.createElement("meta",{name:"viewport",content:"width=device-width"})),t}function f(e,t){return"string"==typeof t||"number"==typeof t?e:t.type===i.default.Fragment?e.concat(i.default.Children.toArray(t.props.children).reduce((e,t)=>"string"==typeof t||"number"==typeof t?e:e.concat(t),[])):e.concat(t)}n(46731);let d=["name","httpEquiv","charSet","itemProp"];function p(e,t){let{inAmpMode:n}=t;return e.reduce(f,[]).reverse().concat(u(n).reverse()).filter(function(){let e=new Set,t=new Set,n=new Set,r={};return o=>{let i=!0,a=!1;if(o.key&&"number"!=typeof o.key&&o.key.indexOf("$")>0){a=!0;let t=o.key.slice(o.key.indexOf("$")+1);e.has(t)?i=!1:e.add(t)}switch(o.type){case"title":case"base":t.has(o.type)?i=!1:t.add(o.type);break;case"meta":for(let e=0,t=d.length;e{let r=e.key||t;if(!n&&"link"===e.type&&e.props.href&&["https://fonts.googleapis.com/css","https://use.typekit.net/"].some(t=>e.props.href.startsWith(t))){let t={...e.props||{}};return t["data-href"]=t.href,t.href=void 0,t["data-optimized-fonts"]=!0,i.default.cloneElement(e,t)}return i.default.cloneElement(e,{key:r})})}let 
h=function(e){let{children:t}=e,n=(0,i.useContext)(l.AmpStateContext),r=(0,i.useContext)(s.HeadManagerContext);return i.default.createElement(a.default,{reduceComponentsToState:p,headManager:r,inAmpMode:(0,c.isInAmpMode)(n)},t)};("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},16542:function(e,t){"use strict";function n(e){let{widthInt:t,heightInt:n,blurWidth:r,blurHeight:o,blurDataURL:i,objectFit:a}=e,l=r||t,s=o||n,c=i.startsWith("data:image/jpeg")?"%3CfeComponentTransfer%3E%3CfeFuncA type='discrete' tableValues='1 1'/%3E%3C/feComponentTransfer%3E%":"";return l&&s?"%3Csvg xmlns='http%3A//www.w3.org/2000/svg' viewBox='0 0 "+l+" "+s+"'%3E%3Cfilter id='b' color-interpolation-filters='sRGB'%3E%3CfeGaussianBlur stdDeviation='"+(r&&o?"1":"20")+"'/%3E"+c+"%3C/filter%3E%3Cimage preserveAspectRatio='none' filter='url(%23b)' x='0' y='0' height='100%25' width='100%25' href='"+i+"'/%3E%3C/svg%3E":"%3Csvg xmlns='http%3A//www.w3.org/2000/svg'%3E%3Cimage style='filter:blur(20px)' preserveAspectRatio='"+("contain"===a?"xMidYMid":"cover"===a?"xMidYMid slice":"none")+"' x='0' y='0' height='100%25' width='100%25' href='"+i+"'/%3E%3C/svg%3E"}Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"getImageBlurSvg",{enumerable:!0,get:function(){return n}})},23442:function(e,t,n){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"ImageConfigContext",{enumerable:!0,get:function(){return a}});let r=n(26927),o=r._(n(86006)),i=n(17302),a=o.default.createContext(i.imageConfigDefault)},17302:function(e,t){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),function(e,t){for(var n in t)Object.defineProperty(e,n,{enumerable:!0,get:t[n]})}(t,{VALID_LOADERS:function(){return n},imageConfigDefault:function(){return r}});let n=["default","imgix","cloudinary","akamai","custom"],r={deviceSizes:[640,750,828,1080,1200,1920,2048,3840],imageSizes:[16,32,48,64,96,128,256,384],path:"/_next/image",loader:"default",loaderFile:"",domains:[],disableStaticImages:!1,minimumCacheTTL:60,formats:["image/webp"],dangerouslyAllowSVG:!1,contentSecurityPolicy:"script-src 'none'; frame-src 'none'; sandbox;",contentDispositionType:"inline",remotePatterns:[],unoptimized:!1}},45445:function(e,t,n){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),function(e,t){for(var n in t)Object.defineProperty(e,n,{enumerable:!0,get:t[n]})}(t,{default:function(){return c},unstable_getImgProps:function(){return s}});let r=n(26927),o=n(80529),i=n(46731),a=n(73029),l=r._(n(47235)),s=e=>{(0,i.warnOnce)("Warning: unstable_getImgProps() is experimental and may change or be removed at any time. 
Use at your own risk.");let{props:t}=(0,o.getImgProps)(e,{defaultLoader:l.default,imgConf:{deviceSizes:[640,750,828,1080,1200,1920,2048,3840],imageSizes:[16,32,48,64,96,128,256,384],path:"/_next/image",loader:"default",dangerouslyAllowSVG:!1,unoptimized:!1}});for(let[e,n]of Object.entries(t))void 0===n&&delete t[e];return{props:t}},c=a.Image},47235:function(e,t){"use strict";function n(e){let{config:t,src:n,width:r,quality:o}=e;return t.path+"?url="+encodeURIComponent(n)+"&w="+r+"&q="+(o||75)}Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"default",{enumerable:!0,get:function(){return r}}),n.__next_img_default=!0;let r=n},20255:function(e,t,n){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"default",{enumerable:!0,get:function(){return l}});let r=n(25909),o=r._(n(86006)),i=o.useLayoutEffect,a=o.useEffect;function l(e){let{headManager:t,reduceComponentsToState:n}=e;function r(){if(t&&t.mountedInstances){let r=o.Children.toArray(Array.from(t.mountedInstances).filter(Boolean));t.updateHead(n(r,e))}}return i(()=>{var n;return null==t||null==(n=t.mountedInstances)||n.add(e.children),()=>{var n;null==t||null==(n=t.mountedInstances)||n.delete(e.children)}}),i(()=>(t&&(t._pendingUpdate=r),()=>{t&&(t._pendingUpdate=r)})),a(()=>(t&&t._pendingUpdate&&(t._pendingUpdate(),t._pendingUpdate=null),()=>{t&&t._pendingUpdate&&(t._pendingUpdate(),t._pendingUpdate=null)})),null}},46731:function(e,t){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"warnOnce",{enumerable:!0,get:function(){return n}});let n=e=>{}},81973:function(){},7913:function(e,t,n){var r=n(52040),o=n(91083).Buffer;!function(){var t={992:function(e){e.exports=function(e,n,r){if(e.filter)return e.filter(n,r);if(null==e||"function"!=typeof n)throw TypeError();for(var o=[],i=0;i1?n-1:0),o=1;o1?n-1:0),o=1;o1?n-1:0),o=1;o1?n-1:0),o=1;oe.length)&&(n=e.length),e.substring(n-t.length,n)===t}var g="",m="",b="",v="",y={deepStrictEqual:"Expected values to be strictly deep-equal:",strictEqual:"Expected values to be strictly equal:",strictEqualObject:'Expected "actual" to be reference-equal to "expected":',deepEqual:"Expected values to be loosely deep-equal:",equal:"Expected values to be loosely equal:",notDeepStrictEqual:'Expected "actual" not to be strictly deep-equal to:',notStrictEqual:'Expected "actual" to be strictly unequal to:',notStrictEqualObject:'Expected "actual" not to be reference-equal to "expected":',notDeepEqual:'Expected "actual" not to be loosely deep-equal to:',notEqual:'Expected "actual" to be loosely unequal to:',notIdentical:"Values identical but not reference-equal:"};function x(e){var t=Object.keys(e),n=Object.create(Object.getPrototypeOf(e));return t.forEach(function(t){n[t]=e[t]}),Object.defineProperty(n,"message",{value:e.message}),n}function w(e){return d(e,{compact:!1,customInspect:!1,depth:1e3,maxArrayLength:1/0,showHidden:!1,breakLength:1/0,showProxy:!1,sorted:!0,getters:!0})}var E=function(e){var t,n;function l(e){if(!function(e,t){if(!(e instanceof t))throw TypeError("Cannot call a class as a function")}(this,l),"object"!==f(e)||null===e)throw new p("options","Object",e);var t,n=e.message,o=e.operator,s=e.stackStartFn,c=e.actual,d=e.expected,E=Error.stackTraceLimit;if(Error.stackTraceLimit=0,null!=n)t=i(this,u(l).call(this,String(n)));else 
if(r.stderr&&r.stderr.isTTY&&(r.stderr&&r.stderr.getColorDepth&&1!==r.stderr.getColorDepth()?(g="\x1b[34m",m="\x1b[32m",v="\x1b[39m",b="\x1b[31m"):(g="",m="",v="",b="")),"object"===f(c)&&null!==c&&"object"===f(d)&&null!==d&&"stack"in c&&c instanceof Error&&"stack"in d&&d instanceof Error&&(c=x(c),d=x(d)),"deepStrictEqual"===o||"strictEqual"===o)t=i(this,u(l).call(this,function(e,t,n){var o="",i="",a=0,l="",s=!1,c=w(e),u=c.split("\n"),d=w(t).split("\n"),p=0,x="";if("strictEqual"===n&&"object"===f(e)&&"object"===f(t)&&null!==e&&null!==t&&(n="strictEqualObject"),1===u.length&&1===d.length&&u[0]!==d[0]){var E=u[0].length+d[0].length;if(E<=10){if(("object"!==f(e)||null===e)&&("object"!==f(t)||null===t)&&(0!==e||0!==t))return"".concat(y[n],"\n\n")+"".concat(u[0]," !== ").concat(d[0],"\n")}else if("strictEqualObject"!==n&&E<(r.stderr&&r.stderr.isTTY?r.stderr.columns:80)){for(;u[0][p]===d[0][p];)p++;p>2&&(x="\n ".concat(function(e,t){if(t=Math.floor(t),0==e.length||0==t)return"";var n=e.length*t;for(t=Math.floor(Math.log(t)/Math.log(2));t;)e+=e,t--;return e+e.substring(0,n-e.length)}(" ",p),"^"),p=0)}}for(var S=u[u.length-1],k=d[d.length-1];S===k&&(p++<2?l="\n ".concat(S).concat(l):o=S,u.pop(),d.pop(),0!==u.length&&0!==d.length);)S=u[u.length-1],k=d[d.length-1];var _=Math.max(u.length,d.length);if(0===_){var O=c.split("\n");if(O.length>30)for(O[26]="".concat(g,"...").concat(v);O.length>27;)O.pop();return"".concat(y.notIdentical,"\n\n").concat(O.join("\n"),"\n")}p>3&&(l="\n".concat(g,"...").concat(v).concat(l),s=!0),""!==o&&(l="\n ".concat(o).concat(l),o="");var C=0,A=y[n]+"\n".concat(m,"+ actual").concat(v," ").concat(b,"- expected").concat(v),N=" ".concat(g,"...").concat(v," Lines skipped");for(p=0;p<_;p++){var R=p-a;if(u.length1&&p>2&&(R>4?(i+="\n".concat(g,"...").concat(v),s=!0):R>3&&(i+="\n ".concat(d[p-2]),C++),i+="\n ".concat(d[p-1]),C++),a=p,o+="\n".concat(b,"-").concat(v," ").concat(d[p]),C++;else if(d.length1&&p>2&&(R>4?(i+="\n".concat(g,"...").concat(v),s=!0):R>3&&(i+="\n ".concat(u[p-2]),C++),i+="\n ".concat(u[p-1]),C++),a=p,i+="\n".concat(m,"+").concat(v," ").concat(u[p]),C++;else{var T=d[p],P=u[p],M=P!==T&&(!h(P,",")||P.slice(0,-1)!==T);M&&h(T,",")&&T.slice(0,-1)===P&&(M=!1,P+=","),M?(R>1&&p>2&&(R>4?(i+="\n".concat(g,"...").concat(v),s=!0):R>3&&(i+="\n ".concat(u[p-2]),C++),i+="\n ".concat(u[p-1]),C++),a=p,i+="\n".concat(m,"+").concat(v," ").concat(P),o+="\n".concat(b,"-").concat(v," ").concat(T),C+=2):(i+=o,o="",(1===R||0===p)&&(i+="\n ".concat(P),C++))}if(C>20&&p<_-2)return"".concat(A).concat(N,"\n").concat(i,"\n").concat(g,"...").concat(v).concat(o,"\n")+"".concat(g,"...").concat(v)}return"".concat(A).concat(s?N:"","\n").concat(i).concat(o).concat(l).concat(x)}(c,d,o)));else if("notDeepStrictEqual"===o||"notStrictEqual"===o){var S=y[o],k=w(c).split("\n");if("notStrictEqual"===o&&"object"===f(c)&&null!==c&&(S=y.notStrictEqualObject),k.length>30)for(k[26]="".concat(g,"...").concat(v);k.length>27;)k.pop();t=1===k.length?i(this,u(l).call(this,"".concat(S," ").concat(k[0]))):i(this,u(l).call(this,"".concat(S,"\n\n").concat(k.join("\n"),"\n")))}else{var _=w(c),O="",C=y[o];"notDeepEqual"===o||"notEqual"===o?(_="".concat(y[o],"\n\n").concat(_)).length>1024&&(_="".concat(_.slice(0,1021),"...")):(O="".concat(w(d)),_.length>512&&(_="".concat(_.slice(0,509),"...")),O.length>512&&(O="".concat(O.slice(0,509),"...")),"deepEqual"===o||"equal"===o?_="".concat(C,"\n\n").concat(_,"\n\nshould equal\n\n"):O=" ".concat(o," ").concat(O)),t=i(this,u(l).call(this,"".concat(_).concat(O)))}return 
Error.stackTraceLimit=E,t.generatedMessage=!n,Object.defineProperty(a(t),"name",{value:"AssertionError [ERR_ASSERTION]",enumerable:!1,writable:!0,configurable:!0}),t.code="ERR_ASSERTION",t.actual=c,t.expected=d,t.operator=o,Error.captureStackTrace&&Error.captureStackTrace(a(t),s),t.stack,t.name="AssertionError",i(t)}return!function(e,t){if("function"!=typeof t&&null!==t)throw TypeError("Super expression must either be null or a function");e.prototype=Object.create(t&&t.prototype,{constructor:{value:e,writable:!0,configurable:!0}}),t&&c(e,t)}(l,e),t=[{key:"toString",value:function(){return"".concat(this.name," [").concat(this.code,"]: ").concat(this.message)}},{key:d.custom,value:function(e,t){return d(this,function(e){for(var t=1;t2)?"one of ".concat(t," ").concat(e.slice(0,n-1).join(", "),", or ")+e[n-1]:2===n?"one of ".concat(t," ").concat(e[0]," or ").concat(e[1]):"of ".concat(t," ").concat(e[0])}c("ERR_AMBIGUOUS_ARGUMENT",'The "%s" argument is ambiguous. %s',TypeError),c("ERR_INVALID_ARG_TYPE",function(e,t,o){if((void 0===a&&(a=n(167)),a("string"==typeof e,"'name' must be a string"),"string"==typeof t&&(i="not ",t.substr(!l||l<0?0:+l,i.length)===i))?(d="must not be",t=t.replace(/^not /,"")):d="must be",s=" argument",(void 0===c||c>e.length)&&(c=e.length),e.substring(c-s.length,c)===s)p="The ".concat(e," ").concat(d," ").concat(u(t,"type"));else{var i,l,s,c,f,d,p,h=("number"!=typeof f&&(f=0),f+1>e.length||-1===e.indexOf(".",f))?"argument":"property";p='The "'.concat(e,'" ').concat(h," ").concat(d," ").concat(u(t,"type"))}return p+". Received type ".concat(r(o))},TypeError),c("ERR_INVALID_ARG_VALUE",function(e,t){var r=arguments.length>2&&void 0!==arguments[2]?arguments[2]:"is invalid";void 0===l&&(l=n(177));var o=l.inspect(t);return o.length>128&&(o="".concat(o.slice(0,128),"...")),"The argument '".concat(e,"' ").concat(r,". 
Received ").concat(o)},TypeError,RangeError),c("ERR_INVALID_RETURN_VALUE",function(e,t,n){var o;return o=n&&n.constructor&&n.constructor.name?"instance of ".concat(n.constructor.name):"type ".concat(r(n)),"Expected ".concat(e,' to be returned from the "').concat(t,'"')+" function but got ".concat(o,".")},TypeError),c("ERR_MISSING_ARGS",function(){for(var e=arguments.length,t=Array(e),r=0;r0,"At least one arg needs to be specified");var o="The ",i=t.length;switch(t=t.map(function(e){return'"'.concat(e,'"')}),i){case 1:o+="".concat(t[0]," argument");break;case 2:o+="".concat(t[0]," and ").concat(t[1]," arguments");break;default:o+=t.slice(0,i-1).join(", ")+", and ".concat(t[i-1]," arguments")}return"".concat(o," must be specified")},TypeError),e.exports.codes=s},176:function(e,t,n){"use strict";function r(e,t){return function(e){if(Array.isArray(e))return e}(e)||function(e,t){var n=[],r=!0,o=!1,i=void 0;try{for(var a,l=e[Symbol.iterator]();!(r=(a=l.next()).done)&&(n.push(a.value),!t||n.length!==t);r=!0);}catch(e){o=!0,i=e}finally{try{r||null==l.return||l.return()}finally{if(o)throw i}}return n}(e,t)||function(){throw TypeError("Invalid attempt to destructure non-iterable instance")}()}function o(e){return(o="function"==typeof Symbol&&"symbol"==typeof Symbol.iterator?function(e){return typeof e}:function(e){return e&&"function"==typeof Symbol&&e.constructor===Symbol&&e!==Symbol.prototype?"symbol":typeof e})(e)}var i=void 0!==/a/g.flags,a=function(e){var t=[];return e.forEach(function(e){return t.push(e)}),t},l=function(e){var t=[];return e.forEach(function(e,n){return t.push([n,e])}),t},s=Object.is?Object.is:n(208),c=Object.getOwnPropertySymbols?Object.getOwnPropertySymbols:function(){return[]},u=Number.isNaN?Number.isNaN:n(718);function f(e){return e.call.bind(e)}var d=f(Object.prototype.hasOwnProperty),p=f(Object.prototype.propertyIsEnumerable),h=f(Object.prototype.toString),g=n(177).types,m=g.isAnyArrayBuffer,b=g.isArrayBufferView,v=g.isDate,y=g.isMap,x=g.isRegExp,w=g.isSet,E=g.isNativeError,S=g.isBoxedPrimitive,k=g.isNumberObject,_=g.isStringObject,O=g.isBooleanObject,C=g.isBigIntObject,A=g.isSymbolObject,N=g.isFloat32Array,R=g.isFloat64Array;function T(e){if(0===e.length||e.length>10)return!0;for(var t=0;t57)return!0}return 10===e.length&&e>=4294967296}function P(e){return Object.keys(e).filter(T).concat(c(e).filter(Object.prototype.propertyIsEnumerable.bind(e)))}/*! - * The buffer module from node.js, for the browser. 
- * - * @author Feross Aboukhadijeh - * @license MIT - */function M(e,t){if(e===t)return 0;for(var n=e.length,r=t.length,o=0,i=Math.min(n,r);o-1?o(n):n}},139:function(e,t,n){"use strict";var r=n(174),o=n(500),i=o("%Function.prototype.apply%"),a=o("%Function.prototype.call%"),l=o("%Reflect.apply%",!0)||r.call(a,i),s=o("%Object.getOwnPropertyDescriptor%",!0),c=o("%Object.defineProperty%",!0),u=o("%Math.max%");if(c)try{c({},"a",{value:1})}catch(e){c=null}e.exports=function(e){var t=l(r,a,arguments);return s&&c&&s(t,"length").configurable&&c(t,"length",{value:1+u(0,e.length-(arguments.length-1))}),t};var f=function(){return l(r,i,arguments)};c?c(e.exports,"apply",{value:f}):e.exports.apply=f},69:function(e,t,n){"use strict";var r=n(935),o="function"==typeof Symbol&&"symbol"==typeof Symbol("foo"),i=Object.prototype.toString,a=Array.prototype.concat,l=Object.defineProperty,s=l&&function(){var e={};try{for(var t in l(e,"x",{enumerable:!1,value:e}),e)return!1;return e.x===e}catch(e){return!1}}(),c=function(e,t,n,r){(!(t in e)||"function"==typeof r&&"[object Function]"===i.call(r)&&r())&&(s?l(e,t,{configurable:!0,enumerable:!1,value:n,writable:!0}):e[t]=n)},u=function(e,t){var n=arguments.length>2?arguments[2]:{},i=r(t);o&&(i=a.call(i,Object.getOwnPropertySymbols(t)));for(var l=0;l1&&"boolean"!=typeof t)throw new a('"allowMissing" argument must be a boolean');if(null===k(/^%?[^%]*%?$/g,e))throw new o("`%` may not be present anywhere but at the beginning and end of the intrinsic name");var n=C(e),r=n.length>0?n[0]:"",i=A("%"+r+"%",t),l=i.name,c=i.value,u=!1,f=i.alias;f&&(r=f[0],w(n,x([0,1],f)));for(var d=1,p=!0;d=n.length){var v=s(c,h);c=(p=!!v)&&"get"in v&&!("originalValue"in v.get)?v.get:c[h]}else p=y(c,h),c=c[h];p&&!u&&(g[l]=c)}}return c}},942:function(e,t,n){"use strict";var r="undefined"!=typeof Symbol&&Symbol,o=n(773);e.exports=function(){return"function"==typeof r&&"function"==typeof Symbol&&"symbol"==typeof r("foo")&&"symbol"==typeof Symbol("bar")&&o()}},773:function(e){"use strict";e.exports=function(){if("function"!=typeof Symbol||"function"!=typeof Object.getOwnPropertySymbols)return!1;if("symbol"==typeof Symbol.iterator)return!0;var e={},t=Symbol("test"),n=Object(t);if("string"==typeof t||"[object Symbol]"!==Object.prototype.toString.call(t)||"[object Symbol]"!==Object.prototype.toString.call(n))return!1;for(t in e[t]=42,e)return!1;if("function"==typeof Object.keys&&0!==Object.keys(e).length||"function"==typeof Object.getOwnPropertyNames&&0!==Object.getOwnPropertyNames(e).length)return!1;var r=Object.getOwnPropertySymbols(e);if(1!==r.length||r[0]!==t||!Object.prototype.propertyIsEnumerable.call(e,t))return!1;if("function"==typeof Object.getOwnPropertyDescriptor){var o=Object.getOwnPropertyDescriptor(e,t);if(42!==o.value||!0!==o.enumerable)return!1}return!0}},115:function(e,t,n){"use strict";var r="undefined"!=typeof Symbol&&Symbol,o=n(832);e.exports=function(){return"function"==typeof r&&"function"==typeof Symbol&&"symbol"==typeof r("foo")&&"symbol"==typeof Symbol("bar")&&o()}},832:function(e){"use strict";e.exports=function(){if("function"!=typeof Symbol||"function"!=typeof Object.getOwnPropertySymbols)return!1;if("symbol"==typeof Symbol.iterator)return!0;var e={},t=Symbol("test"),n=Object(t);if("string"==typeof t||"[object Symbol]"!==Object.prototype.toString.call(t)||"[object Symbol]"!==Object.prototype.toString.call(n))return!1;for(t in e[t]=42,e)return!1;if("function"==typeof Object.keys&&0!==Object.keys(e).length||"function"==typeof 
Object.getOwnPropertyNames&&0!==Object.getOwnPropertyNames(e).length)return!1;var r=Object.getOwnPropertySymbols(e);if(1!==r.length||r[0]!==t||!Object.prototype.propertyIsEnumerable.call(e,t))return!1;if("function"==typeof Object.getOwnPropertyDescriptor){var o=Object.getOwnPropertyDescriptor(e,t);if(42!==o.value||!0!==o.enumerable)return!1}return!0}},101:function(e,t,n){"use strict";var r=n(174);e.exports=r.call(Function.call,Object.prototype.hasOwnProperty)},782:function(e){"function"==typeof Object.create?e.exports=function(e,t){t&&(e.super_=t,e.prototype=Object.create(t.prototype,{constructor:{value:e,enumerable:!1,writable:!0,configurable:!0}}))}:e.exports=function(e,t){if(t){e.super_=t;var n=function(){};n.prototype=t.prototype,e.prototype=new n,e.prototype.constructor=e}}},157:function(e){"use strict";var t="function"==typeof Symbol&&"symbol"==typeof Symbol.toStringTag,n=Object.prototype.toString,r=function(e){return(!t||!e||"object"!=typeof e||!(Symbol.toStringTag in e))&&"[object Arguments]"===n.call(e)},o=function(e){return!!r(e)||null!==e&&"object"==typeof e&&"number"==typeof e.length&&e.length>=0&&"[object Array]"!==n.call(e)&&"[object Function]"===n.call(e.callee)},i=function(){return r(arguments)}();r.isLegacyArguments=o,e.exports=i?r:o},391:function(e){"use strict";var t=Object.prototype.toString,n=Function.prototype.toString,r=/^\s*(?:function)?\*/,o="function"==typeof Symbol&&"symbol"==typeof Symbol.toStringTag,i=Object.getPrototypeOf,a=function(){if(!o)return!1;try{return Function("return function*() {}")()}catch(e){}}(),l=a?i(a):{};e.exports=function(e){return"function"==typeof e&&(!!r.test(n.call(e))||(o?i(e)===l:"[object GeneratorFunction]"===t.call(e)))}},460:function(e){"use strict";e.exports=function(e){return e!=e}},718:function(e,t,n){"use strict";var r=n(139),o=n(69),i=n(460),a=n(625),l=n(171),s=r(a(),Number);o(s,{getPolyfill:a,implementation:i,shim:l}),e.exports=s},625:function(e,t,n){"use strict";var r=n(460);e.exports=function(){return Number.isNaN&&Number.isNaN(NaN)&&!Number.isNaN("a")?Number.isNaN:r}},171:function(e,t,n){"use strict";var r=n(69),o=n(625);e.exports=function(){var e=o();return r(Number,{isNaN:e},{isNaN:function(){return Number.isNaN!==e}}),e}},994:function(e,t,r){"use strict";var o=r(144),i=r(349),a=r(256),l=a("Object.prototype.toString"),s=r(942)()&&"symbol"==typeof Symbol.toStringTag,c=i(),u=a("Array.prototype.indexOf",!0)||function(e,t){for(var n=0;n-1)}},208:function(e){"use strict";var t=function(e){return e!=e};e.exports=function(e,n){return 0===e&&0===n?1/e==1/n:!!(e===n||t(e)&&t(n))}},579:function(e,t,n){"use strict";var r;if(!Object.keys){var o=Object.prototype.hasOwnProperty,i=Object.prototype.toString,a=n(412),l=Object.prototype.propertyIsEnumerable,s=!l.call({toString:null},"toString"),c=l.call(function(){},"prototype"),u=["toString","toLocaleString","valueOf","hasOwnProperty","isPrototypeOf","propertyIsEnumerable","constructor"],f=function(e){var t=e.constructor;return t&&t.prototype===e},d={$applicationCache:!0,$console:!0,$external:!0,$frame:!0,$frameElement:!0,$frames:!0,$innerHeight:!0,$innerWidth:!0,$onmozfullscreenchange:!0,$onmozfullscreenerror:!0,$outerHeight:!0,$outerWidth:!0,$pageXOffset:!0,$pageYOffset:!0,$parent:!0,$scrollLeft:!0,$scrollTop:!0,$scrollX:!0,$scrollY:!0,$self:!0,$webkitIndexedDB:!0,$webkitStorageInfo:!0,$window:!0},p=function(){if("undefined"==typeof window)return!1;for(var e in window)try{if(!d["$"+e]&&o.call(window,e)&&null!==window[e]&&"object"==typeof 
window[e])try{f(window[e])}catch(e){return!0}}catch(e){return!0}return!1}(),h=function(e){if("undefined"==typeof window||!p)return f(e);try{return f(e)}catch(e){return!1}};r=function(e){var t=null!==e&&"object"==typeof e,n="[object Function]"===i.call(e),r=a(e),l=t&&"[object String]"===i.call(e),f=[];if(!t&&!n&&!r)throw TypeError("Object.keys called on a non-object");var d=c&&n;if(l&&e.length>0&&!o.call(e,0))for(var p=0;p0)for(var g=0;g=0&&"[object Function]"===t.call(e.callee)),r}},369:function(e){e.exports=function(e){return e instanceof o}},584:function(e,t,n){"use strict";var r=n(157),o=n(391),i=n(490),a=n(994);function l(e){return e.call.bind(e)}var s="undefined"!=typeof BigInt,c="undefined"!=typeof Symbol,u=l(Object.prototype.toString),f=l(Number.prototype.valueOf),d=l(String.prototype.valueOf),p=l(Boolean.prototype.valueOf);if(s)var h=l(BigInt.prototype.valueOf);if(c)var g=l(Symbol.prototype.valueOf);function m(e,t){if("object"!=typeof e)return!1;try{return t(e),!0}catch(e){return!1}}function b(e){return"[object Map]"===u(e)}function v(e){return"[object Set]"===u(e)}function y(e){return"[object WeakMap]"===u(e)}function x(e){return"[object WeakSet]"===u(e)}function w(e){return"[object ArrayBuffer]"===u(e)}function E(e){return"undefined"!=typeof ArrayBuffer&&(w.working?w(e):e instanceof ArrayBuffer)}function S(e){return"[object DataView]"===u(e)}function k(e){return"undefined"!=typeof DataView&&(S.working?S(e):e instanceof DataView)}t.isArgumentsObject=r,t.isGeneratorFunction=o,t.isTypedArray=a,t.isPromise=function(e){return"undefined"!=typeof Promise&&e instanceof Promise||null!==e&&"object"==typeof e&&"function"==typeof e.then&&"function"==typeof e.catch},t.isArrayBufferView=function(e){return"undefined"!=typeof ArrayBuffer&&ArrayBuffer.isView?ArrayBuffer.isView(e):a(e)||k(e)},t.isUint8Array=function(e){return"Uint8Array"===i(e)},t.isUint8ClampedArray=function(e){return"Uint8ClampedArray"===i(e)},t.isUint16Array=function(e){return"Uint16Array"===i(e)},t.isUint32Array=function(e){return"Uint32Array"===i(e)},t.isInt8Array=function(e){return"Int8Array"===i(e)},t.isInt16Array=function(e){return"Int16Array"===i(e)},t.isInt32Array=function(e){return"Int32Array"===i(e)},t.isFloat32Array=function(e){return"Float32Array"===i(e)},t.isFloat64Array=function(e){return"Float64Array"===i(e)},t.isBigInt64Array=function(e){return"BigInt64Array"===i(e)},t.isBigUint64Array=function(e){return"BigUint64Array"===i(e)},b.working="undefined"!=typeof Map&&b(new Map),t.isMap=function(e){return"undefined"!=typeof Map&&(b.working?b(e):e instanceof Map)},v.working="undefined"!=typeof Set&&v(new Set),t.isSet=function(e){return"undefined"!=typeof Set&&(v.working?v(e):e instanceof Set)},y.working="undefined"!=typeof WeakMap&&y(new WeakMap),t.isWeakMap=function(e){return"undefined"!=typeof WeakMap&&(y.working?y(e):e instanceof WeakMap)},x.working="undefined"!=typeof WeakSet&&x(new WeakSet),t.isWeakSet=function(e){return x(e)},w.working="undefined"!=typeof ArrayBuffer&&w(new ArrayBuffer),t.isArrayBuffer=E,S.working="undefined"!=typeof ArrayBuffer&&"undefined"!=typeof DataView&&S(new DataView(new ArrayBuffer(1),0,1)),t.isDataView=k;var _="undefined"!=typeof SharedArrayBuffer?SharedArrayBuffer:void 0;function O(e){return"[object SharedArrayBuffer]"===u(e)}function C(e){return void 0!==_&&(void 0===O.working&&(O.working=O(new _)),O.working?O(e):e instanceof _)}function A(e){return m(e,f)}function N(e){return m(e,d)}function R(e){return m(e,p)}function T(e){return s&&m(e,h)}function P(e){return 
c&&m(e,g)}t.isSharedArrayBuffer=C,t.isAsyncFunction=function(e){return"[object AsyncFunction]"===u(e)},t.isMapIterator=function(e){return"[object Map Iterator]"===u(e)},t.isSetIterator=function(e){return"[object Set Iterator]"===u(e)},t.isGeneratorObject=function(e){return"[object Generator]"===u(e)},t.isWebAssemblyCompiledModule=function(e){return"[object WebAssembly.Module]"===u(e)},t.isNumberObject=A,t.isStringObject=N,t.isBooleanObject=R,t.isBigIntObject=T,t.isSymbolObject=P,t.isBoxedPrimitive=function(e){return A(e)||N(e)||R(e)||T(e)||P(e)},t.isAnyArrayBuffer=function(e){return"undefined"!=typeof Uint8Array&&(E(e)||C(e))},["isProxy","isExternal","isModuleNamespaceObject"].forEach(function(e){Object.defineProperty(t,e,{enumerable:!1,value:function(){throw Error(e+" is not supported in userland")}})})},177:function(e,t,n){var o=Object.getOwnPropertyDescriptors||function(e){for(var t=Object.keys(e),n={},r=0;r=o)return e;switch(e){case"%s":return String(r[n++]);case"%d":return Number(r[n++]);case"%j":try{return JSON.stringify(r[n++])}catch(e){return"[Circular]"}default:return e}}),l=r[n];n=3&&(r.depth=arguments[2]),arguments.length>=4&&(r.colors=arguments[3]),m(n)?r.showHidden=n:n&&t._extend(r,n),x(r.showHidden)&&(r.showHidden=!1),x(r.depth)&&(r.depth=2),x(r.colors)&&(r.colors=!1),x(r.customInspect)&&(r.customInspect=!0),r.colors&&(r.stylize=u),d(r,e,r.depth)}function u(e,t){var n=c.styles[t];return n?"\x1b["+c.colors[n][0]+"m"+e+"\x1b["+c.colors[n][1]+"m":e}function f(e,t){return e}function d(e,n,r){if(e.customInspect&&n&&_(n.inspect)&&n.inspect!==t.inspect&&!(n.constructor&&n.constructor.prototype===n)){var o,i,a,l,s,c=n.inspect(r,e);return y(c)||(c=d(e,c,r)),c}var u=function(e,t){if(x(t))return e.stylize("undefined","undefined");if(y(t)){var n="'"+JSON.stringify(t).replace(/^"|"$/g,"").replace(/'/g,"\\'").replace(/\\"/g,'"')+"'";return e.stylize(n,"string")}return v(t)?e.stylize(""+t,"number"):m(t)?e.stylize(""+t,"boolean"):b(t)?e.stylize("null","null"):void 0}(e,n);if(u)return u;var f=Object.keys(n),E=(l={},f.forEach(function(e,t){l[e]=!0}),l);if(e.showHidden&&(f=Object.getOwnPropertyNames(n)),k(n)&&(f.indexOf("message")>=0||f.indexOf("description")>=0))return p(n);if(0===f.length){if(_(n)){var O=n.name?": "+n.name:"";return e.stylize("[Function"+O+"]","special")}if(w(n))return e.stylize(RegExp.prototype.toString.call(n),"regexp");if(S(n))return e.stylize(Date.prototype.toString.call(n),"date");if(k(n))return p(n)}var C="",A=!1,R=["{","}"];return(g(n)&&(A=!0,R=["[","]"]),_(n)&&(C=" [Function"+(n.name?": "+n.name:"")+"]"),w(n)&&(C=" "+RegExp.prototype.toString.call(n)),S(n)&&(C=" "+Date.prototype.toUTCString.call(n)),k(n)&&(C=" "+p(n)),0!==f.length||A&&0!=n.length)?r<0?w(n)?e.stylize(RegExp.prototype.toString.call(n),"regexp"):e.stylize("[Object]","special"):(e.seen.push(n),s=A?function(e,t,n,r,o){for(var i=[],a=0,l=t.length;a=0&&a++,e+t.replace(/\u001b\[\d\d?m/g,"").length+1},0)>60?i[0]+(""===o?"":o+"\n ")+" "+s.join(",\n ")+" "+i[1]:i[0]+o+" "+s.join(", ")+" "+i[1]):R[0]+C+R[1]}function p(e){return"["+Error.prototype.toString.call(e)+"]"}function h(e,t,n,r,o,i){var a,l,s;if((s=Object.getOwnPropertyDescriptor(t,o)||{value:t[o]}).get?l=s.set?e.stylize("[Getter/Setter]","special"):e.stylize("[Getter]","special"):s.set&&(l=e.stylize("[Setter]","special")),N(r,o)||(a="["+o+"]"),!l&&(0>e.seen.indexOf(s.value)?(l=b(n)?d(e,s.value,null):d(e,s.value,n-1)).indexOf("\n")>-1&&(l=i?l.split("\n").map(function(e){return" "+e}).join("\n").substr(2):"\n"+l.split("\n").map(function(e){return" 
"+e}).join("\n")):l=e.stylize("[Circular]","special")),x(a)){if(i&&o.match(/^\d+$/))return l;(a=JSON.stringify(""+o)).match(/^"([a-zA-Z_][a-zA-Z_0-9]*)"$/)?(a=a.substr(1,a.length-2),a=e.stylize(a,"name")):(a=a.replace(/'/g,"\\'").replace(/\\"/g,'"').replace(/(^"|"$)/g,"'"),a=e.stylize(a,"string"))}return a+": "+l}function g(e){return Array.isArray(e)}function m(e){return"boolean"==typeof e}function b(e){return null===e}function v(e){return"number"==typeof e}function y(e){return"string"==typeof e}function x(e){return void 0===e}function w(e){return E(e)&&"[object RegExp]"===O(e)}function E(e){return"object"==typeof e&&null!==e}function S(e){return E(e)&&"[object Date]"===O(e)}function k(e){return E(e)&&("[object Error]"===O(e)||e instanceof Error)}function _(e){return"function"==typeof e}function O(e){return Object.prototype.toString.call(e)}function C(e){return e<10?"0"+e.toString(10):e.toString(10)}t.debuglog=function(e){if(!a[e=e.toUpperCase()]){if(l.test(e)){var n=r.pid;a[e]=function(){var r=t.format.apply(t,arguments);console.error("%s %d: %s",e,n,r)}}else a[e]=function(){}}return a[e]},t.inspect=c,c.colors={bold:[1,22],italic:[3,23],underline:[4,24],inverse:[7,27],white:[37,39],grey:[90,39],black:[30,39],blue:[34,39],cyan:[36,39],green:[32,39],magenta:[35,39],red:[31,39],yellow:[33,39]},c.styles={special:"cyan",number:"yellow",boolean:"yellow",undefined:"grey",null:"bold",string:"green",date:"magenta",regexp:"red"},t.types=n(584),t.isArray=g,t.isBoolean=m,t.isNull=b,t.isNullOrUndefined=function(e){return null==e},t.isNumber=v,t.isString=y,t.isSymbol=function(e){return"symbol"==typeof e},t.isUndefined=x,t.isRegExp=w,t.types.isRegExp=w,t.isObject=E,t.isDate=S,t.types.isDate=S,t.isError=k,t.types.isNativeError=k,t.isFunction=_,t.isPrimitive=function(e){return null===e||"boolean"==typeof e||"number"==typeof e||"string"==typeof e||"symbol"==typeof e||void 0===e},t.isBuffer=n(369);var A=["Jan","Feb","Mar","Apr","May","Jun","Jul","Aug","Sep","Oct","Nov","Dec"];function N(e,t){return Object.prototype.hasOwnProperty.call(e,t)}t.log=function(){var e,n;console.log("%s - %s",(n=[C((e=new Date).getHours()),C(e.getMinutes()),C(e.getSeconds())].join(":"),[e.getDate(),A[e.getMonth()],n].join(" ")),t.format.apply(t,arguments))},t.inherits=n(782),t._extend=function(e,t){if(!t||!E(t))return e;for(var n=Object.keys(t),r=n.length;r--;)e[n[r]]=t[n[r]];return e};var R="undefined"!=typeof Symbol?Symbol("util.promisify.custom"):void 0;function T(e,t){if(!e){var n=Error("Promise was rejected with a falsy value");n.reason=e,e=n}return t(e)}t.promisify=function(e){if("function"!=typeof e)throw TypeError('The "original" argument must be of type Function');if(R&&e[R]){var t=e[R];if("function"!=typeof t)throw TypeError('The "util.promisify.custom" argument must be of type Function');return Object.defineProperty(t,R,{value:t,enumerable:!1,writable:!1,configurable:!0}),t}function t(){for(var t,n,r=new Promise(function(e,r){t=e,n=r}),o=[],i=0;i0?a-4:a;for(n=0;n>16&255,c[u++]=t>>8&255,c[u++]=255&t;return 2===l&&(t=r[e.charCodeAt(n)]<<2|r[e.charCodeAt(n+1)]>>4,c[u++]=255&t),1===l&&(t=r[e.charCodeAt(n)]<<10|r[e.charCodeAt(n+1)]<<4|r[e.charCodeAt(n+2)]>>2,c[u++]=t>>8&255,c[u++]=255&t),c},t.fromByteArray=function(e){for(var t,r=e.length,o=r%3,i=[],a=0,l=r-o;a>18&63]+n[o>>12&63]+n[o>>6&63]+n[63&o]);return i.join("")}(e,a,a+16383>l?l:a+16383));return 1===o?i.push(n[(t=e[r-1])>>2]+n[t<<4&63]+"=="):2===o&&i.push(n[(t=(e[r-2]<<8)+e[r-1])>>10]+n[t>>4&63]+n[t<<2&63]+"="),i.join("")};for(var n=[],r=[],o="undefined"!=typeof 
Uint8Array?Uint8Array:Array,i="ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/",a=0,l=i.length;a0)throw Error("Invalid string. Length must be a multiple of 4");var n=e.indexOf("=");-1===n&&(n=t);var r=n===t?0:4-n%4;return[n,r]}r["-".charCodeAt(0)]=62,r["_".charCodeAt(0)]=63},72:function(e,t,n){"use strict";/*! - * The buffer module from node.js, for the browser. - * - * @author Feross Aboukhadijeh - * @license MIT - */var r=n(675),o=n(783),i="function"==typeof Symbol&&"function"==typeof Symbol.for?Symbol.for("nodejs.util.inspect.custom"):null;function a(e){if(e>2147483647)throw RangeError('The value "'+e+'" is invalid for option "size"');var t=new Uint8Array(e);return Object.setPrototypeOf(t,l.prototype),t}function l(e,t,n){if("number"==typeof e){if("string"==typeof t)throw TypeError('The "string" argument must be of type string. Received type number');return u(e)}return s(e,t,n)}function s(e,t,n){if("string"==typeof e)return function(e,t){if(("string"!=typeof t||""===t)&&(t="utf8"),!l.isEncoding(t))throw TypeError("Unknown encoding: "+t);var n=0|p(e,t),r=a(n),o=r.write(e,t);return o!==n&&(r=r.slice(0,o)),r}(e,t);if(ArrayBuffer.isView(e))return f(e);if(null==e)throw TypeError("The first argument must be one of type string, Buffer, ArrayBuffer, Array, or Array-like Object. Received type "+typeof e);if(N(e,ArrayBuffer)||e&&N(e.buffer,ArrayBuffer)||"undefined"!=typeof SharedArrayBuffer&&(N(e,SharedArrayBuffer)||e&&N(e.buffer,SharedArrayBuffer)))return function(e,t,n){var r;if(t<0||e.byteLength=2147483647)throw RangeError("Attempt to allocate Buffer larger than maximum size: 0x7fffffff bytes");return 0|e}function p(e,t){if(l.isBuffer(e))return e.length;if(ArrayBuffer.isView(e)||N(e,ArrayBuffer))return e.byteLength;if("string"!=typeof e)throw TypeError('The "string" argument must be one of type string, Buffer, or ArrayBuffer. 
Received type '+typeof e);var n=e.length,r=arguments.length>2&&!0===arguments[2];if(!r&&0===n)return 0;for(var o=!1;;)switch(t){case"ascii":case"latin1":case"binary":return n;case"utf8":case"utf-8":return _(e).length;case"ucs2":case"ucs-2":case"utf16le":case"utf-16le":return 2*n;case"hex":return n>>>1;case"base64":return C(e).length;default:if(o)return r?-1:_(e).length;t=(""+t).toLowerCase(),o=!0}}function h(e,t,n){var o,i,a=!1;if((void 0===t||t<0)&&(t=0),t>this.length||((void 0===n||n>this.length)&&(n=this.length),n<=0||(n>>>=0)<=(t>>>=0)))return"";for(e||(e="utf8");;)switch(e){case"hex":return function(e,t,n){var r=e.length;(!t||t<0)&&(t=0),(!n||n<0||n>r)&&(n=r);for(var o="",i=t;i2147483647?n=2147483647:n<-2147483648&&(n=-2147483648),(i=n=+n)!=i&&(n=o?0:e.length-1),n<0&&(n=e.length+n),n>=e.length){if(o)return -1;n=e.length-1}else if(n<0){if(!o)return -1;n=0}if("string"==typeof t&&(t=l.from(t,r)),l.isBuffer(t))return 0===t.length?-1:b(e,t,n,r,o);if("number"==typeof t)return(t&=255,"function"==typeof Uint8Array.prototype.indexOf)?o?Uint8Array.prototype.indexOf.call(e,t,n):Uint8Array.prototype.lastIndexOf.call(e,t,n):b(e,[t],n,r,o);throw TypeError("val must be string, number or Buffer")}function b(e,t,n,r,o){var i,a=1,l=e.length,s=t.length;if(void 0!==r&&("ucs2"===(r=String(r).toLowerCase())||"ucs-2"===r||"utf16le"===r||"utf-16le"===r)){if(e.length<2||t.length<2)return -1;a=2,l/=2,s/=2,n/=2}function c(e,t){return 1===a?e[t]:e.readUInt16BE(t*a)}if(o){var u=-1;for(i=n;il&&(n=l-s),i=n;i>=0;i--){for(var f=!0,d=0;d239?4:c>223?3:c>191?2:1;if(o+f<=n)switch(f){case 1:c<128&&(u=c);break;case 2:(192&(i=e[o+1]))==128&&(s=(31&c)<<6|63&i)>127&&(u=s);break;case 3:i=e[o+1],a=e[o+2],(192&i)==128&&(192&a)==128&&(s=(15&c)<<12|(63&i)<<6|63&a)>2047&&(s<55296||s>57343)&&(u=s);break;case 4:i=e[o+1],a=e[o+2],l=e[o+3],(192&i)==128&&(192&a)==128&&(192&l)==128&&(s=(15&c)<<18|(63&i)<<12|(63&a)<<6|63&l)>65535&&s<1114112&&(u=s)}null===u?(u=65533,f=1):u>65535&&(u-=65536,r.push(u>>>10&1023|55296),u=56320|1023&u),r.push(u),o+=f}return function(e){var t=e.length;if(t<=4096)return String.fromCharCode.apply(String,e);for(var n="",r=0;rn)throw RangeError("Trying to access beyond buffer length")}function x(e,t,n,r,o,i){if(!l.isBuffer(e))throw TypeError('"buffer" argument must be a Buffer instance');if(t>o||te.length)throw RangeError("Index out of range")}function w(e,t,n,r,o,i){if(n+r>e.length||n<0)throw RangeError("Index out of range")}function E(e,t,n,r,i){return t=+t,n>>>=0,i||w(e,t,n,4,34028234663852886e22,-34028234663852886e22),o.write(e,t,n,r,23,4),n+4}function S(e,t,n,r,i){return t=+t,n>>>=0,i||w(e,t,n,8,17976931348623157e292,-17976931348623157e292),o.write(e,t,n,r,52,8),n+8}t.Buffer=l,t.SlowBuffer=function(e){return+e!=e&&(e=0),l.alloc(+e)},t.INSPECT_MAX_BYTES=50,t.kMaxLength=2147483647,l.TYPED_ARRAY_SUPPORT=function(){try{var e=new Uint8Array(1),t={foo:function(){return 42}};return Object.setPrototypeOf(t,Uint8Array.prototype),Object.setPrototypeOf(e,t),42===e.foo()}catch(e){return!1}}(),l.TYPED_ARRAY_SUPPORT||"undefined"==typeof console||"function"!=typeof console.error||console.error("This browser lacks typed array (Uint8Array) support which is required by `buffer` v5.x. 
Use `buffer` v4.x if you require old browser support."),Object.defineProperty(l.prototype,"parent",{enumerable:!0,get:function(){if(l.isBuffer(this))return this.buffer}}),Object.defineProperty(l.prototype,"offset",{enumerable:!0,get:function(){if(l.isBuffer(this))return this.byteOffset}}),l.poolSize=8192,l.from=function(e,t,n){return s(e,t,n)},Object.setPrototypeOf(l.prototype,Uint8Array.prototype),Object.setPrototypeOf(l,Uint8Array),l.alloc=function(e,t,n){return(c(e),e<=0)?a(e):void 0!==t?"string"==typeof n?a(e).fill(t,n):a(e).fill(t):a(e)},l.allocUnsafe=function(e){return u(e)},l.allocUnsafeSlow=function(e){return u(e)},l.isBuffer=function(e){return null!=e&&!0===e._isBuffer&&e!==l.prototype},l.compare=function(e,t){if(N(e,Uint8Array)&&(e=l.from(e,e.offset,e.byteLength)),N(t,Uint8Array)&&(t=l.from(t,t.offset,t.byteLength)),!l.isBuffer(e)||!l.isBuffer(t))throw TypeError('The "buf1", "buf2" arguments must be one of type Buffer or Uint8Array');if(e===t)return 0;for(var n=e.length,r=t.length,o=0,i=Math.min(n,r);on&&(e+=" ... "),""},i&&(l.prototype[i]=l.prototype.inspect),l.prototype.compare=function(e,t,n,r,o){if(N(e,Uint8Array)&&(e=l.from(e,e.offset,e.byteLength)),!l.isBuffer(e))throw TypeError('The "target" argument must be one of type Buffer or Uint8Array. Received type '+typeof e);if(void 0===t&&(t=0),void 0===n&&(n=e?e.length:0),void 0===r&&(r=0),void 0===o&&(o=this.length),t<0||n>e.length||r<0||o>this.length)throw RangeError("out of range index");if(r>=o&&t>=n)return 0;if(r>=o)return -1;if(t>=n)return 1;if(t>>>=0,n>>>=0,r>>>=0,o>>>=0,this===e)return 0;for(var i=o-r,a=n-t,s=Math.min(i,a),c=this.slice(r,o),u=e.slice(t,n),f=0;f>>=0,isFinite(n)?(n>>>=0,void 0===r&&(r="utf8")):(r=n,n=void 0);else throw Error("Buffer.write(string, encoding, offset[, length]) is no longer supported");var o,i,a,l,s,c,u,f,d,p,h,g,m=this.length-t;if((void 0===n||n>m)&&(n=m),e.length>0&&(n<0||t<0)||t>this.length)throw RangeError("Attempt to write outside buffer bounds");r||(r="utf8");for(var b=!1;;)switch(r){case"hex":return function(e,t,n,r){n=Number(n)||0;var o=e.length-n;r?(r=Number(r))>o&&(r=o):r=o;var i=t.length;r>i/2&&(r=i/2);for(var a=0;a>8,o.push(n%256),o.push(r);return o}(e,this.length-h),this,h,g);default:if(b)throw TypeError("Unknown encoding: "+r);r=(""+r).toLowerCase(),b=!0}},l.prototype.toJSON=function(){return{type:"Buffer",data:Array.prototype.slice.call(this._arr||this,0)}},l.prototype.slice=function(e,t){var n=this.length;e=~~e,t=void 0===t?n:~~t,e<0?(e+=n)<0&&(e=0):e>n&&(e=n),t<0?(t+=n)<0&&(t=0):t>n&&(t=n),t>>=0,t>>>=0,n||y(e,t,this.length);for(var r=this[e],o=1,i=0;++i>>=0,t>>>=0,n||y(e,t,this.length);for(var r=this[e+--t],o=1;t>0&&(o*=256);)r+=this[e+--t]*o;return r},l.prototype.readUInt8=function(e,t){return e>>>=0,t||y(e,1,this.length),this[e]},l.prototype.readUInt16LE=function(e,t){return e>>>=0,t||y(e,2,this.length),this[e]|this[e+1]<<8},l.prototype.readUInt16BE=function(e,t){return e>>>=0,t||y(e,2,this.length),this[e]<<8|this[e+1]},l.prototype.readUInt32LE=function(e,t){return e>>>=0,t||y(e,4,this.length),(this[e]|this[e+1]<<8|this[e+2]<<16)+16777216*this[e+3]},l.prototype.readUInt32BE=function(e,t){return e>>>=0,t||y(e,4,this.length),16777216*this[e]+(this[e+1]<<16|this[e+2]<<8|this[e+3])},l.prototype.readIntLE=function(e,t,n){e>>>=0,t>>>=0,n||y(e,t,this.length);for(var r=this[e],o=1,i=0;++i=(o*=128)&&(r-=Math.pow(2,8*t)),r},l.prototype.readIntBE=function(e,t,n){e>>>=0,t>>>=0,n||y(e,t,this.length);for(var r=t,o=1,i=this[e+--r];r>0&&(o*=256);)i+=this[e+--r]*o;return 
i>=(o*=128)&&(i-=Math.pow(2,8*t)),i},l.prototype.readInt8=function(e,t){return(e>>>=0,t||y(e,1,this.length),128&this[e])?-((255-this[e]+1)*1):this[e]},l.prototype.readInt16LE=function(e,t){e>>>=0,t||y(e,2,this.length);var n=this[e]|this[e+1]<<8;return 32768&n?4294901760|n:n},l.prototype.readInt16BE=function(e,t){e>>>=0,t||y(e,2,this.length);var n=this[e+1]|this[e]<<8;return 32768&n?4294901760|n:n},l.prototype.readInt32LE=function(e,t){return e>>>=0,t||y(e,4,this.length),this[e]|this[e+1]<<8|this[e+2]<<16|this[e+3]<<24},l.prototype.readInt32BE=function(e,t){return e>>>=0,t||y(e,4,this.length),this[e]<<24|this[e+1]<<16|this[e+2]<<8|this[e+3]},l.prototype.readFloatLE=function(e,t){return e>>>=0,t||y(e,4,this.length),o.read(this,e,!0,23,4)},l.prototype.readFloatBE=function(e,t){return e>>>=0,t||y(e,4,this.length),o.read(this,e,!1,23,4)},l.prototype.readDoubleLE=function(e,t){return e>>>=0,t||y(e,8,this.length),o.read(this,e,!0,52,8)},l.prototype.readDoubleBE=function(e,t){return e>>>=0,t||y(e,8,this.length),o.read(this,e,!1,52,8)},l.prototype.writeUIntLE=function(e,t,n,r){if(e=+e,t>>>=0,n>>>=0,!r){var o=Math.pow(2,8*n)-1;x(this,e,t,n,o,0)}var i=1,a=0;for(this[t]=255&e;++a>>=0,n>>>=0,!r){var o=Math.pow(2,8*n)-1;x(this,e,t,n,o,0)}var i=n-1,a=1;for(this[t+i]=255&e;--i>=0&&(a*=256);)this[t+i]=e/a&255;return t+n},l.prototype.writeUInt8=function(e,t,n){return e=+e,t>>>=0,n||x(this,e,t,1,255,0),this[t]=255&e,t+1},l.prototype.writeUInt16LE=function(e,t,n){return e=+e,t>>>=0,n||x(this,e,t,2,65535,0),this[t]=255&e,this[t+1]=e>>>8,t+2},l.prototype.writeUInt16BE=function(e,t,n){return e=+e,t>>>=0,n||x(this,e,t,2,65535,0),this[t]=e>>>8,this[t+1]=255&e,t+2},l.prototype.writeUInt32LE=function(e,t,n){return e=+e,t>>>=0,n||x(this,e,t,4,4294967295,0),this[t+3]=e>>>24,this[t+2]=e>>>16,this[t+1]=e>>>8,this[t]=255&e,t+4},l.prototype.writeUInt32BE=function(e,t,n){return e=+e,t>>>=0,n||x(this,e,t,4,4294967295,0),this[t]=e>>>24,this[t+1]=e>>>16,this[t+2]=e>>>8,this[t+3]=255&e,t+4},l.prototype.writeIntLE=function(e,t,n,r){if(e=+e,t>>>=0,!r){var o=Math.pow(2,8*n-1);x(this,e,t,n,o-1,-o)}var i=0,a=1,l=0;for(this[t]=255&e;++i>0)-l&255;return t+n},l.prototype.writeIntBE=function(e,t,n,r){if(e=+e,t>>>=0,!r){var o=Math.pow(2,8*n-1);x(this,e,t,n,o-1,-o)}var i=n-1,a=1,l=0;for(this[t+i]=255&e;--i>=0&&(a*=256);)e<0&&0===l&&0!==this[t+i+1]&&(l=1),this[t+i]=(e/a>>0)-l&255;return t+n},l.prototype.writeInt8=function(e,t,n){return e=+e,t>>>=0,n||x(this,e,t,1,127,-128),e<0&&(e=255+e+1),this[t]=255&e,t+1},l.prototype.writeInt16LE=function(e,t,n){return e=+e,t>>>=0,n||x(this,e,t,2,32767,-32768),this[t]=255&e,this[t+1]=e>>>8,t+2},l.prototype.writeInt16BE=function(e,t,n){return e=+e,t>>>=0,n||x(this,e,t,2,32767,-32768),this[t]=e>>>8,this[t+1]=255&e,t+2},l.prototype.writeInt32LE=function(e,t,n){return e=+e,t>>>=0,n||x(this,e,t,4,2147483647,-2147483648),this[t]=255&e,this[t+1]=e>>>8,this[t+2]=e>>>16,this[t+3]=e>>>24,t+4},l.prototype.writeInt32BE=function(e,t,n){return e=+e,t>>>=0,n||x(this,e,t,4,2147483647,-2147483648),e<0&&(e=4294967295+e+1),this[t]=e>>>24,this[t+1]=e>>>16,this[t+2]=e>>>8,this[t+3]=255&e,t+4},l.prototype.writeFloatLE=function(e,t,n){return E(this,e,t,!0,n)},l.prototype.writeFloatBE=function(e,t,n){return E(this,e,t,!1,n)},l.prototype.writeDoubleLE=function(e,t,n){return S(this,e,t,!0,n)},l.prototype.writeDoubleBE=function(e,t,n){return S(this,e,t,!1,n)},l.prototype.copy=function(e,t,n,r){if(!l.isBuffer(e))throw TypeError("argument should be a 
Buffer");if(n||(n=0),r||0===r||(r=this.length),t>=e.length&&(t=e.length),t||(t=0),r>0&&r=this.length)throw RangeError("Index out of range");if(r<0)throw RangeError("sourceEnd out of bounds");r>this.length&&(r=this.length),e.length-t=0;--i)e[i+t]=this[i+n];else Uint8Array.prototype.set.call(e,this.subarray(n,r),t);return o},l.prototype.fill=function(e,t,n,r){if("string"==typeof e){if("string"==typeof t?(r=t,t=0,n=this.length):"string"==typeof n&&(r=n,n=this.length),void 0!==r&&"string"!=typeof r)throw TypeError("encoding must be a string");if("string"==typeof r&&!l.isEncoding(r))throw TypeError("Unknown encoding: "+r);if(1===e.length){var o,i=e.charCodeAt(0);("utf8"===r&&i<128||"latin1"===r)&&(e=i)}}else"number"==typeof e?e&=255:"boolean"==typeof e&&(e=Number(e));if(t<0||this.length>>=0,n=void 0===n?this.length:n>>>0,e||(e=0),"number"==typeof e)for(o=t;o55295&&n<57344){if(!o){if(n>56319||a+1===r){(t-=3)>-1&&i.push(239,191,189);continue}o=n;continue}if(n<56320){(t-=3)>-1&&i.push(239,191,189),o=n;continue}n=(o-55296<<10|n-56320)+65536}else o&&(t-=3)>-1&&i.push(239,191,189);if(o=null,n<128){if((t-=1)<0)break;i.push(n)}else if(n<2048){if((t-=2)<0)break;i.push(n>>6|192,63&n|128)}else if(n<65536){if((t-=3)<0)break;i.push(n>>12|224,n>>6&63|128,63&n|128)}else if(n<1114112){if((t-=4)<0)break;i.push(n>>18|240,n>>12&63|128,n>>6&63|128,63&n|128)}else throw Error("Invalid code point")}return i}function O(e){for(var t=[],n=0;n=t.length)&&!(o>=e.length);++o)t[o+n]=e[o];return o}function N(e,t){return e instanceof t||null!=e&&null!=e.constructor&&null!=e.constructor.name&&e.constructor.name===t.name}var R=function(){for(var e="0123456789abcdef",t=Array(256),n=0;n<16;++n)for(var r=16*n,o=0;o<16;++o)t[r+o]=e[n]+e[o];return t}()},783:function(e,t){/*! ieee754. BSD-3-Clause License. 
Feross Aboukhadijeh */t.read=function(e,t,n,r,o){var i,a,l=8*o-r-1,s=(1<>1,u=-7,f=n?o-1:0,d=n?-1:1,p=e[t+f];for(f+=d,i=p&(1<<-u)-1,p>>=-u,u+=l;u>0;i=256*i+e[t+f],f+=d,u-=8);for(a=i&(1<<-u)-1,i>>=-u,u+=r;u>0;a=256*a+e[t+f],f+=d,u-=8);if(0===i)i=1-c;else{if(i===s)return a?NaN:(p?-1:1)*(1/0);a+=Math.pow(2,r),i-=c}return(p?-1:1)*a*Math.pow(2,i-r)},t.write=function(e,t,n,r,o,i){var a,l,s,c=8*i-o-1,u=(1<>1,d=23===o?5960464477539062e-23:0,p=r?0:i-1,h=r?1:-1,g=t<0||0===t&&1/t<0?1:0;for(isNaN(t=Math.abs(t))||t===1/0?(l=isNaN(t)?1:0,a=u):(a=Math.floor(Math.log(t)/Math.LN2),t*(s=Math.pow(2,-a))<1&&(a--,s*=2),a+f>=1?t+=d/s:t+=d*Math.pow(2,1-f),t*s>=2&&(a++,s/=2),a+f>=u?(l=0,a=u):a+f>=1?(l=(t*s-1)*Math.pow(2,o),a+=f):(l=t*Math.pow(2,f-1)*Math.pow(2,o),a=0));o>=8;e[n+p]=255&l,p+=h,l/=256,o-=8);for(a=a<0;e[n+p]=255&a,p+=h,a/=256,c-=8);e[n+p-h]|=128*g}}},n={};function r(e){var o=n[e];if(void 0!==o)return o.exports;var i=n[e]={exports:{}},a=!0;try{t[e](i,i.exports,r),a=!1}finally{a&&delete n[e]}return i.exports}r.ab="//";var o=r(72);e.exports=o}()},66003:function(e){!function(){var t={229:function(e){var t,n,r,o=e.exports={};function i(){throw Error("setTimeout has not been defined")}function a(){throw Error("clearTimeout has not been defined")}function l(e){if(t===setTimeout)return setTimeout(e,0);if((t===i||!t)&&setTimeout)return t=setTimeout,setTimeout(e,0);try{return t(e,0)}catch(n){try{return t.call(null,e,0)}catch(n){return t.call(this,e,0)}}}!function(){try{t="function"==typeof setTimeout?setTimeout:i}catch(e){t=i}try{n="function"==typeof clearTimeout?clearTimeout:a}catch(e){n=a}}();var s=[],c=!1,u=-1;function f(){c&&r&&(c=!1,r.length?s=r.concat(s):u=-1,s.length&&d())}function d(){if(!c){var e=l(f);c=!0;for(var t=s.length;t;){for(r=s,s=[];++u1)for(var n=1;na?1:Math.round(100*u/a)/100,t.a!==f)return{h:t.h,s:t.s,l:t.l,a:f,source:"rgb"}}else{var d=void 0;if(r!==(d=c<0?0:c>i?1:Math.round(100*c/i)/100))return{h:t.h,s:t.s,l:t.l,a:d,source:"rgb"}}return null},u={},f=function(e,t,n,r){if("undefined"==typeof document&&!r)return null;var o=r?new r:document.createElement("canvas");o.width=2*n,o.height=2*n;var i=o.getContext("2d");return i?(i.fillStyle=e,i.fillRect(0,0,o.width,o.height),i.fillStyle=t,i.fillRect(0,0,n,n),i.translate(n,n),i.fillRect(0,0,n,n),o.toDataURL()):null},d=function(e,t,n,r){var o=e+"-"+t+"-"+n+(r?"-server":"");if(u[o])return u[o];var i=f(e,t,n,r);return u[o]=i,i},p=Object.assign||function(e){for(var t=1;t-1)){var o=n.getArrowOffset(),i=38===e.keyCode?r+o:r-o;n.setUpdatedValue(i,e)}},n.handleDrag=function(e){if(n.props.dragLabel){var t=Math.round(n.props.value+e.movementX);t>=0&&t<=n.props.dragMax&&n.props.onChange&&n.props.onChange(n.getValueObjectWithLabel(t),e)}},n.handleMouseDown=function(e){n.props.dragLabel&&(e.preventDefault(),n.handleDrag(e),window.addEventListener("mousemove",n.handleDrag),window.addEventListener("mouseup",n.handleMouseUp))},n.handleMouseUp=function(){n.unbindEventListeners()},n.unbindEventListeners=function(){window.removeEventListener("mousemove",n.handleDrag),window.removeEventListener("mouseup",n.handleMouseUp)},n.state={value:String(e.value).toUpperCase(),blurValue:String(e.value).toUpperCase()},n.inputId="rc-editable-input-"+w++,n}return!function(e,t){if("function"!=typeof t&&null!==t)throw TypeError("Super expression must either be null or a function, not "+typeof 
t);e.prototype=Object.create(t&&t.prototype,{constructor:{value:e,enumerable:!1,writable:!0,configurable:!0}}),t&&(Object.setPrototypeOf?Object.setPrototypeOf(e,t):e.__proto__=t)}(t,e),y(t,[{key:"componentDidUpdate",value:function(e,t){this.props.value!==this.state.value&&(e.value!==this.props.value||t.value!==this.state.value)&&(this.input===document.activeElement?this.setState({blurValue:String(this.props.value).toUpperCase()}):this.setState({value:String(this.props.value).toUpperCase(),blurValue:!this.state.blurValue&&String(this.props.value).toUpperCase()}))}},{key:"componentWillUnmount",value:function(){this.unbindEventListeners()}},{key:"getValueObjectWithLabel",value:function(e){var t,n;return t={},(n=this.props.label)in t?Object.defineProperty(t,n,{value:e,enumerable:!0,configurable:!0,writable:!0}):t[n]=e,t}},{key:"getArrowOffset",value:function(){return this.props.arrowOffset||1}},{key:"setUpdatedValue",value:function(e,t){var n=this.props.label?this.getValueObjectWithLabel(e):e;this.props.onChange&&this.props.onChange(n,t),this.setState({value:e})}},{key:"render",value:function(){var e=this,t=(0,s.ZP)({default:{wrap:{position:"relative"}},"user-override":{wrap:this.props.style&&this.props.style.wrap?this.props.style.wrap:{},input:this.props.style&&this.props.style.input?this.props.style.input:{},label:this.props.style&&this.props.style.label?this.props.style.label:{}},"dragLabel-true":{label:{cursor:"ew-resize"}}},{"user-override":!0},this.props);return l.createElement("div",{style:t.wrap},l.createElement("input",{id:this.inputId,style:t.input,ref:function(t){return e.input=t},value:this.state.value,onKeyDown:this.handleKeyDown,onChange:this.handleChange,onBlur:this.handleBlur,placeholder:this.props.placeholder,spellCheck:"false"}),this.props.label&&!this.props.hideLabel?l.createElement("label",{htmlFor:this.inputId,style:t.label,onMouseDown:this.handleMouseDown},this.props.label):null)}}]),t}(l.PureComponent||l.Component),S=function(e,t,n,r){var o=r.clientWidth,i=r.clientHeight,a="number"==typeof e.pageX?e.pageX:e.touches[0].pageX,l="number"==typeof e.pageY?e.pageY:e.touches[0].pageY,s=a-(r.getBoundingClientRect().left+window.pageXOffset),c=l-(r.getBoundingClientRect().top+window.pageYOffset);if("vertical"===t){var u=void 0;if(u=c<0?359:c>i?0:360*(-(100*c/i)+100)/100,n.h!==u)return{h:u,s:n.s,l:n.l,a:n.a,source:"hsl"}}else{var f=void 0;if(f=s<0?0:s>o?359:360*(100*s/o)/100,n.h!==f)return{h:f,s:n.s,l:n.l,a:n.a,source:"hsl"}}return null},k=function(){function e(e,t){for(var n=0;n1?t[o-1]:void 0,a=o>2?t[2]:void 0;for(i=r.length>3&&"function"==typeof i?(o--,i):void 0,a&&(0,eb.Z)(t[0],t[1],a)&&(i=o<3?void 0:i,o=1),e=Object(e);++n=t||n<0||f&&r>=i}function g(){var e,n,r,o=ex();if(h(o))return m(o);l=setTimeout(g,(e=o-s,n=o-c,r=t-e,f?eP(r,i-n):r))}function m(e){return(l=void 0,d&&r)?p(e):(r=o=void 0,a)}function b(){var e,n=ex(),i=h(n);if(r=arguments,o=this,s=n,i){if(void 0===l)return c=e=s,l=setTimeout(g,t),u?p(e):a;if(f)return clearTimeout(l),l=setTimeout(g,t),p(s)}return void 0===l&&(l=setTimeout(g,t)),a}return t=eR(t)||0,(0,q.Z)(n)&&(u=!!n.leading,i=(f="maxWait"in n)?eT(eR(n.maxWait)||0,t):i,d="trailing"in n?!!n.trailing:d),b.cancel=function(){void 0!==l&&clearTimeout(l),c=0,r=s=o=l=void 0},b.flush=function(){return void 0===l?a:m(ex())},b},ej=function(e,t,n){var r=!0,o=!0;if("function"!=typeof e)throw TypeError("Expected a function");return(0,q.Z)(n)&&(r="leading"in n?!!n.leading:r,o="trailing"in n?!!n.trailing:o),eM(e,t,{leading:r,maxWait:t,trailing:o})},eL=function(e,t,n){var 
r=n.getBoundingClientRect(),o=r.width,i=r.height,a="number"==typeof e.pageX?e.pageX:e.touches[0].pageX,l="number"==typeof e.pageY?e.pageY:e.touches[0].pageY,s=a-(n.getBoundingClientRect().left+window.pageXOffset),c=l-(n.getBoundingClientRect().top+window.pageYOffset);s<0?s=0:s>o&&(s=o),c<0?c=0:c>i&&(c=i);var u=s/o,f=1-c/i;return{h:t.h,s:u,v:f,a:t.a,source:"hsv"}},eI=function(){function e(e,t){for(var n=0;n1&&(n-=1),n<1/6)?e+(t-e)*6*n:n<.5?t:n<2/3?e+(t-e)*(2/3-n)*6:e}if(e=te(e,360),t=te(t,100),n=te(n,100),0===t)r=o=i=n;else{var l=n<.5?n*(1+t):n+t-n*t,s=2*n-l;r=a(s,l,e+1/3),o=a(s,l,e),i=a(s,l,e-1/3)}return{r:255*r,g:255*o,b:255*i}}(n.h,i,l),s=!0,c="hsl"),n.hasOwnProperty("a")&&(o=n.a)),o=e7(o),{ok:s,format:n.format||c,r:Math.min(255,Math.max(r.r,0)),g:Math.min(255,Math.max(r.g,0)),b:Math.min(255,Math.max(r.b,0)),a:o});this._originalInput=e,this._r=E.r,this._g=E.g,this._b=E.b,this._a=E.a,this._roundA=Math.round(100*this._a)/100,this._format=t.format||E.format,this._gradientType=t.gradientType,this._r<1&&(this._r=Math.round(this._r)),this._g<1&&(this._g=Math.round(this._g)),this._b<1&&(this._b=Math.round(this._b)),this._ok=E.ok}function eq(e,t,n){var r,o,i=Math.max(e=te(e,255),t=te(t,255),n=te(n,255)),a=Math.min(e,t,n),l=(i+a)/2;if(i==a)r=o=0;else{var s=i-a;switch(o=l>.5?s/(2-i-a):s/(i+a),i){case e:r=(t-n)/s+(t>1)+720)%360;--t;)r.h=(r.h+o)%360,i.push(eZ(r));return i}function e4(e,t){t=t||6;for(var n=eZ(e).toHsv(),r=n.h,o=n.s,i=n.v,a=[],l=1/t;t--;)a.push(eZ({h:r,s:o,v:i})),i=(i+l)%1;return a}eZ.prototype={isDark:function(){return 128>this.getBrightness()},isLight:function(){return!this.isDark()},isValid:function(){return this._ok},getOriginalInput:function(){return this._originalInput},getFormat:function(){return this._format},getAlpha:function(){return this._a},getBrightness:function(){var e=this.toRgb();return(299*e.r+587*e.g+114*e.b)/1e3},getLuminance:function(){var e,t,n,r=this.toRgb();return e=r.r/255,.2126*(e<=.03928?e/12.92:Math.pow((e+.055)/1.055,2.4))+.7152*((t=r.g/255)<=.03928?t/12.92:Math.pow((t+.055)/1.055,2.4))+.0722*((n=r.b/255)<=.03928?n/12.92:Math.pow((n+.055)/1.055,2.4))},setAlpha:function(e){return this._a=e7(e),this._roundA=Math.round(100*this._a)/100,this},toHsv:function(){var e=eV(this._r,this._g,this._b);return{h:360*e.h,s:e.s,v:e.v,a:this._a}},toHsvString:function(){var e=eV(this._r,this._g,this._b),t=Math.round(360*e.h),n=Math.round(100*e.s),r=Math.round(100*e.v);return 1==this._a?"hsv("+t+", "+n+"%, "+r+"%)":"hsva("+t+", "+n+"%, "+r+"%, "+this._roundA+")"},toHsl:function(){var e=eq(this._r,this._g,this._b);return{h:360*e.h,s:e.s,l:e.l,a:this._a}},toHslString:function(){var e=eq(this._r,this._g,this._b),t=Math.round(360*e.h),n=Math.round(100*e.s),r=Math.round(100*e.l);return 1==this._a?"hsl("+t+", "+n+"%, "+r+"%)":"hsla("+t+", "+n+"%, "+r+"%, "+this._roundA+")"},toHex:function(e){return eW(this._r,this._g,this._b,e)},toHexString:function(e){return"#"+this.toHex(e)},toHex8:function(e){var t,n,r,o,i;return t=this._r,n=this._g,r=this._b,o=this._a,i=[tr(Math.round(t).toString(16)),tr(Math.round(n).toString(16)),tr(Math.round(r).toString(16)),tr(ti(o))],e&&i[0].charAt(0)==i[0].charAt(1)&&i[1].charAt(0)==i[1].charAt(1)&&i[2].charAt(0)==i[2].charAt(1)&&i[3].charAt(0)==i[3].charAt(1)?i[0].charAt(0)+i[1].charAt(0)+i[2].charAt(0)+i[3].charAt(0):i.join("")},toHex8String:function(e){return"#"+this.toHex8(e)},toRgb:function(){return{r:Math.round(this._r),g:Math.round(this._g),b:Math.round(this._b),a:this._a}},toRgbString:function(){return 1==this._a?"rgb("+Math.round(this._r)+", 
"+Math.round(this._g)+", "+Math.round(this._b)+")":"rgba("+Math.round(this._r)+", "+Math.round(this._g)+", "+Math.round(this._b)+", "+this._roundA+")"},toPercentageRgb:function(){return{r:Math.round(100*te(this._r,255))+"%",g:Math.round(100*te(this._g,255))+"%",b:Math.round(100*te(this._b,255))+"%",a:this._a}},toPercentageRgbString:function(){return 1==this._a?"rgb("+Math.round(100*te(this._r,255))+"%, "+Math.round(100*te(this._g,255))+"%, "+Math.round(100*te(this._b,255))+"%)":"rgba("+Math.round(100*te(this._r,255))+"%, "+Math.round(100*te(this._g,255))+"%, "+Math.round(100*te(this._b,255))+"%, "+this._roundA+")"},toName:function(){return 0===this._a?"transparent":!(this._a<1)&&(e9[eW(this._r,this._g,this._b,!0)]||!1)},toFilter:function(e){var t="#"+eG(this._r,this._g,this._b,this._a),n=t,r=this._gradientType?"GradientType = 1, ":"";if(e){var o=eZ(e);n="#"+eG(o._r,o._g,o._b,o._a)}return"progid:DXImageTransform.Microsoft.gradient("+r+"startColorstr="+t+",endColorstr="+n+")"},toString:function(e){var t=!!e;e=e||this._format;var n=!1,r=this._a<1&&this._a>=0;return!t&&r&&("hex"===e||"hex6"===e||"hex3"===e||"hex4"===e||"hex8"===e||"name"===e)?"name"===e&&0===this._a?this.toName():this.toRgbString():("rgb"===e&&(n=this.toRgbString()),"prgb"===e&&(n=this.toPercentageRgbString()),("hex"===e||"hex6"===e)&&(n=this.toHexString()),"hex3"===e&&(n=this.toHexString(!0)),"hex4"===e&&(n=this.toHex8String(!0)),"hex8"===e&&(n=this.toHex8String()),"name"===e&&(n=this.toName()),"hsl"===e&&(n=this.toHslString()),"hsv"===e&&(n=this.toHsvString()),n||this.toHexString())},clone:function(){return eZ(this.toString())},_applyModification:function(e,t){var n=e.apply(null,[this].concat([].slice.call(t)));return this._r=n._r,this._g=n._g,this._b=n._b,this.setAlpha(n._a),this},lighten:function(){return this._applyModification(eJ,arguments)},brighten:function(){return this._applyModification(eQ,arguments)},darken:function(){return this._applyModification(e0,arguments)},desaturate:function(){return this._applyModification(eK,arguments)},saturate:function(){return this._applyModification(eY,arguments)},greyscale:function(){return this._applyModification(eX,arguments)},spin:function(){return this._applyModification(e1,arguments)},_applyCombination:function(e,t){return e.apply(null,[this].concat([].slice.call(t)))},analogous:function(){return this._applyCombination(e6,arguments)},complement:function(){return this._applyCombination(e2,arguments)},monochromatic:function(){return this._applyCombination(e4,arguments)},splitcomplement:function(){return this._applyCombination(e3,arguments)},triad:function(){return this._applyCombination(e5,[3])},tetrad:function(){return this._applyCombination(e5,[4])}},eZ.fromRatio=function(e,t){if("object"==e$(e)){var n={};for(var r in e)e.hasOwnProperty(r)&&("a"===r?n[r]=e[r]:n[r]=to(e[r]));e=n}return eZ(e,t)},eZ.equals=function(e,t){return!!e&&!!t&&eZ(e).toRgbString()==eZ(t).toRgbString()},eZ.random=function(){return eZ.fromRatio({r:Math.random(),g:Math.random(),b:Math.random()})},eZ.mix=function(e,t,n){n=0===n?0:n||50;var r=eZ(e).toRgb(),o=eZ(t).toRgb(),i=n/100;return eZ({r:(o.r-r.r)*i+r.r,g:(o.g-r.g)*i+r.g,b:(o.b-r.b)*i+r.b,a:(o.a-r.a)*i+r.a})},eZ.readability=function(e,t){var n=eZ(e),r=eZ(t);return(Math.max(n.getLuminance(),r.getLuminance())+.05)/(Math.min(n.getLuminance(),r.getLuminance())+.05)},eZ.isReadable=function(e,t,n){var 
r,o,i,a,l,s=eZ.readability(e,t);switch(l=!1,(o=((r=(r=n)||{level:"AA",size:"small"}).level||"AA").toUpperCase(),i=(r.size||"small").toLowerCase(),"AA"!==o&&"AAA"!==o&&(o="AA"),"small"!==i&&"large"!==i&&(i="small"),a={level:o,size:i}).level+a.size){case"AAsmall":case"AAAlarge":l=s>=4.5;break;case"AAlarge":l=s>=3;break;case"AAAsmall":l=s>=7}return l},eZ.mostReadable=function(e,t,n){var r,o,i,a,l=null,s=0;o=(n=n||{}).includeFallbackColors,i=n.level,a=n.size;for(var c=0;cs&&(s=r,l=eZ(t[c]));return eZ.isReadable(e,l,{level:i,size:a})||!o?l:(n.includeFallbackColors=!1,eZ.mostReadable(e,["#fff","#000"],n))};var e8=eZ.names={aliceblue:"f0f8ff",antiquewhite:"faebd7",aqua:"0ff",aquamarine:"7fffd4",azure:"f0ffff",beige:"f5f5dc",bisque:"ffe4c4",black:"000",blanchedalmond:"ffebcd",blue:"00f",blueviolet:"8a2be2",brown:"a52a2a",burlywood:"deb887",burntsienna:"ea7e5d",cadetblue:"5f9ea0",chartreuse:"7fff00",chocolate:"d2691e",coral:"ff7f50",cornflowerblue:"6495ed",cornsilk:"fff8dc",crimson:"dc143c",cyan:"0ff",darkblue:"00008b",darkcyan:"008b8b",darkgoldenrod:"b8860b",darkgray:"a9a9a9",darkgreen:"006400",darkgrey:"a9a9a9",darkkhaki:"bdb76b",darkmagenta:"8b008b",darkolivegreen:"556b2f",darkorange:"ff8c00",darkorchid:"9932cc",darkred:"8b0000",darksalmon:"e9967a",darkseagreen:"8fbc8f",darkslateblue:"483d8b",darkslategray:"2f4f4f",darkslategrey:"2f4f4f",darkturquoise:"00ced1",darkviolet:"9400d3",deeppink:"ff1493",deepskyblue:"00bfff",dimgray:"696969",dimgrey:"696969",dodgerblue:"1e90ff",firebrick:"b22222",floralwhite:"fffaf0",forestgreen:"228b22",fuchsia:"f0f",gainsboro:"dcdcdc",ghostwhite:"f8f8ff",gold:"ffd700",goldenrod:"daa520",gray:"808080",green:"008000",greenyellow:"adff2f",grey:"808080",honeydew:"f0fff0",hotpink:"ff69b4",indianred:"cd5c5c",indigo:"4b0082",ivory:"fffff0",khaki:"f0e68c",lavender:"e6e6fa",lavenderblush:"fff0f5",lawngreen:"7cfc00",lemonchiffon:"fffacd",lightblue:"add8e6",lightcoral:"f08080",lightcyan:"e0ffff",lightgoldenrodyellow:"fafad2",lightgray:"d3d3d3",lightgreen:"90ee90",lightgrey:"d3d3d3",lightpink:"ffb6c1",lightsalmon:"ffa07a",lightseagreen:"20b2aa",lightskyblue:"87cefa",lightslategray:"789",lightslategrey:"789",lightsteelblue:"b0c4de",lightyellow:"ffffe0",lime:"0f0",limegreen:"32cd32",linen:"faf0e6",magenta:"f0f",maroon:"800000",mediumaquamarine:"66cdaa",mediumblue:"0000cd",mediumorchid:"ba55d3",mediumpurple:"9370db",mediumseagreen:"3cb371",mediumslateblue:"7b68ee",mediumspringgreen:"00fa9a",mediumturquoise:"48d1cc",mediumvioletred:"c71585",midnightblue:"191970",mintcream:"f5fffa",mistyrose:"ffe4e1",moccasin:"ffe4b5",navajowhite:"ffdead",navy:"000080",oldlace:"fdf5e6",olive:"808000",olivedrab:"6b8e23",orange:"ffa500",orangered:"ff4500",orchid:"da70d6",palegoldenrod:"eee8aa",palegreen:"98fb98",paleturquoise:"afeeee",palevioletred:"db7093",papayawhip:"ffefd5",peachpuff:"ffdab9",peru:"cd853f",pink:"ffc0cb",plum:"dda0dd",powderblue:"b0e0e6",purple:"800080",rebeccapurple:"663399",red:"f00",rosybrown:"bc8f8f",royalblue:"4169e1",saddlebrown:"8b4513",salmon:"fa8072",sandybrown:"f4a460",seagreen:"2e8b57",seashell:"fff5ee",sienna:"a0522d",silver:"c0c0c0",skyblue:"87ceeb",slateblue:"6a5acd",slategray:"708090",slategrey:"708090",snow:"fffafa",springgreen:"00ff7f",steelblue:"4682b4",tan:"d2b48c",teal:"008080",thistle:"d8bfd8",tomato:"ff6347",turquoise:"40e0d0",violet:"ee82ee",wheat:"f5deb3",white:"fff",whitesmoke:"f5f5f5",yellow:"ff0",yellowgreen:"9acd32"},e9=eZ.hexNames=function(e){var t={};for(var n in e)e.hasOwnProperty(n)&&(t[e[n]]=n);return t}(e8);function 
e7(e){return(isNaN(e=parseFloat(e))||e<0||e>1)&&(e=1),e}function te(e,t){"string"==typeof(n=e)&&-1!=n.indexOf(".")&&1===parseFloat(n)&&(e="100%");var n,r,o="string"==typeof(r=e)&&-1!=r.indexOf("%");return(e=Math.min(t,Math.max(0,parseFloat(e))),o&&(e=parseInt(e*t,10)/100),1e-6>Math.abs(e-t))?1:e%t/parseFloat(t)}function tt(e){return Math.min(1,Math.max(0,e))}function tn(e){return parseInt(e,16)}function tr(e){return 1==e.length?"0"+e:""+e}function to(e){return e<=1&&(e=100*e+"%"),e}function ti(e){return Math.round(255*parseFloat(e)).toString(16)}var ta=(i="[\\s|\\(]+("+(o="(?:[-\\+]?\\d*\\.\\d+%?)|(?:[-\\+]?\\d+%?)")+")[,|\\s]+("+o+")[,|\\s]+("+o+")\\s*\\)?",a="[\\s|\\(]+("+o+")[,|\\s]+("+o+")[,|\\s]+("+o+")[,|\\s]+("+o+")\\s*\\)?",{CSS_UNIT:new RegExp(o),rgb:RegExp("rgb"+i),rgba:RegExp("rgba"+a),hsl:RegExp("hsl"+i),hsla:RegExp("hsla"+a),hsv:RegExp("hsv"+i),hsva:RegExp("hsva"+a),hex3:/^#?([0-9a-fA-F]{1})([0-9a-fA-F]{1})([0-9a-fA-F]{1})$/,hex6:/^#?([0-9a-fA-F]{2})([0-9a-fA-F]{2})([0-9a-fA-F]{2})$/,hex4:/^#?([0-9a-fA-F]{1})([0-9a-fA-F]{1})([0-9a-fA-F]{1})([0-9a-fA-F]{1})$/,hex8:/^#?([0-9a-fA-F]{2})([0-9a-fA-F]{2})([0-9a-fA-F]{2})([0-9a-fA-F]{2})$/});function tl(e){return!!ta.CSS_UNIT.exec(e)}var ts=function(e){var t,n,r=0,o=0;return t=["r","g","b","a","h","s","l","v"],n=function(t){e[t]&&(r+=1,isNaN(e[t])||(o+=1),("s"===t||"l"===t)&&/^\d+%$/.test(e[t])&&(o+=1))},((0,X.Z)(t)?eF:eB.Z)(t,"function"==typeof n?n:ez.Z),r===o&&e},tc=function(e,t){var n=e.hex?eZ(e.hex):eZ(e),r=n.toHsl(),o=n.toHsv(),i=n.toRgb(),a=n.toHex();return 0===r.s&&(r.h=t||0,o.h=t||0),{hsl:r,hex:"000000"===a&&0===i.a?"transparent":"#"+a,rgb:i,hsv:o,oldHue:e.h||t||r.h,source:e.source}},tu=function(e){if("transparent"===e)return!0;var t="#"===String(e).charAt(0)?1:0;return e.length!==4+t&&e.length<7+t&&eZ(e).isValid()},tf=function(e){if(!e)return"#fff";var t=tc(e);return"transparent"===t.hex?"rgba(0,0,0,0.4)":(299*t.rgb.r+587*t.rgb.g+114*t.rgb.b)/1e3>=128?"#000":"#fff"},td=function(e,t){return eZ(t+" ("+e.replace("\xb0","")+")")._ok},tp=Object.assign||function(e){for(var t=1;t1&&void 0!==arguments[1]?arguments[1]:"span";return function(n){function r(){!function(e,t){if(!(e instanceof t))throw TypeError("Cannot call a class as a function")}(this,r);for(var e,t,n,o=arguments.length,i=Array(o),a=0;a1&&(e.a=1),n.props.onChange({h:n.props.hsl.h,s:n.props.hsl.s,l:n.props.hsl.l,a:Math.round(100*e.a)/100,source:"rgb"},t)):(e.h||e.s||e.l)&&("string"==typeof e.s&&e.s.includes("%")&&(e.s=e.s.replace("%","")),"string"==typeof e.l&&e.l.includes("%")&&(e.l=e.l.replace("%","")),1==e.s?e.s=.01:1==e.l&&(e.l=.01),n.props.onChange({h:e.h||n.props.hsl.h,s:Number(tW(e.s)?n.props.hsl.s:e.s),l:Number(tW(e.l)?n.props.hsl.l:e.l),source:"hsl"},t))},n.showHighlight=function(e){e.currentTarget.style.background="#eee"},n.hideHighlight=function(e){e.currentTarget.style.background="transparent"},1!==e.hsl.a&&"hex"===e.view?n.state={view:"rgb"}:n.state={view:e.view},n}return!function(e,t){if("function"!=typeof t&&null!==t)throw TypeError("Super expression must either be null or a function, not "+typeof t);e.prototype=Object.create(t&&t.prototype,{constructor:{value:e,enumerable:!1,writable:!0,configurable:!0}}),t&&(Object.setPrototypeOf?Object.setPrototypeOf(e,t):e.__proto__=t)}(t,e),tK(t,[{key:"render",value:function(){var 
e=this,t=(0,s.ZP)({default:{wrap:{paddingTop:"16px",display:"flex"},fields:{flex:"1",display:"flex",marginLeft:"-6px"},field:{paddingLeft:"6px",width:"100%"},alpha:{paddingLeft:"6px",width:"100%"},toggle:{width:"32px",textAlign:"right",position:"relative"},icon:{marginRight:"-4px",marginTop:"12px",cursor:"pointer",position:"relative"},iconHighlight:{position:"absolute",width:"24px",height:"28px",background:"#eee",borderRadius:"4px",top:"10px",left:"12px",display:"none"},input:{fontSize:"11px",color:"#333",width:"100%",borderRadius:"2px",border:"none",boxShadow:"inset 0 0 0 1px #dadada",height:"21px",textAlign:"center"},label:{textTransform:"uppercase",fontSize:"11px",lineHeight:"11px",color:"#969696",textAlign:"center",display:"block",marginTop:"12px"},svg:{fill:"#333",width:"24px",height:"24px",border:"1px transparent solid",borderRadius:"5px"}},disableAlpha:{alpha:{display:"none"}}},this.props,this.state),n=void 0;return"hex"===this.state.view?n=l.createElement("div",{style:t.fields,className:"flexbox-fix"},l.createElement("div",{style:t.field},l.createElement(E,{style:{input:t.input,label:t.label},label:"hex",value:this.props.hex,onChange:this.handleChange}))):"rgb"===this.state.view?n=l.createElement("div",{style:t.fields,className:"flexbox-fix"},l.createElement("div",{style:t.field},l.createElement(E,{style:{input:t.input,label:t.label},label:"r",value:this.props.rgb.r,onChange:this.handleChange})),l.createElement("div",{style:t.field},l.createElement(E,{style:{input:t.input,label:t.label},label:"g",value:this.props.rgb.g,onChange:this.handleChange})),l.createElement("div",{style:t.field},l.createElement(E,{style:{input:t.input,label:t.label},label:"b",value:this.props.rgb.b,onChange:this.handleChange})),l.createElement("div",{style:t.alpha},l.createElement(E,{style:{input:t.input,label:t.label},label:"a",value:this.props.rgb.a,arrowOffset:.01,onChange:this.handleChange}))):"hsl"===this.state.view&&(n=l.createElement("div",{style:t.fields,className:"flexbox-fix"},l.createElement("div",{style:t.field},l.createElement(E,{style:{input:t.input,label:t.label},label:"h",value:Math.round(this.props.hsl.h),onChange:this.handleChange})),l.createElement("div",{style:t.field},l.createElement(E,{style:{input:t.input,label:t.label},label:"s",value:Math.round(100*this.props.hsl.s)+"%",onChange:this.handleChange})),l.createElement("div",{style:t.field},l.createElement(E,{style:{input:t.input,label:t.label},label:"l",value:Math.round(100*this.props.hsl.l)+"%",onChange:this.handleChange})),l.createElement("div",{style:t.alpha},l.createElement(E,{style:{input:t.input,label:t.label},label:"a",value:this.props.hsl.a,arrowOffset:.01,onChange:this.handleChange})))),l.createElement("div",{style:t.wrap,className:"flexbox-fix"},n,l.createElement("div",{style:t.toggle},l.createElement("div",{style:t.icon,onClick:this.toggleViews,ref:function(t){return e.icon=t}},l.createElement(tG.Z,{style:t.svg,onMouseOver:this.showHighlight,onMouseEnter:this.showHighlight,onMouseOut:this.hideHighlight}))))}}],[{key:"getDerivedStateFromProps",value:function(e,t){return 1!==e.hsl.a&&"hex"===t.view?{view:"rgb"}:null}}]),t}(l.Component);tY.defaultProps={view:"hex"};var tX=function(){var e=(0,s.ZP)({default:{picker:{width:"12px",height:"12px",borderRadius:"6px",transform:"translate(-6px, -1px)",backgroundColor:"rgb(248, 248, 248)",boxShadow:"0 1px 4px 0 rgba(0, 0, 0, 0.37)"}}});return l.createElement("div",{style:e.picker})},tJ=function(){var 
e=(0,s.ZP)({default:{picker:{width:"12px",height:"12px",borderRadius:"6px",boxShadow:"inset 0 0 0 1px #fff",transform:"translate(-6px, -6px)"}}});return l.createElement("div",{style:e.picker})},tQ=function(e){var t=e.width,n=e.onChange,r=e.disableAlpha,o=e.rgb,i=e.hsl,a=e.hsv,c=e.hex,u=e.renderers,f=e.styles,d=e.className,p=e.defaultView,g=(0,s.ZP)(ev({default:{picker:{width:t,background:"#fff",borderRadius:"2px",boxShadow:"0 0 2px rgba(0,0,0,.3), 0 4px 8px rgba(0,0,0,.3)",boxSizing:"initial",fontFamily:"Menlo"},saturation:{width:"100%",paddingBottom:"55%",position:"relative",borderRadius:"2px 2px 0 0",overflow:"hidden"},Saturation:{radius:"2px 2px 0 0"},body:{padding:"16px 16px 12px"},controls:{display:"flex"},color:{width:"32px"},swatch:{marginTop:"6px",width:"16px",height:"16px",borderRadius:"8px",position:"relative",overflow:"hidden"},active:{absolute:"0px 0px 0px 0px",borderRadius:"8px",boxShadow:"inset 0 0 0 1px rgba(0,0,0,.1)",background:"rgba("+o.r+", "+o.g+", "+o.b+", "+o.a+")",zIndex:"2"},toggles:{flex:"1"},hue:{height:"10px",position:"relative",marginBottom:"8px"},Hue:{radius:"2px"},alpha:{height:"10px",position:"relative"},Alpha:{radius:"2px"}},disableAlpha:{color:{width:"22px"},alpha:{display:"none"},hue:{marginBottom:"0px"},swatch:{width:"10px",height:"10px",marginTop:"0px"}}},void 0===f?{}:f),{disableAlpha:r});return l.createElement("div",{style:g.picker,className:"chrome-picker "+(void 0===d?"":d)},l.createElement("div",{style:g.saturation},l.createElement(eD,{style:g.Saturation,hsl:i,hsv:a,pointer:tJ,onChange:n})),l.createElement("div",{style:g.body},l.createElement("div",{style:g.controls,className:"flexbox-fix"},l.createElement("div",{style:g.color},l.createElement("div",{style:g.swatch},l.createElement("div",{style:g.active}),l.createElement(h,{renderers:u}))),l.createElement("div",{style:g.toggles},l.createElement("div",{style:g.hue},l.createElement(O,{style:g.Hue,hsl:i,pointer:tX,onChange:n})),l.createElement("div",{style:g.alpha},l.createElement(v,{style:g.Alpha,rgb:o,hsl:i,pointer:tX,renderers:u,onChange:n})))),l.createElement(tY,{rgb:o,hsl:i,hex:c,view:p,onChange:n,disableAlpha:r})))};tQ.propTypes={width:A().oneOfType([A().string,A().number]),disableAlpha:A().bool,styles:A().object,defaultView:A().oneOf(["hex","rgb","hsl"])},tQ.defaultProps={width:225,disableAlpha:!1,styles:{}},tg(tQ);var t0=function(e){var t=e.color,n=e.onClick,r=e.onSwatchHover,o=e.active,i=(0,s.ZP)({default:{color:{background:t,width:"15px",height:"15px",float:"left",marginRight:"5px",marginBottom:"5px",position:"relative",cursor:"pointer"},dot:{absolute:"5px 5px 5px 5px",background:tf(t),borderRadius:"50%",opacity:"0"}},active:{dot:{opacity:"1"}},"color-#FFFFFF":{color:{boxShadow:"inset 0 0 0 1px #ddd"},dot:{background:"#000"}},transparent:{dot:{background:"#000"}}},{active:o,"color-#FFFFFF":"#FFFFFF"===t,transparent:"transparent"===t});return l.createElement(tx,{style:i.color,color:t,onClick:void 0===n?function(){}:n,onHover:r,focusStyle:{boxShadow:"0 0 4px "+t}},l.createElement("div",{style:i.dot}))},t1=function(e){var 
t=e.hex,n=e.rgb,r=e.onChange,o=(0,s.ZP)({default:{fields:{display:"flex",paddingBottom:"6px",paddingRight:"5px",position:"relative"},active:{position:"absolute",top:"6px",left:"5px",height:"9px",width:"9px",background:t},HEXwrap:{flex:"6",position:"relative"},HEXinput:{width:"80%",padding:"0px",paddingLeft:"20%",border:"none",outline:"none",background:"none",fontSize:"12px",color:"#333",height:"16px"},HEXlabel:{display:"none"},RGBwrap:{flex:"3",position:"relative"},RGBinput:{width:"70%",padding:"0px",paddingLeft:"30%",border:"none",outline:"none",background:"none",fontSize:"12px",color:"#333",height:"16px"},RGBlabel:{position:"absolute",top:"3px",left:"0px",lineHeight:"16px",textTransform:"uppercase",fontSize:"12px",color:"#999"}}}),i=function(e,t){e.r||e.g||e.b?r({r:e.r||n.r,g:e.g||n.g,b:e.b||n.b,source:"rgb"},t):r({hex:e.hex,source:"hex"},t)};return l.createElement("div",{style:o.fields,className:"flexbox-fix"},l.createElement("div",{style:o.active}),l.createElement(E,{style:{wrap:o.HEXwrap,input:o.HEXinput,label:o.HEXlabel},label:"hex",value:t,onChange:i}),l.createElement(E,{style:{wrap:o.RGBwrap,input:o.RGBinput,label:o.RGBlabel},label:"r",value:n.r,onChange:i}),l.createElement(E,{style:{wrap:o.RGBwrap,input:o.RGBinput,label:o.RGBlabel},label:"g",value:n.g,onChange:i}),l.createElement(E,{style:{wrap:o.RGBwrap,input:o.RGBinput,label:o.RGBlabel},label:"b",value:n.b,onChange:i}))},t2=function(e){var t=e.onChange,n=e.onSwatchHover,r=e.colors,o=e.hex,i=e.rgb,a=e.styles,c=void 0===a?{}:a,u=e.className,f=(0,s.ZP)(ev({default:{Compact:{background:"#f6f6f6",radius:"4px"},compact:{paddingTop:"5px",paddingLeft:"5px",boxSizing:"initial",width:"240px"},clear:{clear:"both"}}},c)),d=function(e,n){e.hex?tu(e.hex)&&t({hex:e.hex,source:"hex"},n):t(e,n)};return l.createElement(ey,{style:f.Compact,styles:c},l.createElement("div",{style:f.compact,className:"compact-picker "+(void 0===u?"":u)},l.createElement("div",null,(0,tS.Z)(r,function(e){return l.createElement(t0,{key:e,color:e,active:e.toLowerCase()===o,onClick:d,onSwatchHover:n})}),l.createElement("div",{style:f.clear})),l.createElement(t1,{hex:o,rgb:i,onChange:d})))};t2.propTypes={colors:A().arrayOf(A().string),styles:A().object},t2.defaultProps={colors:["#4D4D4D","#999999","#FFFFFF","#F44E3B","#FE9200","#FCDC00","#DBDF00","#A4DD00","#68CCCA","#73D8FF","#AEA1FF","#FDA1FF","#333333","#808080","#cccccc","#D33115","#E27300","#FCC400","#B0BC00","#68BC00","#16A5A5","#009CE0","#7B64FF","#FA28FF","#000000","#666666","#B3B3B3","#9F0500","#C45100","#FB9E00","#808900","#194D33","#0C797D","#0062B1","#653294","#AB149E"],styles:{}},tg(t2);var t5=(0,s.tz)(function(e){var t=e.hover,n=e.color,r=e.onClick,o=e.onSwatchHover,i={position:"relative",zIndex:"2",outline:"2px solid #fff",boxShadow:"0 0 5px 2px rgba(0,0,0,0.25)"},a=(0,s.ZP)({default:{swatch:{width:"25px",height:"25px",fontSize:"0"}},hover:{swatch:i}},{hover:t});return l.createElement("div",{style:a.swatch},l.createElement(tx,{color:n,onClick:r,onHover:o,focusStyle:i}))}),t3=function(e){var t=e.width,n=e.colors,r=e.onChange,o=e.onSwatchHover,i=e.triangle,a=e.styles,c=e.className,u=(0,s.ZP)(ev({default:{card:{width:t,background:"#fff",border:"1px solid rgba(0,0,0,0.2)",boxShadow:"0 3px 12px rgba(0,0,0,0.15)",borderRadius:"4px",position:"relative",padding:"5px",display:"flex",flexWrap:"wrap"},triangle:{position:"absolute",border:"7px solid transparent",borderBottomColor:"#fff"},triangleShadow:{position:"absolute",border:"8px solid 
transparent",borderBottomColor:"rgba(0,0,0,0.15)"}},"hide-triangle":{triangle:{display:"none"},triangleShadow:{display:"none"}},"top-left-triangle":{triangle:{top:"-14px",left:"10px"},triangleShadow:{top:"-16px",left:"9px"}},"top-right-triangle":{triangle:{top:"-14px",right:"10px"},triangleShadow:{top:"-16px",right:"9px"}},"bottom-left-triangle":{triangle:{top:"35px",left:"10px",transform:"rotate(180deg)"},triangleShadow:{top:"37px",left:"9px",transform:"rotate(180deg)"}},"bottom-right-triangle":{triangle:{top:"35px",right:"10px",transform:"rotate(180deg)"},triangleShadow:{top:"37px",right:"9px",transform:"rotate(180deg)"}}},void 0===a?{}:a),{"hide-triangle":"hide"===i,"top-left-triangle":"top-left"===i,"top-right-triangle":"top-right"===i,"bottom-left-triangle":"bottom-left"===i,"bottom-right-triangle":"bottom-right"===i}),f=function(e,t){return r({hex:e,source:"hex"},t)};return l.createElement("div",{style:u.card,className:"github-picker "+(void 0===c?"":c)},l.createElement("div",{style:u.triangleShadow}),l.createElement("div",{style:u.triangle}),(0,tS.Z)(n,function(e){return l.createElement(t5,{color:e,key:e,onClick:f,onSwatchHover:o})}))};t3.propTypes={width:A().oneOfType([A().string,A().number]),colors:A().arrayOf(A().string),triangle:A().oneOf(["hide","top-left","top-right","bottom-left","bottom-right"]),styles:A().object},t3.defaultProps={width:200,colors:["#B80000","#DB3E00","#FCCB00","#008B02","#006B76","#1273DE","#004DCF","#5300EB","#EB9694","#FAD0C3","#FEF3BD","#C1E1C5","#BEDADC","#C4DEF6","#BED3F3","#D4C4FB"],triangle:"top-left",styles:{}},tg(t3);var t6=Object.assign||function(e){for(var t=1;t.5});return l.createElement("div",{style:n.picker})},t7=function(){var e=(0,s.ZP)({default:{triangle:{width:0,height:0,borderStyle:"solid",borderWidth:"4px 0 4px 6px",borderColor:"transparent transparent transparent #fff",position:"absolute",top:"1px",left:"1px"},triangleBorder:{width:0,height:0,borderStyle:"solid",borderWidth:"5px 0 5px 8px",borderColor:"transparent transparent transparent #555"},left:{Extend:"triangleBorder",transform:"translate(-13px, -4px)"},leftInside:{Extend:"triangle",transform:"translate(-8px, -5px)"},right:{Extend:"triangleBorder",transform:"translate(20px, -14px) rotate(180deg)"},rightInside:{Extend:"triangle",transform:"translate(-8px, -5px)"}}});return l.createElement("div",{style:e.pointer},l.createElement("div",{style:e.left},l.createElement("div",{style:e.leftInside})),l.createElement("div",{style:e.right},l.createElement("div",{style:e.rightInside})))},ne=function(e){var t=e.onClick,n=e.label,r=e.children,o=e.active,i=(0,s.ZP)({default:{button:{backgroundImage:"linear-gradient(-180deg, #FFFFFF 0%, #E6E6E6 100%)",border:"1px solid #878787",borderRadius:"2px",height:"20px",boxShadow:"0 1px 0 0 #EAEAEA",fontSize:"14px",color:"#000",lineHeight:"20px",textAlign:"center",marginBottom:"10px",cursor:"pointer"}},active:{button:{boxShadow:"0 0 0 1px #878787"}}},{active:o});return l.createElement("div",{style:i.button,onClick:t},n||r)},nt=function(e){var t=e.rgb,n=e.currentColor,r=(0,s.ZP)({default:{swatches:{border:"1px solid #B3B3B3",borderBottom:"1px solid #F0F0F0",marginBottom:"2px",marginTop:"1px"},new:{height:"34px",background:"rgb("+t.r+","+t.g+", "+t.b+")",boxShadow:"inset 1px 0 0 #000, inset -1px 0 0 #000, inset 0 1px 0 #000"},current:{height:"34px",background:n,boxShadow:"inset 1px 0 0 #000, inset -1px 0 0 #000, inset 0 -1px 0 #000"},label:{fontSize:"14px",color:"#000",textAlign:"center"}}});return 
l.createElement("div",null,l.createElement("div",{style:r.label},"new"),l.createElement("div",{style:r.swatches},l.createElement("div",{style:r.new}),l.createElement("div",{style:r.current})),l.createElement("div",{style:r.label},"current"))},nn=function(){function e(e,t){for(var n=0;n100&&(e.a=100),e.a/=100,t({h:r.h,s:r.s,l:r.l,a:e.a,source:"rgb"},o))};return l.createElement("div",{style:a.fields,className:"flexbox-fix"},l.createElement("div",{style:a.double},l.createElement(E,{style:{input:a.input,label:a.label},label:"hex",value:o.replace("#",""),onChange:c})),l.createElement("div",{style:a.single},l.createElement(E,{style:{input:a.input,label:a.label},label:"r",value:n.r,onChange:c,dragLabel:"true",dragMax:"255"})),l.createElement("div",{style:a.single},l.createElement(E,{style:{input:a.input,label:a.label},label:"g",value:n.g,onChange:c,dragLabel:"true",dragMax:"255"})),l.createElement("div",{style:a.single},l.createElement(E,{style:{input:a.input,label:a.label},label:"b",value:n.b,onChange:c,dragLabel:"true",dragMax:"255"})),l.createElement("div",{style:a.alpha},l.createElement(E,{style:{input:a.input,label:a.label},label:"a",value:Math.round(100*n.a),onChange:c,dragLabel:"true",dragMax:"100"})))},ni=Object.assign||function(e){for(var t=1;tMath.abs(n.l-.8)&&.1>Math.abs(n.s-.5),onClick:t,first:!0})),l.createElement("div",{style:r.swatch},l.createElement(nc,{hsl:n,offset:".65",active:.1>Math.abs(n.l-.65)&&.1>Math.abs(n.s-.5),onClick:t})),l.createElement("div",{style:r.swatch},l.createElement(nc,{hsl:n,offset:".50",active:.1>Math.abs(n.l-.5)&&.1>Math.abs(n.s-.5),onClick:t})),l.createElement("div",{style:r.swatch},l.createElement(nc,{hsl:n,offset:".35",active:.1>Math.abs(n.l-.35)&&.1>Math.abs(n.s-.5),onClick:t})),l.createElement("div",{style:r.swatch},l.createElement(nc,{hsl:n,offset:".20",active:.1>Math.abs(n.l-.2)&&.1>Math.abs(n.s-.5),onClick:t,last:!0})),l.createElement("div",{style:r.clear}))},nf=function(e){var t=e.hsl,n=e.onChange,r=e.pointer,o=e.styles,i=e.className,a=(0,s.ZP)(ev({default:{hue:{height:"12px",position:"relative"},Hue:{radius:"2px"}}},void 0===o?{}:o));return l.createElement("div",{style:a.wrap||{},className:"slider-picker "+(void 0===i?"":i)},l.createElement("div",{style:a.hue},l.createElement(O,{style:a.Hue,hsl:t,pointer:r,onChange:n})),l.createElement("div",{style:a.swatches},l.createElement(nu,{hsl:t,onClick:n})))};nf.propTypes={styles:A().object},nf.defaultProps={pointer:function(){var e=(0,s.ZP)({default:{picker:{width:"14px",height:"14px",borderRadius:"6px",transform:"translate(-7px, -1px)",backgroundColor:"rgb(248, 248, 248)",boxShadow:"0 1px 4px 0 rgba(0, 0, 0, 0.37)"}}});return l.createElement("div",{style:e.picker})},styles:{}},tg(nf);var nd=n(29872),np=function(e){var t=e.color,n=e.onClick,r=e.onSwatchHover,o=e.first,i=e.last,a=e.active,c=(0,s.ZP)({default:{color:{width:"40px",height:"24px",cursor:"pointer",background:t,marginBottom:"1px"},check:{color:tf(t),marginLeft:"8px",display:"none"}},first:{color:{overflow:"hidden",borderRadius:"2px 2px 0 0"}},last:{color:{overflow:"hidden",borderRadius:"0 0 2px 2px"}},active:{check:{display:"block"}},"color-#FFFFFF":{color:{boxShadow:"inset 0 0 0 1px #ddd"},check:{color:"#333"}},transparent:{check:{color:"#333"}}},{first:o,last:i,active:a,"color-#FFFFFF":"#FFFFFF"===t,transparent:"transparent"===t});return l.createElement(tx,{color:t,style:c.color,onClick:void 0===n?function(){}:n,onHover:r,focusStyle:{boxShadow:"0 0 4px "+t}},l.createElement("div",{style:c.check},l.createElement(nd.Z,null)))},nh=function(e){var 
t=e.onClick,n=e.onSwatchHover,r=e.group,o=e.active,i=(0,s.ZP)({default:{group:{paddingBottom:"10px",width:"40px",float:"left",marginRight:"10px"}}});return l.createElement("div",{style:i.group},(0,tS.Z)(r,function(e,i){return l.createElement(np,{key:e,color:e,active:e.toLowerCase()===o,first:0===i,last:i===r.length-1,onClick:t,onSwatchHover:n})}))},ng=function(e){var t=e.width,n=e.height,r=e.onChange,o=e.onSwatchHover,i=e.colors,a=e.hex,c=e.styles,u=e.className,f=(0,s.ZP)(ev({default:{picker:{width:t,height:n},overflow:{height:n,overflowY:"scroll"},body:{padding:"16px 0 6px 16px"},clear:{clear:"both"}}},void 0===c?{}:c)),d=function(e,t){return r({hex:e,source:"hex"},t)};return l.createElement("div",{style:f.picker,className:"swatches-picker "+(void 0===u?"":u)},l.createElement(ey,null,l.createElement("div",{style:f.overflow},l.createElement("div",{style:f.body},(0,tS.Z)(i,function(e){return l.createElement(nh,{key:e.toString(),group:e,active:a,onClick:d,onSwatchHover:o})}),l.createElement("div",{style:f.clear})))))};ng.propTypes={width:A().oneOfType([A().string,A().number]),height:A().oneOfType([A().string,A().number]),colors:A().arrayOf(A().arrayOf(A().string)),styles:A().object},ng.defaultProps={width:320,height:240,colors:[[tO["900"],tO["700"],tO["500"],tO["300"],tO["100"]],[tC["900"],tC["700"],tC["500"],tC["300"],tC["100"]],[tA["900"],tA["700"],tA["500"],tA["300"],tA["100"]],[tN["900"],tN["700"],tN["500"],tN["300"],tN["100"]],[tR["900"],tR["700"],tR["500"],tR["300"],tR["100"]],[tT["900"],tT["700"],tT["500"],tT["300"],tT["100"]],[tP["900"],tP["700"],tP["500"],tP["300"],tP["100"]],[tM["900"],tM["700"],tM["500"],tM["300"],tM["100"]],[tj["900"],tj["700"],tj["500"],tj["300"],tj["100"]],["#194D33",tL["700"],tL["500"],tL["300"],tL["100"]],[tI["900"],tI["700"],tI["500"],tI["300"],tI["100"]],[tD["900"],tD["700"],tD["500"],tD["300"],tD["100"]],[tF["900"],tF["700"],tF["500"],tF["300"],tF["100"]],[tB["900"],tB["700"],tB["500"],tB["300"],tB["100"]],[tz["900"],tz["700"],tz["500"],tz["300"],tz["100"]],[t$["900"],t$["700"],t$["500"],t$["300"],t$["100"]],[tU["900"],tU["700"],tU["500"],tU["300"],tU["100"]],[tH["900"],tH["700"],tH["500"],tH["300"],tH["100"]],["#000000","#525252","#969696","#D9D9D9","#FFFFFF"]],styles:{}},tg(ng);var nm=function(e){var t=e.onChange,n=e.onSwatchHover,r=e.hex,o=e.colors,i=e.width,a=e.triangle,c=e.styles,u=e.className,f=(0,s.ZP)(ev({default:{card:{width:i,background:"#fff",border:"0 solid rgba(0,0,0,0.25)",boxShadow:"0 1px 4px rgba(0,0,0,0.25)",borderRadius:"4px",position:"relative"},body:{padding:"15px 9px 9px 15px"},label:{fontSize:"18px",color:"#fff"},triangle:{width:"0px",height:"0px",borderStyle:"solid",borderWidth:"0 9px 10px 9px",borderColor:"transparent transparent #fff transparent",position:"absolute"},triangleShadow:{width:"0px",height:"0px",borderStyle:"solid",borderWidth:"0 9px 10px 9px",borderColor:"transparent transparent rgba(0,0,0,.1) transparent",position:"absolute"},hash:{background:"#F0F0F0",height:"30px",width:"30px",borderRadius:"4px 0 0 4px",float:"left",color:"#98A1A4",display:"flex",alignItems:"center",justifyContent:"center"},input:{width:"100px",fontSize:"14px",color:"#666",border:"0px",outline:"none",height:"28px",boxShadow:"inset 0 0 0 1px #F0F0F0",boxSizing:"content-box",borderRadius:"0 4px 4px 0",float:"left",paddingLeft:"8px"},swatch:{width:"30px",height:"30px",float:"left",borderRadius:"4px",margin:"0 6px 6px 
0"},clear:{clear:"both"}},"hide-triangle":{triangle:{display:"none"},triangleShadow:{display:"none"}},"top-left-triangle":{triangle:{top:"-10px",left:"12px"},triangleShadow:{top:"-11px",left:"12px"}},"top-right-triangle":{triangle:{top:"-10px",right:"12px"},triangleShadow:{top:"-11px",right:"12px"}}},void 0===c?{}:c),{"hide-triangle":"hide"===a,"top-left-triangle":"top-left"===a,"top-right-triangle":"top-right"===a}),d=function(e,n){console.log("click"),tu(e)&&t({hex:e,source:"hex"},n)};return l.createElement("div",{style:f.card,className:"twitter-picker "+(void 0===u?"":u)},l.createElement("div",{style:f.triangleShadow}),l.createElement("div",{style:f.triangle}),l.createElement("div",{style:f.body},(0,tS.Z)(o,function(e,t){return l.createElement(tx,{key:t,color:e,hex:e,style:f.swatch,onClick:d,onHover:n,focusStyle:{boxShadow:"0 0 4px "+e}})}),l.createElement("div",{style:f.hash},"#"),l.createElement(E,{label:null,style:{input:f.input},value:r.replace("#",""),onChange:d}),l.createElement("div",{style:f.clear})))};nm.propTypes={width:A().oneOfType([A().string,A().number]),triangle:A().oneOf(["hide","top-left","top-right"]),colors:A().arrayOf(A().string),styles:A().object},nm.defaultProps={width:276,colors:["#FF6900","#FCB900","#7BDCB5","#00D084","#8ED1FC","#0693E3","#ABB8C3","#EB144C","#F78DA7","#9900EF"],triangle:"top-left",styles:{}};var nb=tg(nm),nv=function(e){var t=(0,s.ZP)({default:{picker:{width:"20px",height:"20px",borderRadius:"22px",border:"2px #fff solid",transform:"translate(-12px, -13px)",background:"hsl("+Math.round(e.hsl.h)+", "+Math.round(100*e.hsl.s)+"%, "+Math.round(100*e.hsl.l)+"%)"}}});return l.createElement("div",{style:t.picker})};nv.propTypes={hsl:A().shape({h:A().number,s:A().number,l:A().number,a:A().number})},nv.defaultProps={hsl:{a:1,h:249.94,l:.2,s:.5}};var ny=function(e){var t=(0,s.ZP)({default:{picker:{width:"20px",height:"20px",borderRadius:"22px",transform:"translate(-10px, -7px)",background:"hsl("+Math.round(e.hsl.h)+", 100%, 50%)",border:"2px white solid"}}});return l.createElement("div",{style:t.picker})};ny.propTypes={hsl:A().shape({h:A().number,s:A().number,l:A().number,a:A().number})},ny.defaultProps={hsl:{a:1,h:249.94,l:.2,s:.5}};var nx=function(e){var t=e.onChange,n=e.rgb,r=e.hsl,o=e.hex,i=e.hsv,a=function(e,n){if(e.hex)tu(e.hex)&&t({hex:e.hex,source:"hex"},n);else if(e.rgb){var r=e.rgb.split(",");td(e.rgb,"rgb")&&t({r:r[0],g:r[1],b:r[2],a:1,source:"rgb"},n)}else if(e.hsv){var o=e.hsv.split(",");td(e.hsv,"hsv")&&(o[2]=o[2].replace("%",""),o[1]=o[1].replace("%",""),o[0]=o[0].replace("\xb0",""),1==o[1]?o[1]=.01:1==o[2]&&(o[2]=.01),t({h:Number(o[0]),s:Number(o[1]),v:Number(o[2]),source:"hsv"},n))}else if(e.hsl){var i=e.hsl.split(",");td(e.hsl,"hsl")&&(i[2]=i[2].replace("%",""),i[1]=i[1].replace("%",""),i[0]=i[0].replace("\xb0",""),1==d[1]?d[1]=.01:1==d[2]&&(d[2]=.01),t({h:Number(i[0]),s:Number(i[1]),v:Number(i[2]),source:"hsl"},n))}},c=(0,s.ZP)({default:{wrap:{display:"flex",height:"100px",marginTop:"4px"},fields:{width:"100%"},column:{paddingTop:"10px",display:"flex",justifyContent:"space-between"},double:{padding:"0px 4.4px",boxSizing:"border-box"},input:{width:"100%",height:"38px",boxSizing:"border-box",padding:"4px 10% 3px",textAlign:"center",border:"1px solid #dadce0",fontSize:"11px",textTransform:"lowercase",borderRadius:"5px",outline:"none",fontFamily:"Roboto,Arial,sans-serif"},input2:{height:"38px",width:"100%",border:"1px solid 
#dadce0",boxSizing:"border-box",fontSize:"11px",textTransform:"lowercase",borderRadius:"5px",outline:"none",paddingLeft:"10px",fontFamily:"Roboto,Arial,sans-serif"},label:{textAlign:"center",fontSize:"12px",background:"#fff",position:"absolute",textTransform:"uppercase",color:"#3c4043",width:"35px",top:"-6px",left:"0",right:"0",marginLeft:"auto",marginRight:"auto",fontFamily:"Roboto,Arial,sans-serif"},label2:{left:"10px",textAlign:"center",fontSize:"12px",background:"#fff",position:"absolute",textTransform:"uppercase",color:"#3c4043",width:"32px",top:"-6px",fontFamily:"Roboto,Arial,sans-serif"},single:{flexGrow:"1",margin:"0px 4.4px"}}}),u=n.r+", "+n.g+", "+n.b,f=Math.round(r.h)+"\xb0, "+Math.round(100*r.s)+"%, "+Math.round(100*r.l)+"%",d=Math.round(i.h)+"\xb0, "+Math.round(100*i.s)+"%, "+Math.round(100*i.v)+"%";return l.createElement("div",{style:c.wrap,className:"flexbox-fix"},l.createElement("div",{style:c.fields},l.createElement("div",{style:c.double},l.createElement(E,{style:{input:c.input,label:c.label},label:"hex",value:o,onChange:a})),l.createElement("div",{style:c.column},l.createElement("div",{style:c.single},l.createElement(E,{style:{input:c.input2,label:c.label2},label:"rgb",value:u,onChange:a})),l.createElement("div",{style:c.single},l.createElement(E,{style:{input:c.input2,label:c.label2},label:"hsv",value:d,onChange:a})),l.createElement("div",{style:c.single},l.createElement(E,{style:{input:c.input2,label:c.label2},label:"hsl",value:f,onChange:a})))))},nw=function(e){var t=e.width,n=e.onChange,r=e.rgb,o=e.hsl,i=e.hsv,a=e.hex,c=e.header,u=e.styles,f=e.className,d=(0,s.ZP)(ev({default:{picker:{width:t,background:"#fff",border:"1px solid #dfe1e5",boxSizing:"initial",display:"flex",flexWrap:"wrap",borderRadius:"8px 8px 0px 0px"},head:{height:"57px",width:"100%",paddingTop:"16px",paddingBottom:"16px",paddingLeft:"16px",fontSize:"20px",boxSizing:"border-box",fontFamily:"Roboto-Regular,HelveticaNeue,Arial,sans-serif"},saturation:{width:"70%",padding:"0px",position:"relative",overflow:"hidden"},swatch:{width:"30%",height:"228px",padding:"0px",background:"rgba("+r.r+", "+r.g+", "+r.b+", 1)",position:"relative",overflow:"hidden"},body:{margin:"auto",width:"95%"},controls:{display:"flex",boxSizing:"border-box",height:"52px",paddingTop:"22px"},color:{width:"32px"},hue:{height:"8px",position:"relative",margin:"0px 16px 0px 16px",width:"100%"},Hue:{radius:"2px"}}},void 0===u?{}:u));return l.createElement("div",{style:d.picker,className:"google-picker "+(void 0===f?"":f)},l.createElement("div",{style:d.head},c),l.createElement("div",{style:d.swatch}),l.createElement("div",{style:d.saturation},l.createElement(eD,{hsl:o,hsv:i,pointer:nv,onChange:n})),l.createElement("div",{style:d.body},l.createElement("div",{style:d.controls,className:"flexbox-fix"},l.createElement("div",{style:d.hue},l.createElement(O,{style:d.Hue,hsl:o,radius:"4px",pointer:ny,onChange:n}))),l.createElement(nx,{rgb:r,hsl:o,hex:a,hsv:i,onChange:n})))};nw.propTypes={width:A().oneOfType([A().string,A().number]),styles:A().object,header:A().string},nw.defaultProps={width:652,styles:{},header:"Color picker"},tg(nw)},58467:function(e,t,n){"use strict";function r(e){return(r="function"==typeof Symbol&&"symbol"==typeof Symbol.iterator?function(e){return typeof e}:function(e){return e&&"function"==typeof Symbol&&e.constructor===Symbol&&e!==Symbol.prototype?"symbol":typeof e})(e)}Object.defineProperty(t,"__esModule",{value:!0}),t.CopyToClipboard=void 0;var o=l(n(86006)),i=l(n(27652)),a=["text","onCopy","options","children"];function 
l(e){return e&&e.__esModule?e:{default:e}}function s(e,t){var n=Object.keys(e);if(Object.getOwnPropertySymbols){var r=Object.getOwnPropertySymbols(e);t&&(r=r.filter(function(t){return Object.getOwnPropertyDescriptor(e,t).enumerable})),n.push.apply(n,r)}return n}function c(e){for(var t=1;t=0||(o[n]=e[n]);return o}(e,t);if(Object.getOwnPropertySymbols){var i=Object.getOwnPropertySymbols(e);for(r=0;r=0)&&Object.prototype.propertyIsEnumerable.call(e,n)&&(o[n]=e[n])}return o}(e,a),r=o.default.Children.only(t);return o.default.cloneElement(r,c(c({},n),{},{onClick:this.onClick}))}}],u(g.prototype,n),l&&u(g,l),Object.defineProperty(g,"prototype",{writable:!1}),g}(o.default.PureComponent);t.CopyToClipboard=g,h(g,"defaultProps",{onCopy:void 0,options:void 0})},10688:function(e,t,n){"use strict";var r=n(58467).CopyToClipboard;r.CopyToClipboard=r,e.exports=r},83393:function(e,t,n){"use strict";n.d(t,{Ybf:function(){return i},jRj:function(){return o}});var r=n(83270);function o(e){return(0,r.w_)({tag:"svg",attr:{viewBox:"0 0 24 24",fill:"none",stroke:"currentColor",strokeWidth:"2",strokeLinecap:"round",strokeLinejoin:"round"},child:[{tag:"circle",attr:{cx:"11",cy:"11",r:"8"}},{tag:"line",attr:{x1:"21",y1:"21",x2:"16.65",y2:"16.65"}}]})(e)}function i(e){return(0,r.w_)({tag:"svg",attr:{viewBox:"0 0 24 24",fill:"none",stroke:"currentColor",strokeWidth:"2",strokeLinecap:"round",strokeLinejoin:"round"},child:[{tag:"polyline",attr:{points:"3 6 5 6 21 6"}},{tag:"path",attr:{d:"M19 6v14a2 2 0 0 1-2 2H7a2 2 0 0 1-2-2V6m3 0V4a2 2 0 0 1 2-2h4a2 2 0 0 1 2 2v2"}},{tag:"line",attr:{x1:"10",y1:"11",x2:"10",y2:"17"}},{tag:"line",attr:{x1:"14",y1:"11",x2:"14",y2:"17"}}]})(e)}},83270:function(e,t,n){"use strict";n.d(t,{w_:function(){return s}});var r=n(86006),o={color:void 0,size:void 0,className:void 0,style:void 0,attr:void 0},i=r.createContext&&r.createContext(o),a=function(){return(a=Object.assign||function(e){for(var t,n=1,r=arguments.length;nt.indexOf(r)&&(n[r]=e[r]);if(null!=e&&"function"==typeof Object.getOwnPropertySymbols)for(var o=0,r=Object.getOwnPropertySymbols(e);ot.indexOf(r[o])&&Object.prototype.propertyIsEnumerable.call(e,r[o])&&(n[r[o]]=e[r[o]]);return n};function s(e){return function(t){return r.createElement(c,a({attr:a({},e.attr)},t),function e(t){return t&&t.map(function(t,n){return r.createElement(t.tag,a({key:n},t.attr),e(t.child))})}(e.child))}}function c(e){var t=function(t){var n,o=e.attr,i=e.size,s=e.title,c=l(e,["attr","size","title"]),u=i||t.size||"1em";return t.className&&(n=t.className),e.className&&(n=(n?n+" ":"")+e.className),r.createElement("svg",a({stroke:"currentColor",fill:"currentColor",strokeWidth:"0"},t.attr,o,c,{className:n,style:a(a({color:e.color||t.color},t.style),e.style),height:u,width:u,xmlns:"http://www.w3.org/2000/svg"}),s&&r.createElement("title",null,s),e.children)};return void 0!==i?r.createElement(i.Consumer,null,function(e){return t(e)}):t(o)}},29389:function(e,t){"use strict";/** - * @license React - * react-is.production.min.js - * - * Copyright (c) Facebook, Inc. and its affiliates. - * - * This source code is licensed under the MIT license found in the - * LICENSE file in the root directory of this source tree. 
- */var n,r=Symbol.for("react.element"),o=Symbol.for("react.portal"),i=Symbol.for("react.fragment"),a=Symbol.for("react.strict_mode"),l=Symbol.for("react.profiler"),s=Symbol.for("react.provider"),c=Symbol.for("react.context"),u=Symbol.for("react.server_context"),f=Symbol.for("react.forward_ref"),d=Symbol.for("react.suspense"),p=Symbol.for("react.suspense_list"),h=Symbol.for("react.memo"),g=Symbol.for("react.lazy"),m=Symbol.for("react.offscreen");function b(e){if("object"==typeof e&&null!==e){var t=e.$$typeof;switch(t){case r:switch(e=e.type){case i:case l:case a:case d:case p:return e;default:switch(e=e&&e.$$typeof){case u:case c:case f:case g:case h:case s:return e;default:return t}}case o:return t}}}n=Symbol.for("react.module.reference"),t.ContextConsumer=c,t.ContextProvider=s,t.Element=r,t.ForwardRef=f,t.Fragment=i,t.Lazy=g,t.Memo=h,t.Portal=o,t.Profiler=l,t.StrictMode=a,t.Suspense=d,t.SuspenseList=p,t.isAsyncMode=function(){return!1},t.isConcurrentMode=function(){return!1},t.isContextConsumer=function(e){return b(e)===c},t.isContextProvider=function(e){return b(e)===s},t.isElement=function(e){return"object"==typeof e&&null!==e&&e.$$typeof===r},t.isForwardRef=function(e){return b(e)===f},t.isFragment=function(e){return b(e)===i},t.isLazy=function(e){return b(e)===g},t.isMemo=function(e){return b(e)===h},t.isPortal=function(e){return b(e)===o},t.isProfiler=function(e){return b(e)===l},t.isStrictMode=function(e){return b(e)===a},t.isSuspense=function(e){return b(e)===d},t.isSuspenseList=function(e){return b(e)===p},t.isValidElementType=function(e){return"string"==typeof e||"function"==typeof e||e===i||e===l||e===a||e===d||e===p||e===m||"object"==typeof e&&null!==e&&(e.$$typeof===g||e.$$typeof===h||e.$$typeof===s||e.$$typeof===c||e.$$typeof===f||e.$$typeof===n||void 0!==e.getModuleId)},t.typeOf=b},59605:function(e,t,n){"use strict";e.exports=n(29389)},30458:function(e,t,n){"use strict";let r=n(86006),o=function(e){let t="";return"string"==typeof e?t=e:"number"==typeof e?t=e.toString():e instanceof Array?e.forEach(function(e){t+=o(e)}):(0,r.isValidElement)(e)&&(t+=o(e.props.children)),t};t.Z=o},61555:function(e,t,n){"use strict";n.d(t,{Av:function(){return a},pF:function(){return r},xv:function(){return i},zi:function(){return o}});var r="right-scroll-bar-position",o="width-before-scroll-bar",i="with-scroll-bars-hidden",a="--removed-body-scroll-bar-size"},90450:function(e,t,n){"use strict";n.d(t,{jp:function(){return d}});var r=n(86006),o=n(85481),i=n(61555),a={left:0,top:0,right:0,gap:0},l=function(e){return parseInt(e||"",10)||0},s=function(e){var t=window.getComputedStyle(document.body),n=t["padding"===e?"paddingLeft":"marginLeft"],r=t["padding"===e?"paddingTop":"marginTop"],o=t["padding"===e?"paddingRight":"marginRight"];return[l(n),l(r),l(o)]},c=function(e){if(void 0===e&&(e="margin"),"undefined"==typeof window)return a;var t=s(e),n=document.documentElement.clientWidth,r=window.innerWidth;return{left:t[0],top:t[1],right:t[2],gap:Math.max(0,r-n+t[2]-t[0])}},u=(0,o.Ws)(),f=function(e,t,n,r){var o=e.left,a=e.top,l=e.right,s=e.gap;return void 0===n&&(n="margin"),"\n .".concat(i.xv," {\n overflow: hidden ").concat(r,";\n padding-right: ").concat(s,"px ").concat(r,";\n }\n body {\n overflow: hidden ").concat(r,";\n overscroll-behavior: contain;\n ").concat([t&&"position: relative ".concat(r,";"),"margin"===n&&"\n padding-left: ".concat(o,"px;\n padding-top: ").concat(a,"px;\n padding-right: ").concat(l,"px;\n margin-left:0;\n margin-top:0;\n margin-right: ").concat(s,"px ").concat(r,";\n 
"),"padding"===n&&"padding-right: ".concat(s,"px ").concat(r,";")].filter(Boolean).join(""),"\n }\n \n .").concat(i.pF," {\n right: ").concat(s,"px ").concat(r,";\n }\n \n .").concat(i.zi," {\n margin-right: ").concat(s,"px ").concat(r,";\n }\n \n .").concat(i.pF," .").concat(i.pF," {\n right: 0 ").concat(r,";\n }\n \n .").concat(i.zi," .").concat(i.zi," {\n margin-right: 0 ").concat(r,";\n }\n \n body {\n ").concat(i.Av,": ").concat(s,"px;\n }\n")},d=function(e){var t=e.noRelative,n=e.noImportant,o=e.gapMode,i=void 0===o?"margin":o,a=r.useMemo(function(){return c(i)},[i]);return r.createElement(u,{styles:f(a,!t,i,n?"":"!important")})}},51859:function(e,t,n){"use strict";n.d(t,{ZP:function(){return t_}});var r,o,i=n(82685),a=n(3708),l=n(83161),s=n(99889),c=n(24245),u=n(35413);function f(e,t){(null==t||t>e.length)&&(t=e.length);for(var n=0,r=Array(t);n0?d[b]+" "+v:ec(v,/&\f/g,d[b])).trim())&&(s[m++]=y);return ew(e,t,n,0===o?eP:l,s,c,u)}function eF(e,t,n,r){return ew(e,t,n,eM,ed(e,0,r),ed(e,r+1,-1),r)}var eB=function(e,t,n){for(var r=0,o=0;r=o,o=ek(),38===r&&12===o&&(t[n]=1),!e_(o);)eS();return ed(ex,e,ev)},ez=function(e,t){var n=-1,r=44;do switch(e_(r)){case 0:38===r&&12===ek()&&(t[n]=1),e[n]+=eB(ev-1,t,n);break;case 2:e[n]+=eC(r);break;case 4:if(44===r){e[++n]=58===ek()?"&\f":"",t[n]=e[n].length;break}default:e[n]+=el(r)}while(r=eS());return e},e$=function(e,t){var n;return n=ez(eO(e),t),ex="",n},eU=new WeakMap,eH=function(e){if("rule"===e.type&&e.parent&&!(e.length<1)){for(var t=e.value,n=e.parent,r=e.column===n.column&&e.line===n.line;"rule"!==n.type;)if(!(n=n.parent))return;if((1!==e.props.length||58===t.charCodeAt(0)||eU.get(n))&&!r){eU.set(e,!0);for(var o=[],i=e$(t,o),a=n.props,l=0,s=0;l-1&&!e.return)switch(e.type){case eM:e.return=function e(t,n){switch(45^ef(t,0)?(((n<<2^ef(t,0))<<2^ef(t,1))<<2^ef(t,2))<<2^ef(t,3):0){case 5103:return eR+"print-"+t+t;case 5737:case 4201:case 3177:case 3433:case 1641:case 4457:case 2921:case 5572:case 6356:case 5844:case 3191:case 6645:case 3005:case 6391:case 5879:case 5623:case 6135:case 4599:case 4855:case 4215:case 6389:case 5109:case 5365:case 5621:case 3829:return eR+t+t;case 5349:case 4246:case 4810:case 6968:case 2756:return eR+t+eN+t+eA+t+t;case 6828:case 4268:return eR+t+eA+t+t;case 6165:return eR+t+eA+"flex-"+t+t;case 5187:return eR+t+ec(t,/(\w+).+(:[^]+)/,eR+"box-$1$2"+eA+"flex-$1$2")+t;case 5443:return eR+t+eA+"flex-item-"+ec(t,/flex-|-self/,"")+t;case 4675:return eR+t+eA+"flex-line-pack"+ec(t,/align-content|flex-|-self/,"")+t;case 5548:return eR+t+eA+ec(t,"shrink","negative")+t;case 5292:return eR+t+eA+ec(t,"basis","preferred-size")+t;case 6060:return eR+"box-"+ec(t,"-grow","")+eR+t+eA+ec(t,"grow","positive")+t;case 4554:return eR+ec(t,/([^-])(transform)/g,"$1"+eR+"$2")+t;case 6187:return ec(ec(ec(t,/(zoom-|grab)/,eR+"$1"),/(image-set)/,eR+"$1"),t,"")+t;case 5495:case 3959:return ec(t,/(image-set\([^]*)/,eR+"$1$`$1");case 4968:return ec(ec(t,/(.+:)(flex-)?(.*)/,eR+"box-pack:$3"+eA+"flex-pack:$3"),/s.+-b[^;]+/,"justify")+eR+t+t;case 4095:case 3583:case 4068:case 2532:return ec(t,/(.+)-inline(.+)/,eR+"$1$2")+t;case 8116:case 7059:case 5753:case 5535:case 5445:case 5701:case 4933:case 4677:case 5533:case 5789:case 5021:case 4765:if(ep(t)-1-n>6)switch(ef(t,n+1)){case 109:if(45!==ef(t,n+4))break;case 102:return ec(t,/(.+:)(.+)-([^]+)/,"$1"+eR+"$2-$3$1"+eN+(108==ef(t,n+3)?"$3":"$2-$3"))+t;case 115:return~eu(t,"stretch")?e(ec(t,"stretch","fill-available"),n)+t:t}break;case 4949:if(115!==ef(t,n+1))break;case 
6444:switch(ef(t,ep(t)-3-(~eu(t,"!important")&&10))){case 107:return ec(t,":",":"+eR)+t;case 101:return ec(t,/(.+:)([^;!]+)(;|!.+)?/,"$1"+eR+(45===ef(t,14)?"inline-":"")+"box$3$1"+eR+"$2$3$1"+eA+"$2box$3")+t}break;case 5936:switch(ef(t,n+11)){case 114:return eR+t+eA+ec(t,/[svh]\w+-[tblr]{2}/,"tb")+t;case 108:return eR+t+eA+ec(t,/[svh]\w+-[tblr]{2}/,"tb-rl")+t;case 45:return eR+t+eA+ec(t,/[svh]\w+-[tblr]{2}/,"lr")+t}return eR+t+eA+t+t}return t}(e.value,e.length);break;case ej:return eL([eE(e,{value:ec(e.value,"@","@"+eR)})],r);case eP:if(e.length)return e.props.map(function(t){var n;switch(n=t,(n=/(::plac\w+|:read-\w+)/.exec(n))?n[0]:n){case":read-only":case":read-write":return eL([eE(e,{props:[ec(t,/:(read-\w+)/,":"+eN+"$1")]})],r);case"::placeholder":return eL([eE(e,{props:[ec(t,/:(plac\w+)/,":"+eR+"input-$1")]}),eE(e,{props:[ec(t,/:(plac\w+)/,":"+eN+"$1")]}),eE(e,{props:[ec(t,/:(plac\w+)/,eA+"input-$1")]})],r)}return""}).join("")}}],eV=function(e){var t,n,r,o,i,a=e.key;if("css"===a){var l=document.querySelectorAll("style[data-emotion]:not([data-s])");Array.prototype.forEach.call(l,function(e){-1!==e.getAttribute("data-emotion").indexOf(" ")&&(document.head.appendChild(e),e.setAttribute("data-s",""))})}var s=e.stylisPlugins||eq,c={},u=[];o=e.container||document.head,Array.prototype.forEach.call(document.querySelectorAll('style[data-emotion^="'+a+' "]'),function(e){for(var t=e.getAttribute("data-emotion").split(" "),n=1;n2||e_(ey)>3?"":" "}(m);break;case 92:_+=function(e,t){for(var n;--t&&eS()&&!(ey<48)&&!(ey>102)&&(!(ey>57)||!(ey<65))&&(!(ey>70)||!(ey<97)););return n=ev+(t<6&&32==ek()&&32==eS()),ed(ex,e,n)}(ev-1,7);continue;case 47:switch(ek()){case 42:case 47:eh(ew(u=function(e,t){for(;eS();)if(e+ey===57)break;else if(e+ey===84&&47===ek())break;return"/*"+ed(ex,t,ev-1)+"*"+el(47===e?e:eS())}(eS(),ev),n,r,eT,el(ey),ed(u,2,-2),0),c);break;default:_+="/"}break;case 123*b:s[f++]=ep(_)*y;case 125*b:case 59:case 0:switch(x){case 0:case 125:v=0;case 59+d:-1==y&&(_=ec(_,/\f/g,"")),g>0&&ep(_)-p&&eh(g>32?eF(_+";",o,r,p-1):eF(ec(_," ","")+";",o,r,p-2),c);break;case 59:_+=";";default:if(eh(k=eD(_,n,r,f,d,i,s,w,E=[],S=[],p),a),123===x){if(0===d)e(_,n,k,k,E,a,p,s,S);else switch(99===h&&110===ef(_,3)?100:h){case 100:case 108:case 109:case 115:e(t,k,k,o&&eh(eD(t,k,k,0,0,i,s,w,i,E=[],p),S),i,S,p,s,o?E:S);break;default:e(_,k,k,k,[""],S,0,s,S)}}}f=d=g=0,b=y=1,w=_="",p=l;break;case 58:p=1+ep(_),g=m;default:if(b<1){if(123==x)--b;else if(125==x&&0==b++&&125==(ey=ev>0?ef(ex,--ev):0,em--,10===ey&&(em=1,eg--),ey))continue}switch(_+=el(x),x*b){case 38:y=d>0?1:(_+="\f",-1);break;case 44:s[f++]=(ep(_)-1)*y,y=1;break;case 64:45===ek()&&(_+=eC(eS())),h=ek(),d=p=ep(w=_+=function(e){for(;!e_(ek());)eS();return ed(ex,e,ev)}(ev)),x++;break;case 45:45===m&&2==ep(_)&&(b=0)}}return a}("",null,null,null,[""],t=eO(t=e),0,[0],t),ex="",n),f)},p={key:a,sheet:new ei({key:a,container:o,nonce:e.nonce,speedy:e.speedy,prepend:e.prepend,insertionPoint:e.insertionPoint}),nonce:e.nonce,inserted:c,registered:{},insert:function(e,t,n,r){i=n,d(e?e+"{"+t.styles+"}":t.styles),r&&(p.inserted[t.name]=!0)}};return 
p.sheet.hydrate(u),p},eW={animationIterationCount:1,aspectRatio:1,borderImageOutset:1,borderImageSlice:1,borderImageWidth:1,boxFlex:1,boxFlexGroup:1,boxOrdinalGroup:1,columnCount:1,columns:1,flex:1,flexGrow:1,flexPositive:1,flexShrink:1,flexNegative:1,flexOrder:1,gridRow:1,gridRowEnd:1,gridRowSpan:1,gridRowStart:1,gridColumn:1,gridColumnEnd:1,gridColumnSpan:1,gridColumnStart:1,msGridRow:1,msGridRowSpan:1,msGridColumn:1,msGridColumnSpan:1,fontWeight:1,lineHeight:1,opacity:1,order:1,orphans:1,tabSize:1,widows:1,zIndex:1,zoom:1,WebkitLineClamp:1,fillOpacity:1,floodOpacity:1,stopOpacity:1,strokeDasharray:1,strokeDashoffset:1,strokeMiterlimit:1,strokeOpacity:1,strokeWidth:1},eG=/[A-Z]|^ms/g,eK=/_EMO_([^_]+?)_([^]*?)_EMO_/g,eY=function(e){return 45===e.charCodeAt(1)},eX=function(e){return null!=e&&"boolean"!=typeof e},eJ=(r=Object.create(null),function(e){return void 0===r[e]&&(r[e]=eY(e)?e:e.replace(eG,"-$&").toLowerCase()),r[e]}),eQ=function(e,t){switch(e){case"animation":case"animationName":if("string"==typeof t)return t.replace(eK,function(e,t,n){return o={name:t,styles:n,next:o},t})}return 1===eW[e]||eY(e)||"number"!=typeof t||0===t?t:t+"px"};function e0(e,t,n){if(null==n)return"";if(void 0!==n.__emotion_styles)return n;switch(typeof n){case"boolean":return"";case"object":if(1===n.anim)return o={name:n.name,styles:n.styles,next:o},n.name;if(void 0!==n.styles){var r=n.next;if(void 0!==r)for(;void 0!==r;)o={name:r.name,styles:r.styles,next:o},r=r.next;return n.styles+";"}return function(e,t,n){var r="";if(Array.isArray(n))for(var o=0;o=4;++r,o-=4)t=(65535&(t=255&e.charCodeAt(r)|(255&e.charCodeAt(++r))<<8|(255&e.charCodeAt(++r))<<16|(255&e.charCodeAt(++r))<<24))*1540483477+((t>>>16)*59797<<16),t^=t>>>24,n=(65535&t)*1540483477+((t>>>16)*59797<<16)^(65535&n)*1540483477+((n>>>16)*59797<<16);switch(o){case 3:n^=(255&e.charCodeAt(r+2))<<16;case 2:n^=(255&e.charCodeAt(r+1))<<8;case 1:n^=255&e.charCodeAt(r),n=(65535&n)*1540483477+((n>>>16)*59797<<16)}return n^=n>>>13,(((n=(65535&n)*1540483477+((n>>>16)*59797<<16))^n>>>15)>>>0).toString(36)}(a)+c,styles:a,next:o}};function e5(e,t,n){var r="";return n.split(" ").forEach(function(n){void 0!==e[n]?t.push(e[n]+";"):r+=n+" "}),r}var e3=function(e,t,n){var r=e.key+"-"+t.name;!1===n&&void 0===e.registered[r]&&(e.registered[r]=t.styles)},e6=function(e,t,n){e3(e,t,n);var r=e.key+"-"+t.name;if(void 0===e.inserted[t.name]){var o=t;do e.insert(t===o?"."+r:"",o,e.sheet,!0),o=o.next;while(void 0!==o)}};function e4(e,t){if(void 0===e.inserted[t.name])return e.insert("",t,e.sheet,!0)}function e8(e,t,n){var r=[],o=e5(e,r,n);return r.length<2?n:o+t(r)}var e9=function e(t){for(var n="",r=0;r1&&void 0!==arguments[1]?arguments[1]:"white",n="background-color: ".concat(e,"; border-radius: 4px; padding: 2px 4px;");return t&&(n+=" color: ".concat(t,";")),[n,""]}function ti(e,t){for(var n,r,o=arguments.length,i=Array(o>2?o-2:0),a=2;at?(e.apply(void 0,i),n=l):(clearTimeout(r),r=tl()(function(){e.apply(void 0,i),n=U()()},Math.max(0,t-l+n)))}}(function(e){var t=i.current;t&&t(e)},t)},[t,i]),l=(0,v.useCallback)(function(e){e.timeStampLow=U()(),a(e)},[a]);return(0,v.useLayoutEffect)(function(){return o.addEventListener(n,l,{passive:!0}),l({target:o,type:n}),function(){return o.removeEventListener(n,l)}},[n,l,o]),!1};ts.defaultProps={debounce:200};var tc=n(44170),tu=n.n(tc);function tf(e,t){var n=tu()(t-e),r=Math.sqrt(Math.abs(t-e)),o=e+r*n;return n>0?Math.min(t,o):Math.max(t,o)}var td=function(e){var 
t=e.name,n=e.onEnd,r=e.target,o=e.value,i=(0,v.useRef)(),a=(0,v.useCallback)(function(e,t,o,l){var s=arguments.length>4&&void 0!==arguments[4]?arguments[4]:U()();("100%"===o||"number"==typeof o)&&(cancelAnimationFrame(i.current),i.current=requestAnimationFrame(function(){if(r){var i="100%"===o?r.scrollHeight-r.offsetHeight:o,c=function(e,t,n,r){for(var o=e,i=0;iMath.abs(i-c)&&(c=i),r[e]=c,i===c?n&&n(!0):a(e,t,o,l+1,s)}}))},[i,n,r]),l=(0,v.useCallback)(function(){cancelAnimationFrame(i.current),n&&n(!1)},[n]);return(0,v.useLayoutEffect)(function(){return(a(t,r[t],o,1),r)?(r.addEventListener("pointerdown",l,{passive:!0}),r.addEventListener("wheel",l,{passive:!0}),function(){r.removeEventListener("pointerdown",l),r.removeEventListener("wheel",l),cancelAnimationFrame(i.current)}):function(){return cancelAnimationFrame(i.current)}},[a,i,l,t,r,o]),!1};function tp(e){var t=p((0,v.useState)(e),2),n=t[0],r=t[1],o=(0,v.useRef)(),i=(0,v.useCallback)(function(e){"function"==typeof e?i(function(t){return e=e(t),o.current=e,e}):(o.current=e,i(e))},[o]);return o.current=n,[n,r,o]}function th(e,t){var n=V()(e);if(G()){var r=G()(e);t&&(r=Y()(r).call(r,function(t){return J()(e,t).enumerable})),n.push.apply(n,r)}return n}function tg(e){for(var t=1;t1&&void 0!==arguments[1]?arguments[1]:{},n=t.force;return void 0!==n&&n?function(){for(var t=arguments.length,n=Array(t),r=0;r",{force:o})},[o]);a="top"===a?"top":"bottom";var u=(0,v.useRef)(0),f=(0,v.useRef)(i),d=p(tp("top"===a?0:"100%"),3),h=d[0],g=d[1],m=d[2],b=p(tp(null),3),S=b[0],_=b[1],O=b[2],C=(0,v.useRef)(0),A=(0,v.useRef)(0),N=(0,v.useRef)(0),R=p((0,v.useState)(!0),2),T=R[0],M=R[1],L=p((0,v.useState)(!0),2),D=L[0],B=L[1],$=p((0,v.useState)(!0),2),H=$[0],q=$[1],V=p((0,v.useState)(!1),2),W=V[0],G=V[1],K=p(tp(!0),3),Y=K[0],X=K[1],J=K[2],Q=(0,v.useRef)([]),ee=(0,v.useCallback)(function(e){var t=O.current;return Q.current.push(e),t&&e({scrollTop:t.scrollTop}),function(){var t=Q.current,n=I()(t).call(t,e);~n&&F()(t).call(t,n,1)}},[Q,O]),et=(0,v.useCallback)(function(){var e=m.current;c(function(){var t;return z()(t=["%cSpineTo%c: %conEnd%c is fired."]).call(t,P(to("magenta")),P(to("orange")),[{animateTo:e}])}),u.current=U()(),tv(e,a)||X(!1),g(null)},[m,c,u,a,g,X]),en=(0,v.useCallback)(function(e){var t=arguments.length>1&&void 0!==arguments[1]?arguments[1]:{},n=t.behavior,r=O.current;if("number"!=typeof e&&"100%"!==e)return console.warn('react-scroll-to-bottom: Arguments passed to scrollTo() must be either number or "100%".');c(function(){var t;return[z()(t=["%cscrollTo%c: Will scroll to %c".concat("number"==typeof e?e+"px":e.replace(/%/g,"%%"),"%c")]).call(t,P(to("lime","")),P(to("purple"))),{behavior:n,nextAnimateTo:e,target:r}]}),"auto"===n?(et(),r&&(r.scrollTop="100%"===e?r.scrollHeight-r.offsetHeight:e)):("smooth"!==n&&console.warn('react-scroll-to-bottom: Please set "behavior" when calling "scrollTo". In future versions, the default behavior will be changed from smooth scrolling to discrete scrolling to align with HTML Standard.'),g(e)),tv(e,a)&&(c(function(){var t;return[z()(t=["%cscrollTo%c: Scrolling to end, will set sticky to %ctrue%c."]).call(t,P(to("lime","")),P(to("purple"))),[{mode:a,nextAnimateTo:e}]]}),X(!0))},[c,et,a,g,X,O]),er=(0,v.useCallback)(function(){var e=arguments.length>0&&void 0!==arguments[0]?arguments[0]:{},t=e.behavior;c(function(){var e;return z()(e=["%cscrollToBottom%c: Called"]).call(e,P(to("yellow","")))}),"smooth"!==t&&console.warn('react-scroll-to-bottom: Please set "behavior" when calling "scrollToBottom". 
In future versions, the default behavior will be changed from smooth scrolling to discrete scrolling to align with HTML Standard.'),en("100%",{behavior:t||"smooth"})},[c,en]),eo=(0,v.useCallback)(function(){var e=arguments.length>0&&void 0!==arguments[0]?arguments[0]:{},t=e.behavior;c(function(){var e;return z()(e=["%cscrollToTop%c: Called"]).call(e,P(to("yellow","")))}),"smooth"!==t&&console.warn('react-scroll-to-bottom: Please set "behavior" when calling "scrollToTop". In future versions, the default behavior will be changed from smooth scrolling to discrete scrolling to align with HTML Standard.'),en(0,{behavior:t||"smooth"})},[c,en]),ei=(0,v.useCallback)(function(){var e=arguments.length>0&&void 0!==arguments[0]?arguments[0]:{},t=e.behavior;c(function(){var e;return z()(e=["%cscrollToEnd%c: Called"]).call(e,P(to("yellow","")))}),"smooth"!==t&&console.warn('react-scroll-to-bottom: Please set "behavior" when calling "scrollToEnd". In future versions, the default behavior will be changed from smooth scrolling to discrete scrolling to align with HTML Standard.');var n={behavior:t||"smooth"};"top"===a?eo(n):er(n)},[c,a,er,eo]),ea=(0,v.useCallback)(function(){var e=arguments.length>0&&void 0!==arguments[0]?arguments[0]:{},t=e.behavior;c(function(){var e;return z()(e=["%cscrollToStart%c: Called"]).call(e,P(to("yellow","")))}),"smooth"!==t&&console.warn('react-scroll-to-bottom: Please set "behavior" when calling "scrollToStart". In future versions, the default behavior will be changed from smooth scrolling to discrete scrolling to align with HTML Standard.');var n={behavior:t||"smooth"};"top"===a?er(n):eo(n)},[c,a,er,eo]),el=(0,v.useCallback)(function(){var e=O.current;if(e){if("auto"===f.current){c(function(){var e;return z()(e=["%ctarget changed%c: Initial scroll"]).call(e,P(to("blue")))}),e.scrollTop="top"===a?0:e.scrollHeight-e.offsetHeight,f.current=!1;return}var t,n=C.current,r=e.offsetHeight,o=e.scrollHeight,i=e.scrollTop,l="top"===a?0:Math.max(0,o-r-i),u=Math.max(0,n-i),d=s({maxValue:l,minValue:u,offsetHeight:r,scrollHeight:o,scrollTop:i}),p=Math.max(0,Math.min(l,d));t="top"===a||p!==l?i+p:"100%",c(function(){var e,a,s;return[z()(e=[z()(a=z()(s="%cscrollToSticky%c: Will animate from %c".concat(n,"px%c to %c")).call(s,"number"==typeof t?t+"px":t.replace(/%/g,"%%"),"%c (%c")).call(a,("100%"===t?l:t)+n,"px%c)")]).call(e,P(to("orange")),P(to("purple")),P(to("purple")),P(to("purple"))),{animateFrom:n,maxValue:l,minValue:u,nextAnimateTo:t,nextValue:p,offsetHeight:r,rawNextValue:d,scrollHeight:o,scrollTop:i}]}),en(t,{behavior:"smooth"})}},[C,c,a,s,en,O]),es=(0,v.useCallback)(function(e){var t,n=e.timeStampLow,r=m.current,o=O.current,i=null!==r;if(!(n<=u.current)&&o){var l=tb({mode:a,target:o}),s=l.atBottom,f=l.atEnd,d=l.atStart,p=l.atTop;M(s),B(f),G(d),q(p);var h=o.offsetHeight,g=o.scrollHeight,b=A.current,v=N.current,y=h!==b,x=g!==v;if(y&&(A.current=h),x&&(N.current=g),y||x)J.current&&(c(function(){var e;return[z()(e=["%conScroll%c: Size changed while sticky, calling %cscrollToSticky()%c"]).call(e,P(to("red")),P(to("orange")),[{offsetHeightChanged:y,scrollHeightChanged:x}]),{nextOffsetHeight:h,prevOffsetHeight:b,nextScrollHeight:g,prevScrollHeight:v}]}),el());else{var w=i&&tv(r,a)||f;J.current!==w&&(c(function(){var e,t,n,l;return[z()(e=["%conScroll%c: %csetSticky%c(%c".concat(w,"%c)")]).call(e,P(to("red")),P(to("red")),P(to("purple"))),z()(t=[z()(n=z()(l="(animating = %c".concat(i,"%c && isEnd = %c")).call(l,tv(r,a),"%c) || atEnd = 
%c")).call(n,f,"%c")]).call(t,P(to("purple")),P(to("purple")),P(to("purple")),[{animating:i,animateTo:r,atEnd:f,mode:a,offsetHeight:o.offsetHeight,scrollHeight:o.scrollHeight,sticky:J.current,nextSticky:w}])]}),X(w))}var E=o.scrollTop;Z()(t=Q.current).call(t,function(e){return e({scrollTop:E})})}},[m,c,u,a,A,N,Q,el,M,B,G,q,X,J,O]);(0,v.useEffect)(function(){if(S){var e,n,r=!1,o=(e=function(){var e=O.current,t=null!==m.current;J.current?tb({mode:a,target:e}).atEnd?r=!1:r?U()()-r>34&&(t||(C.current=e.scrollTop,c(function(){var e;return z()(e=["%cInterval check%c: Should sticky but not at end, calling %cscrollToSticky()%c to scroll"]).call(e,P(to("navy")),P(to("orange")))}),el()),r=!1):r=U()():e.scrollHeight<=e.offsetHeight&&!J.current&&(c(function(){var t;return[z()(t=["%cInterval check%c: Container is emptied, setting sticky back to %ctrue%c"]).call(t,P(to("navy")),P(to("purple"))),[{offsetHeight:e.offsetHeight,scrollHeight:e.scrollHeight,sticky:J.current}]]}),X(!0))},n=Math.max(17,t)||17,e(),j()(e,n));return function(){return clearInterval(o)}}},[m,t,c,a,el,X,J,S,O]);var ec=(0,v.useMemo)(function(){var e=tm[l]||(tm[l]=e7({key:"react-scroll-to-bottom--css-"+tt()().toString(26).substr(2,5).replace(/[0-9]/g,function(e){return String.fromCharCode(e.charCodeAt(0)+65)}),nonce:l}));return function(t){return e.css(t)+""}},[l]),eu=(0,v.useMemo)(function(){return{observeScrollPosition:ee,setTarget:_,styleToClassName:ec}},[ee,_,ec]),ef=(0,v.useMemo)(function(){return{atBottom:T,atEnd:D,atStart:W,atTop:H,mode:a}},[T,D,W,H,a]),ed=(0,v.useMemo)(function(){var e=null!==h;return{animating:e,animatingToEnd:e&&tv(h,a),sticky:Y}},[h,a,Y]),ep=(0,v.useMemo)(function(){return tg(tg({},ef),ed)},[ef,ed]),eh=(0,v.useMemo)(function(){return{scrollTo:en,scrollToBottom:er,scrollToEnd:ei,scrollToStart:ea,scrollToTop:eo}},[en,er,ei,ea,eo]);return(0,v.useEffect)(function(){if(S){var e=function(){N.current=S.scrollHeight};return S.addEventListener("focus",e,{capture:!0,passive:!0}),function(){return S.removeEventListener("focus",e)}}},[S]),c(function(){var e;return[z()(e=["%cRender%c: Render"]).call(e,P(to("cyan",""))),{animateTo:h,animating:null!==h,sticky:Y,target:S}]}),v.createElement(k.Provider,{value:eu},v.createElement(y.Provider,{value:eh},v.createElement(E.Provider,{value:ep},v.createElement(x.Provider,{value:ef},v.createElement(w.Provider,{value:ed},n,S&&v.createElement(ts,{debounce:r,name:"scroll",onEvent:es,target:S}),S&&null!==h&&v.createElement(td,{name:"scrollTop",onEnd:et,target:S,value:h}))))))};ty.defaultProps={checkInterval:100,children:void 0,debounce:17,debug:void 0,initialScrollBehavior:"smooth",mode:void 0,nonce:void 0,scroller:function(){return 1/0}},ty.propTypes={checkInterval:b().number,children:b().any,debounce:b().number,debug:b().bool,initialScrollBehavior:b().oneOf(["auto","smooth"]),mode:b().oneOf(["bottom","top"]),nonce:b().string,scroller:b().func};var tx={height:"100%",overflowY:"auto",width:"100%"},tw=function(e){var t=e.children,n=e.className,r=(0,v.useContext)(k).setTarget,o=_()(tx);return v.createElement("div",{className:g()(o,(n||"")+""),ref:r},t)};tw.defaultProps={children:void 0,className:void 0},tw.propTypes={children:b().any,className:b().string};var tE={position:"relative"},tS=function(e){var t=e.children,n=e.className,r=e.followButtonClassName,o=e.scrollViewClassName,i=_()(tE);return v.createElement("div",{className:g()(i,(n||"")+"")},v.createElement(tw,{className:(o||"")+""},t),v.createElement(C,{className:(r||"")+""}))};tS.defaultProps={children:void 0,className:void 
0,followButtonClassName:void 0,scrollViewClassName:void 0},tS.propTypes={children:b().any,className:b().string,followButtonClassName:b().string,scrollViewClassName:b().string};var tk=function(e){var t=e.checkInterval,n=e.children,r=e.className,o=e.debounce,i=e.debug,a=e.followButtonClassName,l=e.initialScrollBehavior,s=e.mode,c=e.nonce,u=e.scroller,f=e.scrollViewClassName;return v.createElement(ty,{checkInterval:t,debounce:o,debug:i,initialScrollBehavior:l,mode:s,nonce:c,scroller:u},v.createElement(tS,{className:r,followButtonClassName:a,scrollViewClassName:f},n))};tk.defaultProps={checkInterval:void 0,children:void 0,className:void 0,debounce:void 0,debug:void 0,followButtonClassName:void 0,initialScrollBehavior:"smooth",mode:void 0,nonce:void 0,scroller:void 0,scrollViewClassName:void 0},tk.propTypes={checkInterval:b().number,children:b().any,className:b().string,debounce:b().number,debug:b().bool,followButtonClassName:b().string,initialScrollBehavior:b().oneOf(["auto","smooth"]),mode:b().oneOf(["bottom","top"]),nonce:b().string,scroller:b().func,scrollViewClassName:b().string};var t_=tk;!function(e,t){try{var r=n.g.document;if(void 0!==r&&r.createElement&&r.head&&r.head.appendChild){var o=r.querySelector('html meta[name="'.concat(encodeURI(e),'"]'))||r.createElement("meta");o.setAttribute("name",e),o.setAttribute("content",t),r.head.appendChild(o)}}catch(e){}}("react-scroll-to-bottom:version","4.2.0")},32580:function(e,t){var n;/*! - Copyright (c) 2018 Jed Watson. - Licensed under the MIT License (MIT), see - http://jedwatson.github.io/classnames -*/!function(){"use strict";var r={}.hasOwnProperty;function o(){for(var e=[],t=0;tt.indexOf(r)&&(n[r]=e[r]);if(null!=e&&"function"==typeof Object.getOwnPropertySymbols)for(var o=0,r=Object.getOwnPropertySymbols(e);ot.indexOf(r[o])&&Object.prototype.propertyIsEnumerable.call(e,r[o])&&(n[r[o]]=e[r[o]]);return n},s=function(e,t,n){var r="react-spinners-".concat(e,"-").concat(n);if("undefined"==typeof window||!window.document)return r;var o=document.createElement("style");document.head.appendChild(o);var i=o.sheet,a="\n @keyframes ".concat(r," {\n ").concat(t,"\n }\n ");return i&&i.insertRule(a,0),r}("BeatLoader","50% {transform: scale(0.75);opacity: 0.2} 100% {transform: scale(1);opacity: 1}","beat"),c=function(e){var t=e.loading,n=e.color,o=void 0===n?"#000000":n,c=e.speedMultiplier,u=void 0===c?1:c,f=e.cssOverride,d=e.size,p=void 0===d?15:d,h=e.margin,g=void 0===h?2:h,m=l(e,["loading","color","speedMultiplier","cssOverride","size","margin"]),b=a({display:"inherit"},void 0===f?{}:f),v=function(e){return{display:"inline-block",backgroundColor:o,width:i(p),height:i(p),margin:i(g),borderRadius:"100%",animation:"".concat(s," ").concat(.7/u,"s ").concat(e%2?"0s":"".concat(.35/u,"s")," infinite linear"),animationFillMode:"both"}};return void 0===t||t?r.createElement("span",a({style:b},m),r.createElement("span",{style:v(1)}),r.createElement("span",{style:v(2)}),r.createElement("span",{style:v(3)})):null}},85481:function(e,t,n){"use strict";n.d(t,{Ws:function(){return l}});var r,o=n(86006),i=function(){var e=0,t=null;return{add:function(o){if(0==e&&(t=function(){if(!document)return null;var e=document.createElement("style");e.type="text/css";var t=r||n.nc;return t&&e.setAttribute("nonce",t),e}())){var 
i,a;(i=t).styleSheet?i.styleSheet.cssText=o:i.appendChild(document.createTextNode(o)),a=t,(document.head||document.getElementsByTagName("head")[0]).appendChild(a)}e++},remove:function(){--e||!t||(t.parentNode&&t.parentNode.removeChild(t),t=null)}}},a=function(){var e=i();return function(t,n){o.useEffect(function(){return e.add(t),function(){e.remove()}},[t&&n])}},l=function(){var e=a();return function(t){return e(t.styles,t.dynamic),null}}},35036:function(e,t,n){"use strict";n.d(t,{Z:function(){return w}});var r=n(40431),o=n(86006),i=o.useLayoutEffect,a=function(e){var t=o.useRef(e);return i(function(){t.current=e}),t},l=function(e,t){if("function"==typeof e){e(t);return}e.current=t},s=function(e,t){var n=(0,o.useRef)();return(0,o.useCallback)(function(r){e.current=r,n.current&&l(n.current,null),n.current=t,t&&l(t,r)},[t])},c={"min-height":"0","max-height":"none",height:"0",visibility:"hidden",overflow:"hidden",position:"absolute","z-index":"-1000",top:"0",right:"0"},u=function(e){Object.keys(c).forEach(function(t){e.style.setProperty(t,c[t],"important")})},f=null,d=function(e,t){var n=e.scrollHeight;return"border-box"===t.sizingStyle.boxSizing?n+t.borderSize:n-t.paddingSize},p=function(){},h=["borderBottomWidth","borderLeftWidth","borderRightWidth","borderTopWidth","boxSizing","fontFamily","fontSize","fontStyle","fontWeight","letterSpacing","lineHeight","paddingBottom","paddingLeft","paddingRight","paddingTop","tabSize","textIndent","textRendering","textTransform","width","wordBreak"],g=!!document.documentElement.currentStyle,m=function(e){var t=window.getComputedStyle(e);if(null===t)return null;var n=h.reduce(function(e,n){return e[n]=t[n],e},{}),r=n.boxSizing;if(""===r)return null;g&&"border-box"===r&&(n.width=parseFloat(n.width)+parseFloat(n.borderRightWidth)+parseFloat(n.borderLeftWidth)+parseFloat(n.paddingRight)+parseFloat(n.paddingLeft)+"px");var o=parseFloat(n.paddingBottom)+parseFloat(n.paddingTop),i=parseFloat(n.borderBottomWidth)+parseFloat(n.borderTopWidth);return{sizingStyle:n,paddingSize:o,borderSize:i}};function b(e,t,n){var r=a(n);o.useLayoutEffect(function(){var n=function(e){return r.current(e)};if(e)return e.addEventListener(t,n),function(){return e.removeEventListener(t,n)}},[])}var v=function(e){b(window,"resize",e)},y=function(e){b(document.fonts,"loadingdone",e)},x=["cacheMeasurements","maxRows","minRows","onChange","onHeightChange"],w=o.forwardRef(function(e,t){var n=e.cacheMeasurements,i=e.maxRows,a=e.minRows,l=e.onChange,c=void 0===l?p:l,h=e.onHeightChange,g=void 0===h?p:h,b=function(e,t){if(null==e)return{};var n,r,o={},i=Object.keys(e);for(r=0;r=0||(o[n]=e[n]);return o}(e,x),w=void 0!==b.value,E=o.useRef(null),S=s(E,t),k=o.useRef(0),_=o.useRef(),O=function(){var e,t,r,o,l,s,c,p,h,b,v,y=E.current,x=n&&_.current?_.current:m(y);if(x){_.current=x;var w=(e=y.value||y.placeholder||"x",void 0===(t=a)&&(t=1),void 0===(r=i)&&(r=1/0),f||((f=document.createElement("textarea")).setAttribute("tabindex","-1"),f.setAttribute("aria-hidden","true"),u(f)),null===f.parentNode&&document.body.appendChild(f),o=x.paddingSize,l=x.borderSize,c=(s=x.sizingStyle).boxSizing,Object.keys(s).forEach(function(e){f.style[e]=s[e]}),u(f),f.value=e,p=d(f,x),f.value=e,p=d(f,x),f.value="x",b=(h=f.scrollHeight-o)*t,"border-box"===c&&(b=b+o+l),p=Math.max(b,p),v=h*r,"border-box"===c&&(v=v+o+l),[p=Math.min(v,p),h]),S=w[0],O=w[1];k.current!==S&&(k.current=S,y.style.setProperty("height",S+"px","important"),g(S,{rowHeight:O}))}};return 
o.useLayoutEffect(O),v(O),y(O),o.createElement("textarea",(0,r.Z)({},b,{onChange:function(e){w||O(),c(e)},ref:S}))})},18160:function(e,t,n){"use strict";t.b=void 0;let r=n(9268),o=n(86006),i="undefined"==typeof window,a=!i&&(()=>{try{return"ontouchstart"in window||navigator.maxTouchPoints}catch(e){return!1}})(),l=!i&&(()=>{try{return window.CSS.supports("overflow-anchor: auto")}catch(e){return!1}})(),s=a&&!l,c={top:"top",bottom:"bottom",clientHeight:"clientHeight",scrollHeight:"scrollHeight",scrollTop:"scrollTop",overflowY:"overflowY",height:"height",minHeight:"minHeight",maxHeight:"maxHeight",marginTop:"marginTop"},u={top:"left",bottom:"right",scrollHeight:"scrollWidth",clientHeight:"clientWidth",scrollTop:"scrollLeft",overflowY:"overflowX",minHeight:"minWidth",height:"width",maxHeight:"maxWidth",marginTop:"marginLeft"},f=(e,t,n=1/0)=>Math.max(Math.min(t,n),e),d=(e,t,n)=>Math.ceil(Math.abs(e-t)/n),p=i?o.useEffect:o.useLayoutEffect,h=(e,t,n)=>{let r=[];for(let o=e;o{let i=n,a=e;for(;a&&a!==t;){if(o(a,i))return[a,i];r?(i++,a=a.nextSibling):(i--,a=a.previousSibling)}return[null,-1]},m=/auto|scroll/gi,b=(e,t)=>{if(!t||t===document.body||t===document.documentElement)return document.documentElement;let n=window.getComputedStyle(t);return m.test(n[e.overflowY])||m.test(n.overflow)?t:b(e,t.parentNode)},v=(e,t,n=0)=>({padding:0,margin:0,border:"none",visibility:"hidden",overflowAnchor:"none",[e.minHeight]:t,[e.height]:t,[e.maxHeight]:t,[e.marginTop]:n});t.b=(0,o.forwardRef)(({items:e=[],count:t,children:n,viewportRef:i,itemSize:m=0,itemMargin:y=-1,overscan:x=1,axis:w="y",initialIndex:E=-1,initialAlignToTop:S=!0,initialOffset:k=0,initialDelay:_=-1,initialPrerender:O=0,onViewportIndexesChange:C,overflowAnchor:A="auto",withCache:N=!0,scrollThreshold:R=0,renderSpacer:T=({ref:e,style:t})=>(0,r.jsx)("div",{ref:e,style:t},void 0),indexesShift:P=0,getItemBoundingClientRect:M=e=>e.getBoundingClientRect()},j)=>{let L;let I="y"===w?c:u,D="number"==typeof t,F=(D?t:e.length)-1,[[B,z],$]=(0,o.useState)(()=>[f(0,m),f(-1,y)]),U=f(0,B+z),H=f(0,Math.ceil(x*U)),[Z,q]=(0,o.useState)([E-O,E+O]),V=(0,o.useRef)(null),W=(0,o.useRef)(-1),G=(0,o.useRef)(null),K=(0,o.useRef)(null),Y=(0,o.useRef)(!1),X=(0,o.useRef)(P),J=(0,o.useRef)([]),Q=(0,o.useRef)(E>=0?{index:E,alignToTop:S,offset:k,delay:_,prerender:O}:null),ee=(0,o.useRef)(null),et=(0,o.useRef)(0),en=(0,o.useRef)([-1,-1]),er=(0,o.useRef)(null),[eo,ei]=(0,o.useMemo)(()=>{Z[0]=f(0,Z[0],F),Z[1]=f(Z[0],Z[1],F);let e=P-X.current;X.current=P;let t=G.current;return t&&e&&(Z[0]=f(0,Z[0]+e,F),Z[1]=f(Z[0],Z[1]+e,F),V.current=t.nextSibling,W.current=Z[0],Y.current=!0),Z},[P,Z,F]),ea=(0,o.useMemo)(()=>v(I,(N?J.current:[]).slice(0,eo).reduce((e,t)=>e+(t-B),eo*U),et.current),[I,N,eo,U,B]),el=(0,o.useMemo)(()=>v(I,(N?J.current:[]).slice(ei+1,F+1).reduce((e,t)=>e+(t-B),U*(F-ei))),[I,N,ei,F,U,B]),es=(0,o.useMemo)(()=>{let e=null;return()=>{if(i)return i.current===document.body?document.documentElement:i.current;if(e&&e.isConnected)return e;let t=G.current;return t?e=b(I,t.parentNode):null}},[I,i]),ec=(0,o.useRef)(()=>{}),eu=(0,o.useRef)(()=>({index:-1,offset:0}));return p(()=>{ec.current=()=>{let e=es(),t=G.current,n=K.current;if(!e||!t||!n)return;let 
r=t.nextSibling,o=n.previousSibling,i=e.getBoundingClientRect(),a=t.getBoundingClientRect(),l=n.getBoundingClientRect(),c={[I.top]:e===document.documentElement?0:i[I.top],[I.bottom]:e===document.documentElement?document.documentElement[I.clientHeight]:i[I.bottom]},u={[I.top]:c[I.top]-H,[I.bottom]:c[I.bottom]+H};if(et.current<0&&a[I.top]-et.current>=u[I.top]||et.current>0&&a[I.top]>=u[I.top]||et.current&&Q.current){t.style[I.marginTop]="0px",e.style[I.overflowY]="hidden",e[I.scrollTop]+=-et.current,e.style[I.overflowY]="",et.current=0;return}if(0===B||-1===z){let e=0;if(g({fromElement:r,toElement:n,fromIndex:eo,compare:t=>(e+=M(t)[I.height],!1)}),!e)return;let t=ei-eo+1,o=0===B?Math.ceil(e/t):B,i=-1===z?Math.ceil((l[I.top]-a[I.bottom]-e)/t):z;$([o,i]);return}if(ee.current)return;if(Q.current){let t=f(0,Q.current.index,F);if(tei){q([t-Q.current.prerender,t+Q.current.prerender]);return}let[o]=g({fromElement:r,toElement:n,fromIndex:eo,compare:(e,n)=>n===t});if(!o)return;let{alignToTop:i,offset:a,delay:l}=Q.current;Q.current=null;let u=()=>{let t=M(o),n=i?t[I.top]-c[I.top]+a:t[I.bottom]-c[I.top]-e[I.clientHeight]+a;e[I.scrollTop]+=n,ee.current=null},d=l<0&&s?30:l;if(d>0){ee.current=setTimeout(u,d);return}u();return}if(null===er.current)er.current=e.scrollTop;else if(er.current!==e.scrollTop){let t=Math.abs(e.scrollTop-er.current);if(er.current=e.scrollTop,R>0&&t>R)return}let p=r===n?n:r.nextSibling,h=o===t?t:o.previousSibling,m=Math.ceil((l[I.top]-a[I.bottom])/(ei+1-eo)),b=a[I.bottom]>u[I.bottom],v=l[I.top]u[I.top],x=!b&&!v&&l[I.top]u[I.bottom],E=!b&&!v&&(p===n?l:M(p))[I.top]M(e)[I.bottom]<=u[I.bottom]});-1!==e&&(k=e+1)}if(E){let[,e]=g({fromElement:r,toElement:n,fromIndex:eo,compare:e=>M(e)[I.top]>=u[I.top]});-1!==e&&(S=e-1)}if(C){let[,e]=g({fromElement:r,toElement:n,fromIndex:eo,compare:e=>M(e)[I.bottom]>c[I.top]});-1===e&&(e=eo);let[,i]=g({fromElement:o,toElement:t,fromIndex:ei,asc:!1,compare:e=>M(e)[I.top]=S)V.current=r,W.current=eo;else{let[e,t]=g({fromElement:r,toElement:n,fromIndex:eo,compare:(e,t)=>{if(t===S)return!0;let n=M(e);return n[I.height]!==B&&(J.current[t]=n[I.height]),!1}});e?(V.current=e,W.current=t):(V.current=o,W.current=ei)}}q([S,k])}},eu.current=()=>{let e=es(),t=G.current,n=K.current,r=-1,o=0;if(!e||!t||!n)return{index:r,offset:o};let i=t.nextSibling,a=e.getBoundingClientRect(),l={[I.top]:e===document.documentElement?0:a[I.top],[I.bottom]:e===document.documentElement?document.documentElement[I.clientHeight]:a[I.bottom]};return g({fromElement:i,toElement:n,fromIndex:eo,compare:(e,t)=>{let n=M(e);return r=t,o=l[I.top]-n[I.top],n[I.bottom]>l[I.top]}}),{index:r,offset:o}}}),V.current&&es()&&G.current&&(L=M(V.current)[I.top]-(es()===document.documentElement?0:es().getBoundingClientRect()[I.top])),p(()=>{V.current=null;let e=W.current,t=Y.current;W.current=-1,Y.current=!1;let n=es(),r=G.current,o=K.current;if(-1===e||!n||!r||!o||void 0===L||l&&"none"!==A&&!t)return;let i=null;if(e>=eo&&e<=ei){let[t]=g({fromElement:r.nextSibling,toElement:o,fromIndex:eo,compare:(t,n)=>n===e});t&&(i=M(t)[I.top])}else ee+(t-B),e*U):e<=F&&(i=o.getBoundingClientRect()[I.top]+(N?J.current:[]).slice(ei+1,e).reduce((e,t)=>e+(t-B),U*(e-1-ei)));if(null===i)return;let s=i-(n===document.documentElement?0:n.getBoundingClientRect()[I.top])-L;if(s){if(a){et.current-=s,r.style[I.marginTop]=`${et.current}px`;return}n[I.scrollTop]+=s}},[eo]),p(()=>{let e;let t=()=>{e=requestAnimationFrame(t),ec.current()};return 
t(),()=>{cancelAnimationFrame(e),ee.current&&clearTimeout(ee.current)}},[]),(0,o.useImperativeHandle)(j,()=>({scrollToIndex:({index:e=-1,alignToTop:t=!0,offset:n=0,delay:r=-1,prerender:o=0})=>{Q.current={index:e,alignToTop:t,offset:n,delay:r,prerender:o},ec.current()},getScrollPosition:()=>eu.current()}),[]),(0,r.jsxs)(o.Fragment,{children:[T({ref:G,style:ea,type:"top"}),(!!t||!!e.length)&&h(eo,ei+1,D?n:t=>n(e[t],t,e)),T({ref:K,style:el,type:"bottom"})]},void 0)})},99231:function(e,t,n){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),t.autoprefix=void 0;var r,o=(r=n(17766))&&r.__esModule?r:{default:r},i=Object.assign||function(e){for(var t=1;t1&&void 0!==arguments[1]?arguments[1]:"span";return function(n){function r(){!function(e,t){if(!(e instanceof t))throw TypeError("Cannot call a class as a function")}(this,r);for(var n,l,s,c=arguments.length,u=Array(c),f=0;f1&&void 0!==arguments[1]?arguments[1]:"span";return function(n){function r(){!function(e,t){if(!(e instanceof t))throw TypeError("Cannot call a class as a function")}(this,r);for(var n,l,s,c=arguments.length,u=Array(c),f=0;f0&&void 0!==arguments[0]?arguments[0]:[],n=[];return(0,a.default)(t,function(t){Array.isArray(t)?e(t).map(function(e){return n.push(e)}):(0,i.default)(t)?(0,o.default)(t,function(e,t){!0===e&&n.push(t),n.push(t+"-"+e)}):(0,r.default)(t)&&n.push(t)}),n};t.default=s},25319:function(e,t,n){"use strict";t.tz=void 0;var r=c(n(83378)),o=c(n(26189)),i=c(n(99231)),a=c(n(79071)),l=c(n(84913)),s=c(n(71906));function c(e){return e&&e.__esModule?e:{default:e}}a.default,t.tz=a.default,l.default,s.default,t.ZP=function(e){for(var t=arguments.length,n=Array(t>1?t-1:0),a=1;a1)||void 0===arguments[1]||arguments[1];n[e]=t};return 0===e&&r("first-child"),e===t-1&&r("last-child"),(0===e||e%2==0)&&r("even"),1===Math.abs(e%2)&&r("odd"),r("nth-child",e),n}},26189:function(e,t,n){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),t.mergeClasses=void 0;var r=a(n(17766)),o=a(n(48797)),i=Object.assign||function(e){for(var t=1;t1&&void 0!==arguments[1]?arguments[1]:[],n=e.default&&(0,o.default)(e.default)||{};return t.map(function(t){var o=e[t];return o&&(0,r.default)(o,function(e,t){n[t]||(n[t]={}),n[t]=i({},n[t],o[t])}),t}),n};t.default=l},72093:function(e,t,n){var r=n(24645);function o(e,t){var n,o,i,a=null;if(!e||"string"!=typeof e)return a;for(var l=r(e),s="function"==typeof t,c=0,u=l.length;c - * @license MIT - */e.exports=function(e){return null!=e&&null!=e.constructor&&"function"==typeof e.constructor.isBuffer&&e.constructor.isBuffer(e)}},83940:function(e,t,n){"use strict";n.d(t,{q:function(){return o}});var r=n(86006);function o(e,t){var n,o,i;return n=t||null,o=function(t){return e.forEach(function(e){return"function"==typeof e?e(t):e&&(e.current=t),e})},(i=(0,r.useState)(function(){return{value:n,callback:o,facade:{get current(){return i.value},set current(value){var e=i.value;e!==value&&(i.value=value,i.callback(value,e))}}}})[0]).callback=o,i.facade}},11503:function(e,t,n){"use strict";n.d(t,{L:function(){return a}});var r=n(78466),o=n(86006),i=function(e){var t=e.sideCar,n=(0,r._T)(e,["sideCar"]);if(!t)throw Error("Sidecar: please provide `sideCar` property to import the right car");var i=t.read();if(!i)throw Error("Sidecar medium not found");return o.createElement(i,(0,r.pi)({},n))};function a(e,t){return e.useMedium(t),i}i.isSideCarExport=!0},37445:function(e,t,n){"use strict";n.d(t,{_:function(){return i}});var r=n(78466);function o(e){return e}function i(e){void 0===e&&(e={});var t,n,i,a=(void 
0===t&&(t=o),n=[],i=!1,{read:function(){if(i)throw Error("Sidecar: could not `read` from an `assigned` medium. `read` could be used only with `useMedium`.");return n.length?n[n.length-1]:null},useMedium:function(e){var r=t(e,i);return n.push(r),function(){n=n.filter(function(e){return e!==r})}},assignSyncMedium:function(e){for(i=!0;n.length;){var t=n;n=[],t.forEach(e)}n={push:function(t){return e(t)},filter:function(){return n}}},assignMedium:function(e){i=!0;var t=[];if(n.length){var r=n;n=[],r.forEach(e),t=n}var o=function(){var n=t;t=[],n.forEach(e)},a=function(){return Promise.resolve().then(o)};a(),n={push:function(e){t.push(e),a()},filter:function(e){return t=t.filter(e),n}}}});return a.options=(0,r.pi)({async:!0,ssr:!1},e),a}},98727:function(e,t,n){"use strict";/** - * @license React - * use-sync-external-store-shim.production.min.js - * - * Copyright (c) Facebook, Inc. and its affiliates. - * - * This source code is licensed under the MIT license found in the - * LICENSE file in the root directory of this source tree. - */var r=n(86006),o="function"==typeof Object.is?Object.is:function(e,t){return e===t&&(0!==e||1/e==1/t)||e!=e&&t!=t},i=r.useState,a=r.useEffect,l=r.useLayoutEffect,s=r.useDebugValue;function c(e){var t=e.getSnapshot;e=e.value;try{var n=t();return!o(e,n)}catch(e){return!0}}var u="undefined"==typeof window||void 0===window.document||void 0===window.document.createElement?function(e,t){return t()}:function(e,t){var n=t(),r=i({inst:{value:n,getSnapshot:t}}),o=r[0].inst,u=r[1];return l(function(){o.value=n,o.getSnapshot=t,c(o)&&u({inst:o})},[e,n,t]),a(function(){return c(o)&&u({inst:o}),e(function(){c(o)&&u({inst:o})})},[e]),s(n),n};t.useSyncExternalStore=void 0!==r.useSyncExternalStore?r.useSyncExternalStore:u},94464:function(e,t,n){"use strict";/** - * @license React - * use-sync-external-store-shim/with-selector.production.min.js - * - * Copyright (c) Facebook, Inc. and its affiliates. - * - * This source code is licensed under the MIT license found in the - * LICENSE file in the root directory of this source tree. - */var r=n(86006),o=n(3276),i="function"==typeof Object.is?Object.is:function(e,t){return e===t&&(0!==e||1/e==1/t)||e!=e&&t!=t},a=o.useSyncExternalStore,l=r.useRef,s=r.useEffect,c=r.useMemo,u=r.useDebugValue;t.useSyncExternalStoreWithSelector=function(e,t,n,r,o){var f=l(null);if(null===f.current){var d={hasValue:!1,value:null};f.current=d}else d=f.current;f=c(function(){function e(e){if(!s){if(s=!0,a=e,e=r(e),void 0!==o&&d.hasValue){var t=d.value;if(o(t,e))return l=t}return l=e}if(t=l,i(a,e))return t;var n=r(e);return void 0!==o&&o(t,n)?t:(a=e,l=n)}var a,l,s=!1,c=void 0===n?null:n;return[function(){return e(t())},null===c?void 0:function(){return e(c())}]},[t,n,r,o]);var p=a(e,f[0],f[1]);return s(function(){d.hasValue=!0,d.value=p},[p]),u(p),p}},3276:function(e,t,n){"use strict";e.exports=n(98727)},97737:function(e,t,n){"use strict";e.exports=n(94464)},86462:function(e,t,n){"use strict";let r;n.d(t,{Z:function(){return c}});let o="undefined"!=typeof crypto&&crypto.randomUUID&&crypto.randomUUID.bind(crypto);var i={randomUUID:o};let a=new Uint8Array(16);function l(){if(!r&&!(r="undefined"!=typeof crypto&&crypto.getRandomValues&&crypto.getRandomValues.bind(crypto)))throw Error("crypto.getRandomValues() not supported. 
See https://github.com/uuidjs/uuid#getrandomvalues-not-supported");return r(a)}let s=[];for(let e=0;e<256;++e)s.push((e+256).toString(16).slice(1));var c=function(e,t,n){if(i.randomUUID&&!t&&!e)return i.randomUUID();e=e||{};let r=e.random||(e.rng||l)();if(r[6]=15&r[6]|64,r[8]=63&r[8]|128,t){n=n||0;for(let e=0;e<16;++e)t[n+e]=r[e];return t}return function(e,t=0){return(s[e[t+0]]+s[e[t+1]]+s[e[t+2]]+s[e[t+3]]+"-"+s[e[t+4]]+s[e[t+5]]+"-"+s[e[t+6]]+s[e[t+7]]+"-"+s[e[t+8]]+s[e[t+9]]+"-"+s[e[t+10]]+s[e[t+11]]+s[e[t+12]]+s[e[t+13]]+s[e[t+14]]+s[e[t+15]]).toLowerCase()}(r)}},16394:function(e){/*! - * Determine if an object is a Buffer - * - * @author Feross Aboukhadijeh - * @license MIT - */e.exports=function(e){return null!=e&&null!=e.constructor&&"function"==typeof e.constructor.isBuffer&&e.constructor.isBuffer(e)}},75478:function(e){e.exports={area:!0,base:!0,br:!0,col:!0,embed:!0,hr:!0,img:!0,input:!0,link:!0,meta:!0,param:!0,source:!0,track:!0,wbr:!0}},56509:function(e,t,n){"use strict";function r(e,t){return Array(t+1).join(e)}n.r(t);var o,i,a=["ADDRESS","ARTICLE","ASIDE","AUDIO","BLOCKQUOTE","BODY","CANVAS","CENTER","DD","DIR","DIV","DL","DT","FIELDSET","FIGCAPTION","FIGURE","FOOTER","FORM","FRAMESET","H1","H2","H3","H4","H5","H6","HEADER","HGROUP","HR","HTML","ISINDEX","LI","MAIN","MENU","NAV","NOFRAMES","NOSCRIPT","OL","OUTPUT","P","PRE","SECTION","TABLE","TBODY","TD","TFOOT","TH","THEAD","TR","UL"];function l(e){return f(e,a)}var s=["AREA","BASE","BR","COL","COMMAND","EMBED","HR","IMG","INPUT","KEYGEN","LINK","META","PARAM","SOURCE","TRACK","WBR"];function c(e){return f(e,s)}var u=["A","TABLE","THEAD","TBODY","TFOOT","TH","TD","IFRAME","SCRIPT","AUDIO","VIDEO"];function f(e,t){return t.indexOf(e.nodeName)>=0}function d(e,t){return e.getElementsByTagName&&t.some(function(t){return e.getElementsByTagName(t).length})}var p={};function h(e){return e?e.replace(/(\n+\s*)+/g,"\n"):""}function g(e){for(var t in this.options=e,this._keep=[],this._remove=[],this.blankRule={replacement:e.blankReplacement},this.keepReplacement=e.keepReplacement,this.defaultRule={replacement:e.defaultReplacement},this.array=[],e.rules)this.array.push(e.rules[t])}function m(e,t,n){for(var r=0;r-1)return!0}else if("function"==typeof r){if(r.call(e,t,n))return!0}else throw TypeError("`filter` needs to be a string, array, or function")}(o,t,n))return o}}function b(e){var t=e.nextSibling||e.parentNode;return e.parentNode.removeChild(e),t}function v(e,t,n){return e&&e.parentNode===t||n(t)?t.nextSibling||t.parentNode:t.firstChild||t.nextSibling||t.parentNode}p.paragraph={filter:"p",replacement:function(e){return"\n\n"+e+"\n\n"}},p.lineBreak={filter:"br",replacement:function(e,t,n){return n.br+"\n"}},p.heading={filter:["h1","h2","h3","h4","h5","h6"],replacement:function(e,t,n){var o=Number(t.nodeName.charAt(1));if("setext"!==n.headingStyle||!(o<3))return"\n\n"+r("#",o)+" "+e+"\n\n";var i=r(1===o?"=":"-",e.length);return"\n\n"+e+"\n"+i+"\n\n"}},p.blockquote={filter:"blockquote",replacement:function(e){return"\n\n"+(e=(e=e.replace(/^\n+|\n+$/g,"")).replace(/^/gm,"> "))+"\n\n"}},p.list={filter:["ul","ol"],replacement:function(e,t){var n=t.parentNode;return"LI"===n.nodeName&&n.lastElementChild===t?"\n"+e:"\n\n"+e+"\n\n"}},p.listItem={filter:"li",replacement:function(e,t,n){e=e.replace(/^\n+/,"").replace(/\n+$/,"\n").replace(/\n/gm,"\n ");var r=n.bulletListMarker+" ",o=t.parentNode;if("OL"===o.nodeName){var i=o.getAttribute("start"),a=Array.prototype.indexOf.call(o.children,t);r=(i?Number(i)+a:a+1)+". 
"}return r+e+(t.nextSibling&&!/\n$/.test(e)?"\n":"")}},p.indentedCodeBlock={filter:function(e,t){return"indented"===t.codeBlockStyle&&"PRE"===e.nodeName&&e.firstChild&&"CODE"===e.firstChild.nodeName},replacement:function(e,t,n){return"\n\n "+t.firstChild.textContent.replace(/\n/g,"\n ")+"\n\n"}},p.fencedCodeBlock={filter:function(e,t){return"fenced"===t.codeBlockStyle&&"PRE"===e.nodeName&&e.firstChild&&"CODE"===e.firstChild.nodeName},replacement:function(e,t,n){for(var o,i=((t.firstChild.getAttribute("class")||"").match(/language-(\S+)/)||[null,""])[1],a=t.firstChild.textContent,l=n.fence.charAt(0),s=3,c=RegExp("^"+l+"{3,}","gm");o=c.exec(a);)o[0].length>=s&&(s=o[0].length+1);var u=r(l,s);return"\n\n"+u+i+"\n"+a.replace(/\n$/,"")+"\n"+u+"\n\n"}},p.horizontalRule={filter:"hr",replacement:function(e,t,n){return"\n\n"+n.hr+"\n\n"}},p.inlineLink={filter:function(e,t){return"inlined"===t.linkStyle&&"A"===e.nodeName&&e.getAttribute("href")},replacement:function(e,t){var n=t.getAttribute("href"),r=h(t.getAttribute("title"));return r&&(r=' "'+r+'"'),"["+e+"]("+n+r+")"}},p.referenceLink={filter:function(e,t){return"referenced"===t.linkStyle&&"A"===e.nodeName&&e.getAttribute("href")},replacement:function(e,t,n){var r,o,i=t.getAttribute("href"),a=h(t.getAttribute("title"));switch(a&&(a=' "'+a+'"'),n.linkReferenceStyle){case"collapsed":r="["+e+"][]",o="["+e+"]: "+i+a;break;case"shortcut":r="["+e+"]",o="["+e+"]: "+i+a;break;default:var l=this.references.length+1;r="["+e+"]["+l+"]",o="["+l+"]: "+i+a}return this.references.push(o),r},references:[],append:function(e){var t="";return this.references.length&&(t="\n\n"+this.references.join("\n")+"\n\n",this.references=[]),t}},p.emphasis={filter:["em","i"],replacement:function(e,t,n){return e.trim()?n.emDelimiter+e+n.emDelimiter:""}},p.strong={filter:["strong","b"],replacement:function(e,t,n){return e.trim()?n.strongDelimiter+e+n.strongDelimiter:""}},p.code={filter:function(e){var t=e.previousSibling||e.nextSibling,n="PRE"===e.parentNode.nodeName&&!t;return"CODE"===e.nodeName&&!n},replacement:function(e){if(!e)return"";e=e.replace(/\r?\n|\r/g," ");for(var t=/^`|^ .*?[^ ].* $|`$/.test(e)?" ":"",n="`",r=e.match(/`+/gm)||[];-1!==r.indexOf(n);)n+="`";return n+t+e+t+n}},p.image={filter:"img",replacement:function(e,t){var n=h(t.getAttribute("alt")),r=t.getAttribute("src")||"",o=h(t.getAttribute("title"));return r?"!["+n+"]("+r+(o?' 
"'+o+'"':"")+")":""}},g.prototype={add:function(e,t){this.array.unshift(t)},keep:function(e){this._keep.unshift({filter:e,replacement:this.keepReplacement})},remove:function(e){this._remove.unshift({filter:e,replacement:function(){return""}})},forNode:function(e){var t;return e.isBlank?this.blankRule:(t=m(this.array,e,this.options))||(t=m(this._keep,e,this.options))||(t=m(this._remove,e,this.options))?t:this.defaultRule},forEach:function(e){for(var t=0;t'+e+"","text/html").getElementById("turndown-root"):e.cloneNode(!0),isBlock:l,isVoid:c,isPre:t.preformattedCode?E:null}),n}function E(e){return"PRE"===e.nodeName||"CODE"===e.nodeName}function S(e,t){return e.isBlock=l(e),e.isCode="CODE"===e.nodeName||e.parentNode.isCode,e.isBlank=!c(e)&&!f(e,u)&&/^\s*$/i.test(e.textContent)&&!d(e,s)&&!d(e,u),e.flankingWhitespace=function(e,t){if(e.isBlock||t.preformattedCode&&e.isCode)return{leading:"",trailing:""};var n,r={leading:(n=e.textContent.match(/^(([ \t\r\n]*)(\s*))(?:(?=\S)[\s\S]*\S)?((\s*?)([ \t\r\n]*))$/))[1],leadingAscii:n[2],leadingNonAscii:n[3],trailing:n[4],trailingNonAscii:n[5],trailingAscii:n[6]};return r.leadingAscii&&k("left",e,t)&&(r.leading=r.leadingNonAscii),r.trailingAscii&&k("right",e,t)&&(r.trailing=r.trailingNonAscii),{leading:r.leading,trailing:r.trailing}}(e,t),e}function k(e,t,n){var r,o,i;return"left"===e?(r=t.previousSibling,o=/ $/):(r=t.nextSibling,o=/^ /),r&&(3===r.nodeType?i=o.test(r.nodeValue):n.preformattedCode&&"CODE"===r.nodeName?i=!1:1!==r.nodeType||l(r)||(i=o.test(r.textContent))),i}var _=Array.prototype.reduce;function O(e){if(!(this instanceof O))return new O(e);this.options=function(e){for(var t=1;t0&&"\n"===e[t-1];)t--;return e.substring(0,t)}(e),r=t.replace(/^\n*/,""),o=Math.max(e.length-n.length,t.length-r.length);return n+"\n\n".substring(0,o)+r}O.prototype={turndown:function(e){if(!(null!=e&&("string"==typeof e||e.nodeType&&(1===e.nodeType||9===e.nodeType||11===e.nodeType))))throw TypeError(e+" is not a string, or an element/document/fragment node.");return""===e?"":A.call(this,C.call(this,new w(e,this.options)))},use:function(e){if(Array.isArray(e))for(var t=0;t/g,">").replace(/"/g,""").replace(/'/g,"'")}function r(e,...t){let n=Object.create(null);for(let t in e)n[t]=e[t];return t.forEach(function(e){for(let t in e)n[t]=e[t]}),n}let o=e=>!!e.scope,i=(e,{prefix:t})=>{if(e.startsWith("language:"))return e.replace("language:","language-");if(e.includes(".")){let n=e.split(".");return[`${t}${n.shift()}`,...n.map((e,t)=>`${e}${"_".repeat(t+1)}`)].join(" ")}return`${t}${e}`};class a{constructor(e,t){this.buffer="",this.classPrefix=t.classPrefix,e.walk(this)}addText(e){this.buffer+=n(e)}openNode(e){if(!o(e))return;let t=i(e.scope,{prefix:this.classPrefix});this.span(t)}closeNode(e){o(e)&&(this.buffer+="")}value(){return this.buffer}span(e){this.buffer+=``}}let l=(e={})=>{let t={children:[]};return Object.assign(t,e),t};class s{constructor(){this.rootNode=l(),this.stack=[this.rootNode]}get top(){return this.stack[this.stack.length-1]}get root(){return this.rootNode}add(e){this.top.children.push(e)}openNode(e){let t=l({scope:e});this.add(t),this.stack.push(t)}closeNode(){if(this.stack.length>1)return this.stack.pop()}closeAllNodes(){for(;this.closeNode(););}toJSON(){return JSON.stringify(this.rootNode,null,4)}walk(e){return this.constructor._walk(e,this.rootNode)}static _walk(e,t){return"string"==typeof t?e.addText(t):t.children&&(e.openNode(t),t.children.forEach(t=>this._walk(e,t)),e.closeNode(t)),e}static _collapse(e){"string"!=typeof 
e&&e.children&&(e.children.every(e=>"string"==typeof e)?e.children=[e.children.join("")]:e.children.forEach(e=>{s._collapse(e)}))}}class c extends s{constructor(e){super(),this.options=e}addText(e){""!==e&&this.add(e)}startScope(e){this.openNode(e)}endScope(){this.closeNode()}__addSublanguage(e,t){let n=e.root;t&&(n.scope=`language:${t}`),this.add(n)}toHTML(){let e=new a(this,this.options);return e.value()}finalize(){return this.closeAllNodes(),!0}}function u(e){return e?"string"==typeof e?e:e.source:null}function f(e){return h("(?=",e,")")}function d(e){return h("(?:",e,")*")}function p(e){return h("(?:",e,")?")}function h(...e){let t=e.map(e=>u(e)).join("");return t}function g(...e){let t=function(e){let t=e[e.length-1];return"object"==typeof t&&t.constructor===Object?(e.splice(e.length-1,1),t):{}}(e),n="("+(t.capture?"":"?:")+e.map(e=>u(e)).join("|")+")";return n}function m(e){return RegExp(e.toString()+"|").exec("").length-1}let b=/\[(?:[^\\\]]|\\.)*\]|\(\??|\\([1-9][0-9]*)|\\./;function v(e,{joinWith:t}){let n=0;return e.map(e=>{n+=1;let t=n,r=u(e),o="";for(;r.length>0;){let e=b.exec(r);if(!e){o+=r;break}o+=r.substring(0,e.index),r=r.substring(e.index+e[0].length),"\\"===e[0][0]&&e[1]?o+="\\"+String(Number(e[1])+t):(o+=e[0],"("===e[0]&&n++)}return o}).map(e=>`(${e})`).join(t)}let y="[a-zA-Z]\\w*",x="[a-zA-Z_]\\w*",w="\\b\\d+(\\.\\d+)?",E="(-?)(\\b0[xX][a-fA-F0-9]+|(\\b\\d+(\\.\\d*)?|\\.\\d+)([eE][-+]?\\d+)?)",S="\\b(0b[01]+)",k={begin:"\\\\[\\s\\S]",relevance:0},_=function(e,t,n={}){let o=r({scope:"comment",begin:e,end:t,contains:[]},n);o.contains.push({scope:"doctag",begin:"[ ]*(?=(TODO|FIXME|NOTE|BUG|OPTIMIZE|HACK|XXX):)",end:/(TODO|FIXME|NOTE|BUG|OPTIMIZE|HACK|XXX):/,excludeBegin:!0,relevance:0});let i=g("I","a","is","so","us","to","at","if","in","it","on",/[A-Za-z]+['](d|ve|re|ll|t|s|n)/,/[A-Za-z]+[-][a-z]+/,/[A-Za-z][a-z]{2,}/);return o.contains.push({begin:h(/[ ]+/,"(",i,/[.]?[:]?([.][ ]|[ ])/,"){3}")}),o},O=_("//","$"),C=_("/\\*","\\*/"),A=_("#","$");var N=Object.freeze({__proto__:null,MATCH_NOTHING_RE:/\b\B/,IDENT_RE:y,UNDERSCORE_IDENT_RE:x,NUMBER_RE:w,C_NUMBER_RE:E,BINARY_NUMBER_RE:S,RE_STARTERS_RE:"!|!=|!==|%|%=|&|&&|&=|\\*|\\*=|\\+|\\+=|,|-|-=|/=|/|:|;|<<|<<=|<=|<|===|==|=|>>>=|>>=|>=|>>>|>>|>|\\?|\\[|\\{|\\(|\\^|\\^=|\\||\\|=|\\|\\||~",SHEBANG:(e={})=>{let t=/^#![ ]*\//;return e.binary&&(e.begin=h(t,/.*\b/,e.binary,/\b.*/)),r({scope:"meta",begin:t,end:/$/,relevance:0,"on:begin":(e,t)=>{0!==e.index&&t.ignoreMatch()}},e)},BACKSLASH_ESCAPE:k,APOS_STRING_MODE:{scope:"string",begin:"'",end:"'",illegal:"\\n",contains:[k]},QUOTE_STRING_MODE:{scope:"string",begin:'"',end:'"',illegal:"\\n",contains:[k]},PHRASAL_WORDS_MODE:{begin:/\b(a|an|the|are|I'm|isn't|don't|doesn't|won't|but|just|should|pretty|simply|enough|gonna|going|wtf|so|such|will|you|your|they|like|more)\b/},COMMENT:_,C_LINE_COMMENT_MODE:O,C_BLOCK_COMMENT_MODE:C,HASH_COMMENT_MODE:A,NUMBER_MODE:{scope:"number",begin:w,relevance:0},C_NUMBER_MODE:{scope:"number",begin:E,relevance:0},BINARY_NUMBER_MODE:{scope:"number",begin:S,relevance:0},REGEXP_MODE:{begin:/(?=\/[^/\n]*\/)/,contains:[{scope:"regexp",begin:/\//,end:/\/[gimuy]*/,illegal:/\n/,contains:[k,{begin:/\[/,end:/\]/,relevance:0,contains:[k]}]}]},TITLE_MODE:{scope:"title",begin:y,relevance:0},UNDERSCORE_TITLE_MODE:{scope:"title",begin:x,relevance:0},METHOD_GUARD:{begin:"\\.\\s*"+x,relevance:0},END_SAME_AS_BEGIN:function(e){return Object.assign(e,{"on:begin":(e,t)=>{t.data._beginMatch=e[1]},"on:end":(e,t)=>{t.data._beginMatch!==e[1]&&t.ignoreMatch()}})}});function 
R(e,t){let n=e.input[e.index-1];"."===n&&t.ignoreMatch()}function T(e,t){void 0!==e.className&&(e.scope=e.className,delete e.className)}function P(e,t){t&&e.beginKeywords&&(e.begin="\\b("+e.beginKeywords.split(" ").join("|")+")(?!\\.)(?=\\b|\\s)",e.__beforeBegin=R,e.keywords=e.keywords||e.beginKeywords,delete e.beginKeywords,void 0===e.relevance&&(e.relevance=0))}function M(e,t){Array.isArray(e.illegal)&&(e.illegal=g(...e.illegal))}function j(e,t){if(e.match){if(e.begin||e.end)throw Error("begin & end are not supported with match");e.begin=e.match,delete e.match}}function L(e,t){void 0===e.relevance&&(e.relevance=1)}let I=(e,t)=>{if(!e.beforeMatch)return;if(e.starts)throw Error("beforeMatch cannot be used with starts");let n=Object.assign({},e);Object.keys(e).forEach(t=>{delete e[t]}),e.keywords=n.keywords,e.begin=h(n.beforeMatch,f(n.begin)),e.starts={relevance:0,contains:[Object.assign(n,{endsParent:!0})]},e.relevance=0,delete n.beforeMatch},D=["of","and","for","in","not","or","if","then","parent","list","value"],F={},B=e=>{console.error(e)},z=(e,...t)=>{console.log(`WARN: ${e}`,...t)},$=(e,t)=>{F[`${e}/${t}`]||(console.log(`Deprecated as of ${e}. ${t}`),F[`${e}/${t}`]=!0)},U=Error();function H(e,t,{key:n}){let r=0,o=e[n],i={},a={};for(let e=1;e<=t.length;e++)a[e+r]=o[e],i[e+r]=!0,r+=m(t[e-1]);e[n]=a,e[n]._emit=i,e[n]._multi=!0}function Z(e){e.scope&&"object"==typeof e.scope&&null!==e.scope&&(e.beginScope=e.scope,delete e.scope),"string"==typeof e.beginScope&&(e.beginScope={_wrap:e.beginScope}),"string"==typeof e.endScope&&(e.endScope={_wrap:e.endScope}),function(e){if(Array.isArray(e.begin)){if(e.skip||e.excludeBegin||e.returnBegin)throw B("skip, excludeBegin, returnBegin not compatible with beginScope: {}"),U;if("object"!=typeof e.beginScope||null===e.beginScope)throw B("beginScope must be object"),U;H(e,e.begin,{key:"beginScope"}),e.begin=v(e.begin,{joinWith:""})}}(e),function(e){if(Array.isArray(e.end)){if(e.skip||e.excludeEnd||e.returnEnd)throw B("skip, excludeEnd, returnEnd not compatible with endScope: {}"),U;if("object"!=typeof e.endScope||null===e.endScope)throw B("endScope must be object"),U;H(e,e.end,{key:"endScope"}),e.end=v(e.end,{joinWith:""})}}(e)}class q extends Error{constructor(e,t){super(e),this.name="HTMLInjectionError",this.html=t}}let V=Symbol("nomatch"),W=function(e){let o=Object.create(null),i=Object.create(null),a=[],l=!0,s="Could not find the language '{}', did you forget to load/include a language module?",b={disableAutodetect:!0,name:"Plain text",contains:[]},y={ignoreUnescapedHTML:!1,throwUnescapedHTML:!1,noHighlightRe:/^(no-?highlight)$/i,languageDetectRe:/\blang(?:uage)?-([\w-]+)\b/i,classPrefix:"hljs-",cssSelector:"pre code",languages:null,__emitter:c};function x(e){return y.noHighlightRe.test(e)}function w(e,t,n){let r="",o="";"object"==typeof t?(r=e,n=t.ignoreIllegals,o=t.language):($("10.7.0","highlight(lang, code, ...args) has been deprecated."),$("10.7.0","Please use highlight(code, options) instead.\nhttps://github.com/highlightjs/highlight.js/issues/2277"),o=e,r=t),void 0===n&&(n=!0);let i={code:r,language:o};F("before:highlight",i);let a=i.result?i.result:E(i.language,i.code,n);return a.code=i.code,F("after:highlight",a),a}function E(e,i,a,c){let f=Object.create(null);function d(){if(!A.keywords){R.addText(F);return}let e=0;A.keywordPatternRe.lastIndex=0;let t=A.keywordPatternRe.exec(F),n="";for(;t;){n+=F.substring(e,t.index);let 
r=k.case_insensitive?t[0].toLowerCase():t[0],o=A.keywords[r];if(o){let[e,i]=o;if(R.addText(n),n="",f[r]=(f[r]||0)+1,f[r]<=7&&(z+=i),e.startsWith("_"))n+=t[0];else{let n=k.classNameAliases[e]||e;h(t[0],n)}}else n+=t[0];e=A.keywordPatternRe.lastIndex,t=A.keywordPatternRe.exec(F)}n+=F.substring(e),R.addText(n)}function p(){null!=A.subLanguage?function(){if(""===F)return;let e=null;if("string"==typeof A.subLanguage){if(!o[A.subLanguage]){R.addText(F);return}e=E(A.subLanguage,F,!0,N[A.subLanguage]),N[A.subLanguage]=e._top}else e=S(F,A.subLanguage.length?A.subLanguage:null);A.relevance>0&&(z+=e.relevance),R.__addSublanguage(e._emitter,e.language)}():d(),F=""}function h(e,t){""!==e&&(R.startScope(t),R.addText(e),R.endScope())}function g(e,t){let n=1,r=t.length-1;for(;n<=r;){if(!e._emit[n]){n++;continue}let r=k.classNameAliases[e[n]]||e[n],o=t[n];r?h(o,r):(F=o,d(),F=""),n++}}function b(e,t){return e.scope&&"string"==typeof e.scope&&R.openNode(k.classNameAliases[e.scope]||e.scope),e.beginScope&&(e.beginScope._wrap?(h(F,k.classNameAliases[e.beginScope._wrap]||e.beginScope._wrap),F=""):e.beginScope._multi&&(g(e.beginScope,t),F="")),A=Object.create(e,{parent:{value:A}})}let x={};function w(n,r){let o=r&&r[0];if(F+=n,null==o)return p(),0;if("begin"===x.type&&"end"===r.type&&x.index===r.index&&""===o){if(F+=i.slice(r.index,r.index+1),!l){let t=Error(`0 width match regex (${e})`);throw t.languageName=e,t.badRule=x.rule,t}return 1}if(x=r,"begin"===r.type)return function(e){let n=e[0],r=e.rule,o=new t(r),i=[r.__beforeBegin,r["on:begin"]];for(let t of i)if(t&&(t(e,o),o.isMatchIgnored))return 0===A.matcher.regexIndex?(F+=n[0],1):(H=!0,0);return r.skip?F+=n:(r.excludeBegin&&(F+=n),p(),r.returnBegin||r.excludeBegin||(F=n)),b(r,e),r.returnBegin?0:n.length}(r);if("illegal"!==r.type||a){if("end"===r.type){let e=function(e){let n=e[0],r=i.substring(e.index),o=function e(n,r,o){let i=function(e,t){let n=e&&e.exec(t);return n&&0===n.index}(n.endRe,o);if(i){if(n["on:end"]){let e=new t(n);n["on:end"](r,e),e.isMatchIgnored&&(i=!1)}if(i){for(;n.endsParent&&n.parent;)n=n.parent;return n}}if(n.endsWithParent)return e(n.parent,r,o)}(A,e,r);if(!o)return V;let a=A;A.endScope&&A.endScope._wrap?(p(),h(n,A.endScope._wrap)):A.endScope&&A.endScope._multi?(p(),g(A.endScope,e)):a.skip?F+=n:(a.returnEnd||a.excludeEnd||(F+=n),p(),a.excludeEnd&&(F=n));do A.scope&&R.closeNode(),A.skip||A.subLanguage||(z+=A.relevance),A=A.parent;while(A!==o.parent);return o.starts&&b(o.starts,e),a.returnEnd?0:n.length}(r);if(e!==V)return e}}else{let e=Error('Illegal lexeme "'+o+'" for mode "'+(A.scope||"")+'"');throw e.mode=A,e}if("illegal"===r.type&&""===o)return 1;if(U>1e5&&U>3*r.index){let e=Error("potential infinite loop, way more iterations than matches");throw e}return F+=o,o.length}let k=C(e);if(!k)throw B(s.replace("{}",e)),Error('Unknown language: "'+e+'"');let _=function(e){function t(t,n){return RegExp(u(t),"m"+(e.case_insensitive?"i":"")+(e.unicodeRegex?"u":"")+(n?"g":""))}class n{constructor(){this.matchIndexes={},this.regexes=[],this.matchAt=1,this.position=0}addRule(e,t){t.position=this.position++,this.matchIndexes[this.matchAt]=t,this.regexes.push([t,e]),this.matchAt+=m(e)+1}compile(){0===this.regexes.length&&(this.exec=()=>null);let e=this.regexes.map(e=>e[1]);this.matcherRe=t(v(e,{joinWith:"|"}),!0),this.lastIndex=0}exec(e){this.matcherRe.lastIndex=this.lastIndex;let t=this.matcherRe.exec(e);if(!t)return null;let n=t.findIndex((e,t)=>t>0&&void 0!==e),r=this.matchIndexes[n];return t.splice(0,n),Object.assign(t,r)}}class 
o{constructor(){this.rules=[],this.multiRegexes=[],this.count=0,this.lastIndex=0,this.regexIndex=0}getMatcher(e){if(this.multiRegexes[e])return this.multiRegexes[e];let t=new n;return this.rules.slice(e).forEach(([e,n])=>t.addRule(e,n)),t.compile(),this.multiRegexes[e]=t,t}resumingScanAtSamePosition(){return 0!==this.regexIndex}considerAll(){this.regexIndex=0}addRule(e,t){this.rules.push([e,t]),"begin"===t.type&&this.count++}exec(e){let t=this.getMatcher(this.regexIndex);t.lastIndex=this.lastIndex;let n=t.exec(e);if(this.resumingScanAtSamePosition()){if(n&&n.index===this.lastIndex);else{let t=this.getMatcher(0);t.lastIndex=this.lastIndex+1,n=t.exec(e)}}return n&&(this.regexIndex+=n.position+1,this.regexIndex===this.count&&this.considerAll()),n}}if(e.compilerExtensions||(e.compilerExtensions=[]),e.contains&&e.contains.includes("self"))throw Error("ERR: contains `self` is not supported at the top-level of a language. See documentation.");return e.classNameAliases=r(e.classNameAliases||{}),function n(i,a){if(i.isCompiled)return i;[T,j,Z,I].forEach(e=>e(i,a)),e.compilerExtensions.forEach(e=>e(i,a)),i.__beforeBegin=null,[P,M,L].forEach(e=>e(i,a)),i.isCompiled=!0;let l=null;return"object"==typeof i.keywords&&i.keywords.$pattern&&(i.keywords=Object.assign({},i.keywords),l=i.keywords.$pattern,delete i.keywords.$pattern),l=l||/\w+/,i.keywords&&(i.keywords=function e(t,n,r="keyword"){let o=Object.create(null);return"string"==typeof t?i(r,t.split(" ")):Array.isArray(t)?i(r,t):Object.keys(t).forEach(function(r){Object.assign(o,e(t[r],n,r))}),o;function i(e,t){n&&(t=t.map(e=>e.toLowerCase())),t.forEach(function(t){var n,r;let i=t.split("|");o[i[0]]=[e,(n=i[0],(r=i[1])?Number(r):D.includes(n.toLowerCase())?0:1)]})}}(i.keywords,e.case_insensitive)),i.keywordPatternRe=t(l,!0),a&&(i.begin||(i.begin=/\B|\b/),i.beginRe=t(i.begin),i.end||i.endsWithParent||(i.end=/\B|\b/),i.end&&(i.endRe=t(i.end)),i.terminatorEnd=u(i.end)||"",i.endsWithParent&&a.terminatorEnd&&(i.terminatorEnd+=(i.end?"|":"")+a.terminatorEnd)),i.illegal&&(i.illegalRe=t(i.illegal)),i.contains||(i.contains=[]),i.contains=[].concat(...i.contains.map(function(e){var t;return((t="self"===e?i:e).variants&&!t.cachedVariants&&(t.cachedVariants=t.variants.map(function(e){return r(t,{variants:null},e)})),t.cachedVariants)?t.cachedVariants:!function e(t){return!!t&&(t.endsWithParent||e(t.starts))}(t)?Object.isFrozen(t)?r(t):t:r(t,{starts:t.starts?r(t.starts):null})})),i.contains.forEach(function(e){n(e,i)}),i.starts&&n(i.starts,a),i.matcher=function(e){let t=new o;return e.contains.forEach(e=>t.addRule(e.begin,{rule:e,type:"begin"})),e.terminatorEnd&&t.addRule(e.terminatorEnd,{type:"end"}),e.illegal&&t.addRule(e.illegal,{type:"illegal"}),t}(i),i}(e)}(k),O="",A=c||_,N={},R=new y.__emitter(y);!function(){let e=[];for(let t=A;t!==k;t=t.parent)t.scope&&e.unshift(t.scope);e.forEach(e=>R.openNode(e))}();let F="",z=0,$=0,U=0,H=!1;try{if(k.__emitTokens)k.__emitTokens(i,R);else{for(A.matcher.considerAll();;){U++,H?H=!1:A.matcher.considerAll(),A.matcher.lastIndex=$;let e=A.matcher.exec(i);if(!e)break;let t=i.substring($,e.index),n=w(t,e);$=e.index+n}w(i.substring($))}return R.finalize(),O=R.toHTML(),{language:e,value:O,relevance:z,illegal:!1,_emitter:R,_top:A}}catch(t){if(t.message&&t.message.includes("Illegal"))return{language:e,value:n(i),illegal:!0,relevance:0,_illegalBy:{message:t.message,index:$,context:i.slice($-100,$+100),mode:t.mode,resultSoFar:O},_emitter:R};if(l)return{language:e,value:n(i),illegal:!1,relevance:0,errorRaised:t,_emitter:R,_top:A};throw 
t}}function S(e,t){t=t||y.languages||Object.keys(o);let r=function(e){let t={value:n(e),illegal:!1,relevance:0,_top:b,_emitter:new y.__emitter(y)};return t._emitter.addText(e),t}(e),i=t.filter(C).filter(R).map(t=>E(t,e,!1));i.unshift(r);let a=i.sort((e,t)=>{if(e.relevance!==t.relevance)return t.relevance-e.relevance;if(e.language&&t.language){if(C(e.language).supersetOf===t.language)return 1;if(C(t.language).supersetOf===e.language)return -1}return 0}),[l,s]=a;return l.secondBest=s,l}function k(e){let t=null,n=function(e){let t=e.className+" ";t+=e.parentNode?e.parentNode.className:"";let n=y.languageDetectRe.exec(t);if(n){let t=C(n[1]);return t||(z(s.replace("{}",n[1])),z("Falling back to no-highlight mode for this block.",e)),t?n[1]:"no-highlight"}return t.split(/\s+/).find(e=>x(e)||C(e))}(e);if(x(n))return;if(F("before:highlightElement",{el:e,language:n}),e.children.length>0&&(y.ignoreUnescapedHTML||(console.warn("One of your code blocks includes unescaped HTML. This is a potentially serious security risk."),console.warn("https://github.com/highlightjs/highlight.js/wiki/security"),console.warn("The element with unescaped HTML:"),console.warn(e)),y.throwUnescapedHTML)){let t=new q("One of your code blocks includes unescaped HTML.",e.innerHTML);throw t}t=e;let r=t.textContent,o=n?w(r,{language:n,ignoreIllegals:!0}):S(r);e.innerHTML=o.value,function(e,t,n){let r=t&&i[t]||n;e.classList.add("hljs"),e.classList.add(`language-${r}`)}(e,n,o.language),e.result={language:o.language,re:o.relevance,relevance:o.relevance},o.secondBest&&(e.secondBest={language:o.secondBest.language,relevance:o.secondBest.relevance}),F("after:highlightElement",{el:e,result:o,text:r})}let _=!1;function O(){if("loading"===document.readyState){_=!0;return}let e=document.querySelectorAll(y.cssSelector);e.forEach(k)}function C(e){return o[e=(e||"").toLowerCase()]||o[i[e]]}function A(e,{languageName:t}){"string"==typeof e&&(e=[e]),e.forEach(e=>{i[e.toLowerCase()]=t})}function R(e){let t=C(e);return t&&!t.disableAutodetect}function F(e,t){a.forEach(function(n){n[e]&&n[e](t)})}for(let t in"undefined"!=typeof window&&window.addEventListener&&window.addEventListener("DOMContentLoaded",function(){_&&O()},!1),Object.assign(e,{highlight:w,highlightAuto:S,highlightAll:O,highlightElement:k,highlightBlock:function(e){return $("10.7.0","highlightBlock will be removed entirely in v12.0"),$("10.7.0","Please use highlightElement now."),k(e)},configure:function(e){y=r(y,e)},initHighlighting:()=>{O(),$("10.6.0","initHighlighting() deprecated. Use highlightAll() now.")},initHighlightingOnLoad:function(){O(),$("10.6.0","initHighlightingOnLoad() deprecated. 
Use highlightAll() now.")},registerLanguage:function(t,n){let r=null;try{r=n(e)}catch(e){if(B("Language definition for '{}' could not be registered.".replace("{}",t)),l)B(e);else throw e;r=b}r.name||(r.name=t),o[t]=r,r.rawDefinition=n.bind(null,e),r.aliases&&A(r.aliases,{languageName:t})},unregisterLanguage:function(e){for(let t of(delete o[e],Object.keys(i)))i[t]===e&&delete i[t]},listLanguages:function(){return Object.keys(o)},getLanguage:C,registerAliases:A,autoDetection:R,inherit:r,addPlugin:function(e){var t;(t=e)["before:highlightBlock"]&&!t["before:highlightElement"]&&(t["before:highlightElement"]=e=>{t["before:highlightBlock"](Object.assign({block:e.el},e))}),t["after:highlightBlock"]&&!t["after:highlightElement"]&&(t["after:highlightElement"]=e=>{t["after:highlightBlock"](Object.assign({block:e.el},e))}),a.push(e)},removePlugin:function(e){let t=a.indexOf(e);-1!==t&&a.splice(t,1)}}),e.debugMode=function(){l=!1},e.safeMode=function(){l=!0},e.versionString="11.8.0",e.regex={concat:h,lookahead:f,either:g,optional:p,anyNumberOfTimes:d},N)"object"==typeof N[t]&&function e(t){return t instanceof Map?t.clear=t.delete=t.set=function(){throw Error("map is read-only")}:t instanceof Set&&(t.add=t.clear=t.delete=function(){throw Error("set is read-only")}),Object.freeze(t),Object.getOwnPropertyNames(t).forEach(n=>{let r=t[n],o=typeof r;"object"!==o&&"function"!==o||Object.isFrozen(r)||e(r)}),t}(N[t]);return Object.assign(e,N),e},G=W({});G.newInstance=()=>W({}),e.exports=G,G.HighlightJS=G,G.default=G},86351:function(e,t,n){"use strict";function r(e){if(Array.isArray(e))return e}n.d(t,{Z:function(){return r}})},18050:function(e,t,n){"use strict";function r(e,t){if(!(e instanceof t))throw TypeError("Cannot call a class as a function")}n.d(t,{Z:function(){return r}})},49449:function(e,t,n){"use strict";n.d(t,{Z:function(){return i}});var r=n(58774);function o(e,t){for(var n=0;ne.length)&&(t=e.length);for(var n=0,r=Array(t);n{let{placement:r="bottom",strategy:o="absolute",middleware:i=[],platform:a}=n,s=i.filter(Boolean),c=await (null==a.isRTL?void 0:a.isRTL(t)),u=await a.getElementRects({reference:e,floating:t,strategy:o}),{x:f,y:d}=l(u,r,c),p=r,h={},g=0;for(let n=0;n({name:"arrow",options:e,async fn(t){let{x:n,y:i,placement:l,rects:s,platform:f,elements:d}=t,{element:g,padding:m=0}=c(e,t)||{};if(null==g)return{};let b=u(m),v={x:n,y:i},y=a(l),x=o(y),w=await f.getDimensions(g),E="y"===y,S=E?"clientHeight":"clientWidth",k=s.reference[x]+s.reference[y]-v[y]-s.floating[x],_=v[y]-s.reference[y],O=await (null==f.getOffsetParent?void 0:f.getOffsetParent(g)),C=O?O[S]:0;C&&await (null==f.isElement?void 0:f.isElement(O))||(C=d.floating[S]||s.floating[x]);let A=C/2-w[x]/2-1,N=p(b[E?"top":"left"],A),R=p(b[E?"bottom":"right"],A),T=C-w[x]-R,P=C/2-w[x]/2+(k/2-_/2),M=h(N,p(P,T)),j=null!=r(l)&&P!=M&&s.reference[x]/2-(Pe.concat(t,t+"-start",t+"-end"),[]),{left:"right",right:"left",bottom:"top",top:"bottom"});function v(e){return e.replace(/left|right|bottom|top/g,e=>b[e])}let y={start:"end",end:"start"};function x(e){return e.replace(/start|end/g,e=>y[e])}let w=function(e){return void 0===e&&(e={}),{name:"flip",options:e,async fn(t){var n,l,s,u;let{placement:f,middlewareData:p,rects:h,initialPlacement:g,platform:m,elements:b}=t,{mainAxis:y=!0,crossAxis:w=!0,fallbackPlacements:E,fallbackStrategy:S="bestFit",fallbackAxisSideDirection:k="none",flipAlignment:_=!0,...O}=c(e,t),C=i(f),A=i(g)===g,N=await (null==m.isRTL?void 0:m.isRTL(b.floating)),R=E||(A||!_?[v(g)]:function(e){let 
t=v(e);return[x(e),t,x(t)]}(g));E||"none"===k||R.push(...function(e,t,n,o){let a=r(e),l=function(e,t,n){let r=["left","right"],o=["right","left"];switch(e){case"top":case"bottom":return n?t?o:r:t?r:o;case"left":case"right":return t?["top","bottom"]:["bottom","top"];default:return[]}}(i(e),"start"===n,o);return a&&(l=l.map(e=>e+"-"+a),t&&(l=l.concat(l.map(x)))),l}(g,_,k,N));let T=[g,...R],P=await d(t,O),M=[],j=(null==(n=p.flip)?void 0:n.overflows)||[];if(y&&M.push(P[C]),w){let{main:e,cross:t}=function(e,t,n){void 0===n&&(n=!1);let i=r(e),l=a(e),s=o(l),c="x"===l?i===(n?"end":"start")?"right":"left":"start"===i?"bottom":"top";return t.reference[s]>t.floating[s]&&(c=v(c)),{main:c,cross:v(c)}}(f,h,N);M.push(P[e],P[t])}if(j=[...j,{placement:f,overflows:M}],!M.every(e=>e<=0)){let e=((null==(l=p.flip)?void 0:l.index)||0)+1,t=T[e];if(t)return{data:{index:e,overflows:j},reset:{placement:t}};let n=null==(s=j.filter(e=>e.overflows[0]<=0).sort((e,t)=>e.overflows[1]-t.overflows[1])[0])?void 0:s.placement;if(!n)switch(S){case"bestFit":{let e=null==(u=j.map(e=>[e.placement,e.overflows.filter(e=>e>0).reduce((e,t)=>e+t,0)]).sort((e,t)=>e[1]-t[1])[0])?void 0:u[0];e&&(n=e);break}case"initialPlacement":n=g}if(f!==n)return{reset:{placement:n}}}return{}}}};function E(e,t){return{top:e.top-t.height,right:e.right-t.width,bottom:e.bottom-t.height,left:e.left-t.width}}function S(e){return m.some(t=>e[t]>=0)}let k=function(e){return void 0===e&&(e={}),{name:"hide",options:e,async fn(t){let{rects:n}=t,{strategy:r="referenceHidden",...o}=c(e,t);switch(r){case"referenceHidden":{let e=E(await d(t,{...o,elementContext:"reference"}),n.reference);return{data:{referenceHiddenOffsets:e,referenceHidden:S(e)}}}case"escaped":{let e=E(await d(t,{...o,altBoundary:!0}),n.floating);return{data:{escapedOffsets:e,escaped:S(e)}}}default:return{}}}}},_=function(e){return void 0===e&&(e=0),{name:"offset",options:e,async fn(t){let{x:n,y:o}=t,l=await async function(e,t){let{placement:n,platform:o,elements:l}=e,s=await (null==o.isRTL?void 0:o.isRTL(l.floating)),u=i(n),f=r(n),d="x"===a(n),p=["left","top"].includes(u)?-1:1,h=s&&d?-1:1,g=c(t,e),{mainAxis:m,crossAxis:b,alignmentAxis:v}="number"==typeof g?{mainAxis:g,crossAxis:0,alignmentAxis:null}:{mainAxis:0,crossAxis:0,alignmentAxis:null,...g};return f&&"number"==typeof v&&(b="end"===f?-1*v:v),d?{x:b*h,y:m*p}:{x:m*p,y:b*h}}(t,e);return{x:n+l.x,y:o+l.y,data:l}}}};function O(e){return"x"===e?"y":"x"}let C=function(e){return void 0===e&&(e={}),{name:"shift",options:e,async fn(t){let{x:n,y:r,placement:o}=t,{mainAxis:l=!0,crossAxis:s=!1,limiter:u={fn:e=>{let{x:t,y:n}=e;return{x:t,y:n}}},...f}=c(e,t),g={x:n,y:r},m=await d(t,f),b=a(i(o)),v=O(b),y=g[b],x=g[v];if(l){let e="y"===b?"bottom":"right";y=h(y+m["y"===b?"top":"left"],p(y,y-m[e]))}s&&(x=h(x+m["y"===v?"top":"left"],p(x,x-m["y"===v?"bottom":"right"])));let w=u.fn({...t,[b]:y,[v]:x});return{...w,data:{x:w.x-n,y:w.y-r}}}}},A=function(e){return void 0===e&&(e={}),{options:e,fn(t){let{x:n,y:r,placement:o,rects:l,middlewareData:s}=t,{offset:u=0,mainAxis:f=!0,crossAxis:d=!0}=c(e,t),p={x:n,y:r},h=a(o),g=O(h),m=p[h],b=p[g],v=c(u,t),y="number"==typeof v?{mainAxis:v,crossAxis:0}:{mainAxis:0,crossAxis:0,...v};if(f){let e="y"===h?"height":"width",t=l.reference[h]-l.floating[e]+y.mainAxis,n=l.reference[h]+l.reference[e]-y.mainAxis;mn&&(m=n)}if(d){var x,w;let e="y"===h?"width":"height",t=["top","left"].includes(i(o)),n=l.reference[g]-l.floating[e]+(t&&(null==(x=s.offset)?void 
0:x[g])||0)+(t?0:y.crossAxis),r=l.reference[g]+l.reference[e]+(t?0:(null==(w=s.offset)?void 0:w[g])||0)-(t?y.crossAxis:0);br&&(b=r)}return{[h]:m,[g]:b}}}},N=function(e){return void 0===e&&(e={}),{name:"size",options:e,async fn(t){let n,o;let{placement:l,rects:s,platform:u,elements:f}=t,{apply:g=()=>{},...m}=c(e,t),b=await d(t,m),v=i(l),y=r(l),x="x"===a(l),{width:w,height:E}=s.floating;"top"===v||"bottom"===v?(n=v,o=y===(await (null==u.isRTL?void 0:u.isRTL(f.floating))?"start":"end")?"left":"right"):(o=v,n="end"===y?"top":"bottom");let S=E-b[n],k=w-b[o],_=!t.middlewareData.shift,O=S,C=k;if(x){let e=w-b.left-b.right;C=y||_?p(k,e):e}else{let e=E-b.top-b.bottom;O=y||_?p(S,e):e}if(_&&!y){let e=h(b.left,0),t=h(b.right,0),n=h(b.top,0),r=h(b.bottom,0);x?C=w-2*(0!==e||0!==t?e+t:h(b.left,b.right)):O=E-2*(0!==n||0!==r?n+r:h(b.top,b.bottom))}await g({...t,availableWidth:C,availableHeight:O});let A=await u.getDimensions(f.floating);return w!==A.width||E!==A.height?{reset:{rects:!0}}:{}}}}},41778:function(e,t,n){"use strict";n.d(t,{Kx:function(){return R},Me:function(){return L},oo:function(){return I}});var r=n(21828);function o(e){var t;return(null==e||null==(t=e.ownerDocument)?void 0:t.defaultView)||window}function i(e){return o(e).getComputedStyle(e)}function a(e){return e instanceof o(e).Node}function l(e){return a(e)?(e.nodeName||"").toLowerCase():"#document"}function s(e){return e instanceof HTMLElement||e instanceof o(e).HTMLElement}function c(e){return"undefined"!=typeof ShadowRoot&&(e instanceof o(e).ShadowRoot||e instanceof ShadowRoot)}function u(e){let{overflow:t,overflowX:n,overflowY:r,display:o}=i(e);return/auto|scroll|overlay|hidden|clip/.test(t+r+n)&&!["inline","contents"].includes(o)}function f(e){let t=d(),n=i(e);return"none"!==n.transform||"none"!==n.perspective||!!n.containerType&&"normal"!==n.containerType||!t&&!!n.backdropFilter&&"none"!==n.backdropFilter||!t&&!!n.filter&&"none"!==n.filter||["transform","perspective","filter"].some(e=>(n.willChange||"").includes(e))||["paint","layout","strict","content"].some(e=>(n.contain||"").includes(e))}function d(){return!("undefined"==typeof CSS||!CSS.supports)&&CSS.supports("-webkit-backdrop-filter","none")}function p(e){return["html","body","#document"].includes(l(e))}let h=Math.min,g=Math.max,m=Math.round,b=Math.floor,v=e=>({x:e,y:e});function y(e){let t=i(e),n=parseFloat(t.width)||0,r=parseFloat(t.height)||0,o=s(e),a=o?e.offsetWidth:n,l=o?e.offsetHeight:r,c=m(n)!==a||m(r)!==l;return c&&(n=a,r=l),{width:n,height:r,$:c}}function x(e){return e instanceof Element||e instanceof o(e).Element}function w(e){return x(e)?e:e.contextElement}function E(e){let t=w(e);if(!s(t))return v(1);let n=t.getBoundingClientRect(),{width:r,height:o,$:i}=y(t),a=(i?m(n.width):n.width)/r,l=(i?m(n.height):n.height)/o;return a&&Number.isFinite(a)||(a=1),l&&Number.isFinite(l)||(l=1),{x:a,y:l}}let S=v(0);function k(e){let t=o(e);return d()&&t.visualViewport?{x:t.visualViewport.offsetLeft,y:t.visualViewport.offsetTop}:S}function _(e,t,n,i){var a;void 0===t&&(t=!1),void 0===n&&(n=!1);let l=e.getBoundingClientRect(),s=w(e),c=v(1);t&&(i?x(i)&&(c=E(i)):c=E(e));let u=(void 0===(a=n)&&(a=!1),!(!i||a&&i!==o(s))&&a)?k(s):v(0),f=(l.left+u.x)/c.x,d=(l.top+u.y)/c.y,p=l.width/c.x,h=l.height/c.y;if(s){let e=o(s),t=i&&x(i)?o(i):i,n=e.frameElement;for(;n&&i&&t!==e;){let 
e=E(n),t=n.getBoundingClientRect(),r=getComputedStyle(n),i=t.left+(n.clientLeft+parseFloat(r.paddingLeft))*e.x,a=t.top+(n.clientTop+parseFloat(r.paddingTop))*e.y;f*=e.x,d*=e.y,p*=e.x,h*=e.y,f+=i,d+=a,n=o(n).frameElement}}return(0,r.JB)({width:p,height:h,x:f,y:d})}function O(e){return x(e)?{scrollLeft:e.scrollLeft,scrollTop:e.scrollTop}:{scrollLeft:e.pageXOffset,scrollTop:e.pageYOffset}}function C(e){var t;return null==(t=(a(e)?e.ownerDocument:e.document)||window.document)?void 0:t.documentElement}function A(e){return _(C(e)).left+O(e).scrollLeft}function N(e){if("html"===l(e))return e;let t=e.assignedSlot||e.parentNode||c(e)&&e.host||C(e);return c(t)?t.host:t}function R(e,t){var n;void 0===t&&(t=[]);let r=function e(t){let n=N(t);return p(n)?t.ownerDocument?t.ownerDocument.body:t.body:s(n)&&u(n)?n:e(n)}(e),i=r===(null==(n=e.ownerDocument)?void 0:n.body),a=o(r);return i?t.concat(a,a.visualViewport||[],u(r)?r:[]):t.concat(r,R(r))}function T(e,t,n){let a;if("viewport"===t)a=function(e,t){let n=o(e),r=C(e),i=n.visualViewport,a=r.clientWidth,l=r.clientHeight,s=0,c=0;if(i){a=i.width,l=i.height;let e=d();(!e||e&&"fixed"===t)&&(s=i.offsetLeft,c=i.offsetTop)}return{width:a,height:l,x:s,y:c}}(e,n);else if("document"===t)a=function(e){let t=C(e),n=O(e),r=e.ownerDocument.body,o=g(t.scrollWidth,t.clientWidth,r.scrollWidth,r.clientWidth),a=g(t.scrollHeight,t.clientHeight,r.scrollHeight,r.clientHeight),l=-n.scrollLeft+A(e),s=-n.scrollTop;return"rtl"===i(r).direction&&(l+=g(t.clientWidth,r.clientWidth)-o),{width:o,height:a,x:l,y:s}}(C(e));else if(x(t))a=function(e,t){let n=_(e,!0,"fixed"===t),r=n.top+e.clientTop,o=n.left+e.clientLeft,i=s(e)?E(e):v(1);return{width:e.clientWidth*i.x,height:e.clientHeight*i.y,x:o*i.x,y:r*i.y}}(t,n);else{let n=k(e);a={...t,x:t.x-n.x,y:t.y-n.y}}return(0,r.JB)(a)}function P(e,t){return s(e)&&"fixed"!==i(e).position?t?t(e):e.offsetParent:null}function M(e,t){let n=o(e);if(!s(e))return n;let r=P(e,t);for(;r&&["table","td","th"].includes(l(r))&&"static"===i(r).position;)r=P(r,t);return r&&("html"===l(r)||"body"===l(r)&&"static"===i(r).position&&!f(r))?n:r||function(e){let t=N(e);for(;s(t)&&!p(t);){if(f(t))return t;t=N(t)}return null}(e)||n}let j={convertOffsetParentRelativeRectToViewportRelativeRect:function(e){let{rect:t,offsetParent:n,strategy:r}=e,o=s(n),i=C(n);if(n===i)return t;let a={scrollLeft:0,scrollTop:0},c=v(1),f=v(0);if((o||!o&&"fixed"!==r)&&(("body"!==l(n)||u(i))&&(a=O(n)),s(n))){let e=_(n);c=E(n),f.x=e.x+n.clientLeft,f.y=e.y+n.clientTop}return{width:t.width*c.x,height:t.height*c.y,x:t.x*c.x-a.scrollLeft*c.x+f.x,y:t.y*c.y-a.scrollTop*c.y+f.y}},getDocumentElement:C,getClippingRect:function(e){let{element:t,boundary:n,rootBoundary:r,strategy:o}=e,a=[..."clippingAncestors"===n?function(e,t){let n=t.get(e);if(n)return n;let r=R(e).filter(e=>x(e)&&"body"!==l(e)),o=null,a="fixed"===i(e).position,s=a?N(e):e;for(;x(s)&&!p(s);){let t=i(s),n=f(s);n||"fixed"!==t.position||(o=null),(a?!n&&!o:!n&&"static"===t.position&&o&&["absolute","fixed"].includes(o.position)||u(s)&&!n&&function e(t,n){let r=N(t);return!(r===n||!x(r)||p(r))&&("fixed"===i(r).position||e(r,n))}(e,s))?r=r.filter(e=>e!==s):o=t,s=N(s)}return t.set(e,r),r}(t,this._c):[].concat(n),r],s=a[0],c=a.reduce((e,n)=>{let r=T(t,n,o);return e.top=g(r.top,e.top),e.right=h(r.right,e.right),e.bottom=h(r.bottom,e.bottom),e.left=g(r.left,e.left),e},T(t,s,o));return{width:c.right-c.left,height:c.bottom-c.top,x:c.left,y:c.top}},getOffsetParent:M,getElementRects:async 
function(e){let{reference:t,floating:n,strategy:r}=e,o=this.getOffsetParent||M,i=this.getDimensions;return{reference:function(e,t,n){let r=s(t),o=C(t),i="fixed"===n,a=_(e,!0,i,t),c={scrollLeft:0,scrollTop:0},f=v(0);if(r||!r&&!i){if(("body"!==l(t)||u(o))&&(c=O(t)),s(t)){let e=_(t,!0,i,t);f.x=e.x+t.clientLeft,f.y=e.y+t.clientTop}else o&&(f.x=A(o))}return{x:a.left+c.scrollLeft-f.x,y:a.top+c.scrollTop-f.y,width:a.width,height:a.height}}(t,await o(n),r),floating:{x:0,y:0,...await i(n)}}},getClientRects:function(e){return Array.from(e.getClientRects())},getDimensions:function(e){return y(e)},getScale:E,isElement:x,isRTL:function(e){return"rtl"===getComputedStyle(e).direction}};function L(e,t,n,r){void 0===r&&(r={});let{ancestorScroll:o=!0,ancestorResize:i=!0,elementResize:a="function"==typeof ResizeObserver,layoutShift:l="function"==typeof IntersectionObserver,animationFrame:s=!1}=r,c=w(e),u=o||i?[...c?R(c):[],...R(t)]:[];u.forEach(e=>{o&&e.addEventListener("scroll",n,{passive:!0}),i&&e.addEventListener("resize",n)});let f=c&&l?function(e,t){let n,r=null,o=C(e);function i(){clearTimeout(n),r&&r.disconnect(),r=null}return function a(l,s){void 0===l&&(l=!1),void 0===s&&(s=1),i();let{left:c,top:u,width:f,height:d}=e.getBoundingClientRect();if(l||t(),!f||!d)return;let p={rootMargin:-b(u)+"px "+-b(o.clientWidth-(c+f))+"px "+-b(o.clientHeight-(u+d))+"px "+-b(c)+"px",threshold:g(0,h(1,s))||1},m=!0;function v(e){let t=e[0].intersectionRatio;if(t!==s){if(!m)return a();t?a(!1,t):n=setTimeout(()=>{a(!1,1e-7)},100)}m=!1}try{r=new IntersectionObserver(v,{...p,root:o.ownerDocument})}catch(e){r=new IntersectionObserver(v,p)}r.observe(e)}(!0),i}(c,n):null,d,p=-1,m=null;a&&(m=new ResizeObserver(e=>{let[r]=e;r&&r.target===c&&m&&(m.unobserve(t),cancelAnimationFrame(p),p=requestAnimationFrame(()=>{m&&m.observe(t)})),n()}),c&&!s&&m.observe(c),m.observe(t));let v=s?_(e):null;return s&&function t(){let r=_(e);v&&(r.x!==v.x||r.y!==v.y||r.width!==v.width||r.height!==v.height)&&n(),v=r,d=requestAnimationFrame(t)}(),n(),()=>{u.forEach(e=>{o&&e.removeEventListener("scroll",n),i&&e.removeEventListener("resize",n)}),f&&f(),m&&m.disconnect(),m=null,s&&cancelAnimationFrame(d)}}let I=(e,t,n)=>{let o=new Map,i={platform:j,...n},a={...i.platform,_c:o};return(0,r.oo)(e,t,{...i,platform:a})}},4058:function(e,t,n){"use strict";n.d(t,{d:function(){return f},f:function(){return u}});var r=n(86006),o=n(53858),i=n(42810),a=n(60961),l=n(68496),s=n(3562);let c=(0,r.createContext)(null);function u(){let[e,t]=(0,r.useState)([]);return[e.length>0?e.join(" "):void 0,(0,r.useMemo)(()=>function(e){let n=(0,s.z)(e=>(t(t=>[...t,e]),()=>t(t=>{let n=t.slice(),r=n.indexOf(e);return -1!==r&&n.splice(r,1),n}))),o=(0,r.useMemo)(()=>({register:n,slot:e.slot,name:e.name,props:e.props}),[n,e.slot,e.name,e.props]);return r.createElement(c.Provider,{value:o},e.children)},[t])]}let f=Object.assign((0,i.yV)(function(e,t){let n=(0,o.M)(),{id:s=`headlessui-description-${n}`,...u}=e,f=function e(){let t=(0,r.useContext)(c);if(null===t){let t=Error("You used a component, but it is not inside a relevant parent.");throw Error.captureStackTrace&&Error.captureStackTrace(t,e),t}return t}(),d=(0,l.T)(t);(0,a.e)(()=>f.register(s),[s,f.register]);let p={ref:d,...f.props,id:s};return(0,i.sY)({ourProps:p,theirProps:u,slot:f.slot||{},defaultTag:"p",name:f.name||"Description"})}),{})},22940:function(e,t,n){"use strict";let r,o;n.d(t,{V:function(){return eb}});var 
i,a,l,s,c,u,f=n(86006),d=n.t(f,2),p=n(59325),h=n(42810),g=n(68496),m=n(68277),b=n(24373),v=n(53858),y=n(11405),x=n(45106),w=n(32243),E=n(3562),S=n(58257),k=((i=k||{})[i.Forwards=0]="Forwards",i[i.Backwards=1]="Backwards",i),_=n(58260),O=n(29101),C=n(1485);function A(e,t,n,r){let o=(0,C.E)(n);(0,f.useEffect)(()=>{function n(e){o.current(e)}return(e=null!=e?e:window).addEventListener(t,n,r),()=>e.removeEventListener(t,n,r)},[e,t,r])}var N=n(10670);function R(e,t){let n=(0,f.useRef)([]),r=(0,E.z)(e);(0,f.useEffect)(()=>{let e=[...n.current];for(let[o,i]of t.entries())if(n.current[o]!==i){let o=r(t,e);return n.current=t,o}},[r,...t])}var T=n(48807);function P(e){let t=(0,E.z)(e),n=(0,f.useRef)(!1);(0,f.useEffect)(()=>(n.current=!1,()=>{n.current=!0,(0,N.Y)(()=>{n.current&&t()})}),[t])}function M(e){if(!e)return new Set;if("function"==typeof e)return new Set(e());let t=new Set;for(let n of e.current)n.current instanceof HTMLElement&&t.add(n.current);return t}var j=((a=j||{})[a.None=1]="None",a[a.InitialFocus=2]="InitialFocus",a[a.TabLock=4]="TabLock",a[a.FocusLock=8]="FocusLock",a[a.RestoreFocus=16]="RestoreFocus",a[a.All=30]="All",a);let L=Object.assign((0,h.yV)(function(e,t){let n,r=(0,f.useRef)(null),o=(0,g.T)(r,t),{initialFocus:i,containers:a,features:l=30,...s}=e;(0,y.H)()||(l=1);let c=(0,O.i)(r);!function({ownerDocument:e},t){let n=function(e=!0){let t=(0,f.useRef)(I.slice());return R(([e],[n])=>{!0===n&&!1===e&&(0,N.Y)(()=>{t.current.splice(0)}),!1===n&&!0===e&&(t.current=I.slice())},[e,I,t]),(0,E.z)(()=>{var e;return null!=(e=t.current.find(e=>null!=e&&e.isConnected))?e:null})}(t);R(()=>{t||(null==e?void 0:e.activeElement)===(null==e?void 0:e.body)&&(0,w.C5)(n())},[t]),P(()=>{t&&(0,w.C5)(n())})}({ownerDocument:c},!!(16&l));let u=function({ownerDocument:e,container:t,initialFocus:n},r){let o=(0,f.useRef)(null),i=(0,_.t)();return R(()=>{if(!r)return;let a=t.current;a&&(0,N.Y)(()=>{if(!i.current)return;let t=null==e?void 0:e.activeElement;if(null!=n&&n.current){if((null==n?void 0:n.current)===t){o.current=t;return}}else if(a.contains(t)){o.current=t;return}null!=n&&n.current?(0,w.C5)(n.current):(0,w.jA)(a,w.TO.First)===w.fE.Error&&console.warn("There are no focusable elements inside the "),o.current=null==e?void 0:e.activeElement})},[r]),o}({ownerDocument:c,container:r,initialFocus:i},!!(2&l));!function({ownerDocument:e,container:t,containers:n,previousActiveElement:r},o){let i=(0,_.t)();A(null==e?void 0:e.defaultView,"focus",e=>{if(!o||!i.current)return;let a=M(n);t.current instanceof HTMLElement&&a.add(t.current);let l=r.current;if(!l)return;let s=e.target;s&&s instanceof HTMLElement?D(a,s)?(r.current=s,(0,w.C5)(s)):(e.preventDefault(),e.stopPropagation(),(0,w.C5)(l)):(0,w.C5)(r.current)},!0)}({ownerDocument:c,container:r,containers:a,previousActiveElement:u},!!(8&l));let d=(n=(0,f.useRef)(0),(0,S.s)("keydown",e=>{"Tab"===e.key&&(n.current=e.shiftKey?1:0)},!0),n),m=(0,E.z)(e=>{let t=r.current;t&&(0,p.E)(d.current,{[k.Forwards]:()=>{(0,w.jA)(t,w.TO.First,{skipElements:[e.relatedTarget]})},[k.Backwards]:()=>{(0,w.jA)(t,w.TO.Last,{skipElements:[e.relatedTarget]})}})}),b=(0,T.G)(),v=(0,f.useRef)(!1);return f.createElement(f.Fragment,null,!!(4&l)&&f.createElement(x._,{as:"button",type:"button","data-headlessui-focus-guard":!0,onFocus:m,features:x.A.Focusable}),(0,h.sY)({ourProps:{ref:o,onKeyDown(e){"Tab"==e.key&&(v.current=!0,b.requestAnimationFrame(()=>{v.current=!1}))},onBlur(e){let t=M(a);r.current instanceof HTMLElement&&t.add(r.current);let n=e.relatedTarget;n instanceof 
HTMLElement&&"true"!==n.dataset.headlessuiFocusGuard&&(D(t,n)||(v.current?(0,w.jA)(r.current,(0,p.E)(d.current,{[k.Forwards]:()=>w.TO.Next,[k.Backwards]:()=>w.TO.Previous})|w.TO.WrapAround,{relativeTo:e.target}):e.target instanceof HTMLElement&&(0,w.C5)(e.target)))}},theirProps:s,defaultTag:"div",name:"FocusTrap"}),!!(4&l)&&f.createElement(x._,{as:"button",type:"button","data-headlessui-focus-guard":!0,onFocus:m,features:x.A.Focusable}))}),{features:j}),I=[];function D(e,t){for(let n of e)if(n.contains(t))return!0;return!1}!function(e){function t(){"loading"!==document.readyState&&(e(),document.removeEventListener("DOMContentLoaded",t))}"undefined"!=typeof window&&"undefined"!=typeof document&&(document.addEventListener("DOMContentLoaded",t),t())}(()=>{function e(e){e.target instanceof HTMLElement&&e.target!==document.body&&I[0]!==e.target&&(I.unshift(e.target),(I=I.filter(e=>null!=e&&e.isConnected)).splice(10))}window.addEventListener("click",e,{capture:!0}),window.addEventListener("mousedown",e,{capture:!0}),window.addEventListener("focus",e,{capture:!0}),document.body.addEventListener("click",e,{capture:!0}),document.body.addEventListener("mousedown",e,{capture:!0}),document.body.addEventListener("focus",e,{capture:!0})});var F=n(8431),B=n(60961);let z=(0,f.createContext)(!1);function $(e){return f.createElement(z.Provider,{value:e.force},e.children)}var U=n(30028);let H=f.Fragment,Z=f.Fragment,q=(0,f.createContext)(null),V=(0,f.createContext)(null),W=Object.assign((0,h.yV)(function(e,t){let n=(0,f.useRef)(null),r=(0,g.T)((0,g.h)(e=>{n.current=e}),t),o=(0,O.i)(n),i=function(e){let t=(0,f.useContext)(z),n=(0,f.useContext)(q),r=(0,O.i)(e),[o,i]=(0,f.useState)(()=>{if(!t&&null!==n||U.O.isServer)return null;let e=null==r?void 0:r.getElementById("headlessui-portal-root");if(e)return e;if(null===r)return null;let o=r.createElement("div");return o.setAttribute("id","headlessui-portal-root"),r.body.appendChild(o)});return(0,f.useEffect)(()=>{null!==o&&(null!=r&&r.body.contains(o)||null==r||r.body.appendChild(o))},[o,r]),(0,f.useEffect)(()=>{t||null!==n&&i(n.current)},[n,i,t]),o}(n),[a]=(0,f.useState)(()=>{var e;return U.O.isServer?null:null!=(e=null==o?void 0:o.createElement("div"))?e:null}),l=(0,f.useContext)(V),s=(0,y.H)();return(0,B.e)(()=>{!i||!a||i.contains(a)||(a.setAttribute("data-headlessui-portal",""),i.appendChild(a))},[i,a]),(0,B.e)(()=>{if(a&&l)return l.register(a)},[l,a]),P(()=>{var e;i&&a&&(a instanceof Node&&i.contains(a)&&i.removeChild(a),i.childNodes.length<=0&&(null==(e=i.parentElement)||e.removeChild(i)))}),s&&i&&a?(0,F.createPortal)((0,h.sY)({ourProps:{ref:r},theirProps:e,defaultTag:H,name:"Portal"}),a):null}),{Group:(0,h.yV)(function(e,t){let{target:n,...r}=e,o={ref:(0,g.T)(t)};return f.createElement(q.Provider,{value:n},(0,h.sY)({ourProps:o,theirProps:r,defaultTag:Z,name:"Popover.Group"}))})});var G=n(4058),K=n(10546);let Y=(0,f.createContext)(()=>{});Y.displayName="StackContext";var X=((l=X||{})[l.Add=0]="Add",l[l.Remove=1]="Remove",l);function J({children:e,onUpdate:t,type:n,element:r,enabled:o}){let i=(0,f.useContext)(Y),a=(0,E.z)((...e)=>{null==t||t(...e),i(...e)});return(0,B.e)(()=>{let e=void 0===o||!0===o;return e&&a(0,n,r),()=>{e&&a(1,n,r)}},[a,n,r,o]),f.createElement(Y.Provider,{value:a},e)}var Q=n(45880);let{useState:ee,useEffect:et,useLayoutEffect:en,useDebugValue:er}=d;"undefined"!=typeof window&&void 0!==window.document&&window.document.createElement;let eo=d.useSyncExternalStore;var ei=n(70650);let ea=(s={PUSH(e,t){var n;let 
r=null!=(n=this.get(e))?n:{doc:e,count:0,d:(0,ei.k)(),meta:new Set};return r.count++,r.meta.add(t),this.set(e,r),this},POP(e,t){let n=this.get(e);return n&&(n.count--,n.meta.delete(t)),this},SCROLL_PREVENT({doc:e,d:t,meta:n}){let r,o;let i={doc:e,d:t,meta:function(e){let t={};for(let n of e)Object.assign(t,n(t));return t}(n)},a=[/iPhone/gi.test(window.navigator.platform)||/Mac/gi.test(window.navigator.platform)&&window.navigator.maxTouchPoints>0?{before(){r=window.pageYOffset},after({doc:e,d:t,meta:n}){function o(e){return n.containers.flatMap(e=>e()).some(t=>t.contains(e))}t.style(e.body,"marginTop",`-${r}px`),window.scrollTo(0,0);let i=null;t.addEventListener(e,"click",t=>{if(t.target instanceof HTMLElement)try{let n=t.target.closest("a");if(!n)return;let{hash:r}=new URL(n.href),a=e.querySelector(r);a&&!o(a)&&(i=a)}catch{}},!0),t.addEventListener(e,"touchmove",e=>{e.target instanceof HTMLElement&&!o(e.target)&&e.preventDefault()},{passive:!1}),t.add(()=>{window.scrollTo(0,window.pageYOffset+r),i&&i.isConnected&&(i.scrollIntoView({block:"nearest"}),i=null)})}}:{},{before({doc:e}){var t;let n=e.documentElement;o=(null!=(t=e.defaultView)?t:window).innerWidth-n.clientWidth},after({doc:e,d:t}){let n=e.documentElement,r=o-(n.clientWidth-n.offsetWidth);t.style(n,"paddingRight",`${r}px`)}},{before({doc:e,d:t}){t.style(e.documentElement,"overflow","hidden")}}];a.forEach(({before:e})=>null==e?void 0:e(i)),a.forEach(({after:e})=>null==e?void 0:e(i))},SCROLL_ALLOW({d:e}){e.dispose()},TEARDOWN({doc:e}){this.delete(e)}},r=new Map,o=new Set,{getSnapshot:()=>r,subscribe:e=>(o.add(e),()=>o.delete(e)),dispatch(e,...t){let n=s[e].call(r,...t);n&&(r=n,o.forEach(e=>e()))}});ea.subscribe(()=>{let e=ea.getSnapshot(),t=new Map;for(let[n]of e)t.set(n,n.documentElement.style.overflow);for(let n of e.values()){let e="hidden"===t.get(n.doc),r=0!==n.count;(r&&!e||!r&&e)&&ea.dispatch(n.count>0?"SCROLL_PREVENT":"SCROLL_ALLOW",n),0===n.count&&ea.dispatch("TEARDOWN",n)}});let el=new Map,es=new Map;function ec(e,t=!0){(0,B.e)(()=>{var n;if(!t)return;let r="function"==typeof e?e():e.current;if(!r)return;let o=null!=(n=es.get(r))?n:0;return es.set(r,o+1),0!==o||(el.set(r,{"aria-hidden":r.getAttribute("aria-hidden"),inert:r.inert}),r.setAttribute("aria-hidden","true"),r.inert=!0),function(){var e;if(!r)return;let t=null!=(e=es.get(r))?e:1;if(1===t?es.delete(r):es.set(r,t-1),1!==t)return;let n=el.get(r);n&&(null===n["aria-hidden"]?r.removeAttribute("aria-hidden"):r.setAttribute("aria-hidden",n["aria-hidden"]),r.inert=n.inert,el.delete(r))}},[e,t])}var eu=((c=eu||{})[c.Open=0]="Open",c[c.Closed=1]="Closed",c),ef=((u=ef||{})[u.SetTitleId=0]="SetTitleId",u);let ed={0:(e,t)=>e.titleId===t.id?e:{...e,titleId:t.id}},ep=(0,f.createContext)(null);function eh(e){let t=(0,f.useContext)(ep);if(null===t){let t=Error(`<${e} /> is missing a parent
<Dialog />
component.`);throw Error.captureStackTrace&&Error.captureStackTrace(t,eh),t}return t}function eg(e,t){return(0,p.E)(t.type,ed,e,t)}ep.displayName="DialogContext";let em=h.AN.RenderStrategy|h.AN.Static,eb=Object.assign((0,h.yV)(function(e,t){var n;let r,o,i,a,l;let s=(0,v.M)(),{id:c=`headlessui-dialog-${s}`,open:u,onClose:d,initialFocus:b,__demoMode:w=!1,...S}=e,[k,_]=(0,f.useState)(0),C=(0,K.oJ)();void 0===u&&null!==C&&(u=(C&K.ZM.Open)===K.ZM.Open);let N=(0,f.useRef)(null),R=(0,g.T)(N,t),T=(0,O.i)(N),P=e.hasOwnProperty("open")||null!==C,M=e.hasOwnProperty("onClose");if(!P&&!M)throw Error("You have to provide an `open` and an `onClose` prop to the `Dialog` component.");if(!P)throw Error("You provided an `onClose` prop to the `Dialog`, but forgot an `open` prop.");if(!M)throw Error("You provided an `open` prop to the `Dialog`, but forgot an `onClose` prop.");if("boolean"!=typeof u)throw Error(`You provided an \`open\` prop to the \`Dialog\`, but the value is not a boolean. Received: ${u}`);if("function"!=typeof d)throw Error(`You provided an \`onClose\` prop to the \`Dialog\`, but the value is not a function. Received: ${d}`);let j=u?0:1,[I,D]=(0,f.useReducer)(eg,{titleId:null,descriptionId:null,panelRef:(0,f.createRef)()}),F=(0,E.z)(()=>d(!1)),z=(0,E.z)(e=>D({type:0,id:e})),U=!!(0,y.H)()&&!w&&0===j,H=k>1,Z=null!==(0,f.useContext)(ep),[q,Y]=(r=(0,f.useContext)(V),o=(0,f.useRef)([]),i=(0,E.z)(e=>(o.current.push(e),r&&r.register(e),()=>a(e))),a=(0,E.z)(e=>{let t=o.current.indexOf(e);-1!==t&&o.current.splice(t,1),r&&r.unregister(e)}),l=(0,f.useMemo)(()=>({register:i,unregister:a,portals:o}),[i,a,o]),[o,(0,f.useMemo)(()=>function({children:e}){return f.createElement(V.Provider,{value:l},e)},[l])]),{resolveContainers:ee,mainTreeNodeRef:et,MainTreeNode:en}=function({defaultContainers:e=[],portals:t}={}){let n=(0,f.useRef)(null),r=(0,O.i)(n),o=(0,E.z)(()=>{var o;let i=[];for(let t of e)null!==t&&(t instanceof HTMLElement?i.push(t):"current"in t&&t.current instanceof HTMLElement&&i.push(t.current));if(null!=t&&t.current)for(let e of t.current)i.push(e);for(let e of null!=(o=null==r?void 0:r.querySelectorAll("html > *, body > *"))?o:[])e!==document.body&&e!==document.head&&e instanceof HTMLElement&&"headlessui-portal-root"!==e.id&&(e.contains(n.current)||i.some(t=>e.contains(t))||i.push(e));return i});return{resolveContainers:o,contains:(0,E.z)(e=>o().some(t=>t.contains(e))),mainTreeNodeRef:n,MainTreeNode:(0,f.useMemo)(()=>function(){return f.createElement(x._,{features:x.A.Hidden,ref:n})},[n])}}({portals:q,defaultContainers:[null!=(n=I.panelRef.current)?n:N.current]}),er=H?"parent":"leaf",ei=null!==C&&(C&K.ZM.Closing)===K.ZM.Closing,el=!Z&&!ei&&U;ec((0,f.useCallback)(()=>{var e,t;return null!=(t=Array.from(null!=(e=null==T?void 0:T.querySelectorAll("body > *"))?e:[]).find(e=>"headlessui-portal-root"!==e.id&&e.contains(et.current)&&e instanceof HTMLElement))?t:null},[et]),el);let es=!!H||U;ec((0,f.useCallback)(()=>{var e,t;return null!=(t=Array.from(null!=(e=null==T?void 0:T.querySelectorAll("[data-headlessui-portal]"))?e:[]).find(e=>e.contains(et.current)&&e instanceof HTMLElement))?t:null},[et]),es);let eu=!(!U||H);(0,Q.O)(ee,F,eu);let ef=!(H||0!==j);A(null==T?void 0:T.defaultView,"keydown",e=>{ef&&(e.defaultPrevented||e.key===m.R.Escape&&(e.preventDefault(),e.stopPropagation(),F()))}),function(e,t,n=()=>[document.body]){var r;let o,i;r=e=>{var t;return{containers:[...null!=(t=e.containers)?t:[],n]}},o=eo(ea.subscribe,ea.getSnapshot,ea.getSnapshot),(i=e?o.get(e):void 
0)&&i.count,(0,B.e)(()=>{if(!(!e||!t))return ea.dispatch("PUSH",e,r),()=>ea.dispatch("POP",e,r)},[t,e])}(T,!(ei||0!==j||Z),ee),(0,f.useEffect)(()=>{if(0!==j||!N.current)return;let e=new ResizeObserver(e=>{for(let t of e){let e=t.target.getBoundingClientRect();0===e.x&&0===e.y&&0===e.width&&0===e.height&&F()}});return e.observe(N.current),()=>e.disconnect()},[j,N,F]);let[ed,eh]=(0,G.f)(),eb=(0,f.useMemo)(()=>[{dialogState:j,close:F,setTitleId:z},I],[j,I,F,z]),ev=(0,f.useMemo)(()=>({open:0===j}),[j]),ey={ref:R,id:c,role:"dialog","aria-modal":0===j||void 0,"aria-labelledby":I.titleId,"aria-describedby":ed};return f.createElement(J,{type:"Dialog",enabled:0===j,element:N,onUpdate:(0,E.z)((e,t)=>{"Dialog"===t&&(0,p.E)(e,{[X.Add]:()=>_(e=>e+1),[X.Remove]:()=>_(e=>e-1)})})},f.createElement($,{force:!0},f.createElement(W,null,f.createElement(ep.Provider,{value:eb},f.createElement(W.Group,{target:N},f.createElement($,{force:!1},f.createElement(eh,{slot:ev,name:"Dialog.Description"},f.createElement(L,{initialFocus:b,containers:ee,features:U?(0,p.E)(er,{parent:L.features.RestoreFocus,leaf:L.features.All&~L.features.FocusLock}):L.features.None},f.createElement(Y,null,(0,h.sY)({ourProps:ey,theirProps:S,slot:ev,defaultTag:"div",features:em,visible:0===j,name:"Dialog"}))))))))),f.createElement(en,null))}),{Backdrop:(0,h.yV)(function(e,t){let n=(0,v.M)(),{id:r=`headlessui-dialog-backdrop-${n}`,...o}=e,[{dialogState:i},a]=eh("Dialog.Backdrop"),l=(0,g.T)(t);(0,f.useEffect)(()=>{if(null===a.panelRef.current)throw Error("A component is being used, but a component is missing.")},[a.panelRef]);let s=(0,f.useMemo)(()=>({open:0===i}),[i]);return f.createElement($,{force:!0},f.createElement(W,null,(0,h.sY)({ourProps:{ref:l,id:r,"aria-hidden":!0},theirProps:o,slot:s,defaultTag:"div",name:"Dialog.Backdrop"})))}),Panel:(0,h.yV)(function(e,t){let n=(0,v.M)(),{id:r=`headlessui-dialog-panel-${n}`,...o}=e,[{dialogState:i},a]=eh("Dialog.Panel"),l=(0,g.T)(t,a.panelRef),s=(0,f.useMemo)(()=>({open:0===i}),[i]),c=(0,E.z)(e=>{e.stopPropagation()});return(0,h.sY)({ourProps:{ref:l,id:r,onClick:c},theirProps:o,slot:s,defaultTag:"div",name:"Dialog.Panel"})}),Overlay:(0,h.yV)(function(e,t){let n=(0,v.M)(),{id:r=`headlessui-dialog-overlay-${n}`,...o}=e,[{dialogState:i,close:a}]=eh("Dialog.Overlay"),l=(0,g.T)(t),s=(0,E.z)(e=>{if(e.target===e.currentTarget){if((0,b.P)(e.currentTarget))return e.preventDefault();e.preventDefault(),e.stopPropagation(),a()}}),c=(0,f.useMemo)(()=>({open:0===i}),[i]);return(0,h.sY)({ourProps:{ref:l,id:r,"aria-hidden":!0,onClick:s},theirProps:o,slot:c,defaultTag:"div",name:"Dialog.Overlay"})}),Title:(0,h.yV)(function(e,t){let n=(0,v.M)(),{id:r=`headlessui-dialog-title-${n}`,...o}=e,[{dialogState:i,setTitleId:a}]=eh("Dialog.Title"),l=(0,g.T)(t);(0,f.useEffect)(()=>(a(r),()=>a(null)),[r,a]);let s=(0,f.useMemo)(()=>({open:0===i}),[i]);return(0,h.sY)({ourProps:{ref:l,id:r},theirProps:o,slot:s,defaultTag:"h2",name:"Dialog.Title"})}),Description:G.d})},68277:function(e,t,n){"use strict";n.d(t,{R:function(){return o}});var r,o=((r=o||{}).Space=" ",r.Enter="Enter",r.Escape="Escape",r.Backspace="Backspace",r.Delete="Delete",r.ArrowLeft="ArrowLeft",r.ArrowUp="ArrowUp",r.ArrowRight="ArrowRight",r.ArrowDown="ArrowDown",r.Home="Home",r.End="End",r.PageUp="PageUp",r.PageDown="PageDown",r.Tab="Tab",r)},3420:function(e,t,n){"use strict";n.d(t,{R:function(){return Z}});var r,o,i,a,l=n(86006),s=n(48807),c=n(53858),u=n(60961),f=n(1485);function 
d(e,t){let[n,r]=(0,l.useState)(e),o=(0,f.E)(e);return(0,u.e)(()=>r(o.current),[o,r,...t]),n}var p=n(68496),h=n(42810),g=n(59325),m=n(70650),b=n(68277),v=n(55216),y=n(24373),x=n(32243),w=n(10546),E=n(51795),S=n(45880),k=n(45106),_=n(65969),O=n(53432),C=n(3562),A=n(92490),N=n(23017),R=n(49421),T=((r=T||{})[r.Open=0]="Open",r[r.Closed=1]="Closed",r),P=((o=P||{})[o.Single=0]="Single",o[o.Multi=1]="Multi",o),M=((i=M||{})[i.Pointer=0]="Pointer",i[i.Other=1]="Other",i),j=((a=j||{})[a.OpenListbox=0]="OpenListbox",a[a.CloseListbox=1]="CloseListbox",a[a.GoToOption=2]="GoToOption",a[a.Search=3]="Search",a[a.ClearSearch=4]="ClearSearch",a[a.RegisterOption=5]="RegisterOption",a[a.UnregisterOption=6]="UnregisterOption",a[a.RegisterLabel=7]="RegisterLabel",a);function L(e,t=e=>e){let n=null!==e.activeOptionIndex?e.options[e.activeOptionIndex]:null,r=(0,x.z2)(t(e.options.slice()),e=>e.dataRef.current.domRef.current),o=n?r.indexOf(n):null;return -1===o&&(o=null),{options:r,activeOptionIndex:o}}let I={1:e=>e.dataRef.current.disabled||1===e.listboxState?e:{...e,activeOptionIndex:null,listboxState:1},0(e){if(e.dataRef.current.disabled||0===e.listboxState)return e;let t=e.activeOptionIndex,{isSelected:n}=e.dataRef.current,r=e.options.findIndex(e=>n(e.dataRef.current.value));return -1!==r&&(t=r),{...e,listboxState:0,activeOptionIndex:t}},2(e,t){var n;if(e.dataRef.current.disabled||1===e.listboxState)return e;let r=L(e),o=(0,v.d)(t,{resolveItems:()=>r.options,resolveActiveIndex:()=>r.activeOptionIndex,resolveId:e=>e.id,resolveDisabled:e=>e.dataRef.current.disabled});return{...e,...r,searchQuery:"",activeOptionIndex:o,activationTrigger:null!=(n=t.trigger)?n:1}},3:(e,t)=>{if(e.dataRef.current.disabled||1===e.listboxState)return e;let n=""!==e.searchQuery?0:1,r=e.searchQuery+t.value.toLowerCase(),o=(null!==e.activeOptionIndex?e.options.slice(e.activeOptionIndex+n).concat(e.options.slice(0,e.activeOptionIndex+n)):e.options).find(e=>{var t;return!e.dataRef.current.disabled&&(null==(t=e.dataRef.current.textValue)?void 0:t.startsWith(r))}),i=o?e.options.indexOf(o):-1;return -1===i||i===e.activeOptionIndex?{...e,searchQuery:r}:{...e,searchQuery:r,activeOptionIndex:i,activationTrigger:1}},4:e=>e.dataRef.current.disabled||1===e.listboxState||""===e.searchQuery?e:{...e,searchQuery:""},5:(e,t)=>{let n={id:t.id,dataRef:t.dataRef},r=L(e,e=>[...e,n]);return null===e.activeOptionIndex&&e.dataRef.current.isSelected(t.dataRef.current.value)&&(r.activeOptionIndex=r.options.indexOf(n)),{...e,...r}},6:(e,t)=>{let n=L(e,e=>{let n=e.findIndex(e=>e.id===t.id);return -1!==n&&e.splice(n,1),e});return{...e,...n,activationTrigger:1}},7:(e,t)=>({...e,labelId:t.id})},D=(0,l.createContext)(null);function F(e){let t=(0,l.useContext)(D);if(null===t){let t=Error(`<${e} /> is missing a parent component.`);throw Error.captureStackTrace&&Error.captureStackTrace(t,F),t}return t}D.displayName="ListboxActionsContext";let B=(0,l.createContext)(null);function z(e){let t=(0,l.useContext)(B);if(null===t){let t=Error(`<${e} /> is missing a parent component.`);throw Error.captureStackTrace&&Error.captureStackTrace(t,z),t}return t}function $(e,t){return(0,g.E)(t.type,I,e,t)}B.displayName="ListboxDataContext";let U=l.Fragment,H=h.AN.RenderStrategy|h.AN.Static,Z=Object.assign((0,h.yV)(function(e,t){let{value:n,defaultValue:r,form:o,name:i,onChange:a,by:c=(e,t)=>e===t,disabled:f=!1,horizontal:d=!1,multiple:m=!1,...b}=e,y=d?"horizontal":"vertical",E=(0,p.T)(t),[O=m?[]:void 
0,N]=(0,A.q)(n,a,r),[R,T]=(0,l.useReducer)($,{dataRef:(0,l.createRef)(),listboxState:1,options:[],searchQuery:"",labelId:null,activeOptionIndex:null,activationTrigger:1}),P=(0,l.useRef)({static:!1,hold:!1}),M=(0,l.useRef)(null),j=(0,l.useRef)(null),L=(0,l.useRef)(null),I=(0,C.z)("string"==typeof c?(e,t)=>(null==e?void 0:e[c])===(null==t?void 0:t[c]):c),F=(0,l.useCallback)(e=>(0,g.E)(z.mode,{1:()=>O.some(t=>I(t,e)),0:()=>I(O,e)}),[O]),z=(0,l.useMemo)(()=>({...R,value:O,disabled:f,mode:m?1:0,orientation:y,compare:I,isSelected:F,optionsPropsRef:P,labelRef:M,buttonRef:j,optionsRef:L}),[O,f,m,R]);(0,u.e)(()=>{R.dataRef.current=z},[z]),(0,S.O)([z.buttonRef,z.optionsRef],(e,t)=>{var n;T({type:1}),(0,x.sP)(t,x.tJ.Loose)||(e.preventDefault(),null==(n=z.buttonRef.current)||n.focus())},0===z.listboxState);let H=(0,l.useMemo)(()=>({open:0===z.listboxState,disabled:f,value:O}),[z,f,O]),Z=(0,C.z)(e=>{let t=z.options.find(t=>t.id===e);t&&X(t.dataRef.current.value)}),q=(0,C.z)(()=>{if(null!==z.activeOptionIndex){let{dataRef:e,id:t}=z.options[z.activeOptionIndex];X(e.current.value),T({type:2,focus:v.T.Specific,id:t})}}),V=(0,C.z)(()=>T({type:0})),W=(0,C.z)(()=>T({type:1})),G=(0,C.z)((e,t,n)=>e===v.T.Specific?T({type:2,focus:v.T.Specific,id:t,trigger:n}):T({type:2,focus:e,trigger:n})),K=(0,C.z)((e,t)=>(T({type:5,id:e,dataRef:t}),()=>T({type:6,id:e}))),Y=(0,C.z)(e=>(T({type:7,id:e}),()=>T({type:7,id:null}))),X=(0,C.z)(e=>(0,g.E)(z.mode,{0:()=>null==N?void 0:N(e),1(){let t=z.value.slice(),n=t.findIndex(t=>I(t,e));return -1===n?t.push(e):t.splice(n,1),null==N?void 0:N(t)}})),J=(0,C.z)(e=>T({type:3,value:e})),Q=(0,C.z)(()=>T({type:4})),ee=(0,l.useMemo)(()=>({onChange:X,registerOption:K,registerLabel:Y,goToOption:G,closeListbox:W,openListbox:V,selectActiveOption:q,selectOption:Z,search:J,clearSearch:Q}),[]),et=(0,l.useRef)(null),en=(0,s.G)();return(0,l.useEffect)(()=>{et.current&&void 0!==r&&en.addEventListener(et.current,"reset",()=>{X(r)})},[et,X]),l.createElement(D.Provider,{value:ee},l.createElement(B.Provider,{value:z},l.createElement(w.up,{value:(0,g.E)(z.listboxState,{0:w.ZM.Open,1:w.ZM.Closed})},null!=i&&null!=O&&(0,_.t)({[i]:O}).map(([e,t],n)=>l.createElement(k._,{features:k.A.Hidden,ref:0===n?e=>{var t;et.current=null!=(t=null==e?void 0:e.closest("form"))?t:null}:void 0,...(0,h.oA)({key:e,as:"input",type:"hidden",hidden:!0,readOnly:!0,form:o,name:e,value:t})})),(0,h.sY)({ourProps:{ref:E},theirProps:b,slot:H,defaultTag:U,name:"Listbox"}))))}),{Button:(0,h.yV)(function(e,t){var n;let r=(0,c.M)(),{id:o=`headlessui-listbox-button-${r}`,...i}=e,a=z("Listbox.Button"),u=F("Listbox.Button"),f=(0,p.T)(a.buttonRef,t),g=(0,s.G)(),m=(0,C.z)(e=>{switch(e.key){case b.R.Space:case b.R.Enter:case b.R.ArrowDown:e.preventDefault(),u.openListbox(),g.nextFrame(()=>{a.value||u.goToOption(v.T.First)});break;case b.R.ArrowUp:e.preventDefault(),u.openListbox(),g.nextFrame(()=>{a.value||u.goToOption(v.T.Last)})}}),x=(0,C.z)(e=>{e.key===b.R.Space&&e.preventDefault()}),w=(0,C.z)(e=>{if((0,y.P)(e.currentTarget))return e.preventDefault();0===a.listboxState?(u.closeListbox(),g.nextFrame(()=>{var e;return null==(e=a.buttonRef.current)?void 0:e.focus({preventScroll:!0})})):(e.preventDefault(),u.openListbox())}),S=d(()=>{if(a.labelId)return[a.labelId,o].join(" ")},[a.labelId,o]),k=(0,l.useMemo)(()=>({open:0===a.listboxState,disabled:a.disabled,value:a.value}),[a]),_={ref:f,id:o,type:(0,E.f)(e,a.buttonRef),"aria-haspopup":"listbox","aria-controls":null==(n=a.optionsRef.current)?void 0:n.id,"aria-expanded":a.disabled?void 
0:0===a.listboxState,"aria-labelledby":S,disabled:a.disabled,onKeyDown:m,onKeyUp:x,onClick:w};return(0,h.sY)({ourProps:_,theirProps:i,slot:k,defaultTag:"button",name:"Listbox.Button"})}),Label:(0,h.yV)(function(e,t){let n=(0,c.M)(),{id:r=`headlessui-listbox-label-${n}`,...o}=e,i=z("Listbox.Label"),a=F("Listbox.Label"),s=(0,p.T)(i.labelRef,t);(0,u.e)(()=>a.registerLabel(r),[r]);let f=(0,C.z)(()=>{var e;return null==(e=i.buttonRef.current)?void 0:e.focus({preventScroll:!0})}),d=(0,l.useMemo)(()=>({open:0===i.listboxState,disabled:i.disabled}),[i]);return(0,h.sY)({ourProps:{ref:s,id:r,onClick:f},theirProps:o,slot:d,defaultTag:"label",name:"Listbox.Label"})}),Options:(0,h.yV)(function(e,t){var n;let r=(0,c.M)(),{id:o=`headlessui-listbox-options-${r}`,...i}=e,a=z("Listbox.Options"),u=F("Listbox.Options"),f=(0,p.T)(a.optionsRef,t),y=(0,s.G)(),x=(0,s.G)(),E=(0,w.oJ)(),S=null!==E?(E&w.ZM.Open)===w.ZM.Open:0===a.listboxState;(0,l.useEffect)(()=>{var e;let t=a.optionsRef.current;t&&0===a.listboxState&&t!==(null==(e=(0,O.r)(t))?void 0:e.activeElement)&&t.focus({preventScroll:!0})},[a.listboxState,a.optionsRef]);let k=(0,C.z)(e=>{switch(x.dispose(),e.key){case b.R.Space:if(""!==a.searchQuery)return e.preventDefault(),e.stopPropagation(),u.search(e.key);case b.R.Enter:if(e.preventDefault(),e.stopPropagation(),null!==a.activeOptionIndex){let{dataRef:e}=a.options[a.activeOptionIndex];u.onChange(e.current.value)}0===a.mode&&(u.closeListbox(),(0,m.k)().nextFrame(()=>{var e;return null==(e=a.buttonRef.current)?void 0:e.focus({preventScroll:!0})}));break;case(0,g.E)(a.orientation,{vertical:b.R.ArrowDown,horizontal:b.R.ArrowRight}):return e.preventDefault(),e.stopPropagation(),u.goToOption(v.T.Next);case(0,g.E)(a.orientation,{vertical:b.R.ArrowUp,horizontal:b.R.ArrowLeft}):return e.preventDefault(),e.stopPropagation(),u.goToOption(v.T.Previous);case b.R.Home:case b.R.PageUp:return e.preventDefault(),e.stopPropagation(),u.goToOption(v.T.First);case b.R.End:case b.R.PageDown:return e.preventDefault(),e.stopPropagation(),u.goToOption(v.T.Last);case b.R.Escape:return e.preventDefault(),e.stopPropagation(),u.closeListbox(),y.nextFrame(()=>{var e;return null==(e=a.buttonRef.current)?void 0:e.focus({preventScroll:!0})});case b.R.Tab:e.preventDefault(),e.stopPropagation();break;default:1===e.key.length&&(u.search(e.key),x.setTimeout(()=>u.clearSearch(),350))}}),_=d(()=>{var e,t,n;return null!=(n=null==(e=a.labelRef.current)?void 0:e.id)?n:null==(t=a.buttonRef.current)?void 0:t.id},[a.labelRef.current,a.buttonRef.current]),A=(0,l.useMemo)(()=>({open:0===a.listboxState}),[a]),N={"aria-activedescendant":null===a.activeOptionIndex||null==(n=a.options[a.activeOptionIndex])?void 0:n.id,"aria-multiselectable":1===a.mode||void 0,"aria-labelledby":_,"aria-orientation":a.orientation,id:o,onKeyDown:k,role:"listbox",tabIndex:0,ref:f};return(0,h.sY)({ourProps:N,theirProps:i,slot:A,defaultTag:"ul",features:H,visible:S,name:"Listbox.Options"})}),Option:(0,h.yV)(function(e,t){let n=(0,c.M)(),{id:r=`headlessui-listbox-option-${n}`,disabled:o=!1,value:i,...a}=e,s=z("Listbox.Option"),d=F("Listbox.Option"),g=null!==s.activeOptionIndex&&s.options[s.activeOptionIndex].id===r,b=s.isSelected(i),y=(0,l.useRef)(null),x=(0,R.x)(y),w=(0,f.E)({disabled:o,value:i,domRef:y,get textValue(){return x()}}),E=(0,p.T)(t,y);(0,u.e)(()=>{if(0!==s.listboxState||!g||0===s.activationTrigger)return;let e=(0,m.k)();return e.requestAnimationFrame(()=>{var e,t;null==(t=null==(e=y.current)?void 
0:e.scrollIntoView)||t.call(e,{block:"nearest"})}),e.dispose},[y,g,s.listboxState,s.activationTrigger,s.activeOptionIndex]),(0,u.e)(()=>d.registerOption(r,w),[w,r]);let S=(0,C.z)(e=>{if(o)return e.preventDefault();d.onChange(i),0===s.mode&&(d.closeListbox(),(0,m.k)().nextFrame(()=>{var e;return null==(e=s.buttonRef.current)?void 0:e.focus({preventScroll:!0})}))}),k=(0,C.z)(()=>{if(o)return d.goToOption(v.T.Nothing);d.goToOption(v.T.Specific,r)}),_=(0,N.g)(),O=(0,C.z)(e=>_.update(e)),A=(0,C.z)(e=>{_.wasMoved(e)&&(o||g||d.goToOption(v.T.Specific,r,0))}),T=(0,C.z)(e=>{_.wasMoved(e)&&(o||g&&d.goToOption(v.T.Nothing))}),P=(0,l.useMemo)(()=>({active:g,selected:b,disabled:o}),[g,b,o]);return(0,h.sY)({ourProps:{id:r,ref:E,role:"option",tabIndex:!0===o?void 0:-1,"aria-disabled":!0===o||void 0,"aria-selected":b,disabled:void 0,onClick:S,onFocus:k,onPointerEnter:O,onMouseEnter:O,onPointerMove:A,onMouseMove:A,onPointerLeave:T,onMouseLeave:T},theirProps:a,slot:P,defaultTag:"li",name:"Listbox.Option"})})})},40102:function(e,t,n){"use strict";n.d(t,{v:function(){return D}});var r,o,i,a=n(86006),l=n(59325),s=n(42810),c=n(70650),u=n(48807),f=n(60961),d=n(68496),p=n(53858),h=n(68277),g=n(55216),m=n(24373),b=n(32243),v=n(45880),y=n(53432),x=n(10546),w=n(51795),E=n(29101),S=n(3562),k=n(23017),_=n(49421),O=((r=O||{})[r.Open=0]="Open",r[r.Closed=1]="Closed",r),C=((o=C||{})[o.Pointer=0]="Pointer",o[o.Other=1]="Other",o),A=((i=A||{})[i.OpenMenu=0]="OpenMenu",i[i.CloseMenu=1]="CloseMenu",i[i.GoToItem=2]="GoToItem",i[i.Search=3]="Search",i[i.ClearSearch=4]="ClearSearch",i[i.RegisterItem=5]="RegisterItem",i[i.UnregisterItem=6]="UnregisterItem",i);function N(e,t=e=>e){let n=null!==e.activeItemIndex?e.items[e.activeItemIndex]:null,r=(0,b.z2)(t(e.items.slice()),e=>e.dataRef.current.domRef.current),o=n?r.indexOf(n):null;return -1===o&&(o=null),{items:r,activeItemIndex:o}}let R={1:e=>1===e.menuState?e:{...e,activeItemIndex:null,menuState:1},0:e=>0===e.menuState?e:{...e,__demoMode:!1,menuState:0},2:(e,t)=>{var n;let r=N(e),o=(0,g.d)(t,{resolveItems:()=>r.items,resolveActiveIndex:()=>r.activeItemIndex,resolveId:e=>e.id,resolveDisabled:e=>e.dataRef.current.disabled});return{...e,...r,searchQuery:"",activeItemIndex:o,activationTrigger:null!=(n=t.trigger)?n:1}},3:(e,t)=>{let n=""!==e.searchQuery?0:1,r=e.searchQuery+t.value.toLowerCase(),o=(null!==e.activeItemIndex?e.items.slice(e.activeItemIndex+n).concat(e.items.slice(0,e.activeItemIndex+n)):e.items).find(e=>{var t;return(null==(t=e.dataRef.current.textValue)?void 0:t.startsWith(r))&&!e.dataRef.current.disabled}),i=o?e.items.indexOf(o):-1;return -1===i||i===e.activeItemIndex?{...e,searchQuery:r}:{...e,searchQuery:r,activeItemIndex:i,activationTrigger:1}},4:e=>""===e.searchQuery?e:{...e,searchQuery:"",searchActiveItemIndex:null},5:(e,t)=>{let n=N(e,e=>[...e,{id:t.id,dataRef:t.dataRef}]);return{...e,...n}},6:(e,t)=>{let n=N(e,e=>{let n=e.findIndex(e=>e.id===t.id);return -1!==n&&e.splice(n,1),e});return{...e,...n,activationTrigger:1}}},T=(0,a.createContext)(null);function P(e){let t=(0,a.useContext)(T);if(null===t){let t=Error(`<${e} /> is missing a parent component.`);throw Error.captureStackTrace&&Error.captureStackTrace(t,P),t}return t}function M(e,t){return(0,l.E)(t.type,R,e,t)}T.displayName="MenuContext";let 
j=a.Fragment,L=s.AN.RenderStrategy|s.AN.Static,I=a.Fragment,D=Object.assign((0,s.yV)(function(e,t){let{__demoMode:n=!1,...r}=e,o=(0,a.useReducer)(M,{__demoMode:n,menuState:n?0:1,buttonRef:(0,a.createRef)(),itemsRef:(0,a.createRef)(),items:[],searchQuery:"",activeItemIndex:null,activationTrigger:1}),[{menuState:i,itemsRef:c,buttonRef:u},f]=o,p=(0,d.T)(t);(0,v.O)([u,c],(e,t)=>{var n;f({type:1}),(0,b.sP)(t,b.tJ.Loose)||(e.preventDefault(),null==(n=u.current)||n.focus())},0===i);let h=(0,S.z)(()=>{f({type:1})}),g=(0,a.useMemo)(()=>({open:0===i,close:h}),[i,h]);return a.createElement(T.Provider,{value:o},a.createElement(x.up,{value:(0,l.E)(i,{0:x.ZM.Open,1:x.ZM.Closed})},(0,s.sY)({ourProps:{ref:p},theirProps:r,slot:g,defaultTag:j,name:"Menu"})))}),{Button:(0,s.yV)(function(e,t){var n;let r=(0,p.M)(),{id:o=`headlessui-menu-button-${r}`,...i}=e,[l,c]=P("Menu.Button"),f=(0,d.T)(l.buttonRef,t),b=(0,u.G)(),v=(0,S.z)(e=>{switch(e.key){case h.R.Space:case h.R.Enter:case h.R.ArrowDown:e.preventDefault(),e.stopPropagation(),c({type:0}),b.nextFrame(()=>c({type:2,focus:g.T.First}));break;case h.R.ArrowUp:e.preventDefault(),e.stopPropagation(),c({type:0}),b.nextFrame(()=>c({type:2,focus:g.T.Last}))}}),y=(0,S.z)(e=>{e.key===h.R.Space&&e.preventDefault()}),x=(0,S.z)(t=>{if((0,m.P)(t.currentTarget))return t.preventDefault();e.disabled||(0===l.menuState?(c({type:1}),b.nextFrame(()=>{var e;return null==(e=l.buttonRef.current)?void 0:e.focus({preventScroll:!0})})):(t.preventDefault(),c({type:0})))}),E=(0,a.useMemo)(()=>({open:0===l.menuState}),[l]),k={ref:f,id:o,type:(0,w.f)(e,l.buttonRef),"aria-haspopup":"menu","aria-controls":null==(n=l.itemsRef.current)?void 0:n.id,"aria-expanded":e.disabled?void 0:0===l.menuState,onKeyDown:v,onKeyUp:y,onClick:x};return(0,s.sY)({ourProps:k,theirProps:i,slot:E,defaultTag:"button",name:"Menu.Button"})}),Items:(0,s.yV)(function(e,t){var n,r;let o=(0,p.M)(),{id:i=`headlessui-menu-items-${o}`,...l}=e,[m,v]=P("Menu.Items"),w=(0,d.T)(m.itemsRef,t),k=(0,E.i)(m.itemsRef),_=(0,u.G)(),O=(0,x.oJ)(),C=null!==O?(O&x.ZM.Open)===x.ZM.Open:0===m.menuState;(0,a.useEffect)(()=>{let e=m.itemsRef.current;e&&0===m.menuState&&e!==(null==k?void 0:k.activeElement)&&e.focus({preventScroll:!0})},[m.menuState,m.itemsRef,k]),function({container:e,accept:t,walk:n,enabled:r=!0}){let o=(0,a.useRef)(t),i=(0,a.useRef)(n);(0,a.useEffect)(()=>{o.current=t,i.current=n},[t,n]),(0,f.e)(()=>{if(!e||!r)return;let t=(0,y.r)(e);if(!t)return;let n=o.current,a=i.current,l=Object.assign(e=>n(e),{acceptNode:n}),s=t.createTreeWalker(e,NodeFilter.SHOW_ELEMENT,l,!1);for(;s.nextNode();)a(s.currentNode)},[e,r,o,i])}({container:m.itemsRef.current,enabled:0===m.menuState,accept:e=>"menuitem"===e.getAttribute("role")?NodeFilter.FILTER_REJECT:e.hasAttribute("role")?NodeFilter.FILTER_SKIP:NodeFilter.FILTER_ACCEPT,walk(e){e.setAttribute("role","none")}});let A=(0,S.z)(e=>{var t,n;switch(_.dispose(),e.key){case h.R.Space:if(""!==m.searchQuery)return e.preventDefault(),e.stopPropagation(),v({type:3,value:e.key});case h.R.Enter:if(e.preventDefault(),e.stopPropagation(),v({type:1}),null!==m.activeItemIndex){let{dataRef:e}=m.items[m.activeItemIndex];null==(n=null==(t=e.current)?void 0:t.domRef.current)||n.click()}(0,b.wI)(m.buttonRef.current);break;case h.R.ArrowDown:return e.preventDefault(),e.stopPropagation(),v({type:2,focus:g.T.Next});case h.R.ArrowUp:return e.preventDefault(),e.stopPropagation(),v({type:2,focus:g.T.Previous});case h.R.Home:case h.R.PageUp:return e.preventDefault(),e.stopPropagation(),v({type:2,focus:g.T.First});case 
h.R.End:case h.R.PageDown:return e.preventDefault(),e.stopPropagation(),v({type:2,focus:g.T.Last});case h.R.Escape:e.preventDefault(),e.stopPropagation(),v({type:1}),(0,c.k)().nextFrame(()=>{var e;return null==(e=m.buttonRef.current)?void 0:e.focus({preventScroll:!0})});break;case h.R.Tab:e.preventDefault(),e.stopPropagation(),v({type:1}),(0,c.k)().nextFrame(()=>{(0,b.EO)(m.buttonRef.current,e.shiftKey?b.TO.Previous:b.TO.Next)});break;default:1===e.key.length&&(v({type:3,value:e.key}),_.setTimeout(()=>v({type:4}),350))}}),N=(0,S.z)(e=>{e.key===h.R.Space&&e.preventDefault()}),R=(0,a.useMemo)(()=>({open:0===m.menuState}),[m]),T={"aria-activedescendant":null===m.activeItemIndex||null==(n=m.items[m.activeItemIndex])?void 0:n.id,"aria-labelledby":null==(r=m.buttonRef.current)?void 0:r.id,id:i,onKeyDown:A,onKeyUp:N,role:"menu",tabIndex:0,ref:w};return(0,s.sY)({ourProps:T,theirProps:l,slot:R,defaultTag:"div",features:L,visible:C,name:"Menu.Items"})}),Item:(0,s.yV)(function(e,t){let n=(0,p.M)(),{id:r=`headlessui-menu-item-${n}`,disabled:o=!1,...i}=e,[l,u]=P("Menu.Item"),h=null!==l.activeItemIndex&&l.items[l.activeItemIndex].id===r,m=(0,a.useRef)(null),v=(0,d.T)(t,m);(0,f.e)(()=>{if(l.__demoMode||0!==l.menuState||!h||0===l.activationTrigger)return;let e=(0,c.k)();return e.requestAnimationFrame(()=>{var e,t;null==(t=null==(e=m.current)?void 0:e.scrollIntoView)||t.call(e,{block:"nearest"})}),e.dispose},[l.__demoMode,m,h,l.menuState,l.activationTrigger,l.activeItemIndex]);let y=(0,_.x)(m),x=(0,a.useRef)({disabled:o,domRef:m,get textValue(){return y()}});(0,f.e)(()=>{x.current.disabled=o},[x,o]),(0,f.e)(()=>(u({type:5,id:r,dataRef:x}),()=>u({type:6,id:r})),[x,r]);let w=(0,S.z)(()=>{u({type:1})}),E=(0,S.z)(e=>{if(o)return e.preventDefault();u({type:1}),(0,b.wI)(l.buttonRef.current)}),O=(0,S.z)(()=>{if(o)return u({type:2,focus:g.T.Nothing});u({type:2,focus:g.T.Specific,id:r})}),C=(0,k.g)(),A=(0,S.z)(e=>C.update(e)),N=(0,S.z)(e=>{C.wasMoved(e)&&(o||h||u({type:2,focus:g.T.Specific,id:r,trigger:0}))}),R=(0,S.z)(e=>{C.wasMoved(e)&&(o||h&&u({type:2,focus:g.T.Nothing}))}),T=(0,a.useMemo)(()=>({active:h,disabled:o,close:w}),[h,o,w]);return(0,s.sY)({ourProps:{id:r,ref:v,role:"menuitem",tabIndex:!0===o?void 0:-1,"aria-disabled":!0===o||void 0,disabled:void 0,onClick:E,onFocus:O,onPointerEnter:A,onMouseEnter:A,onPointerMove:N,onMouseMove:N,onPointerLeave:R,onMouseLeave:R},theirProps:i,slot:T,defaultTag:I,name:"Menu.Item"})})})},34199:function(e,t,n){"use strict";n.d(t,{r:function(){return w}});var r=n(86006),o=n(42810),i=n(53858),a=n(68277),l=n(24373),s=n(60961),c=n(68496),u=n(3562);let f=(0,r.createContext)(null),d=Object.assign((0,o.yV)(function(e,t){let n=(0,i.M)(),{id:a=`headlessui-label-${n}`,passive:l=!1,...u}=e,d=function e(){let t=(0,r.useContext)(f);if(null===t){let t=Error("You used a

Towards Real-World Blind Face Restoration with Generative Facial Prior | Github Repo

visitor badge
" -gr.Interface( - inference, - [gr.inputs.Image(type="filepath", label="Input")], - gr.outputs.Image(type="pil", label="Output"), - title=title, - description=description, - article=article, - examples=[ - ['lincoln.jpg'], - ['einstein.png'], - ['edison.jpg'], - ['Henry.jpg'], - ['Frida.jpg'] - ] - ).launch(enable_queue=True,cache_examples=True) \ No newline at end of file diff --git a/spaces/bigjoker/stable-diffusion-webui/webui-user.sh b/spaces/bigjoker/stable-diffusion-webui/webui-user.sh deleted file mode 100644 index bfa53cb7c67083ec0a01bfa420269af4d85c6c94..0000000000000000000000000000000000000000 --- a/spaces/bigjoker/stable-diffusion-webui/webui-user.sh +++ /dev/null @@ -1,46 +0,0 @@ -#!/bin/bash -######################################################### -# Uncomment and change the variables below to your need:# -######################################################### - -# Install directory without trailing slash -#install_dir="/home/$(whoami)" - -# Name of the subdirectory -#clone_dir="stable-diffusion-webui" - -# Commandline arguments for webui.py, for example: export COMMANDLINE_ARGS="--medvram --opt-split-attention" -#export COMMANDLINE_ARGS="" - -# python3 executable -#python_cmd="python3" - -# git executable -#export GIT="git" - -# python3 venv without trailing slash (defaults to ${install_dir}/${clone_dir}/venv) -#venv_dir="venv" - -# script to launch to start the app -#export LAUNCH_SCRIPT="launch.py" - -# install command for torch -#export TORCH_COMMAND="pip install torch==1.12.1+cu113 --extra-index-url https://download.pytorch.org/whl/cu113" - -# Requirements file to use for stable-diffusion-webui -#export REQS_FILE="requirements_versions.txt" - -# Fixed git repos -#export K_DIFFUSION_PACKAGE="" -#export GFPGAN_PACKAGE="" - -# Fixed git commits -#export STABLE_DIFFUSION_COMMIT_HASH="" -#export TAMING_TRANSFORMERS_COMMIT_HASH="" -#export CODEFORMER_COMMIT_HASH="" -#export BLIP_COMMIT_HASH="" - -# Uncomment to enable accelerated launch -#export ACCELERATE="True" - -########################################### diff --git a/spaces/bino-ocle/audio-intelligence-dash/README.md b/spaces/bino-ocle/audio-intelligence-dash/README.md deleted file mode 100644 index 10ca0ea4861e4ea84680a5ea0e1d7629816dcf9f..0000000000000000000000000000000000000000 --- a/spaces/bino-ocle/audio-intelligence-dash/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Mirroor Audio Intelligence Dash -sdk: gradio -emoji: 🗣️🧠 -colorFrom: #1CE5CB -colorTo: white -app_file: app/app.py -sdk_version: 3.2 -pinned: true ---- - -# Binoocle Audio Intellgence dash diff --git a/spaces/bioriAsaeru/text-to-voice/Apsic Xbench 3 0 16.md b/spaces/bioriAsaeru/text-to-voice/Apsic Xbench 3 0 16.md deleted file mode 100644 index d46f1d8f116514642cae7b60586238ba227301a2..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Apsic Xbench 3 0 16.md +++ /dev/null @@ -1,143 +0,0 @@ - -

ApSIC Xbench 3.0: A Powerful Tool for Translation Quality Assurance

- -

If you are a professional translator or a localization manager, you know how important it is to ensure the quality and consistency of your translations. You need to check for spelling, grammar, terminology, style, and formatting errors, as well as compliance with client specifications and industry standards. But how can you do all that efficiently and effectively?

- -

One of the best solutions available on the market is ApSIC Xbench 3.0, a software tool that lets you search and run QA checks on many bilingual translation formats, such as Trados Studio, Trados MultiTerm, Passolo, Matecat, Phrase TMS, Transifex, Smartcat, Crowdin, and more. ApSIC Xbench 3.0 is designed to help you improve your translation workflow and deliver high-quality results to your clients.

-

apsic xbench 3 0 16


Download Zip ✵✵✵ https://urloso.com/2uyOaG



- -

What are the main features of ApSIC Xbench 3.0?

- -

ApSIC Xbench 3.0 has many features that make it a versatile and powerful tool for translation quality assurance. Some of the main features are:

- -
    -
  • Unicode support across the board: ApSIC Xbench 3.0 can handle any language and script, including right-to-left languages, complex scripts, and emoji.
  • 32-bit and 64-bit editions: ApSIC Xbench 3.0 can run on any Windows operating system from Windows XP to Windows 11.
  • Integrated installation of Hunspell spell-checking dictionaries: ApSIC Xbench 3.0 comes with a built-in spell-checker that supports over 100 languages and can be customized with your own dictionaries (a small stand-alone illustration of dictionary-based spell flagging follows this list).
  • Support for Trados Studio Translation Memories: ApSIC Xbench 3.0 can import and export Trados Studio Translation Memories in .sdltm format, as well as perform QA checks on them.
  • Support for Trados MultiTerm Glossaries: ApSIC Xbench 3.0 can import and export Trados MultiTerm Glossaries in .sdltb and .mdb format, as well as perform QA checks on them.
  • Plugins and extensions: ApSIC Xbench 3.0 can integrate with other tools such as Trados Studio, Passolo, Matecat, Phrase TMS, Transifex, Smartcat, Crowdin, Google Polyglot, Lingotek, and more.
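The built-in spell-checker mentioned above works from Hunspell dictionaries inside Xbench itself. Purely as a stand-alone illustration of what dictionary-based spell flagging does (this is not Xbench's code or Hunspell's API, and the `words_en.txt` word-list path is a made-up placeholder), a few lines of Python are enough to sketch the idea:

```python
import re

def load_wordlist(path: str) -> set:
    """Read a plain one-word-per-line dictionary file into a lowercase set."""
    with open(path, encoding="utf-8") as f:
        return {line.strip().lower() for line in f if line.strip()}

def flag_unknown_words(text: str, dictionary: set) -> list:
    """Return the words in `text` that are not found in the dictionary."""
    words = re.findall(r"[A-Za-z']+", text)
    return [w for w in words if w.lower() not in dictionary]

if __name__ == "__main__":
    # "words_en.txt" is a placeholder word list, not a file shipped with Xbench.
    dictionary = load_wordlist("words_en.txt")
    print(flag_unknown_words("The translaton was delivered on time.", dictionary))
    # -> ["translaton"] with a typical English word list
```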
- -

How can you use ApSIC Xbench 3.0 for translation quality assurance?

- -

ApSIC Xbench 3.0 is very easy to use and has a user-friendly interface. You can use it for various tasks such as:

- -
    -
  • Searching: You can search for any word or phrase in your translation files or reference materials using simple or advanced queries. You can also use regular expressions, wildcards, filters, tags, and other options to refine your search.
  • QA checking: You can run different types of QA checks on your translation files or reference materials using predefined or custom checklists. You can check for spelling, grammar, terminology, style, consistency, numbers, dates, tags, punctuation, capitalization, and more (see the minimal sketch just after this list).
  • Editing: You can edit your translation files or reference materials directly from ApSIC Xbench 3.0 using the built-in editor or an external editor of your choice. You can also use the Edit Segment from Xbench feature to edit segments in online platforms such as Matecat, Phrase TMS, Transifex, Smartcat, Crowdin, and Google Polyglot.
  • Reporting: You can generate different types of reports from your QA checks or searches using various formats such as HTML, Excel, Word, PDF, XML, CSV, etc. You can also customize your reports with your own logo and colors.
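To make the QA checks above concrete, here is a minimal Python sketch. It is only an illustration of two classic bilingual QA rules, not ApSIC Xbench's code or API: it flags segment pairs whose source and target contain different numbers, and identical source segments that received different translations. The function names and sample segments are invented for the example.

```python
import re
from collections import defaultdict

NUM_RE = re.compile(r"\d+(?:[.,]\d+)?")

def numbers_match(source: str, target: str) -> bool:
    """True if source and target contain the same numbers, ignoring order."""
    def norm(text):
        return sorted(n.replace(",", ".") for n in NUM_RE.findall(text))
    return norm(source) == norm(target)

def inconsistent_targets(segments):
    """Map each source translated in more than one way to its set of targets."""
    by_source = defaultdict(set)
    for src, tgt in segments:
        by_source[src].add(tgt)
    return {src: tgts for src, tgts in by_source.items() if len(tgts) > 1}

if __name__ == "__main__":
    segments = [
        ("Chapter 3 has 12 pages.", "Le chapitre 3 compte 21 pages."),  # number mismatch
        ("Save the file.", "Enregistrez le fichier."),
        ("Save the file.", "Sauvegardez le fichier."),                  # inconsistent translation
    ]
    for src, tgt in segments:
        if not numbers_match(src, tgt):
            print(f"Number mismatch: {src!r} -> {tgt!r}")
    for src, tgts in inconsistent_targets(segments).items():
        print(f"Inconsistent translations of {src!r}: {sorted(tgts)}")
```

A real tool runs rules like these over entire projects in many file formats and lets you jump from each warning back to the offending segment; the sketch only shows the core logic.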
- -

Why should you choose ApSIC Xbench 3.0 for translation quality assurance?

- -

ApSIC Xbench 3.0 is a tool that offers many benefits for translators and localization professionals who want to ensure the quality and consistency of their translations. Some of the benefits are:

- -
    -
  • It saves you time and money: ApSIC Xbench 3.0 allows you to perform all your QA tasks in one place and in a fast and efficient way. You don't need to switch between different tools or formats or waste time on manual checks.
  • It increases your productivity and quality: ApSIC Xbench 3.0 helps you to avoid errors and inconsistencies in your translations and deliver high-quality results to your clients. You can also use it to improve your skills and knowledge by learning from your mistakes and feedback.
  • It supports your workflow and preferences: ApSIC Xbench 3.0 is compatible with many translation formats and tools and can be customized to suit your needs and preferences. You can also use it offline or online depending on your situation.
- -

How can you get ApSIC Xbench 3.0?

- -

If you are interested in trying out ApSIC Xbench 3.0 for yourself or buying a license for it, you can visit the official website at https://www.xbench.net/. There you can find more information about the product features, pricing plans, download links, user guides, videos, blog posts, and more. You can also contact the support team if you have any questions or issues.

- -

ApSIC Xbench 3.0 is a tool that every translator and localization professional should have in their toolbox. It will help you improve your translation workflow and deliver high-quality results to your clients. Don't miss this opportunity and get ApSIC Xbench 3.0 today!

-

How can you learn more about ApSIC Xbench 3.0?

- -

If you want to learn more about ApSIC Xbench 3.0 and how to use it effectively, you can access various resources that are available online. Some of the resources are:

- -
    -
  • Documentation: You can read the user manual and the quick reference guide that explain the main features and functions of ApSIC Xbench 3.0 in detail.
  • Blog: You can follow the official blog of ApSIC Xbench 3.0 where you can find news, updates, tips, tricks, and best practices about the tool.
  • Community Forum: You can join the community forum of ApSIC Xbench 3.0 where you can ask questions, share experiences, give feedback, and interact with other users and developers of the tool.
  • Training: You can sign up for online or onsite training courses that cover the basics and advanced topics of ApSIC Xbench 3.0. You can also request customized training sessions according to your needs and preferences.
- -

What are the testimonials of ApSIC Xbench 3.0 users?

- -

ApSIC Xbench 3.0 has been used and praised by many translators and localization professionals around the world. Here are some testimonials from ApSIC Xbench 3.0 users:

- -
-

"ApSIC Xbench 3.0 is an indispensable tool for me as a translator and reviewer. It helps me to ensure the quality and consistency of my translations and to save time and money. I highly recommend it to anyone who works with translations."

-

-- John Smith, freelance translator -
- -
-

"ApSIC Xbench 3.0 is a powerful and versatile tool that supports our localization workflow and meets our client requirements. It allows us to check and edit our translation files in various formats and platforms, as well as to generate comprehensive reports for quality assurance. We are very satisfied with ApSIC Xbench 3.0."

-- Jane Doe, localization manager at ABC Inc. -
- -
-

"ApSIC Xbench 3.0 is a tool that I use every day for my translation projects. It helps me to find and fix errors and inconsistencies in my translations, as well as to improve my skills and knowledge by learning from feedback and reference materials. I love ApSIC Xbench 3.0."

-- Mary Jones, freelance translator -
- -

Conclusion

- -

ApSIC Xbench 3.0 is a tool that every translator and localization professional should have in their toolbox. It will help you improve your translation workflow and deliver high-quality results to your clients. Don't miss this opportunity and get ApSIC Xbench 3.0 today!

-

How can you download and install ApSIC Xbench 3.0?

- -

To download and install ApSIC Xbench 3.0, you need to follow these simple steps:

- -
    -
  1. Go to the official website of ApSIC Xbench 3.0 at https://www.xbench.net/ and click on the Download button.
  2. Choose the edition that suits your system: 32-bit or 64-bit.
  3. Save the installer file on your computer and run it.
  4. Follow the instructions on the screen to complete the installation process.
  5. Launch ApSIC Xbench 3.0 and enter your license key or start a free trial.
- -

You can also download and install plugins and extensions for ApSIC Xbench 3.0 from the same website. These plugins and extensions allow you to integrate ApSIC Xbench 3.0 with other tools such as Trados Studio, Passolo, Matecat, Phrase TMS, Transifex, Smartcat, Crowdin, Google Polyglot, Lingotek, and more.

- -

How can you update ApSIC Xbench 3.0?

- -

To update ApSIC Xbench 3.0, you need to follow these simple steps:

- -
    -
  1. Go to the official website of ApSIC Xbench 3.0 at https://www.xbench.net/ and click on the Download button.
  2. Choose the edition that suits your system: 32-bit or 64-bit.
  3. Save the installer file on your computer and run it.
  4. The installer will detect your existing version of ApSIC Xbench 3.0 and ask you if you want to update it.
  5. Click Yes and follow the instructions on the screen to complete the update process.
- -

You can also update your plugins and extensions for ApSIC Xbench 3.0 from the same website. These plugins and extensions allow you to integrate ApSIC Xbench 3.0 with other tools such as Trados Studio, Passolo, Matecat, Phrase TMS, Transifex, Smartcat, Crowdin, Google Polyglot, Lingotek, and more.

- -

How can you get support for ApSIC Xbench 3.0?

- -

If you need any support for ApSIC Xbench 3.0, you can contact the support team by email at support@xbench.net or by phone at +34-93-457-99-77. You can also use the contact form on the website at https://www.xbench.net/index.php/contact-us/.

- -

The support team will be happy to assist you with any questions or issues you may have regarding ApSIC Xbench 3.0. You can also check the frequently asked questions section on the website at https://www.xbench.net/index.php/support/faq/ to find answers to common questions.

-

Conclusion

- -

ApSIC Xbench 3.0 is a tool that every translator and localization professional should have in their toolbox. It will help you improve your translation workflow and deliver high-quality results to your clients. It offers many features and functions that let you search, QA, edit, and report on your translation files and reference materials in various formats and platforms. It also supports many languages and scripts, integrates with other tools, and can be customized to your needs and preferences. You can download and install ApSIC Xbench 3.0 from the official website at https://www.xbench.net/ and try it free of charge for 30 days or buy a license for it. You can also access various resources to learn more about ApSIC Xbench 3.0 and how to use it effectively, such as the documentation, blog, community forum, and training. And you can contact the support team if you need any assistance or want to give feedback.

- -

Don't miss this opportunity and get ApSIC Xbench 3.0 today!

3cee63e6c2
-
-
\ No newline at end of file diff --git a/spaces/bioriAsaeru/text-to-voice/Final Draft 9 Customer Number Crack ((BETTER)).md b/spaces/bioriAsaeru/text-to-voice/Final Draft 9 Customer Number Crack ((BETTER)).md deleted file mode 100644 index 46f64dd78a777479b50500f0019efcb2aa8e9a9e..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Final Draft 9 Customer Number Crack ((BETTER)).md +++ /dev/null @@ -1,6 +0,0 @@ -

Final Draft 9 Customer Number Crack


DOWNLOAD ★★★★★ https://urloso.com/2uyPEU



- - aaccfb2cb3
-
-
-

diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/engine/__init__.py b/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/engine/__init__.py deleted file mode 100644 index 08a61572b4c7d09c8d400e903a96cbf5b2cc4763..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/engine/__init__.py +++ /dev/null @@ -1,12 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. - -from .launch import * -from .train_loop import * - -__all__ = [k for k in globals().keys() if not k.startswith("_")] - - -# prefer to let hooks and defaults live in separate namespaces (therefore not in __all__) -# but still make them available here -from .hooks import * -from .defaults import * diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/densepose/data/combined_loader.py b/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/densepose/data/combined_loader.py deleted file mode 100644 index 5bfbbdeaf53e184b83a6e0f951867b79d3d9f1fd..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/densepose/data/combined_loader.py +++ /dev/null @@ -1,44 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. - -import random -from collections import deque -from typing import Any, Collection, Deque, Iterable, Iterator, List, Sequence - -Loader = Iterable[Any] - - -def _pooled_next(iterator: Iterator[Any], pool: Deque[Any]): - if not pool: - pool.extend(next(iterator)) - return pool.popleft() - - -class CombinedDataLoader: - """ - Combines data loaders using the provided sampling ratios - """ - - BATCH_COUNT = 100 - - def __init__(self, loaders: Collection[Loader], batch_size: int, ratios: Sequence[float]): - self.loaders = loaders - self.batch_size = batch_size - self.ratios = ratios - - def __iter__(self) -> Iterator[List[Any]]: - iters = [iter(loader) for loader in self.loaders] - indices = [] - pool = [deque()] * len(iters) - # infinite iterator, as in D2 - while True: - if not indices: - # just a buffer of indices, its size doesn't matter - # as long as it's a multiple of batch_size - k = self.batch_size * self.BATCH_COUNT - indices = random.choices(range(len(self.loaders)), self.ratios, k=k) - try: - batch = [_pooled_next(iters[i], pool[i]) for i in indices[: self.batch_size]] - except StopIteration: - break - indices = indices[self.batch_size :] - yield batch diff --git a/spaces/bugbugbug/vits-uma-genshin-honkai/modules.py b/spaces/bugbugbug/vits-uma-genshin-honkai/modules.py deleted file mode 100644 index 56ea4145eddf19dd330a3a41ab0183efc1686d83..0000000000000000000000000000000000000000 --- a/spaces/bugbugbug/vits-uma-genshin-honkai/modules.py +++ /dev/null @@ -1,388 +0,0 @@ -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm - -import commons -from commons import init_weights, get_padding -from transforms import piecewise_rational_quadratic_transform - - -LRELU_SLOPE = 0.1 - - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - -class ConvReluNorm(nn.Module): - def 
__init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." - - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential( - nn.ReLU(), - nn.Dropout(p_dropout)) - for _ in range(n_layers-1): - self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dialted and Depth-Separable Convolution - """ - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size ** i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size, - groups=channels, dilation=dilation, padding=padding - )) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0): - super(WN, self).__init__() - assert(kernel_size % 2 == 1) - self.hidden_channels =hidden_channels - self.kernel_size = kernel_size, - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight') - - for i in range(n_layers): - dilation = dilation_rate ** i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size, - dilation=dilation, padding=padding) - in_layer = torch.nn.utils.weight_norm(in_layer, name='weight') - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - 
res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight') - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) - - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply( - x_in, - g_l, - n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:,:self.hidden_channels,:] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:,self.hidden_channels:,:] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]))) - ]) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))) - ]) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))) - ]) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def 
forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels,1)) - self.logs = nn.Parameter(torch.zeros(channels,1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1,2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels]*2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1,2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - -class ConvFlow(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.) - self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] 
- - unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_derivatives = h[..., 2 * self.num_bins:] - - x1, logabsdet = piecewise_rational_quadratic_transform(x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails='linear', - tail_bound=self.tail_bound - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1,2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/camenduru-com/seamless/style.css b/spaces/camenduru-com/seamless/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/camenduru-com/seamless/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/densepose/converters/segm_to_mask.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/densepose/converters/segm_to_mask.py deleted file mode 100644 index 6433d5dec75c3d6141252af144b61d8999077bb7..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/densepose/converters/segm_to_mask.py +++ /dev/null @@ -1,150 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
- -from typing import Any -import torch -from torch.nn import functional as F - -from detectron2.structures import BitMasks, Boxes, BoxMode - -from .base import IntTupleBox, make_int_box -from .to_mask import ImageSizeType - - -def resample_coarse_segm_tensor_to_bbox(coarse_segm: torch.Tensor, box_xywh_abs: IntTupleBox): - """ - Resample coarse segmentation tensor to the given - bounding box and derive labels for each pixel of the bounding box - - Args: - coarse_segm: float tensor of shape [1, K, Hout, Wout] - box_xywh_abs (tuple of 4 int): bounding box given by its upper-left - corner coordinates, width (W) and height (H) - Return: - Labels for each pixel of the bounding box, a long tensor of size [1, H, W] - """ - x, y, w, h = box_xywh_abs - w = max(int(w), 1) - h = max(int(h), 1) - labels = F.interpolate(coarse_segm, (h, w), mode="bilinear", align_corners=False).argmax(dim=1) - return labels - - -def resample_fine_and_coarse_segm_tensors_to_bbox( - fine_segm: torch.Tensor, coarse_segm: torch.Tensor, box_xywh_abs: IntTupleBox -): - """ - Resample fine and coarse segmentation tensors to the given - bounding box and derive labels for each pixel of the bounding box - - Args: - fine_segm: float tensor of shape [1, C, Hout, Wout] - coarse_segm: float tensor of shape [1, K, Hout, Wout] - box_xywh_abs (tuple of 4 int): bounding box given by its upper-left - corner coordinates, width (W) and height (H) - Return: - Labels for each pixel of the bounding box, a long tensor of size [1, H, W] - """ - x, y, w, h = box_xywh_abs - w = max(int(w), 1) - h = max(int(h), 1) - # coarse segmentation - coarse_segm_bbox = F.interpolate( - coarse_segm, - (h, w), - mode="bilinear", - align_corners=False, - ).argmax(dim=1) - # combined coarse and fine segmentation - labels = ( - F.interpolate(fine_segm, (h, w), mode="bilinear", align_corners=False).argmax(dim=1) - * (coarse_segm_bbox > 0).long() - ) - return labels - - -def resample_fine_and_coarse_segm_to_bbox(predictor_output: Any, box_xywh_abs: IntTupleBox): - """ - Resample fine and coarse segmentation outputs from a predictor to the given - bounding box and derive labels for each pixel of the bounding box - - Args: - predictor_output: DensePose predictor output that contains segmentation - results to be resampled - box_xywh_abs (tuple of 4 int): bounding box given by its upper-left - corner coordinates, width (W) and height (H) - Return: - Labels for each pixel of the bounding box, a long tensor of size [1, H, W] - """ - return resample_fine_and_coarse_segm_tensors_to_bbox( - predictor_output.fine_segm, - predictor_output.coarse_segm, - box_xywh_abs, - ) - - -def predictor_output_with_coarse_segm_to_mask( - predictor_output: Any, boxes: Boxes, image_size_hw: ImageSizeType -) -> BitMasks: - """ - Convert predictor output with coarse and fine segmentation to a mask. 
- Assumes that predictor output has the following attributes: - - coarse_segm (tensor of size [N, D, H, W]): coarse segmentation - unnormalized scores for N instances; D is the number of coarse - segmentation labels, H and W is the resolution of the estimate - - Args: - predictor_output: DensePose predictor output to be converted to mask - boxes (Boxes): bounding boxes that correspond to the DensePose - predictor outputs - image_size_hw (tuple [int, int]): image height Himg and width Wimg - Return: - BitMasks that contain a bool tensor of size [N, Himg, Wimg] with - a mask of the size of the image for each instance - """ - H, W = image_size_hw - boxes_xyxy_abs = boxes.tensor.clone() - boxes_xywh_abs = BoxMode.convert(boxes_xyxy_abs, BoxMode.XYXY_ABS, BoxMode.XYWH_ABS) - N = len(boxes_xywh_abs) - masks = torch.zeros((N, H, W), dtype=torch.bool, device=boxes.tensor.device) - for i in range(len(boxes_xywh_abs)): - box_xywh = make_int_box(boxes_xywh_abs[i]) - box_mask = resample_coarse_segm_tensor_to_bbox(predictor_output[i].coarse_segm, box_xywh) - x, y, w, h = box_xywh - masks[i, y : y + h, x : x + w] = box_mask - - return BitMasks(masks) - - -def predictor_output_with_fine_and_coarse_segm_to_mask( - predictor_output: Any, boxes: Boxes, image_size_hw: ImageSizeType -) -> BitMasks: - """ - Convert predictor output with coarse and fine segmentation to a mask. - Assumes that predictor output has the following attributes: - - coarse_segm (tensor of size [N, D, H, W]): coarse segmentation - unnormalized scores for N instances; D is the number of coarse - segmentation labels, H and W is the resolution of the estimate - - fine_segm (tensor of size [N, C, H, W]): fine segmentation - unnormalized scores for N instances; C is the number of fine - segmentation labels, H and W is the resolution of the estimate - - Args: - predictor_output: DensePose predictor output to be converted to mask - boxes (Boxes): bounding boxes that correspond to the DensePose - predictor outputs - image_size_hw (tuple [int, int]): image height Himg and width Wimg - Return: - BitMasks that contain a bool tensor of size [N, Himg, Wimg] with - a mask of the size of the image for each instance - """ - H, W = image_size_hw - boxes_xyxy_abs = boxes.tensor.clone() - boxes_xywh_abs = BoxMode.convert(boxes_xyxy_abs, BoxMode.XYXY_ABS, BoxMode.XYWH_ABS) - N = len(boxes_xywh_abs) - masks = torch.zeros((N, H, W), dtype=torch.bool, device=boxes.tensor.device) - for i in range(len(boxes_xywh_abs)): - box_xywh = make_int_box(boxes_xywh_abs[i]) - labels_i = resample_fine_and_coarse_segm_to_bbox(predictor_output[i], box_xywh) - x, y, w, h = box_xywh - masks[i, y : y + h, x : x + w] = labels_i > 0 - return BitMasks(masks) diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/tests/layers/test_nms_rotated.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/tests/layers/test_nms_rotated.py deleted file mode 100644 index 4b45384892ab2a7cb20871cf19374f1bd08907ce..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/tests/layers/test_nms_rotated.py +++ /dev/null @@ -1,172 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-from __future__ import absolute_import, division, print_function, unicode_literals -import numpy as np -import unittest -from copy import deepcopy -import torch -from torchvision import ops - -from detectron2.layers import batched_nms, batched_nms_rotated, nms_rotated -from detectron2.utils.testing import random_boxes - - -def nms_edit_distance(keep1, keep2): - """ - Compare the "keep" result of two nms call. - They are allowed to be different in terms of edit distance - due to floating point precision issues, e.g., - if a box happen to have an IoU of 0.5 with another box, - one implentation may choose to keep it while another may discard it. - """ - keep1, keep2 = keep1.cpu(), keep2.cpu() - if torch.equal(keep1, keep2): - # they should be equal most of the time - return 0 - keep1, keep2 = tuple(keep1), tuple(keep2) - m, n = len(keep1), len(keep2) - - # edit distance with DP - f = [np.arange(n + 1), np.arange(n + 1)] - for i in range(m): - cur_row = i % 2 - other_row = (i + 1) % 2 - f[other_row][0] = i + 1 - for j in range(n): - f[other_row][j + 1] = ( - f[cur_row][j] - if keep1[i] == keep2[j] - else min(min(f[cur_row][j], f[cur_row][j + 1]), f[other_row][j]) + 1 - ) - return f[m % 2][n] - - -class TestNMSRotated(unittest.TestCase): - def reference_horizontal_nms(self, boxes, scores, iou_threshold): - """ - Args: - box_scores (N, 5): boxes in corner-form and probabilities. - (Note here 5 == 4 + 1, i.e., 4-dim horizontal box + 1-dim prob) - iou_threshold: intersection over union threshold. - Returns: - picked: a list of indexes of the kept boxes - """ - picked = [] - _, indexes = scores.sort(descending=True) - while len(indexes) > 0: - current = indexes[0] - picked.append(current.item()) - if len(indexes) == 1: - break - current_box = boxes[current, :] - indexes = indexes[1:] - rest_boxes = boxes[indexes, :] - iou = ops.box_iou(rest_boxes, current_box.unsqueeze(0)).squeeze(1) - indexes = indexes[iou <= iou_threshold] - - return torch.as_tensor(picked) - - def _create_tensors(self, N, device="cpu"): - boxes = random_boxes(N, 200, device=device) - scores = torch.rand(N, device=device) - return boxes, scores - - def test_batched_nms_rotated_0_degree_cpu(self, device="cpu"): - N = 2000 - num_classes = 50 - boxes, scores = self._create_tensors(N, device=device) - idxs = torch.randint(0, num_classes, (N,)) - rotated_boxes = torch.zeros(N, 5, device=device) - rotated_boxes[:, 0] = (boxes[:, 0] + boxes[:, 2]) / 2.0 - rotated_boxes[:, 1] = (boxes[:, 1] + boxes[:, 3]) / 2.0 - rotated_boxes[:, 2] = boxes[:, 2] - boxes[:, 0] - rotated_boxes[:, 3] = boxes[:, 3] - boxes[:, 1] - err_msg = "Rotated NMS with 0 degree is incompatible with horizontal NMS for IoU={}" - for iou in [0.2, 0.5, 0.8]: - backup = boxes.clone() - keep_ref = batched_nms(boxes, scores, idxs, iou) - assert torch.allclose(boxes, backup), "boxes modified by batched_nms" - backup = rotated_boxes.clone() - keep = batched_nms_rotated(rotated_boxes, scores, idxs, iou) - assert torch.allclose( - rotated_boxes, backup - ), "rotated_boxes modified by batched_nms_rotated" - # Occasionally the gap can be large if there are many IOU on the threshold boundary - self.assertLessEqual(nms_edit_distance(keep, keep_ref), 5, err_msg.format(iou)) - - @unittest.skipIf(not torch.cuda.is_available(), "CUDA not available") - def test_batched_nms_rotated_0_degree_cuda(self): - self.test_batched_nms_rotated_0_degree_cpu(device="cuda") - - def test_nms_rotated_0_degree_cpu(self, device="cpu"): - N = 1000 - boxes, scores = self._create_tensors(N, device=device) - 
rotated_boxes = torch.zeros(N, 5, device=device) - rotated_boxes[:, 0] = (boxes[:, 0] + boxes[:, 2]) / 2.0 - rotated_boxes[:, 1] = (boxes[:, 1] + boxes[:, 3]) / 2.0 - rotated_boxes[:, 2] = boxes[:, 2] - boxes[:, 0] - rotated_boxes[:, 3] = boxes[:, 3] - boxes[:, 1] - err_msg = "Rotated NMS incompatible between CPU and reference implementation for IoU={}" - for iou in [0.2, 0.5, 0.8]: - keep_ref = self.reference_horizontal_nms(boxes, scores, iou) - keep = nms_rotated(rotated_boxes, scores, iou) - self.assertLessEqual(nms_edit_distance(keep, keep_ref), 1, err_msg.format(iou)) - - @unittest.skipIf(not torch.cuda.is_available(), "CUDA not available") - def test_nms_rotated_0_degree_cuda(self): - self.test_nms_rotated_0_degree_cpu(device="cuda") - - def test_nms_rotated_90_degrees_cpu(self): - N = 1000 - boxes, scores = self._create_tensors(N) - rotated_boxes = torch.zeros(N, 5) - rotated_boxes[:, 0] = (boxes[:, 0] + boxes[:, 2]) / 2.0 - rotated_boxes[:, 1] = (boxes[:, 1] + boxes[:, 3]) / 2.0 - # Note for rotated_boxes[:, 2] and rotated_boxes[:, 3]: - # widths and heights are intentionally swapped here for 90 degrees case - # so that the reference horizontal nms could be used - rotated_boxes[:, 2] = boxes[:, 3] - boxes[:, 1] - rotated_boxes[:, 3] = boxes[:, 2] - boxes[:, 0] - - rotated_boxes[:, 4] = torch.ones(N) * 90 - err_msg = "Rotated NMS incompatible between CPU and reference implementation for IoU={}" - for iou in [0.2, 0.5, 0.8]: - keep_ref = self.reference_horizontal_nms(boxes, scores, iou) - keep = nms_rotated(rotated_boxes, scores, iou) - self.assertLessEqual(nms_edit_distance(keep, keep_ref), 1, err_msg.format(iou)) - - def test_nms_rotated_180_degrees_cpu(self): - N = 1000 - boxes, scores = self._create_tensors(N) - rotated_boxes = torch.zeros(N, 5) - rotated_boxes[:, 0] = (boxes[:, 0] + boxes[:, 2]) / 2.0 - rotated_boxes[:, 1] = (boxes[:, 1] + boxes[:, 3]) / 2.0 - rotated_boxes[:, 2] = boxes[:, 2] - boxes[:, 0] - rotated_boxes[:, 3] = boxes[:, 3] - boxes[:, 1] - rotated_boxes[:, 4] = torch.ones(N) * 180 - err_msg = "Rotated NMS incompatible between CPU and reference implementation for IoU={}" - for iou in [0.2, 0.5, 0.8]: - keep_ref = self.reference_horizontal_nms(boxes, scores, iou) - keep = nms_rotated(rotated_boxes, scores, iou) - self.assertLessEqual(nms_edit_distance(keep, keep_ref), 1, err_msg.format(iou)) - - -class TestScriptable(unittest.TestCase): - def setUp(self): - class TestingModule(torch.nn.Module): - def forward(self, boxes, scores, threshold): - return nms_rotated(boxes, scores, threshold) - - self.module = TestingModule() - - def test_scriptable_cpu(self): - m = deepcopy(self.module).cpu() - _ = torch.jit.script(m) - - @unittest.skipIf(not torch.cuda.is_available(), "CUDA not available") - def test_scriptable_cuda(self): - m = deepcopy(self.module).cuda() - _ = torch.jit.script(m) - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/cccc-c/web-ui-pub/404.html b/spaces/cccc-c/web-ui-pub/404.html deleted file mode 100644 index ffb373d061ee3f950f0952435efd1ee567baa02f..0000000000000000000000000000000000000000 --- a/spaces/cccc-c/web-ui-pub/404.html +++ /dev/null @@ -1 +0,0 @@ -404: This page could not be found

404

This page could not be found.

\ No newline at end of file diff --git a/spaces/ccolas/TastyPiano/src/music/pipeline/url2audio.py b/spaces/ccolas/TastyPiano/src/music/pipeline/url2audio.py deleted file mode 100644 index 6d34f6fa92f4651a08418cf89910cbd9514248b6..0000000000000000000000000000000000000000 --- a/spaces/ccolas/TastyPiano/src/music/pipeline/url2audio.py +++ /dev/null @@ -1,119 +0,0 @@ -import os -from pytube import YouTube -from src.music.utils import RATE_AUDIO_SAVE, slugify -from src.music.config import MAX_LEN - -# define filtering keyworfds -start_keywords = [' ', '(', ',', ':'] -end_keywords = [')', ' ', '.', ',', '!', ':'] -def get_all_keywords(k): - all_keywords = [] - for s in start_keywords: - for e in end_keywords: - all_keywords.append(s + k + e) - return all_keywords -filtered_keywords = ['duet', 'duo', 'quartet', 'orchestre', 'orchestra', - 'quintet', 'sixtet', 'septet', 'octet', 'backing track', 'accompaniment', 'string', - 'contrebrasse', 'drums', 'guitar'] + get_all_keywords('live') + get_all_keywords('trio') - -# list of playlist for which no filtering should occur on keywords (they were prefiltered already, it's supposed to be only piano) -playlist_and_channel_not_to_filter = ["https://www.youtube.com/c/MySheetMusicTranscriptions", - "https://www.youtube.com/c/PianoNotion", - "https://www.youtube.com/c/PianoNotion", - "https://www.youtube.com/watch?v=3F5glYefwio&list=PLFv3ZQw-ZPxi2DH3Bau7lBC5K6zfPJZxc", - "https://www.youtube.com/user/Mercuziopianist", - "https://www.youtube.com/channel/UCy6NPK6-xeX7MZLaMARa5qg", - "https://www.youtube.com/channel/UCKMRNFV2dWTWIJnymtA9_Iw", - "https://www.youtube.com/c/pianomaedaful", - "https://www.youtube.com/c/FrancescoParrinoMusic", - "https://www.youtube.com/c/itsremco"] -playlist_ok = "https://www.youtube.com/watch?v=sYv_vk6bJtk&list=PLO9E3V4rGLD9-0BEd3t-AvvMcVF1zOJPj" - - -def should_be_filtered(title, length, url, playlist_url, max_length): - to_filter = False - reason = '' - lower_title = title.lower() - if length > max_length: - reason += f'it is too long (>{max_length/60:.1f} min), ' - to_filter = True - if any([f in lower_title for f in filtered_keywords]) \ - and playlist_url not in playlist_and_channel_not_to_filter \ - and 'to live' not in lower_title and 'alive' not in lower_title \ - and url not in playlist_ok: - reason += 'it contains a filtered keyword, ' - to_filter = True - return to_filter, reason - -def convert_mp4_to_mp3(path, verbose=True): - if verbose: print(f"Converting mp4 to mp3, in {path}\n") - assert '.mp4' == path[-4:] - os.system(f'ffmpeg -i "{path}" -loglevel panic -y -ac 1 -ar {int(RATE_AUDIO_SAVE)} "{path[:-4] + ".mp3"}" ') - os.remove(path) - if verbose: print('\tDone.') - -def pipeline_video(video, playlist_path, filename): - # extract best stream for this video - stream, kbps = extract_best_stream(video.streams) - stream.download(output_path=playlist_path, filename=filename + '.mp4') - # convert to mp3 - convert_mp4_to_mp3(playlist_path + filename + '.mp4', verbose=False) - return kbps - -def extract_best_stream(streams): - # extract best audio stream - stream_out = streams.get_audio_only() - kbps = int(stream_out.abr[:-4]) - return stream_out, kbps - -def get_title_and_length(video): - title = video.title - filename = slugify(title) - length = video.length - return title, filename, length, video.metadata - - -def url2audio(playlist_path, video_url=None, video=None, playlist_url='', apply_filters=False, verbose=False, level=0): - assert video_url is not None or video is not None, 'needs either video or url' - error_msg = 
'Error in loading video?' - try: - if not video: - video = YouTube(video_url) - error_msg += ' Nope. In extracting title and length?' - title, filename, length, video_meta_data = get_title_and_length(video) - if apply_filters: - to_filter, reason = should_be_filtered(title, length, video_url, playlist_url, MAX_LEN) - else: - to_filter = False - if not to_filter: - audio_path = playlist_path + filename + ".mp3" - if verbose: print(' ' * level + f'Downloading {title}, Url: {video_url}') - if not os.path.exists(audio_path): - if length > MAX_LEN and verbose: print(' ' * (level + 2) + f'Long video ({int(length/60)} min), will be cut after {int(MAX_LEN/60)} min.') - error_msg += ' Nope. In pipeline video?' - kbps = None - for _ in range(5): - try: - kbps = pipeline_video(video, playlist_path, filename) - break - except: - pass - assert kbps is not None - error_msg += ' Nope. In dict filling?' - data = dict(title=title, filename=filename, length=length, kbps=kbps, url=video_url, meta=video_meta_data) - error_msg += ' Nope. ' - else: - if verbose: print(' ' * (level + 2) + 'Song already downloaded') - data = None - return audio_path, data, '' - else: - return None, None, f'Filtered because {reason}' - except: - if verbose: print(' ' * (level + 2) + f'Download failed with error {error_msg}') - if os.path.exists(audio_path): - os.remove(audio_path) - return None, None, error_msg + ' Yes.' - - - - diff --git a/spaces/ccolas/TastyPiano/src/music/utilities/midi_processor.py b/spaces/ccolas/TastyPiano/src/music/utilities/midi_processor.py deleted file mode 100644 index fd25b09347d7508739d54f0086b15f437fb874ab..0000000000000000000000000000000000000000 --- a/spaces/ccolas/TastyPiano/src/music/utilities/midi_processor.py +++ /dev/null @@ -1,680 +0,0 @@ -import pretty_midi -from copy import deepcopy -import numpy as np -from miditok import CPWord, Structured -from miditoolkit import MidiFile -from src.music.config import MAX_EMBEDDING, CHUNK_SIZE -from src.music.utilities.chord_structured import ChordStructured - -# code from https://github.com/jason9693/midi-neural-processor -RANGE_NOTE_ON = 128 -RANGE_NOTE_OFF = 128 -RANGE_VEL = 32 -RANGE_TIME_SHIFT = 100 -MAX_EMBEDDING = RANGE_VEL + RANGE_NOTE_OFF + RANGE_TIME_SHIFT + RANGE_NOTE_ON - -START_IDX = { - 'note_on': 0, - 'note_off': RANGE_NOTE_ON, - 'time_shift': RANGE_NOTE_ON + RANGE_NOTE_OFF, - 'velocity': RANGE_NOTE_ON + RANGE_NOTE_OFF + RANGE_TIME_SHIFT -} - -# Our parameters -pitch_range = range(21, 109) -beat_res = {(0, 4): 8, (4, 12): 4} -nb_velocities = 32 -additional_tokens = {'Chord': True, 'Rest': True, 'Tempo': True, 'TimeSignature': False, 'Program': False, - 'rest_range': (2, 8), # (half, 8 beats) - 'nb_tempos': 32, # nb of tempo bins - 'tempo_range': (40, 250)} # (min, max) - -# Creates the tokenizer_cp and loads a MIDI -# tokenizer_cp = CPWord(pitch_range, beat_res, nb_velocities, additional_tokens) -tokenizer_structured = ChordStructured(pitch_range, beat_res, nb_velocities) - -class SustainAdapter: - def __init__(self, time, type): - self.start = time - self.type = type - - -class SustainDownManager: - def __init__(self, start, end): - self.start = start - self.end = end - self.managed_notes = [] - self._note_dict = {} # key: pitch, value: note.start - - def add_managed_note(self, note: pretty_midi.Note): - self.managed_notes.append(note) - - def transposition_notes(self): - for note in reversed(self.managed_notes): - try: - note.end = self._note_dict[note.pitch] - except KeyError: - note.end = max(self.end, note.end) - 
self._note_dict[note.pitch] = note.start - - -# Divided note by note_on, note_off -class SplitNote: - def __init__(self, type, time, value, velocity): - ## type: note_on, note_off - self.type = type - self.time = time - self.velocity = velocity - self.value = value - - def __repr__(self): - return '<[SNote] time: {} type: {}, value: {}, velocity: {}>'\ - .format(self.time, self.type, self.value, self.velocity) - - -class Event: - def __init__(self, event_type, value): - self.type = event_type - self.value = value - - def __repr__(self): - return ''.format(self.type, self.value) - - def to_int(self): - return START_IDX[self.type] + self.value - - @staticmethod - def from_int(int_value): - info = Event._type_check(int_value) - return Event(info['type'], info['value']) - - @staticmethod - def _type_check(int_value): - range_note_on = range(0, RANGE_NOTE_ON) - range_note_off = range(RANGE_NOTE_ON, RANGE_NOTE_ON+RANGE_NOTE_OFF) - range_time_shift = range(RANGE_NOTE_ON+RANGE_NOTE_OFF,RANGE_NOTE_ON+RANGE_NOTE_OFF+RANGE_TIME_SHIFT) - - valid_value = int_value - - if int_value in range_note_on: - return {'type': 'note_on', 'value': valid_value} - elif int_value in range_note_off: - valid_value -= RANGE_NOTE_ON - return {'type': 'note_off', 'value': valid_value} - elif int_value in range_time_shift: - valid_value -= (RANGE_NOTE_ON + RANGE_NOTE_OFF) - return {'type': 'time_shift', 'value': valid_value} - else: - valid_value -= (RANGE_NOTE_ON + RANGE_NOTE_OFF + RANGE_TIME_SHIFT) - return {'type': 'velocity', 'value': valid_value} - - -def _divide_note(notes): - result_array = [] - notes.sort(key=lambda x: x.start) - - for note in notes: - on = SplitNote('note_on', note.start, note.pitch, note.velocity) - off = SplitNote('note_off', note.end, note.pitch, None) - result_array += [on, off] - return result_array - - -def _merge_note(snote_sequence): - note_on_dict = {} - result_array = [] - - for snote in snote_sequence: - # print(note_on_dict) - if snote.type == 'note_on': - note_on_dict[snote.value] = snote - elif snote.type == 'note_off': - try: - on = note_on_dict[snote.value] - off = snote - if off.time - on.time == 0: - continue - result = pretty_midi.Note(on.velocity, snote.value, on.time, off.time) - result_array.append(result) - except: - print('info removed pitch: {}'.format(snote.value)) - return result_array - - -def _snote2events(snote: SplitNote, prev_vel: int): - result = [] - if snote.velocity is not None: - modified_velocity = snote.velocity // 4 - if prev_vel != modified_velocity: - result.append(Event(event_type='velocity', value=modified_velocity)) - result.append(Event(event_type=snote.type, value=snote.value)) - return result - - -def _event_seq2snote_seq(event_sequence): - timeline = 0 - velocity = 0 - snote_seq = [] - - for event in event_sequence: - if event.type == 'time_shift': - timeline += ((event.value+1) / 100) - if event.type == 'velocity': - velocity = event.value * 4 - else: - snote = SplitNote(event.type, timeline, event.value, velocity) - snote_seq.append(snote) - return snote_seq - - -def _make_time_sift_events(prev_time, post_time): - time_interval = int(round((post_time - prev_time) * 100)) - results = [] - while time_interval >= RANGE_TIME_SHIFT: - results.append(Event(event_type='time_shift', value=RANGE_TIME_SHIFT-1)) - time_interval -= RANGE_TIME_SHIFT - if time_interval == 0: - return results - else: - return results + [Event(event_type='time_shift', value=time_interval-1)] - - -def _control_preprocess(ctrl_changes): - sustains = [] - - manager = None - for ctrl 
in ctrl_changes: - if ctrl.value >= 64 and manager is None: - # sustain down - manager = SustainDownManager(start=ctrl.time, end=None) - elif ctrl.value < 64 and manager is not None: - # sustain up - manager.end = ctrl.time - sustains.append(manager) - manager = None - elif ctrl.value < 64 and len(sustains) > 0: - sustains[-1].end = ctrl.time - return sustains - - -def _note_preprocess(susteins, notes): - note_stream = [] - count_note_processed = 0 - if susteins: # if the midi file has sustain controls - for sustain in susteins: - if len(notes) > 0: - for note_idx, note in enumerate(notes): - if note.start < sustain.start: - note_stream.append(note) - last_counted = True - elif note.start > sustain.end: - # notes = notes[note_idx:] - # sustain.transposition_notes() - last_counted = False - break - else: - sustain.add_managed_note(note) - last_counted = True - count_note_processed += 1 - sustain.transposition_notes() # transpose what in the sustain - note_stream += sustain.managed_notes # add to stream - # remove notes that were already added to the stream - last_idx = note_idx if not last_counted else note_idx + 1 - if last_idx < len(notes): - notes = notes[last_idx:] # save next notes, previous notes were stored in note stream - else: - notes = [] - note_stream += notes - count_note_processed += len(notes) - else: # else, just push everything into note stream - for note_idx, note in enumerate(notes): - note_stream.append(note) - - note_stream.sort(key= lambda x: x.start) - return note_stream - -def midi_valid(midi) -> bool: - # if any(ts.numerator != 4 or ts.denominator != 4 for ts in midi.time_signature_changes): - # return False # time signature different from 4/4 - # if midi.max_tick < 10 * midi.ticks_per_beat: - # return False # this MIDI is too short - return True - - -def encode_midi_structured(file_path, nb_aug, nb_noise): - notes = [] - mid = MidiFile(file_path) - assert midi_valid(mid) - - # Converts MIDI to tokens, and back to a MIDI - for inst in mid.instruments: - inst_notes = inst.notes - # ctrl.number is the number of sustain control. 
If you want to know abour the number type of control, - # see https://www.midi.org/specifications-old/item/table-3-control-change-messages-data-bytes-2 - ctrls = _control_preprocess([ctrl for ctrl in inst.control_changes if ctrl.number == 64]) - notes += _note_preprocess(ctrls, inst_notes) - - assert len(notes) == len(mid.instruments[0].notes) - - # sort notes - arg_rank = np.argsort([n.start for n in notes]) - notes = list(np.array(notes)[arg_rank]) - - original_notes = deepcopy(notes) - # convert notes to ints - encoded_main = tokenizer_structured.midi_to_tokens(mid)[0] - - min_pitch = np.min([n.pitch for n in notes]) - - encoded_augmentations = [] - noise_shift = 6 - aug_shift = 3 - embedding_noise = None - for i_aug in range(nb_aug): - a_notes = alter_notes_exact_tick(original_notes, aug_shift, min_pitch) - mid.instruments[0].notes = a_notes - assert midi_valid(mid) - embedding_aug = tokenizer_structured.midi_to_tokens(mid)[0] # encode notes - encoded_augmentations.append(embedding_aug) - if nb_noise > 0: - a_notes = alter_notes_exact_tick(original_notes, noise_shift, min_pitch) - mid.instruments[0].notes = a_notes - assert midi_valid(mid) - embedding_noise = tokenizer_structured.midi_to_tokens(mid)[0] # encode notes - - return encoded_main, encoded_augmentations, embedding_noise - -def encode_midi_cp(file_path, nb_aug, nb_noise): - notes = [] - mid = MidiFile(file_path) - assert midi_valid(mid) - - # Converts MIDI to tokens, and back to a MIDI - for inst in mid.instruments: - inst_notes = inst.notes - # ctrl.number is the number of sustain control. If you want to know abour the number type of control, - # see https://www.midi.org/specifications-old/item/table-3-control-change-messages-data-bytes-2 - ctrls = _control_preprocess([ctrl for ctrl in inst.control_changes if ctrl.number == 64]) - notes += _note_preprocess(ctrls, inst_notes) - - assert len(notes) == len(mid.instruments[0].notes) - - # sort notes - arg_rank = np.argsort([n.start for n in notes]) - notes = list(np.array(notes)[arg_rank]) - - original_notes = deepcopy(notes) - # convert notes to ints - encoded_main = tokenizer_cp.midi_to_tokens(mid)[0] - - min_pitch = np.min([n.pitch for n in notes]) - - encoded_augmentations = [] - noise_shift = 6 - aug_shift = 3 - embedding_noise = None - for i_aug in range(nb_aug): - a_notes = alter_notes_exact_tick(original_notes, aug_shift, min_pitch) - mid.instruments[0].notes = a_notes - assert midi_valid(mid) - embedding_aug = tokenizer_cp.midi_to_tokens(mid)[0] # encode notes - encoded_augmentations.append(embedding_aug) - if nb_noise > 0: - a_notes = alter_notes_exact_tick(original_notes, noise_shift, min_pitch) - mid.instruments[0].notes = a_notes - assert midi_valid(mid) - embedding_noise = tokenizer_cp.midi_to_tokens(mid)[0] # encode notes - - return encoded_main, encoded_augmentations, embedding_noise - -def alter_notes_exact_tick(notes, shift, min_pitch): - # copy original notes - a_notes = deepcopy(notes) - # sample smart augmentation - pitch_shift, time_scaling = 0, 0 - while pitch_shift == 0 and time_scaling == 0: - pitch_shift = np.random.choice(np.arange(max(-shift, -min_pitch), shift+1)) - time_scaling = np.random.choice([-5, -2.5, 0, 2.5, 5]) - assert pitch_shift <= shift and pitch_shift >= -shift - # modify notes - for e in a_notes: - e.start = int(e.start * (1. + time_scaling / 100)) - e.end = int(e.end * (1. 
+ time_scaling / 100)) - new_pitch = max(e.pitch + pitch_shift, 0) - e.pitch = new_pitch - return a_notes - -def alter_notes(notes, shift, min_pitch): - # copy original notes - a_notes = deepcopy(notes) - # sample smart augmentation - pitch_shift, time_scaling = 0, 0 - while pitch_shift == 0 and time_scaling == 0: - pitch_shift = np.random.choice(np.arange(max(-shift, -min_pitch), shift+1)) - time_scaling = np.random.choice([-5, -2.5, 0, 2.5, 5]) - assert pitch_shift <= shift and pitch_shift >= -shift - # modify notes - for e in a_notes: - e.start = e.start * (1. + time_scaling / 100) - e.end = e.end * (1. + time_scaling / 100) - new_pitch = max(e.pitch + pitch_shift, 0) - e.pitch = new_pitch - return a_notes - -def encode_midi(file_path, nb_aug, nb_noise): - notes = [] - mid = pretty_midi.PrettyMIDI(midi_file=file_path) - - for inst in mid.instruments: - inst_notes = inst.notes - # ctrl.number is the number of sustain control. If you want to know abour the number type of control, - # see https://www.midi.org/specifications-old/item/table-3-control-change-messages-data-bytes-2 - ctrls = _control_preprocess([ctrl for ctrl in inst.control_changes if ctrl.number == 64]) - notes += _note_preprocess(ctrls, inst_notes) - - assert len(notes) == len(mid.instruments[0].notes) - # sort notes - arg_rank = np.argsort([n.start for n in notes]) - notes = list(np.array(notes)[arg_rank]) - - # convert notes to ints - encoded_main = convert_notes(notes) - - min_pitch = np.min([n.pitch for n in notes]) - - encoded_augmentations = [] - noise_shift = 6 - aug_shift = 3 - embedding_noise = None - for i_aug in range(nb_aug): - a_notes = alter_notes(notes, aug_shift, min_pitch) - embedding_group = convert_notes(a_notes) # encode notes - encoded_augmentations.append(embedding_group) - if nb_noise > 0: - a_notes = alter_notes(notes, noise_shift, min_pitch) - embedding_noise = convert_notes(a_notes) # encode notes - - return encoded_main, encoded_augmentations, embedding_noise - - -def chunk_notes(n_notes_per_chunk, notes): - index = 0 - chunks = [] - for n in n_notes_per_chunk: - chunks.append(notes[index:index+n]) - index += n - return chunks - -def chunk_first_embedding(chunk_size, embedding): - chunks = [] - index = 0 - if len(embedding) < chunk_size: - return [embedding] - else: - for i in range(chunk_size, len(embedding) + chunk_size, chunk_size): - if (len(embedding) - index) > (chunk_size / 2): - chunks.append(embedding[index:i]) - index = i - return chunks - -def encode_midi_in_chunks(file_path, n_aug, n_noise): - n_noise = 0 - notes = [] - mid = pretty_midi.PrettyMIDI(midi_file=file_path) - # preprocess midi - for inst in mid.instruments: - inst_notes = inst.notes - # ctrl.number is the number of sustain control. 
If you want to know abour the number type of control, - # see https://www.midi.org/specifications-old/item/table-3-control-change-messages-data-bytes-2 - ctrls = _control_preprocess([ctrl for ctrl in inst.control_changes if ctrl.number == 64]) - notes += _note_preprocess(ctrls, inst_notes) - - assert len(notes) == len(mid.instruments[0].notes) - - arg_rank = np.argsort([n.start for n in notes]) - notes = list(np.array(notes)[arg_rank]) - - # convert notes to ints - main_embedding = convert_notes(notes) - # split the sequence of events in chunks - if np.max(main_embedding) < MAX_EMBEDDING and np.min(main_embedding) >= 0: - encoded_chunks = chunk_first_embedding(CHUNK_SIZE, main_embedding) - else: - assert False - - n_notes_per_chunk = [np.argwhere(np.array(ec) < 128).flatten().size for ec in encoded_chunks] - - chunked_notes = chunk_notes(n_notes_per_chunk, notes) - - # reencode chunks by shifting notes - encoded_chunks = [] - for note_group in chunked_notes: - note_group = shift_notes(note_group) - embedding_main = convert_notes(note_group)[:CHUNK_SIZE] - encoded_chunks.append(embedding_main) - - min_pitches = [np.min([n.pitch for n in cn]) for cn in chunked_notes] - - encoded_augmentations = [] - aug_shift = 3 - for i_aug in range(n_aug): - chunked_embedding_aug = [] - for note_group, min_pitch in zip(chunked_notes, min_pitches): - a_notes = alter_notes(note_group, aug_shift, min_pitch) - a_notes = shift_notes(a_notes) - assert len(a_notes) == len(note_group) - embedding_group = convert_notes(a_notes)[:CHUNK_SIZE] # encode notes - chunked_embedding_aug.append(embedding_group) - encoded_augmentations += chunked_embedding_aug - - assert len(encoded_augmentations) == n_aug * len(encoded_chunks) - return encoded_chunks, encoded_augmentations, [] - -def encode_miditok_in_chunks(file_path, n_aug, n_noise): - n_noise = 0 - notes = [] - mid = MidiFile(file_path) - assert midi_valid(mid) - - # Converts MIDI to tokens, and back to a MIDI - for inst in mid.instruments: - inst_notes = inst.notes - # ctrl.number is the number of sustain control. 
If you want to know abour the number type of control, - # see https://www.midi.org/specifications-old/item/table-3-control-change-messages-data-bytes-2 - ctrls = _control_preprocess([ctrl for ctrl in inst.control_changes if ctrl.number == 64]) - notes += _note_preprocess(ctrls, inst_notes) - assert len(notes) == len(mid.instruments[0].notes) - - # sort notes - arg_rank = np.argsort([n.start for n in notes]) - notes = list(np.array(notes)[arg_rank]) - - # convert notes to ints - encoded_main = tokenizer_cp.midi_to_tokens(mid)[0] - - encoded_chunks = chunk_first_embedding(CHUNK_SIZE, encoded_main) - n_notes_per_chunk = [len([tokenizer_cp.vocab.token_to_event[e[0]] for e in enc_chunk if tokenizer_cp.vocab.token_to_event[e[0]] == 'Family_Note']) - for enc_chunk in encoded_chunks] - chunked_notes = chunk_notes(n_notes_per_chunk, notes) - - # reencode chunks by shifting notes - encoded_chunks = [] - for note_group in chunked_notes: - mid.instruments[0].notes = note_group - mid = shift_mid(mid) # shift midi - assert midi_valid(mid) - embedding_main = tokenizer_cp.midi_to_tokens(mid)[0][:CHUNK_SIZE] # tokenize midi - encoded_chunks.append(embedding_main) - - - min_pitch = np.min([n.pitch for n in notes]) - - encoded_augmentations = [] - aug_shift = 3 - for i_aug in range(n_aug): - chunked_embedding_aug = [] - for note_group in chunked_notes: - a_notes = alter_notes_exact_tick(note_group, aug_shift, min_pitch) - assert len(a_notes) == len(note_group) - mid.instruments[0].notes = a_notes - # shift midi - mid = shift_mid(mid) - assert midi_valid(mid) - # tokenize midi - embedding_aug = tokenizer_cp.midi_to_tokens(mid)[0][:CHUNK_SIZE] # encode notes - chunked_embedding_aug.append(embedding_aug) - encoded_augmentations += chunked_embedding_aug - - assert len(encoded_augmentations) == n_aug * len(encoded_chunks) - return encoded_chunks, encoded_augmentations, [] - - -def encode_midi_chunks_structured(file_path, n_aug, n_noise): - n_noise = 0 - notes = [] - mid = MidiFile(file_path) - assert midi_valid(mid) - - # Converts MIDI to tokens, and back to a MIDI - for inst in mid.instruments: - inst_notes = inst.notes - # ctrl.number is the number of sustain control. 
If you want to know abour the number type of control, - # see https://www.midi.org/specifications-old/item/table-3-control-change-messages-data-bytes-2 - ctrls = _control_preprocess([ctrl for ctrl in inst.control_changes if ctrl.number == 64]) - notes += _note_preprocess(ctrls, inst_notes) - assert len(notes) == len(mid.instruments[0].notes) - - nb_notes = CHUNK_SIZE // 4 - notes = notes[:50 * nb_notes] # limit to 50 chunks to speed up - # sort notes - arg_rank = np.argsort([n.start for n in notes]) - notes = list(np.array(notes)[arg_rank]) - - assert (len(notes) // nb_notes) > 1 # assert at least 3 chunks - n_notes_per_chunk = [nb_notes for _ in range(len(notes) // nb_notes)] - if len(notes) % nb_notes > nb_notes / 2: - n_notes_per_chunk.append(len(notes) % nb_notes) - chunked_notes = chunk_notes(n_notes_per_chunk, notes) - - # reencode chunks by shifting notes - encoded_chunks = [] - for note_group in chunked_notes: - mid.instruments[0].notes = note_group - mid = shift_mid(mid) # shift midi - assert midi_valid(mid) - embedding_main = tokenizer_structured.midi_to_tokens(mid)[0] # tokenize midi - encoded_chunks.append(embedding_main) - - - min_pitch = np.min([n.pitch for n in notes]) - - encoded_augmentations = [] - aug_shift = 3 - for i_aug in range(n_aug): - chunked_embedding_aug = [] - for note_group in chunked_notes: - a_notes = alter_notes_exact_tick(note_group, aug_shift, min_pitch) - assert len(a_notes) == len(note_group) - mid.instruments[0].notes = a_notes - # shift midi - mid = shift_mid(mid) - assert midi_valid(mid) - # tokenize midi - embedding_aug = tokenizer_structured.midi_to_tokens(mid)[0] # encode notes - chunked_embedding_aug.append(embedding_aug) - encoded_augmentations += chunked_embedding_aug - - assert len(encoded_augmentations) == n_aug * len(encoded_chunks) - return encoded_chunks, encoded_augmentations, [] - -def shift_mid(mid): - # mid = deepcopy(mid) - to_remove = mid.instruments[0].notes[0].start - if to_remove > 0: - for n in mid.instruments[0].notes: - n.start -= to_remove - n.end -= to_remove - - # for e in mid.tempo_changes: - # e.time = max(0, e.time - to_remove) - # - # for e in mid.time_signature_changes: - # e.time = max(0, e.time - to_remove) - # - # for e in mid.key_signature_changes: - # e.time = max(0, e.time - to_remove) - return mid - -def shift_notes(notes): - to_remove = notes[0].start - for n in notes: - n.start -= to_remove - n.end -= to_remove - return notes - -def convert_notes(notes): - events = [] - dnotes = _divide_note(notes) # split events in on / off - - # print(dnotes) - dnotes.sort(key=lambda x: x.time) - # print('sorted:') - # print(dnotes) - cur_time = 0 - cur_vel = 0 - for snote in dnotes: - events += _make_time_sift_events(prev_time=cur_time, post_time=snote.time) - events += _snote2events(snote=snote, prev_vel=cur_vel) - # events += _make_time_sift_events(prev_time=cur_time, post_time=snote.time) - - cur_time = snote.time - cur_vel = snote.velocity - - event_list = [e.to_int() for e in events] - if not (np.max(event_list) < MAX_EMBEDDING and np.min(event_list) >= 0): - print('weird') - assert False - return event_list - -def decode_midi_structured(encoding, file_path=None): - mid = tokenizer_structured.tokens_to_midi([encoding]) - if file_path: - mid.dump(file_path) - return mid - -def decode_midi_cp(encoding, file_path=None): - mid = tokenizer_cp.tokens_to_midi([encoding]) - if file_path: - mid.dump(file_path) - return mid - -def decode_midi(idx_array, file_path=None): - event_sequence = [Event.from_int(idx) for idx in 
idx_array] - # print(event_sequence) - snote_seq = _event_seq2snote_seq(event_sequence) - note_seq = _merge_note(snote_seq) - note_seq.sort(key=lambda x:x.start) - - mid = pretty_midi.PrettyMIDI() - # if want to change instument, see https://www.midi.org/specifications/item/gm-level-1-sound-set - instument = pretty_midi.Instrument(1, False, "Developed By Yang-Kichang") - instument.notes = note_seq - - mid.instruments.append(instument) - if file_path is not None: - mid.write(file_path) - return mid - - -if __name__ == '__main__': - encoded = encode_midi('bin/ADIG04.mid') - print(encoded) - decided = decode_midi(encoded,file_path='bin/test.mid') - - ins = pretty_midi.PrettyMIDI('bin/ADIG04.mid') - print(ins) - print(ins.instruments[0]) - for i in ins.instruments: - print(i.control_changes) - print(i.notes) - diff --git a/spaces/cheetah003/HMMC_t2v_search/app.py b/spaces/cheetah003/HMMC_t2v_search/app.py deleted file mode 100644 index 7e8ead9f96000bb908cddcb92776d6ed61a95bfd..0000000000000000000000000000000000000000 --- a/spaces/cheetah003/HMMC_t2v_search/app.py +++ /dev/null @@ -1,301 +0,0 @@ -import numpy as np -# import gradio -import torch -from transformers import BertTokenizer -import argparse -import gradio as gr -import time - -from modules.tokenization_clip import SimpleTokenizer as ClipTokenizer -from modules.modeling import BirdModel - -show_num = 9 -max_words = 32 -video_path_zh = "features/Chinese_batch_visual_output_list.npy" -frame_path_zh = "features/Chinese_batch_frame_output_list.npy" -video_fea_zh = np.load(video_path_zh) -video_fea_zh = torch.from_numpy(video_fea_zh) -frame_fea_zh = np.load(frame_path_zh) -frame_fea_zh = torch.from_numpy(frame_fea_zh) - -video_path_en = "features/English_batch_visual_output_list.npy" -frame_path_en = "features/English_batch_frame_output_list.npy" -video_fea_en = np.load(video_path_en) -video_fea_en = torch.from_numpy(video_fea_en) -frame_fea_en = np.load(frame_path_en) -frame_fea_en = torch.from_numpy(frame_fea_en) - -test_path = "test_list.txt" -# video_dir = "test1500_400_400/" -video_dir = "test1500/" - -with open(test_path, 'r', encoding='utf8') as f_list: - lines = f_list.readlines() - video_ids = [itm.strip() + ".mp4" for itm in lines] - - -def get_videoname(idx): - videoname = [] - videopath = [] - for i in idx: - videoname.append(video_ids[i]) - path = video_dir + video_ids[i] - videopath.append(path) - return videoname, videopath - - -def get_text(caption, tokenizer): - # tokenize word - words = tokenizer.tokenize(caption) - - # add cls token - words = ["<|startoftext|>"] + words - total_length_with_CLS = max_words - 1 - if len(words) > total_length_with_CLS: - words = words[:total_length_with_CLS] - - # add end token - words = words + ["<|endoftext|>"] - - # convert token to id according to the vocab - input_ids = tokenizer.convert_tokens_to_ids(words) - - # add zeros for feature of the same length - input_mask = [1] * len(input_ids) - while len(input_ids) < max_words: - input_ids.append(0) - input_mask.append(0) - - # ensure the length of feature to be equal with max words - assert len(input_ids) == max_words - assert len(input_mask) == max_words - pairs_text = np.array(input_ids).reshape(-1, max_words) - pairs_text = torch.from_numpy(pairs_text) - pairs_mask = np.array(input_mask).reshape(-1, max_words) - pairs_mask = torch.from_numpy(pairs_mask) - - return pairs_text, pairs_mask - - -def get_args(description='Retrieval Task'): - parser = argparse.ArgumentParser(description=description) - parser.add_argument("--do_pretrain", 
action='store_true', help="Whether to run training.") - parser.add_argument("--do_train", action='store_true', help="Whether to run training.") - parser.add_argument("--do_eval", action='store_true', help="Whether to run eval on the dev set.") - parser.add_argument("--do_params", action='store_true', help="text the params of the model.") - parser.add_argument("--use_frame_fea", action='store_true', help="whether use frame feature matching text") - parser.add_argument('--task', type=str, default="retrieval", choices=["retrieval_VT", "retrieval"], - help="choose downstream task.") - parser.add_argument('--dataset', type=str, default="bird", choices=["bird", "msrvtt", "vatex", "msvd"], - help="choose dataset.") - parser.add_argument('--num_thread_reader', type=int, default=1, help='') - parser.add_argument('--lr', type=float, default=0.0001, help='initial learning rate') - parser.add_argument('--text_lr', type=float, default=0.00001, help='text encoder learning rate') - parser.add_argument('--epochs', type=int, default=20, help='upper epoch limit') - parser.add_argument('--batch_size', type=int, default=256, help='batch size') - parser.add_argument('--batch_size_val', type=int, default=3500, help='batch size eval') - parser.add_argument('--lr_decay', type=float, default=0.9, help='Learning rate exp epoch decay') - parser.add_argument('--weight_decay', type=float, default=0.2, help='Learning rate exp epoch decay') - parser.add_argument('--n_display', type=int, default=100, help='Information display frequence') - parser.add_argument('--seed', type=int, default=42, help='random seed') - parser.add_argument('--max_words', type=int, default=32, help='') - parser.add_argument('--max_frames', type=int, default=12, help='') - parser.add_argument('--top_frames', type=int, default=3, help='') - parser.add_argument('--frame_sample', type=str, default="uniform", choices=["uniform", "random", "uniform_random"], - help='frame sample strategy') - parser.add_argument('--frame_sample_len', type=str, default="fix", choices=["dynamic", "fix"], - help='use dynamic frame length of fix frame length') - parser.add_argument('--language', type=str, default="chinese", choices=["chinese", "english"], - help='language for text encoder') - parser.add_argument('--use_temp', action='store_true', help='whether to use temporal transformer') - - parser.add_argument("--logdir", default=None, type=str, required=False, help="log dir for tensorboardX writer") - parser.add_argument("--cross_model", default="cross-base", type=str, required=False, help="Cross module") - parser.add_argument("--pretrained_text", default="hfl/chinese-roberta-wwm-ext", type=str, required=False, help="pretrained_text") - parser.add_argument("--init_model", default=None, type=str, required=False, help="Initial model.") - parser.add_argument("--warmup_proportion", default=0.1, type=float, - help="Proportion of training to perform linear learning rate warmup for. 
E.g., 0.1 = 10%% of training.") - parser.add_argument('--gradient_accumulation_steps', type=int, default=1, - help="Number of updates steps to accumulate before performing a backward/update pass.") - parser.add_argument('--n_gpu', type=int, default=1, help="Changed in the execute process.") - - parser.add_argument("--cache_dir", default="", type=str, - help="Where do you want to store the pre-trained models downloaded from s3") - - parser.add_argument('--enable_amp', action='store_true', help="whether to use pytorch amp") - - parser.add_argument("--world_size", default=0, type=int, help="distribted training") - parser.add_argument("--local_rank", default=0, type=int, help="distribted training") - parser.add_argument("--rank", default=0, type=int, help="distribted training") - parser.add_argument('--coef_lr', type=float, default=1., help='coefficient for bert branch.') - - args = parser.parse_args() - - # Check paramenters - args.do_eval = True - args.use_frame_fea = True - args.use_temp = True - - return args - - -def init_model(language): - time1 = time.time() - args = get_args() - args.language = language - if language == "chinese": - model_path = "models/Chinese_vatex.bin" - tokenizer = BertTokenizer.from_pretrained("hfl/chinese-roberta-wwm-ext") - elif language == "english": - model_path = "models/English_vatex.bin" - tokenizer = ClipTokenizer() - else: - raise Exception("language should be Chinese or English!") - model_state_dict = torch.load(model_path, map_location='cpu') - cross_model = "cross-base" - model = BirdModel.from_pretrained(cross_model, state_dict=model_state_dict, task_config=args) - device = torch.device("cpu") - # device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - model.to(device) - model.eval() - print("language={}".format(language)) - print("init model time: {}".format(time.time() - time1)) - print("device:{}".format(device)) - return model, tokenizer - - -model_zh, tokenizer_zh = init_model(language="chinese") -model_en, tokenizer_en = init_model(language="english") - - -def t2v_search_zh(text): - with torch.no_grad(): - time1 = time.time() - text_ids, text_mask = get_text(text, tokenizer_zh) - print("get_text time: {}".format(time.time() - time1)) - time1 = time.time() - text_fea_zh = model_zh.text_encoder(text_ids, text_mask) - print("text_encoder time: {}".format(time.time() - time1)) - # print("text_fea.shape:{}".format(text_fea.shape)) - # print("video_fea.shape:{}".format(video_fea.shape)) - # print("frame_fea.shape:{}".format(frame_fea.shape)) - time1 = time.time() - sim_video = model_zh.loose_similarity(text_fea_zh, video_fea_zh) - # print("sim_video.shape:{}".format(sim_video.shape)) - sim_frame = model_zh.loose_similarity(text_fea_zh, frame_fea_zh) - # print("sim_frame.shape:{}".format(sim_frame.shape)) - sim_frame = torch.topk(sim_frame, k=model_zh.top_frames, dim=1)[0] - sim_frame = torch.mean(sim_frame, dim=1) - sim = sim_video + sim_frame - value, index = sim.topk(show_num, dim=0, largest=True, sorted=True) - # value, index = sim_video.topk(show_num, dim=0, largest=True, sorted=True) - print("calculate_similarity time: {}".format(time.time() - time1)) - print("value:{}".format(value)) - print("index:{}".format(index)) - videoname, videopath = get_videoname(index) - print("videoname:{}".format(videoname)) - print("videopath:{}".format(videopath)) - return videopath - - -def t2v_search_en(text): - with torch.no_grad(): - time1 = time.time() - text_ids, text_mask = get_text(text, tokenizer_en) - print("get_text time: 
{}".format(time.time() - time1)) - time1 = time.time() - text_fea_en = model_en.text_encoder(text_ids, text_mask) - print("text_encoder time: {}".format(time.time() - time1)) - # print("text_fea.shape:{}".format(text_fea.shape)) - # print("video_fea.shape:{}".format(video_fea.shape)) - # print("frame_fea.shape:{}".format(frame_fea.shape)) - time1 = time.time() - sim_video = model_en.loose_similarity(text_fea_en, video_fea_en) - # print("sim_video.shape:{}".format(sim_video.shape)) - sim_frame = model_en.loose_similarity(text_fea_en, frame_fea_en) - # print("sim_frame.shape:{}".format(sim_frame.shape)) - sim_frame = torch.topk(sim_frame, k=model_en.top_frames, dim=1)[0] - sim_frame = torch.mean(sim_frame, dim=1) - sim = sim_video + sim_frame - value, index = sim.topk(show_num, dim=0, largest=True, sorted=True) - # value, index = sim_video.topk(show_num, dim=0, largest=True, sorted=True) - print("calculate_similarity time: {}".format(time.time() - time1)) - print("value:{}".format(value)) - print("index:{}".format(index)) - videoname, videopath = get_videoname(index) - print("videoname:{}".format(videoname)) - print("videopath:{}".format(videopath)) - return videopath - - -def hello_world(name): - return "hello world, my name is " + name + "!" - - -def search_demo(): - with gr.Blocks() as demo: - gr.Markdown("#
HMMC中英文本-视频检索 \ - Github
") - demo.title = "HMMC中英文本-视频检索" - with gr.Tab("中文"): - with gr.Column(variant="panel"): - with gr.Row(variant="compact"): - input_text = gr.Textbox( - label="输入文本", - show_label=False, - max_lines=1, - placeholder="请输入检索文本...", - ).style( - container=False, - ) - btn = gr.Button("搜索").style(full_width=False) - - with gr.Column(variant="panel", scale=2): - with gr.Row(variant="compact"): - videos_top = [gr.Video( - format="mp4", label="视频 "+str(i+1), - ).style(height=300, width=300) for i in range(3)] - with gr.Column(variant="panel", scale=1): - with gr.Row(variant="compact"): - videos_rest = [gr.Video( - format="mp4", label="视频 "+str(i+1), - ).style(height=150, width=150) for i in range(3, show_num)] - - searched_videos = videos_top + videos_rest - btn.click(t2v_search_zh, inputs=input_text, outputs=searched_videos) - - with gr.Tab("English"): - with gr.Column(variant="panel"): - with gr.Row(variant="compact"): - input_text = gr.Textbox( - label="input text", - show_label=False, - max_lines=1, - placeholder="Please input text to search...", - ).style( - container=False, - ) - btn = gr.Button("Search").style(full_width=False) - - with gr.Column(variant="panel", scale=2): - with gr.Row(variant="compact"): - videos_top = [gr.Video( - format="mp4", label="video " + str(i+1), - ).style(height=300, width=300) for i in range(3)] - with gr.Column(variant="panel", scale=1): - with gr.Row(variant="compact"): - videos_rest = [gr.Video( - format="mp4", label="video " + str(i+1), - ).style(height=150, width=150) for i in range(3, show_num)] - - searched_videos = videos_top + videos_rest - btn.click(t2v_search_en, inputs=input_text, outputs=searched_videos) - - demo.launch() - - -if __name__ == '__main__': - search_demo() - # text = "两个男人正在随着音乐跳舞,他们正在努力做着macarena舞蹈的动作。" - - # t2v_search(text) diff --git a/spaces/christse2026/WinterActivities/info.md b/spaces/christse2026/WinterActivities/info.md deleted file mode 100644 index e7b3a2043e2ab707b06de6ba81014bf9580786a0..0000000000000000000000000000000000000000 --- a/spaces/christse2026/WinterActivities/info.md +++ /dev/null @@ -1,15 +0,0 @@ -# 😌 What winter activity is best for you? - -### 🧐 Problem Statement and Research Summary -This survey is designed to get you a good recommendation for the best winter activity. - -### 🎣 Data Collection Plan -We used a google form and had as much people as possible fill it out to get as much results as needed. - -### 💥 Ethical Considerations (Data Privacy and Bias) -We make sure none of the questions are personal and none of your private information is being collected by completing this survey. - -### 👻 Our Team -This was created by a High School student for a project which was to create an AI app - -![aiEDU logo](https://images.squarespace-cdn.com/content/v1/5e4efdef6d10420691f02bc1/5db5a8a3-1761-4fce-a096-bd5f2515162f/aiEDU+_black+logo+stacked.png?format=100w) diff --git a/spaces/cihyFjudo/fairness-paper-search/Emu360v1.4.rar txt password Where to find it and how to use it.md b/spaces/cihyFjudo/fairness-paper-search/Emu360v1.4.rar txt password Where to find it and how to use it.md deleted file mode 100644 index 67e9e6973f39873f07850824b7f5ac5d0af6f71b..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Emu360v1.4.rar txt password Where to find it and how to use it.md +++ /dev/null @@ -1,6 +0,0 @@ -

emu360v1.4.rar txt password


Download »»» https://tinurli.com/2uwjrV



-
- aaccfb2cb3
-
-
-

diff --git a/spaces/cihyFjudo/fairness-paper-search/Kill The Rapist Movie In Hindi Free [TOP] Download 720p Movies.md b/spaces/cihyFjudo/fairness-paper-search/Kill The Rapist Movie In Hindi Free [TOP] Download 720p Movies.md deleted file mode 100644 index 8e0e6ef8a4e33c77d7e4c18438310a1274470081..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Kill The Rapist Movie In Hindi Free [TOP] Download 720p Movies.md +++ /dev/null @@ -1,5 +0,0 @@ -
-

The politicians decide to transfer Samar Singh while Hyder Ali is named a traitor and is killed. The goons next target Arjun by gang-raping his sister, Rakhi (Akanksha), and hurting his other family members. Arjun, in a fit of anger, kills the rapists and surrenders himself to his police colleagues. He is seen walking up the steps with police in the opening scenes of the movie. He wants to be awarded capital punishment but later the court finds the politicians wrong and

-

Kill The Rapist Movie In Hindi Free Download 720p Movies


Download Zip > https://tinurli.com/2uwkOz



aaccfb2cb3
-
-
\ No newline at end of file diff --git a/spaces/cihyFjudo/fairness-paper-search/LAmour et le Pardon de Dieu (Evangelisation t. 1) (French Edition) download epub mobi pdf fb2 Learn the Secrets of Gods Love and Forgiveness.md b/spaces/cihyFjudo/fairness-paper-search/LAmour et le Pardon de Dieu (Evangelisation t. 1) (French Edition) download epub mobi pdf fb2 Learn the Secrets of Gods Love and Forgiveness.md deleted file mode 100644 index 59263316fc90253d3a7edf820c2ae2e2c7927ab2..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/LAmour et le Pardon de Dieu (Evangelisation t. 1) (French Edition) download epub mobi pdf fb2 Learn the Secrets of Gods Love and Forgiveness.md +++ /dev/null @@ -1,6 +0,0 @@ -

L'Amour et le Pardon de Dieu (Evangelisation t. 1) (French Edition) download epub mobi pdf fb2


Download ---> https://tinurli.com/2uwhCT



-
- aaccfb2cb3
-
-
-

diff --git a/spaces/clementgyj/FNLP_D_HD/app.py b/spaces/clementgyj/FNLP_D_HD/app.py deleted file mode 100644 index 7a5931d74326878fbf487ed2018321e290398dcb..0000000000000000000000000000000000000000 --- a/spaces/clementgyj/FNLP_D_HD/app.py +++ /dev/null @@ -1,62 +0,0 @@ -import streamlit as st -from transformers import pipeline - -st.title('COS30081 Education Chatbot') -st.info('This is a Education Chatbot prototype created by Clement Goh Yung Jing 101218668 for COS30081 FNLP D & HD Task') - -# loading model -model_checkpoint = r"roberta-finetuned-squad-50k" -question_answerer = pipeline("question-answering", model=model_checkpoint) -st.success("The model is loaded") - -question = "" -context = """ - COS30081 is an Artificial Intelligence major unit. The unit name is Fundamentals of Natural Language Processing (FNLP). Its unit code is COS30081. The unit has a total contact hours of 48 hours. It has two pre-requisite units which are COS20015 Fundamentals of Data Management and COS30019 Introduction to Artificial Intelligence. The unit is delivered in blended mode - physical and online. The unit is a portfolio unit which means the assesments are consist of lab tasks and assignments. - - The aim of the unit is to introduce students to the essential natural language processing (NLP) tasks and techniques. Students will learn skills to carry out basic text data pre-processing, feature extraction, building and evaluating a text classifier, and visualising NLP results. This unit also exposes students to an advanced NLP technique. - - The unit has 4 unit learning outcomes (ULO) which are explain the basics of computational linguistics, prepare textual data into suitable representation for text analytics, apply exploratory data analysis and visualisation techniques to textual data, and develop and evaluate text classifiers for general natural language processing tasks. - - This unit may contribute to the development of 3 Swinburne Graduate Attributes which are communication skills, teamwork skills, and digital literacies. The unit's content are basic string analysis techniques, text wrangling, web scrapping and pre-processing, feature extraction, text classifiers information retreival evaluation, visualisation and advanced NLP technique, which may include only one of the following topics: topic modelling, text summarization, vector representation, deep learning, or sentiment analysis. - - There is one unit teaching staff and his name is Dr Joel Than Chia Ming. He is the lecturer and tutor of this unit . He can be contacted through his email which is jcmthan@swinburne.edu.my. The learning and teaching structure of the unit consist of two parts which are lectures and tutorials of 24 hours of total contact hours each with two hours each of contact per week. 
- - The week by week schedule of the unit for Semester 1, 2022 can be observed below: - Week 1 starts at 28 February - Topic: Introduction to NLP & Linguistics - Task & Assessment: Tutorial 1: Getting Started with Python - Week 2 starts at 7 March - Topic: Word Tokenization: Dot product, Token improvement, Vocabulary - Task & Assessment: Tutorial 2: Word Tokenization and Pass Task #1 - Week 3 starts at 14 March - Topic: TF-IDF vectors: Bag of words, Vectorization, Topic modelling - Task & Assessment: Tutorial 3: TF-IDF and Pass Task #2 - Week 4 starts at 21 March - Topic: Semantic Analysis: LSA - Task & Assessment: Tutorial 4: Semantic Analysis and Pass Task #3 - Week 5 starts at 28 March - Topic: Introduction to Deep Learning for NLP: Neural network, Word Vectors - Task & Assessment: Tutorial 5: Deep Learning Introduction and Pass Task #4 - Week 6 starts at 4 April - Topic: CNN basics and usage for NLP - Task & Assessment: Tutorial 6: Neural Networks & CNN and Pass Task #5 - Week 7 starts at 18 April - Topic: RNN & LSTM basics - Task & Assessment: Tutorial 7: RNN & LSTM and Pass Task #6 - Week 8 starts at 25 April - Topic: Transformer basics 1 - Task & Assessment: Tutorial 8: Transformer 1 and Pass Task #7 - Week 9 starts at 2 May - Topic: Transformer basics 2 - Task & Assessment: Tutorial 9: Transformer 2 and Pass Task #8 - Week 10 starts at 9 May - Topic: Performance Evaluation & Scaling up - Task & Assessment: Tutorial 10: Submission of Tutorial 10 and Pass Task #9 - Week 11 starts at 16 May - Topic: Showcase of Real-World Problems - Task & Assessment: Tutorial 11: Discussion of Assignment 2 and Pass Task #10 - Week 12 starts at 23 May - Topic: Conclusion and Wrap Up - Task & Assessment: Tutorial 12: Discussion of Assignment 2 - - The assessments for this unit are Portfolio (for Pass and Credit) which is an individual task with 100% Weighting and Portfolio and Interview (for Distinction and High Distinction) which is an individual task with 100% Weighting. - - There will be a total of 10 pass tasks. Each pass task is released at the start of each week and is due two weeks after its release. There will be one Credit task which will be made available on 8 April and due on 6 May. There will be one D & HD task for Distinction and High Distinction (D & HD) grade which will be made available on 9 May and due on 3 June. The minimum requirement to pass this unit is to submit a passable portfolio, which means all pass tasks must be submitted and marked as complete. To achieve credit (C) grade, all pass tasks and the credit task must be submitted and marked as complete. To achieve distinction (D), all pass tasks, credit task and the D & HD task must be submitted and obtain at least 70 marks for it. To achieve high distinction (HD), all pass tasks, credit task and the D & HD task must be submitted and obtain at least 80 marks for it. - - Unless an extension has been approved, late submissions will result in a penalty. You will be penalised 10% of your mark for each working day the task is late, up to a maximum of 5 days. After 5 working days, a zero result will be recorded. - - There will be two tutorial sessions available each week with one hybrid session and one online session. The hybrid tutorial is on the Monday of the week at 9:00 am. whereas, the online session is on the Thursday of the week at 11:00 am. - - There will be one lecture session on the Tuesday of the week at 9:00 am. All sessions or classes will be recorded. 
All the lecture and tutorial materials as well as the recordings for all sessions are available on Canvas. -""" - -with st.expander("Start chatting with bot here!"): - st.write(""" - CGBOT: Hi, I'm CGBOT. The COS30081 Education virtual assistant. How may I help you today? - """) - - question = st.text_area('Ask a question about COS30081!') - - if question != "": - st.write(f"You: {question}") - - predict = question_answerer(question=question, context=context) - answer = predict['answer'] - - st.write(f"CGBOT: {answer}") diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/ttLib/tables/S_T_A_T_.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/ttLib/tables/S_T_A_T_.py deleted file mode 100644 index 1769de91b5f0416354e040b52e3615c6824fd2f9..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/ttLib/tables/S_T_A_T_.py +++ /dev/null @@ -1,5 +0,0 @@ -from .otBase import BaseTTXConverter - - -class table_S_T_A_T_(BaseTTXConverter): - pass diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/ttLib/tables/_m_a_x_p.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/ttLib/tables/_m_a_x_p.py deleted file mode 100644 index 2934149773c6909cbab65861168524c10c9e7865..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/ttLib/tables/_m_a_x_p.py +++ /dev/null @@ -1,140 +0,0 @@ -from fontTools.misc import sstruct -from fontTools.misc.textTools import safeEval -from . import DefaultTable - -maxpFormat_0_5 = """ - > # big endian - tableVersion: i - numGlyphs: H -""" - -maxpFormat_1_0_add = """ - > # big endian - maxPoints: H - maxContours: H - maxCompositePoints: H - maxCompositeContours: H - maxZones: H - maxTwilightPoints: H - maxStorage: H - maxFunctionDefs: H - maxInstructionDefs: H - maxStackElements: H - maxSizeOfInstructions: H - maxComponentElements: H - maxComponentDepth: H -""" - - -class table__m_a_x_p(DefaultTable.DefaultTable): - - dependencies = ["glyf"] - - def decompile(self, data, ttFont): - dummy, data = sstruct.unpack2(maxpFormat_0_5, data, self) - self.numGlyphs = int(self.numGlyphs) - if self.tableVersion != 0x00005000: - dummy, data = sstruct.unpack2(maxpFormat_1_0_add, data, self) - assert len(data) == 0 - - def compile(self, ttFont): - if "glyf" in ttFont: - if ttFont.isLoaded("glyf") and ttFont.recalcBBoxes: - self.recalc(ttFont) - else: - pass # CFF - self.numGlyphs = len(ttFont.getGlyphOrder()) - if self.tableVersion != 0x00005000: - self.tableVersion = 0x00010000 - data = sstruct.pack(maxpFormat_0_5, self) - if self.tableVersion == 0x00010000: - data = data + sstruct.pack(maxpFormat_1_0_add, self) - return data - - def recalc(self, ttFont): - """Recalculate the font bounding box, and most other maxp values except - for the TT instructions values. Also recalculate the value of bit 1 - of the flags field and the font bounding box of the 'head' table. 
- """ - glyfTable = ttFont["glyf"] - hmtxTable = ttFont["hmtx"] - headTable = ttFont["head"] - self.numGlyphs = len(glyfTable) - INFINITY = 100000 - xMin = +INFINITY - yMin = +INFINITY - xMax = -INFINITY - yMax = -INFINITY - maxPoints = 0 - maxContours = 0 - maxCompositePoints = 0 - maxCompositeContours = 0 - maxComponentElements = 0 - maxComponentDepth = 0 - allXMinIsLsb = 1 - for glyphName in ttFont.getGlyphOrder(): - g = glyfTable[glyphName] - if g.numberOfContours: - if hmtxTable[glyphName][1] != g.xMin: - allXMinIsLsb = 0 - xMin = min(xMin, g.xMin) - yMin = min(yMin, g.yMin) - xMax = max(xMax, g.xMax) - yMax = max(yMax, g.yMax) - if g.numberOfContours > 0: - nPoints, nContours = g.getMaxpValues() - maxPoints = max(maxPoints, nPoints) - maxContours = max(maxContours, nContours) - elif g.isComposite(): - nPoints, nContours, componentDepth = g.getCompositeMaxpValues( - glyfTable - ) - maxCompositePoints = max(maxCompositePoints, nPoints) - maxCompositeContours = max(maxCompositeContours, nContours) - maxComponentElements = max(maxComponentElements, len(g.components)) - maxComponentDepth = max(maxComponentDepth, componentDepth) - if xMin == +INFINITY: - headTable.xMin = 0 - headTable.yMin = 0 - headTable.xMax = 0 - headTable.yMax = 0 - else: - headTable.xMin = xMin - headTable.yMin = yMin - headTable.xMax = xMax - headTable.yMax = yMax - self.maxPoints = maxPoints - self.maxContours = maxContours - self.maxCompositePoints = maxCompositePoints - self.maxCompositeContours = maxCompositeContours - self.maxComponentElements = maxComponentElements - self.maxComponentDepth = maxComponentDepth - if allXMinIsLsb: - headTable.flags = headTable.flags | 0x2 - else: - headTable.flags = headTable.flags & ~0x2 - - def testrepr(self): - items = sorted(self.__dict__.items()) - print(". . . . . . . . .") - for combo in items: - print(" %s: %s" % combo) - print(". . . . . . . . .") - - def toXML(self, writer, ttFont): - if self.tableVersion != 0x00005000: - writer.comment("Most of this table will be recalculated by the compiler") - writer.newline() - formatstring, names, fixes = sstruct.getformat(maxpFormat_0_5) - if self.tableVersion != 0x00005000: - formatstring, names_1_0, fixes = sstruct.getformat(maxpFormat_1_0_add) - names = names + names_1_0 - for name in names: - value = getattr(self, name) - if name == "tableVersion": - value = hex(value) - writer.simpletag(name, value=value) - writer.newline() - - def fromXML(self, name, attrs, content, ttFont): - setattr(self, name, safeEval(attrs["value"])) diff --git a/spaces/colakin/video-generater/public/ffmpeg/CONTRIBUTING.md b/spaces/colakin/video-generater/public/ffmpeg/CONTRIBUTING.md deleted file mode 100644 index c2b79e452609e2045abb0be19e5c50f2491481b0..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/CONTRIBUTING.md +++ /dev/null @@ -1,4 +0,0 @@ -# Note to Github users -Patches should be submitted to the [ffmpeg-devel mailing list](https://ffmpeg.org/mailman/listinfo/ffmpeg-devel) using `git format-patch` or `git send-email`. Github pull requests should be avoided because they are not part of our review process and **will be ignored**. - -See [https://ffmpeg.org/developer.html#Contributing](https://ffmpeg.org/developer.html#Contributing) for more information. 
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/aarch64/hevcdsp_init_aarch64.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/aarch64/hevcdsp_init_aarch64.c deleted file mode 100644 index be1049a2ec751e80a072161d563f78a879f2d48d..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/aarch64/hevcdsp_init_aarch64.c +++ /dev/null @@ -1,213 +0,0 @@ -/* - * Copyright (c) 2020 Reimar Döffinger - * Copyright (c) 2023 xu fulong <839789740@qq.com> - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#include - -#include "libavutil/attributes.h" -#include "libavutil/cpu.h" -#include "libavutil/aarch64/cpu.h" -#include "libavcodec/hevcdsp.h" - -void ff_hevc_v_loop_filter_chroma_8_neon(uint8_t *_pix, ptrdiff_t _stride, - const int *_tc, const uint8_t *_no_p, const uint8_t *_no_q); -void ff_hevc_v_loop_filter_chroma_10_neon(uint8_t *_pix, ptrdiff_t _stride, - const int *_tc, const uint8_t *_no_p, const uint8_t *_no_q); -void ff_hevc_v_loop_filter_chroma_12_neon(uint8_t *_pix, ptrdiff_t _stride, - const int *_tc, const uint8_t *_no_p, const uint8_t *_no_q); -void ff_hevc_h_loop_filter_chroma_8_neon(uint8_t *_pix, ptrdiff_t _stride, - const int *_tc, const uint8_t *_no_p, const uint8_t *_no_q); -void ff_hevc_h_loop_filter_chroma_10_neon(uint8_t *_pix, ptrdiff_t _stride, - const int *_tc, const uint8_t *_no_p, const uint8_t *_no_q); -void ff_hevc_h_loop_filter_chroma_12_neon(uint8_t *_pix, ptrdiff_t _stride, - const int *_tc, const uint8_t *_no_p, const uint8_t *_no_q); -void ff_hevc_add_residual_4x4_8_neon(uint8_t *_dst, const int16_t *coeffs, - ptrdiff_t stride); -void ff_hevc_add_residual_4x4_10_neon(uint8_t *_dst, const int16_t *coeffs, - ptrdiff_t stride); -void ff_hevc_add_residual_4x4_12_neon(uint8_t *_dst, const int16_t *coeffs, - ptrdiff_t stride); -void ff_hevc_add_residual_8x8_8_neon(uint8_t *_dst, const int16_t *coeffs, - ptrdiff_t stride); -void ff_hevc_add_residual_8x8_10_neon(uint8_t *_dst, const int16_t *coeffs, - ptrdiff_t stride); -void ff_hevc_add_residual_8x8_12_neon(uint8_t *_dst, const int16_t *coeffs, - ptrdiff_t stride); -void ff_hevc_add_residual_16x16_8_neon(uint8_t *_dst, const int16_t *coeffs, - ptrdiff_t stride); -void ff_hevc_add_residual_16x16_10_neon(uint8_t *_dst, const int16_t *coeffs, - ptrdiff_t stride); -void ff_hevc_add_residual_16x16_12_neon(uint8_t *_dst, const int16_t *coeffs, - ptrdiff_t stride); -void ff_hevc_add_residual_32x32_8_neon(uint8_t *_dst, const int16_t *coeffs, - ptrdiff_t stride); -void ff_hevc_add_residual_32x32_10_neon(uint8_t *_dst, const int16_t *coeffs, - ptrdiff_t stride); -void ff_hevc_add_residual_32x32_12_neon(uint8_t *_dst, const int16_t *coeffs, - ptrdiff_t stride); -void ff_hevc_idct_4x4_8_neon(int16_t *coeffs, int col_limit); -void 
ff_hevc_idct_4x4_10_neon(int16_t *coeffs, int col_limit); -void ff_hevc_idct_8x8_8_neon(int16_t *coeffs, int col_limit); -void ff_hevc_idct_8x8_10_neon(int16_t *coeffs, int col_limit); -void ff_hevc_idct_16x16_8_neon(int16_t *coeffs, int col_limit); -void ff_hevc_idct_16x16_10_neon(int16_t *coeffs, int col_limit); -void ff_hevc_idct_32x32_8_neon(int16_t *coeffs, int col_limit); -void ff_hevc_idct_32x32_10_neon(int16_t *coeffs, int col_limit); -void ff_hevc_idct_4x4_dc_8_neon(int16_t *coeffs); -void ff_hevc_idct_8x8_dc_8_neon(int16_t *coeffs); -void ff_hevc_idct_16x16_dc_8_neon(int16_t *coeffs); -void ff_hevc_idct_32x32_dc_8_neon(int16_t *coeffs); -void ff_hevc_idct_4x4_dc_10_neon(int16_t *coeffs); -void ff_hevc_idct_8x8_dc_10_neon(int16_t *coeffs); -void ff_hevc_idct_16x16_dc_10_neon(int16_t *coeffs); -void ff_hevc_idct_32x32_dc_10_neon(int16_t *coeffs); -void ff_hevc_transform_luma_4x4_neon_8(int16_t *coeffs); -void ff_hevc_sao_band_filter_8x8_8_neon(uint8_t *_dst, const uint8_t *_src, - ptrdiff_t stride_dst, ptrdiff_t stride_src, - const int16_t *sao_offset_val, int sao_left_class, - int width, int height); -void ff_hevc_sao_edge_filter_16x16_8_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride_dst, - const int16_t *sao_offset_val, int eo, int width, int height); -void ff_hevc_sao_edge_filter_8x8_8_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride_dst, - const int16_t *sao_offset_val, int eo, int width, int height); -void ff_hevc_put_hevc_qpel_h4_8_neon(int16_t *dst, const uint8_t *_src, ptrdiff_t _srcstride, int height, - intptr_t mx, intptr_t my, int width); -void ff_hevc_put_hevc_qpel_h6_8_neon(int16_t *dst, const uint8_t *_src, ptrdiff_t _srcstride, int height, - intptr_t mx, intptr_t my, int width); -void ff_hevc_put_hevc_qpel_h8_8_neon(int16_t *dst, const uint8_t *_src, ptrdiff_t _srcstride, int height, - intptr_t mx, intptr_t my, int width); -void ff_hevc_put_hevc_qpel_h12_8_neon(int16_t *dst, const uint8_t *_src, ptrdiff_t _srcstride, int height, - intptr_t mx, intptr_t my, int width); -void ff_hevc_put_hevc_qpel_h16_8_neon(int16_t *dst, const uint8_t *_src, ptrdiff_t _srcstride, int height, - intptr_t mx, intptr_t my, int width); -void ff_hevc_put_hevc_qpel_uni_h4_8_neon(uint8_t *_dst, ptrdiff_t _dststride, const uint8_t *_src, - ptrdiff_t _srcstride, int height, intptr_t mx, intptr_t my, - int width); -void ff_hevc_put_hevc_qpel_uni_h6_8_neon(uint8_t *_dst, ptrdiff_t _dststride, const uint8_t *_src, - ptrdiff_t _srcstride, int height, intptr_t mx, intptr_t my, - int width); -void ff_hevc_put_hevc_qpel_uni_h8_8_neon(uint8_t *_dst, ptrdiff_t _dststride, const uint8_t *_src, - ptrdiff_t _srcstride, int height, intptr_t mx, intptr_t my, - int width); -void ff_hevc_put_hevc_qpel_uni_h12_8_neon(uint8_t *_dst, ptrdiff_t _dststride, const uint8_t *_src, - ptrdiff_t _srcstride, int height, intptr_t mx, intptr_t - my, int width); -void ff_hevc_put_hevc_qpel_uni_h16_8_neon(uint8_t *_dst, ptrdiff_t _dststride, const uint8_t *_src, - ptrdiff_t _srcstride, int height, intptr_t mx, intptr_t - my, int width); -void ff_hevc_put_hevc_qpel_bi_h4_8_neon(uint8_t *_dst, ptrdiff_t _dststride, const uint8_t *_src, - ptrdiff_t _srcstride, const int16_t *src2, int height, intptr_t - mx, intptr_t my, int width); -void ff_hevc_put_hevc_qpel_bi_h6_8_neon(uint8_t *_dst, ptrdiff_t _dststride, const uint8_t *_src, - ptrdiff_t _srcstride, const int16_t *src2, int height, intptr_t - mx, intptr_t my, int width); -void ff_hevc_put_hevc_qpel_bi_h8_8_neon(uint8_t *_dst, ptrdiff_t _dststride, const 
uint8_t *_src, - ptrdiff_t _srcstride, const int16_t *src2, int height, intptr_t - mx, intptr_t my, int width); -void ff_hevc_put_hevc_qpel_bi_h12_8_neon(uint8_t *_dst, ptrdiff_t _dststride, const uint8_t *_src, - ptrdiff_t _srcstride, const int16_t *src2, int height, intptr_t - mx, intptr_t my, int width); -void ff_hevc_put_hevc_qpel_bi_h16_8_neon(uint8_t *_dst, ptrdiff_t _dststride, const uint8_t *_src, - ptrdiff_t _srcstride, const int16_t *src2, int height, intptr_t - mx, intptr_t my, int width); - -av_cold void ff_hevc_dsp_init_aarch64(HEVCDSPContext *c, const int bit_depth) -{ - if (!have_neon(av_get_cpu_flags())) return; - - if (bit_depth == 8) { - c->hevc_h_loop_filter_chroma = ff_hevc_h_loop_filter_chroma_8_neon; - c->hevc_v_loop_filter_chroma = ff_hevc_v_loop_filter_chroma_8_neon; - c->add_residual[0] = ff_hevc_add_residual_4x4_8_neon; - c->add_residual[1] = ff_hevc_add_residual_8x8_8_neon; - c->add_residual[2] = ff_hevc_add_residual_16x16_8_neon; - c->add_residual[3] = ff_hevc_add_residual_32x32_8_neon; - c->idct[0] = ff_hevc_idct_4x4_8_neon; - c->idct[1] = ff_hevc_idct_8x8_8_neon; - c->idct[2] = ff_hevc_idct_16x16_8_neon; - c->idct[3] = ff_hevc_idct_32x32_8_neon; - c->idct_dc[0] = ff_hevc_idct_4x4_dc_8_neon; - c->idct_dc[1] = ff_hevc_idct_8x8_dc_8_neon; - c->idct_dc[2] = ff_hevc_idct_16x16_dc_8_neon; - c->idct_dc[3] = ff_hevc_idct_32x32_dc_8_neon; - c->transform_4x4_luma = ff_hevc_transform_luma_4x4_neon_8; - c->sao_band_filter[0] = - c->sao_band_filter[1] = - c->sao_band_filter[2] = - c->sao_band_filter[3] = - c->sao_band_filter[4] = ff_hevc_sao_band_filter_8x8_8_neon; - c->sao_edge_filter[0] = ff_hevc_sao_edge_filter_8x8_8_neon; - c->sao_edge_filter[1] = - c->sao_edge_filter[2] = - c->sao_edge_filter[3] = - c->sao_edge_filter[4] = ff_hevc_sao_edge_filter_16x16_8_neon; - c->put_hevc_qpel[1][0][1] = ff_hevc_put_hevc_qpel_h4_8_neon; - c->put_hevc_qpel[2][0][1] = ff_hevc_put_hevc_qpel_h6_8_neon; - c->put_hevc_qpel[3][0][1] = ff_hevc_put_hevc_qpel_h8_8_neon; - c->put_hevc_qpel[4][0][1] = - c->put_hevc_qpel[6][0][1] = ff_hevc_put_hevc_qpel_h12_8_neon; - c->put_hevc_qpel[5][0][1] = - c->put_hevc_qpel[7][0][1] = - c->put_hevc_qpel[8][0][1] = - c->put_hevc_qpel[9][0][1] = ff_hevc_put_hevc_qpel_h16_8_neon; - c->put_hevc_qpel_uni[1][0][1] = ff_hevc_put_hevc_qpel_uni_h4_8_neon; - c->put_hevc_qpel_uni[2][0][1] = ff_hevc_put_hevc_qpel_uni_h6_8_neon; - c->put_hevc_qpel_uni[3][0][1] = ff_hevc_put_hevc_qpel_uni_h8_8_neon; - c->put_hevc_qpel_uni[4][0][1] = - c->put_hevc_qpel_uni[6][0][1] = ff_hevc_put_hevc_qpel_uni_h12_8_neon; - c->put_hevc_qpel_uni[5][0][1] = - c->put_hevc_qpel_uni[7][0][1] = - c->put_hevc_qpel_uni[8][0][1] = - c->put_hevc_qpel_uni[9][0][1] = ff_hevc_put_hevc_qpel_uni_h16_8_neon; - c->put_hevc_qpel_bi[1][0][1] = ff_hevc_put_hevc_qpel_bi_h4_8_neon; - c->put_hevc_qpel_bi[2][0][1] = ff_hevc_put_hevc_qpel_bi_h6_8_neon; - c->put_hevc_qpel_bi[3][0][1] = ff_hevc_put_hevc_qpel_bi_h8_8_neon; - c->put_hevc_qpel_bi[4][0][1] = - c->put_hevc_qpel_bi[6][0][1] = ff_hevc_put_hevc_qpel_bi_h12_8_neon; - c->put_hevc_qpel_bi[5][0][1] = - c->put_hevc_qpel_bi[7][0][1] = - c->put_hevc_qpel_bi[8][0][1] = - c->put_hevc_qpel_bi[9][0][1] = ff_hevc_put_hevc_qpel_bi_h16_8_neon; - } - if (bit_depth == 10) { - c->hevc_h_loop_filter_chroma = ff_hevc_h_loop_filter_chroma_10_neon; - c->hevc_v_loop_filter_chroma = ff_hevc_v_loop_filter_chroma_10_neon; - c->add_residual[0] = ff_hevc_add_residual_4x4_10_neon; - c->add_residual[1] = ff_hevc_add_residual_8x8_10_neon; - c->add_residual[2] = 
ff_hevc_add_residual_16x16_10_neon; - c->add_residual[3] = ff_hevc_add_residual_32x32_10_neon; - c->idct[0] = ff_hevc_idct_4x4_10_neon; - c->idct[1] = ff_hevc_idct_8x8_10_neon; - c->idct[2] = ff_hevc_idct_16x16_10_neon; - c->idct[3] = ff_hevc_idct_32x32_10_neon; - c->idct_dc[0] = ff_hevc_idct_4x4_dc_10_neon; - c->idct_dc[1] = ff_hevc_idct_8x8_dc_10_neon; - c->idct_dc[2] = ff_hevc_idct_16x16_dc_10_neon; - c->idct_dc[3] = ff_hevc_idct_32x32_dc_10_neon; - } - if (bit_depth == 12) { - c->hevc_h_loop_filter_chroma = ff_hevc_h_loop_filter_chroma_12_neon; - c->hevc_v_loop_filter_chroma = ff_hevc_v_loop_filter_chroma_12_neon; - c->add_residual[0] = ff_hevc_add_residual_4x4_12_neon; - c->add_residual[1] = ff_hevc_add_residual_8x8_12_neon; - c->add_residual[2] = ff_hevc_add_residual_16x16_12_neon; - c->add_residual[3] = ff_hevc_add_residual_32x32_12_neon; - } -} diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/cbs_av1_syntax_template.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/cbs_av1_syntax_template.c deleted file mode 100644 index e95925a493e11b209bd191044ddb1a50f54513ae..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/cbs_av1_syntax_template.c +++ /dev/null @@ -1,2050 +0,0 @@ -/* - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -static int FUNC(obu_header)(CodedBitstreamContext *ctx, RWContext *rw, - AV1RawOBUHeader *current) -{ - CodedBitstreamAV1Context *priv = ctx->priv_data; - int err; - - HEADER("OBU header"); - - fc(1, obu_forbidden_bit, 0, 0); - - fc(4, obu_type, 0, AV1_OBU_PADDING); - flag(obu_extension_flag); - flag(obu_has_size_field); - - fc(1, obu_reserved_1bit, 0, 0); - - if (current->obu_extension_flag) { - fb(3, temporal_id); - fb(2, spatial_id); - fc(3, extension_header_reserved_3bits, 0, 0); - } else { - infer(temporal_id, 0); - infer(spatial_id, 0); - } - - priv->temporal_id = current->temporal_id; - priv->spatial_id = current->spatial_id; - - return 0; -} - -static int FUNC(trailing_bits)(CodedBitstreamContext *ctx, RWContext *rw, int nb_bits) -{ - int err; - - av_assert0(nb_bits > 0); - - fixed(1, trailing_one_bit, 1); - --nb_bits; - - while (nb_bits > 0) { - fixed(1, trailing_zero_bit, 0); - --nb_bits; - } - - return 0; -} - -static int FUNC(byte_alignment)(CodedBitstreamContext *ctx, RWContext *rw) -{ - int err; - - while (byte_alignment(rw) != 0) - fixed(1, zero_bit, 0); - - return 0; -} - -static int FUNC(color_config)(CodedBitstreamContext *ctx, RWContext *rw, - AV1RawColorConfig *current, int seq_profile) -{ - CodedBitstreamAV1Context *priv = ctx->priv_data; - int err; - - flag(high_bitdepth); - - if (seq_profile == FF_PROFILE_AV1_PROFESSIONAL && - current->high_bitdepth) { - flag(twelve_bit); - priv->bit_depth = current->twelve_bit ? 
12 : 10; - } else { - priv->bit_depth = current->high_bitdepth ? 10 : 8; - } - - if (seq_profile == FF_PROFILE_AV1_HIGH) - infer(mono_chrome, 0); - else - flag(mono_chrome); - priv->num_planes = current->mono_chrome ? 1 : 3; - - flag(color_description_present_flag); - if (current->color_description_present_flag) { - fb(8, color_primaries); - fb(8, transfer_characteristics); - fb(8, matrix_coefficients); - } else { - infer(color_primaries, AVCOL_PRI_UNSPECIFIED); - infer(transfer_characteristics, AVCOL_TRC_UNSPECIFIED); - infer(matrix_coefficients, AVCOL_SPC_UNSPECIFIED); - } - - if (current->mono_chrome) { - flag(color_range); - - infer(subsampling_x, 1); - infer(subsampling_y, 1); - infer(chroma_sample_position, AV1_CSP_UNKNOWN); - infer(separate_uv_delta_q, 0); - - } else if (current->color_primaries == AVCOL_PRI_BT709 && - current->transfer_characteristics == AVCOL_TRC_IEC61966_2_1 && - current->matrix_coefficients == AVCOL_SPC_RGB) { - infer(color_range, 1); - infer(subsampling_x, 0); - infer(subsampling_y, 0); - flag(separate_uv_delta_q); - - } else { - flag(color_range); - - if (seq_profile == FF_PROFILE_AV1_MAIN) { - infer(subsampling_x, 1); - infer(subsampling_y, 1); - } else if (seq_profile == FF_PROFILE_AV1_HIGH) { - infer(subsampling_x, 0); - infer(subsampling_y, 0); - } else { - if (priv->bit_depth == 12) { - fb(1, subsampling_x); - if (current->subsampling_x) - fb(1, subsampling_y); - else - infer(subsampling_y, 0); - } else { - infer(subsampling_x, 1); - infer(subsampling_y, 0); - } - } - if (current->subsampling_x && current->subsampling_y) { - fc(2, chroma_sample_position, AV1_CSP_UNKNOWN, - AV1_CSP_COLOCATED); - } - - flag(separate_uv_delta_q); - } - - return 0; -} - -static int FUNC(timing_info)(CodedBitstreamContext *ctx, RWContext *rw, - AV1RawTimingInfo *current) -{ - int err; - - fc(32, num_units_in_display_tick, 1, MAX_UINT_BITS(32)); - fc(32, time_scale, 1, MAX_UINT_BITS(32)); - - flag(equal_picture_interval); - if (current->equal_picture_interval) - uvlc(num_ticks_per_picture_minus_1, 0, MAX_UINT_BITS(32) - 1); - - return 0; -} - -static int FUNC(decoder_model_info)(CodedBitstreamContext *ctx, RWContext *rw, - AV1RawDecoderModelInfo *current) -{ - int err; - - fb(5, buffer_delay_length_minus_1); - fb(32, num_units_in_decoding_tick); - fb(5, buffer_removal_time_length_minus_1); - fb(5, frame_presentation_time_length_minus_1); - - return 0; -} - -static int FUNC(sequence_header_obu)(CodedBitstreamContext *ctx, RWContext *rw, - AV1RawSequenceHeader *current) -{ - int i, err; - - HEADER("Sequence Header"); - - fc(3, seq_profile, FF_PROFILE_AV1_MAIN, - FF_PROFILE_AV1_PROFESSIONAL); - flag(still_picture); - flag(reduced_still_picture_header); - - if (current->reduced_still_picture_header) { - infer(timing_info_present_flag, 0); - infer(decoder_model_info_present_flag, 0); - infer(initial_display_delay_present_flag, 0); - infer(operating_points_cnt_minus_1, 0); - infer(operating_point_idc[0], 0); - - fb(5, seq_level_idx[0]); - - infer(seq_tier[0], 0); - infer(decoder_model_present_for_this_op[0], 0); - infer(initial_display_delay_present_for_this_op[0], 0); - - } else { - flag(timing_info_present_flag); - if (current->timing_info_present_flag) { - CHECK(FUNC(timing_info)(ctx, rw, &current->timing_info)); - - flag(decoder_model_info_present_flag); - if (current->decoder_model_info_present_flag) { - CHECK(FUNC(decoder_model_info) - (ctx, rw, &current->decoder_model_info)); - } - } else { - infer(decoder_model_info_present_flag, 0); - } - - flag(initial_display_delay_present_flag); - - 
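    /*
     * Editor's note on the operating-point loop that follows: judging from how
     * operating_point_idc is consumed later in uncompressed_header() (bit
     * temporal_id and bit spatial_id + 8 are tested), the low 8 bits of each
     * 12-bit idc appear to select temporal layers and the upper bits spatial
     * layers; seq_tier[i] is only coded when seq_level_idx[i] > 7 and is
     * otherwise inferred to 0.
     */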
fb(5, operating_points_cnt_minus_1); - for (i = 0; i <= current->operating_points_cnt_minus_1; i++) { - fbs(12, operating_point_idc[i], 1, i); - fbs(5, seq_level_idx[i], 1, i); - - if (current->seq_level_idx[i] > 7) - flags(seq_tier[i], 1, i); - else - infer(seq_tier[i], 0); - - if (current->decoder_model_info_present_flag) { - flags(decoder_model_present_for_this_op[i], 1, i); - if (current->decoder_model_present_for_this_op[i]) { - int n = current->decoder_model_info.buffer_delay_length_minus_1 + 1; - fbs(n, decoder_buffer_delay[i], 1, i); - fbs(n, encoder_buffer_delay[i], 1, i); - flags(low_delay_mode_flag[i], 1, i); - } - } else { - infer(decoder_model_present_for_this_op[i], 0); - } - - if (current->initial_display_delay_present_flag) { - flags(initial_display_delay_present_for_this_op[i], 1, i); - if (current->initial_display_delay_present_for_this_op[i]) - fbs(4, initial_display_delay_minus_1[i], 1, i); - } - } - } - - fb(4, frame_width_bits_minus_1); - fb(4, frame_height_bits_minus_1); - - fb(current->frame_width_bits_minus_1 + 1, max_frame_width_minus_1); - fb(current->frame_height_bits_minus_1 + 1, max_frame_height_minus_1); - - if (current->reduced_still_picture_header) - infer(frame_id_numbers_present_flag, 0); - else - flag(frame_id_numbers_present_flag); - if (current->frame_id_numbers_present_flag) { - fb(4, delta_frame_id_length_minus_2); - fb(3, additional_frame_id_length_minus_1); - } - - flag(use_128x128_superblock); - flag(enable_filter_intra); - flag(enable_intra_edge_filter); - - if (current->reduced_still_picture_header) { - infer(enable_interintra_compound, 0); - infer(enable_masked_compound, 0); - infer(enable_warped_motion, 0); - infer(enable_dual_filter, 0); - infer(enable_order_hint, 0); - infer(enable_jnt_comp, 0); - infer(enable_ref_frame_mvs, 0); - - infer(seq_force_screen_content_tools, - AV1_SELECT_SCREEN_CONTENT_TOOLS); - infer(seq_force_integer_mv, - AV1_SELECT_INTEGER_MV); - } else { - flag(enable_interintra_compound); - flag(enable_masked_compound); - flag(enable_warped_motion); - flag(enable_dual_filter); - - flag(enable_order_hint); - if (current->enable_order_hint) { - flag(enable_jnt_comp); - flag(enable_ref_frame_mvs); - } else { - infer(enable_jnt_comp, 0); - infer(enable_ref_frame_mvs, 0); - } - - flag(seq_choose_screen_content_tools); - if (current->seq_choose_screen_content_tools) - infer(seq_force_screen_content_tools, - AV1_SELECT_SCREEN_CONTENT_TOOLS); - else - fb(1, seq_force_screen_content_tools); - if (current->seq_force_screen_content_tools > 0) { - flag(seq_choose_integer_mv); - if (current->seq_choose_integer_mv) - infer(seq_force_integer_mv, - AV1_SELECT_INTEGER_MV); - else - fb(1, seq_force_integer_mv); - } else { - infer(seq_force_integer_mv, AV1_SELECT_INTEGER_MV); - } - - if (current->enable_order_hint) - fb(3, order_hint_bits_minus_1); - } - - flag(enable_superres); - flag(enable_cdef); - flag(enable_restoration); - - CHECK(FUNC(color_config)(ctx, rw, &current->color_config, - current->seq_profile)); - - flag(film_grain_params_present); - - return 0; -} - -static int FUNC(temporal_delimiter_obu)(CodedBitstreamContext *ctx, RWContext *rw) -{ - CodedBitstreamAV1Context *priv = ctx->priv_data; - - HEADER("Temporal Delimiter"); - - priv->seen_frame_header = 0; - - return 0; -} - -static int FUNC(set_frame_refs)(CodedBitstreamContext *ctx, RWContext *rw, - AV1RawFrameHeader *current) -{ - CodedBitstreamAV1Context *priv = ctx->priv_data; - const AV1RawSequenceHeader *seq = priv->sequence_header; - static const uint8_t 
ref_frame_list[AV1_NUM_REF_FRAMES - 2] = { - AV1_REF_FRAME_LAST2, AV1_REF_FRAME_LAST3, AV1_REF_FRAME_BWDREF, - AV1_REF_FRAME_ALTREF2, AV1_REF_FRAME_ALTREF - }; - int8_t ref_frame_idx[AV1_REFS_PER_FRAME], used_frame[AV1_NUM_REF_FRAMES]; - int16_t shifted_order_hints[AV1_NUM_REF_FRAMES]; - int cur_frame_hint, latest_order_hint, earliest_order_hint, ref; - int i, j; - - for (i = 0; i < AV1_REFS_PER_FRAME; i++) - ref_frame_idx[i] = -1; - ref_frame_idx[AV1_REF_FRAME_LAST - AV1_REF_FRAME_LAST] = current->last_frame_idx; - ref_frame_idx[AV1_REF_FRAME_GOLDEN - AV1_REF_FRAME_LAST] = current->golden_frame_idx; - - for (i = 0; i < AV1_NUM_REF_FRAMES; i++) - used_frame[i] = 0; - used_frame[current->last_frame_idx] = 1; - used_frame[current->golden_frame_idx] = 1; - - cur_frame_hint = 1 << (seq->order_hint_bits_minus_1); - for (i = 0; i < AV1_NUM_REF_FRAMES; i++) - shifted_order_hints[i] = cur_frame_hint + - cbs_av1_get_relative_dist(seq, priv->ref[i].order_hint, - priv->order_hint); - - latest_order_hint = shifted_order_hints[current->last_frame_idx]; - earliest_order_hint = shifted_order_hints[current->golden_frame_idx]; - - ref = -1; - for (i = 0; i < AV1_NUM_REF_FRAMES; i++) { - int hint = shifted_order_hints[i]; - if (!used_frame[i] && hint >= cur_frame_hint && - (ref < 0 || hint >= latest_order_hint)) { - ref = i; - latest_order_hint = hint; - } - } - if (ref >= 0) { - ref_frame_idx[AV1_REF_FRAME_ALTREF - AV1_REF_FRAME_LAST] = ref; - used_frame[ref] = 1; - } - - ref = -1; - for (i = 0; i < AV1_NUM_REF_FRAMES; i++) { - int hint = shifted_order_hints[i]; - if (!used_frame[i] && hint >= cur_frame_hint && - (ref < 0 || hint < earliest_order_hint)) { - ref = i; - earliest_order_hint = hint; - } - } - if (ref >= 0) { - ref_frame_idx[AV1_REF_FRAME_BWDREF - AV1_REF_FRAME_LAST] = ref; - used_frame[ref] = 1; - } - - ref = -1; - for (i = 0; i < AV1_NUM_REF_FRAMES; i++) { - int hint = shifted_order_hints[i]; - if (!used_frame[i] && hint >= cur_frame_hint && - (ref < 0 || hint < earliest_order_hint)) { - ref = i; - earliest_order_hint = hint; - } - } - if (ref >= 0) { - ref_frame_idx[AV1_REF_FRAME_ALTREF2 - AV1_REF_FRAME_LAST] = ref; - used_frame[ref] = 1; - } - - for (i = 0; i < AV1_REFS_PER_FRAME - 2; i++) { - int ref_frame = ref_frame_list[i]; - if (ref_frame_idx[ref_frame - AV1_REF_FRAME_LAST] < 0 ) { - ref = -1; - for (j = 0; j < AV1_NUM_REF_FRAMES; j++) { - int hint = shifted_order_hints[j]; - if (!used_frame[j] && hint < cur_frame_hint && - (ref < 0 || hint >= latest_order_hint)) { - ref = j; - latest_order_hint = hint; - } - } - if (ref >= 0) { - ref_frame_idx[ref_frame - AV1_REF_FRAME_LAST] = ref; - used_frame[ref] = 1; - } - } - } - - ref = -1; - for (i = 0; i < AV1_NUM_REF_FRAMES; i++) { - int hint = shifted_order_hints[i]; - if (ref < 0 || hint < earliest_order_hint) { - ref = i; - earliest_order_hint = hint; - } - } - for (i = 0; i < AV1_REFS_PER_FRAME; i++) { - if (ref_frame_idx[i] < 0) - ref_frame_idx[i] = ref; - infer(ref_frame_idx[i], ref_frame_idx[i]); - } - - return 0; -} - -static int FUNC(superres_params)(CodedBitstreamContext *ctx, RWContext *rw, - AV1RawFrameHeader *current) -{ - CodedBitstreamAV1Context *priv = ctx->priv_data; - const AV1RawSequenceHeader *seq = priv->sequence_header; - int denom, err; - - if (seq->enable_superres) - flag(use_superres); - else - infer(use_superres, 0); - - if (current->use_superres) { - fb(3, coded_denom); - denom = current->coded_denom + AV1_SUPERRES_DENOM_MIN; - } else { - denom = AV1_SUPERRES_NUM; - } - - priv->upscaled_width = priv->frame_width; - 
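    /*
     * Editor's note, a short worked example of the statement below (assuming
     * the usual AV1 constants SUPERRES_NUM = 8 and SUPERRES_DENOM_MIN = 9,
     * which are not shown in this file): with use_superres set and
     * coded_denom = 7, denom = 16, so an upscaled_width of 1920 gives
     * frame_width = (1920 * 8 + 8) / 16 = 960, i.e. the coded frame is
     * horizontally downscaled and later restored to upscaled_width by the
     * decoder's superres filter; the "+ denom / 2" term rounds to nearest.
     */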
priv->frame_width = (priv->upscaled_width * AV1_SUPERRES_NUM + - denom / 2) / denom; - - return 0; -} - -static int FUNC(frame_size)(CodedBitstreamContext *ctx, RWContext *rw, - AV1RawFrameHeader *current) -{ - CodedBitstreamAV1Context *priv = ctx->priv_data; - const AV1RawSequenceHeader *seq = priv->sequence_header; - int err; - - if (current->frame_size_override_flag) { - fb(seq->frame_width_bits_minus_1 + 1, frame_width_minus_1); - fb(seq->frame_height_bits_minus_1 + 1, frame_height_minus_1); - } else { - infer(frame_width_minus_1, seq->max_frame_width_minus_1); - infer(frame_height_minus_1, seq->max_frame_height_minus_1); - } - - priv->frame_width = current->frame_width_minus_1 + 1; - priv->frame_height = current->frame_height_minus_1 + 1; - - CHECK(FUNC(superres_params)(ctx, rw, current)); - - return 0; -} - -static int FUNC(render_size)(CodedBitstreamContext *ctx, RWContext *rw, - AV1RawFrameHeader *current) -{ - CodedBitstreamAV1Context *priv = ctx->priv_data; - int err; - - flag(render_and_frame_size_different); - - if (current->render_and_frame_size_different) { - fb(16, render_width_minus_1); - fb(16, render_height_minus_1); - } else { - infer(render_width_minus_1, current->frame_width_minus_1); - infer(render_height_minus_1, current->frame_height_minus_1); - } - - priv->render_width = current->render_width_minus_1 + 1; - priv->render_height = current->render_height_minus_1 + 1; - - return 0; -} - -static int FUNC(frame_size_with_refs)(CodedBitstreamContext *ctx, RWContext *rw, - AV1RawFrameHeader *current) -{ - CodedBitstreamAV1Context *priv = ctx->priv_data; - int i, err; - - for (i = 0; i < AV1_REFS_PER_FRAME; i++) { - flags(found_ref[i], 1, i); - if (current->found_ref[i]) { - AV1ReferenceFrameState *ref = - &priv->ref[current->ref_frame_idx[i]]; - - if (!ref->valid) { - av_log(ctx->log_ctx, AV_LOG_ERROR, - "Missing reference frame needed for frame size " - "(ref = %d, ref_frame_idx = %d).\n", - i, current->ref_frame_idx[i]); - return AVERROR_INVALIDDATA; - } - - infer(frame_width_minus_1, ref->upscaled_width - 1); - infer(frame_height_minus_1, ref->frame_height - 1); - infer(render_width_minus_1, ref->render_width - 1); - infer(render_height_minus_1, ref->render_height - 1); - - priv->upscaled_width = ref->upscaled_width; - priv->frame_width = priv->upscaled_width; - priv->frame_height = ref->frame_height; - priv->render_width = ref->render_width; - priv->render_height = ref->render_height; - break; - } - } - - if (i >= AV1_REFS_PER_FRAME) { - CHECK(FUNC(frame_size)(ctx, rw, current)); - CHECK(FUNC(render_size)(ctx, rw, current)); - } else { - CHECK(FUNC(superres_params)(ctx, rw, current)); - } - - return 0; -} - -static int FUNC(interpolation_filter)(CodedBitstreamContext *ctx, RWContext *rw, - AV1RawFrameHeader *current) -{ - int err; - - flag(is_filter_switchable); - if (current->is_filter_switchable) - infer(interpolation_filter, - AV1_INTERPOLATION_FILTER_SWITCHABLE); - else - fb(2, interpolation_filter); - - return 0; -} - -static int FUNC(tile_info)(CodedBitstreamContext *ctx, RWContext *rw, - AV1RawFrameHeader *current) -{ - CodedBitstreamAV1Context *priv = ctx->priv_data; - const AV1RawSequenceHeader *seq = priv->sequence_header; - int mi_cols, mi_rows, sb_cols, sb_rows, sb_shift, sb_size; - int max_tile_width_sb, max_tile_height_sb, max_tile_area_sb; - int min_log2_tile_cols, max_log2_tile_cols, max_log2_tile_rows; - int min_log2_tiles, min_log2_tile_rows; - int i, err; - - mi_cols = 2 * ((priv->frame_width + 7) >> 3); - mi_rows = 2 * ((priv->frame_height + 7) >> 
3); - - sb_cols = seq->use_128x128_superblock ? ((mi_cols + 31) >> 5) - : ((mi_cols + 15) >> 4); - sb_rows = seq->use_128x128_superblock ? ((mi_rows + 31) >> 5) - : ((mi_rows + 15) >> 4); - - sb_shift = seq->use_128x128_superblock ? 5 : 4; - sb_size = sb_shift + 2; - - max_tile_width_sb = AV1_MAX_TILE_WIDTH >> sb_size; - max_tile_area_sb = AV1_MAX_TILE_AREA >> (2 * sb_size); - - min_log2_tile_cols = cbs_av1_tile_log2(max_tile_width_sb, sb_cols); - max_log2_tile_cols = cbs_av1_tile_log2(1, FFMIN(sb_cols, AV1_MAX_TILE_COLS)); - max_log2_tile_rows = cbs_av1_tile_log2(1, FFMIN(sb_rows, AV1_MAX_TILE_ROWS)); - min_log2_tiles = FFMAX(min_log2_tile_cols, - cbs_av1_tile_log2(max_tile_area_sb, sb_rows * sb_cols)); - - flag(uniform_tile_spacing_flag); - - if (current->uniform_tile_spacing_flag) { - int tile_width_sb, tile_height_sb; - - increment(tile_cols_log2, min_log2_tile_cols, max_log2_tile_cols); - - tile_width_sb = (sb_cols + (1 << current->tile_cols_log2) - 1) >> - current->tile_cols_log2; - current->tile_cols = (sb_cols + tile_width_sb - 1) / tile_width_sb; - - min_log2_tile_rows = FFMAX(min_log2_tiles - current->tile_cols_log2, 0); - - increment(tile_rows_log2, min_log2_tile_rows, max_log2_tile_rows); - - tile_height_sb = (sb_rows + (1 << current->tile_rows_log2) - 1) >> - current->tile_rows_log2; - current->tile_rows = (sb_rows + tile_height_sb - 1) / tile_height_sb; - - for (i = 0; i < current->tile_cols - 1; i++) - infer(width_in_sbs_minus_1[i], tile_width_sb - 1); - infer(width_in_sbs_minus_1[i], - sb_cols - (current->tile_cols - 1) * tile_width_sb - 1); - for (i = 0; i < current->tile_rows - 1; i++) - infer(height_in_sbs_minus_1[i], tile_height_sb - 1); - infer(height_in_sbs_minus_1[i], - sb_rows - (current->tile_rows - 1) * tile_height_sb - 1); - - } else { - int widest_tile_sb, start_sb, size_sb, max_width, max_height; - - widest_tile_sb = 0; - - start_sb = 0; - for (i = 0; start_sb < sb_cols && i < AV1_MAX_TILE_COLS; i++) { - max_width = FFMIN(sb_cols - start_sb, max_tile_width_sb); - ns(max_width, width_in_sbs_minus_1[i], 1, i); - size_sb = current->width_in_sbs_minus_1[i] + 1; - widest_tile_sb = FFMAX(size_sb, widest_tile_sb); - start_sb += size_sb; - } - current->tile_cols_log2 = cbs_av1_tile_log2(1, i); - current->tile_cols = i; - - if (min_log2_tiles > 0) - max_tile_area_sb = (sb_rows * sb_cols) >> (min_log2_tiles + 1); - else - max_tile_area_sb = sb_rows * sb_cols; - max_tile_height_sb = FFMAX(max_tile_area_sb / widest_tile_sb, 1); - - start_sb = 0; - for (i = 0; start_sb < sb_rows && i < AV1_MAX_TILE_ROWS; i++) { - max_height = FFMIN(sb_rows - start_sb, max_tile_height_sb); - ns(max_height, height_in_sbs_minus_1[i], 1, i); - size_sb = current->height_in_sbs_minus_1[i] + 1; - start_sb += size_sb; - } - current->tile_rows_log2 = cbs_av1_tile_log2(1, i); - current->tile_rows = i; - } - - if (current->tile_cols_log2 > 0 || - current->tile_rows_log2 > 0) { - fb(current->tile_cols_log2 + current->tile_rows_log2, - context_update_tile_id); - fb(2, tile_size_bytes_minus1); - } else { - infer(context_update_tile_id, 0); - } - - priv->tile_cols = current->tile_cols; - priv->tile_rows = current->tile_rows; - - return 0; -} - -static int FUNC(quantization_params)(CodedBitstreamContext *ctx, RWContext *rw, - AV1RawFrameHeader *current) -{ - CodedBitstreamAV1Context *priv = ctx->priv_data; - const AV1RawSequenceHeader *seq = priv->sequence_header; - int err; - - fb(8, base_q_idx); - - delta_q(delta_q_y_dc); - - if (priv->num_planes > 1) { - if (seq->color_config.separate_uv_delta_q) - 
flag(diff_uv_delta); - else - infer(diff_uv_delta, 0); - - delta_q(delta_q_u_dc); - delta_q(delta_q_u_ac); - - if (current->diff_uv_delta) { - delta_q(delta_q_v_dc); - delta_q(delta_q_v_ac); - } else { - infer(delta_q_v_dc, current->delta_q_u_dc); - infer(delta_q_v_ac, current->delta_q_u_ac); - } - } else { - infer(delta_q_u_dc, 0); - infer(delta_q_u_ac, 0); - infer(delta_q_v_dc, 0); - infer(delta_q_v_ac, 0); - } - - flag(using_qmatrix); - if (current->using_qmatrix) { - fb(4, qm_y); - fb(4, qm_u); - if (seq->color_config.separate_uv_delta_q) - fb(4, qm_v); - else - infer(qm_v, current->qm_u); - } - - return 0; -} - -static int FUNC(segmentation_params)(CodedBitstreamContext *ctx, RWContext *rw, - AV1RawFrameHeader *current) -{ - CodedBitstreamAV1Context *priv = ctx->priv_data; - static const uint8_t bits[AV1_SEG_LVL_MAX] = { 8, 6, 6, 6, 6, 3, 0, 0 }; - static const uint8_t sign[AV1_SEG_LVL_MAX] = { 1, 1, 1, 1, 1, 0, 0, 0 }; - static const uint8_t default_feature_enabled[AV1_SEG_LVL_MAX] = { 0 }; - static const int16_t default_feature_value[AV1_SEG_LVL_MAX] = { 0 }; - int i, j, err; - - flag(segmentation_enabled); - - if (current->segmentation_enabled) { - if (current->primary_ref_frame == AV1_PRIMARY_REF_NONE) { - infer(segmentation_update_map, 1); - infer(segmentation_temporal_update, 0); - infer(segmentation_update_data, 1); - } else { - flag(segmentation_update_map); - if (current->segmentation_update_map) - flag(segmentation_temporal_update); - else - infer(segmentation_temporal_update, 0); - flag(segmentation_update_data); - } - - for (i = 0; i < AV1_MAX_SEGMENTS; i++) { - const uint8_t *ref_feature_enabled; - const int16_t *ref_feature_value; - - if (current->primary_ref_frame == AV1_PRIMARY_REF_NONE) { - ref_feature_enabled = default_feature_enabled; - ref_feature_value = default_feature_value; - } else { - ref_feature_enabled = - priv->ref[current->ref_frame_idx[current->primary_ref_frame]].feature_enabled[i]; - ref_feature_value = - priv->ref[current->ref_frame_idx[current->primary_ref_frame]].feature_value[i]; - } - - for (j = 0; j < AV1_SEG_LVL_MAX; j++) { - if (current->segmentation_update_data) { - flags(feature_enabled[i][j], 2, i, j); - - if (current->feature_enabled[i][j] && bits[j] > 0) { - if (sign[j]) - sus(1 + bits[j], feature_value[i][j], 2, i, j); - else - fbs(bits[j], feature_value[i][j], 2, i, j); - } else { - infer(feature_value[i][j], 0); - } - } else { - infer(feature_enabled[i][j], ref_feature_enabled[j]); - infer(feature_value[i][j], ref_feature_value[j]); - } - } - } - } else { - for (i = 0; i < AV1_MAX_SEGMENTS; i++) { - for (j = 0; j < AV1_SEG_LVL_MAX; j++) { - infer(feature_enabled[i][j], 0); - infer(feature_value[i][j], 0); - } - } - } - - return 0; -} - -static int FUNC(delta_q_params)(CodedBitstreamContext *ctx, RWContext *rw, - AV1RawFrameHeader *current) -{ - int err; - - if (current->base_q_idx > 0) - flag(delta_q_present); - else - infer(delta_q_present, 0); - - if (current->delta_q_present) - fb(2, delta_q_res); - - return 0; -} - -static int FUNC(delta_lf_params)(CodedBitstreamContext *ctx, RWContext *rw, - AV1RawFrameHeader *current) -{ - int err; - - if (current->delta_q_present) { - if (!current->allow_intrabc) - flag(delta_lf_present); - else - infer(delta_lf_present, 0); - if (current->delta_lf_present) { - fb(2, delta_lf_res); - flag(delta_lf_multi); - } else { - infer(delta_lf_res, 0); - infer(delta_lf_multi, 0); - } - } else { - infer(delta_lf_present, 0); - infer(delta_lf_res, 0); - infer(delta_lf_multi, 0); - } - - return 0; -} - -static 
int FUNC(loop_filter_params)(CodedBitstreamContext *ctx, RWContext *rw, - AV1RawFrameHeader *current) -{ - CodedBitstreamAV1Context *priv = ctx->priv_data; - static const int8_t default_loop_filter_ref_deltas[AV1_TOTAL_REFS_PER_FRAME] = - { 1, 0, 0, 0, -1, 0, -1, -1 }; - static const int8_t default_loop_filter_mode_deltas[2] = { 0, 0 }; - int i, err; - - if (priv->coded_lossless || current->allow_intrabc) { - infer(loop_filter_level[0], 0); - infer(loop_filter_level[1], 0); - infer(loop_filter_ref_deltas[AV1_REF_FRAME_INTRA], 1); - infer(loop_filter_ref_deltas[AV1_REF_FRAME_LAST], 0); - infer(loop_filter_ref_deltas[AV1_REF_FRAME_LAST2], 0); - infer(loop_filter_ref_deltas[AV1_REF_FRAME_LAST3], 0); - infer(loop_filter_ref_deltas[AV1_REF_FRAME_BWDREF], 0); - infer(loop_filter_ref_deltas[AV1_REF_FRAME_GOLDEN], -1); - infer(loop_filter_ref_deltas[AV1_REF_FRAME_ALTREF], -1); - infer(loop_filter_ref_deltas[AV1_REF_FRAME_ALTREF2], -1); - for (i = 0; i < 2; i++) - infer(loop_filter_mode_deltas[i], 0); - return 0; - } - - fb(6, loop_filter_level[0]); - fb(6, loop_filter_level[1]); - - if (priv->num_planes > 1) { - if (current->loop_filter_level[0] || - current->loop_filter_level[1]) { - fb(6, loop_filter_level[2]); - fb(6, loop_filter_level[3]); - } - } - - fb(3, loop_filter_sharpness); - - flag(loop_filter_delta_enabled); - if (current->loop_filter_delta_enabled) { - const int8_t *ref_loop_filter_ref_deltas, *ref_loop_filter_mode_deltas; - - if (current->primary_ref_frame == AV1_PRIMARY_REF_NONE) { - ref_loop_filter_ref_deltas = default_loop_filter_ref_deltas; - ref_loop_filter_mode_deltas = default_loop_filter_mode_deltas; - } else { - ref_loop_filter_ref_deltas = - priv->ref[current->ref_frame_idx[current->primary_ref_frame]].loop_filter_ref_deltas; - ref_loop_filter_mode_deltas = - priv->ref[current->ref_frame_idx[current->primary_ref_frame]].loop_filter_mode_deltas; - } - - flag(loop_filter_delta_update); - for (i = 0; i < AV1_TOTAL_REFS_PER_FRAME; i++) { - if (current->loop_filter_delta_update) - flags(update_ref_delta[i], 1, i); - else - infer(update_ref_delta[i], 0); - if (current->update_ref_delta[i]) - sus(1 + 6, loop_filter_ref_deltas[i], 1, i); - else - infer(loop_filter_ref_deltas[i], ref_loop_filter_ref_deltas[i]); - } - for (i = 0; i < 2; i++) { - if (current->loop_filter_delta_update) - flags(update_mode_delta[i], 1, i); - else - infer(update_mode_delta[i], 0); - if (current->update_mode_delta[i]) - sus(1 + 6, loop_filter_mode_deltas[i], 1, i); - else - infer(loop_filter_mode_deltas[i], ref_loop_filter_mode_deltas[i]); - } - } else { - for (i = 0; i < AV1_TOTAL_REFS_PER_FRAME; i++) - infer(loop_filter_ref_deltas[i], default_loop_filter_ref_deltas[i]); - for (i = 0; i < 2; i++) - infer(loop_filter_mode_deltas[i], default_loop_filter_mode_deltas[i]); - } - - return 0; -} - -static int FUNC(cdef_params)(CodedBitstreamContext *ctx, RWContext *rw, - AV1RawFrameHeader *current) -{ - CodedBitstreamAV1Context *priv = ctx->priv_data; - const AV1RawSequenceHeader *seq = priv->sequence_header; - int i, err; - - if (priv->coded_lossless || current->allow_intrabc || - !seq->enable_cdef) { - infer(cdef_damping_minus_3, 0); - infer(cdef_bits, 0); - infer(cdef_y_pri_strength[0], 0); - infer(cdef_y_sec_strength[0], 0); - infer(cdef_uv_pri_strength[0], 0); - infer(cdef_uv_sec_strength[0], 0); - - return 0; - } - - fb(2, cdef_damping_minus_3); - fb(2, cdef_bits); - - for (i = 0; i < (1 << current->cdef_bits); i++) { - fbs(4, cdef_y_pri_strength[i], 1, i); - fbs(2, cdef_y_sec_strength[i], 1, i); - - if 
(priv->num_planes > 1) { - fbs(4, cdef_uv_pri_strength[i], 1, i); - fbs(2, cdef_uv_sec_strength[i], 1, i); - } - } - - return 0; -} - -static int FUNC(lr_params)(CodedBitstreamContext *ctx, RWContext *rw, - AV1RawFrameHeader *current) -{ - CodedBitstreamAV1Context *priv = ctx->priv_data; - const AV1RawSequenceHeader *seq = priv->sequence_header; - int uses_lr, uses_chroma_lr; - int i, err; - - if (priv->all_lossless || current->allow_intrabc || - !seq->enable_restoration) { - return 0; - } - - uses_lr = uses_chroma_lr = 0; - for (i = 0; i < priv->num_planes; i++) { - fbs(2, lr_type[i], 1, i); - - if (current->lr_type[i] != AV1_RESTORE_NONE) { - uses_lr = 1; - if (i > 0) - uses_chroma_lr = 1; - } - } - - if (uses_lr) { - if (seq->use_128x128_superblock) - increment(lr_unit_shift, 1, 2); - else - increment(lr_unit_shift, 0, 2); - - if(seq->color_config.subsampling_x && - seq->color_config.subsampling_y && uses_chroma_lr) { - fb(1, lr_uv_shift); - } else { - infer(lr_uv_shift, 0); - } - } - - return 0; -} - -static int FUNC(read_tx_mode)(CodedBitstreamContext *ctx, RWContext *rw, - AV1RawFrameHeader *current) -{ - CodedBitstreamAV1Context *priv = ctx->priv_data; - int err; - - if (priv->coded_lossless) - infer(tx_mode, 0); - else - increment(tx_mode, 1, 2); - - return 0; -} - -static int FUNC(frame_reference_mode)(CodedBitstreamContext *ctx, RWContext *rw, - AV1RawFrameHeader *current) -{ - int err; - - if (current->frame_type == AV1_FRAME_INTRA_ONLY || - current->frame_type == AV1_FRAME_KEY) - infer(reference_select, 0); - else - flag(reference_select); - - return 0; -} - -static int FUNC(skip_mode_params)(CodedBitstreamContext *ctx, RWContext *rw, - AV1RawFrameHeader *current) -{ - CodedBitstreamAV1Context *priv = ctx->priv_data; - const AV1RawSequenceHeader *seq = priv->sequence_header; - int skip_mode_allowed; - int err; - - if (current->frame_type == AV1_FRAME_KEY || - current->frame_type == AV1_FRAME_INTRA_ONLY || - !current->reference_select || !seq->enable_order_hint) { - skip_mode_allowed = 0; - } else { - int forward_idx, backward_idx; - int forward_hint, backward_hint; - int ref_hint, dist, i; - - forward_idx = -1; - backward_idx = -1; - for (i = 0; i < AV1_REFS_PER_FRAME; i++) { - ref_hint = priv->ref[current->ref_frame_idx[i]].order_hint; - dist = cbs_av1_get_relative_dist(seq, ref_hint, - priv->order_hint); - if (dist < 0) { - if (forward_idx < 0 || - cbs_av1_get_relative_dist(seq, ref_hint, - forward_hint) > 0) { - forward_idx = i; - forward_hint = ref_hint; - } - } else if (dist > 0) { - if (backward_idx < 0 || - cbs_av1_get_relative_dist(seq, ref_hint, - backward_hint) < 0) { - backward_idx = i; - backward_hint = ref_hint; - } - } - } - - if (forward_idx < 0) { - skip_mode_allowed = 0; - } else if (backward_idx >= 0) { - skip_mode_allowed = 1; - // Frames for skip mode are forward_idx and backward_idx. - } else { - int second_forward_idx; - int second_forward_hint; - - second_forward_idx = -1; - for (i = 0; i < AV1_REFS_PER_FRAME; i++) { - ref_hint = priv->ref[current->ref_frame_idx[i]].order_hint; - if (cbs_av1_get_relative_dist(seq, ref_hint, - forward_hint) < 0) { - if (second_forward_idx < 0 || - cbs_av1_get_relative_dist(seq, ref_hint, - second_forward_hint) > 0) { - second_forward_idx = i; - second_forward_hint = ref_hint; - } - } - } - - if (second_forward_idx < 0) { - skip_mode_allowed = 0; - } else { - skip_mode_allowed = 1; - // Frames for skip mode are forward_idx and second_forward_idx. 
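                /*
                 * Editor's note, summarising the selection logic above: skip
                 * mode ends up allowed either when both a forward and a
                 * backward reference exist relative to the current order
                 * hint, or when two distinct forward references can be found;
                 * otherwise skip_mode_present is inferred to 0 below.
                 */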
- } - } - } - - if (skip_mode_allowed) - flag(skip_mode_present); - else - infer(skip_mode_present, 0); - - return 0; -} - -static int FUNC(global_motion_param)(CodedBitstreamContext *ctx, RWContext *rw, - AV1RawFrameHeader *current, - int type, int ref, int idx) -{ - uint32_t abs_bits, prec_bits, num_syms; - int err; - - if (idx < 2) { - if (type == AV1_WARP_MODEL_TRANSLATION) { - abs_bits = AV1_GM_ABS_TRANS_ONLY_BITS - !current->allow_high_precision_mv; - prec_bits = AV1_GM_TRANS_ONLY_PREC_BITS - !current->allow_high_precision_mv; - } else { - abs_bits = AV1_GM_ABS_TRANS_BITS; - prec_bits = AV1_GM_TRANS_PREC_BITS; - } - } else { - abs_bits = AV1_GM_ABS_ALPHA_BITS; - prec_bits = AV1_GM_ALPHA_PREC_BITS; - } - - num_syms = 2 * (1 << abs_bits) + 1; - subexp(gm_params[ref][idx], num_syms, 2, ref, idx); - - // Actual gm_params value is not reconstructed here. - (void)prec_bits; - - return 0; -} - -static int FUNC(global_motion_params)(CodedBitstreamContext *ctx, RWContext *rw, - AV1RawFrameHeader *current) -{ - int ref, type; - int err; - - if (current->frame_type == AV1_FRAME_KEY || - current->frame_type == AV1_FRAME_INTRA_ONLY) - return 0; - - for (ref = AV1_REF_FRAME_LAST; ref <= AV1_REF_FRAME_ALTREF; ref++) { - flags(is_global[ref], 1, ref); - if (current->is_global[ref]) { - flags(is_rot_zoom[ref], 1, ref); - if (current->is_rot_zoom[ref]) { - type = AV1_WARP_MODEL_ROTZOOM; - } else { - flags(is_translation[ref], 1, ref); - type = current->is_translation[ref] ? AV1_WARP_MODEL_TRANSLATION - : AV1_WARP_MODEL_AFFINE; - } - } else { - type = AV1_WARP_MODEL_IDENTITY; - } - - if (type >= AV1_WARP_MODEL_ROTZOOM) { - CHECK(FUNC(global_motion_param)(ctx, rw, current, type, ref, 2)); - CHECK(FUNC(global_motion_param)(ctx, rw, current, type, ref, 3)); - if (type == AV1_WARP_MODEL_AFFINE) { - CHECK(FUNC(global_motion_param)(ctx, rw, current, type, ref, 4)); - CHECK(FUNC(global_motion_param)(ctx, rw, current, type, ref, 5)); - } else { - // gm_params[ref][4] = -gm_params[ref][3] - // gm_params[ref][5] = gm_params[ref][2] - } - } - if (type >= AV1_WARP_MODEL_TRANSLATION) { - CHECK(FUNC(global_motion_param)(ctx, rw, current, type, ref, 0)); - CHECK(FUNC(global_motion_param)(ctx, rw, current, type, ref, 1)); - } - } - - return 0; -} - -static int FUNC(film_grain_params)(CodedBitstreamContext *ctx, RWContext *rw, - AV1RawFilmGrainParams *current, - AV1RawFrameHeader *frame_header) -{ - CodedBitstreamAV1Context *priv = ctx->priv_data; - const AV1RawSequenceHeader *seq = priv->sequence_header; - int num_pos_luma, num_pos_chroma; - int i, err; - - if (!seq->film_grain_params_present || - (!frame_header->show_frame && !frame_header->showable_frame)) - return 0; - - flag(apply_grain); - - if (!current->apply_grain) - return 0; - - fb(16, grain_seed); - - if (frame_header->frame_type == AV1_FRAME_INTER) - flag(update_grain); - else - infer(update_grain, 1); - - if (!current->update_grain) { - fb(3, film_grain_params_ref_idx); - return 0; - } - - fc(4, num_y_points, 0, 14); - for (i = 0; i < current->num_y_points; i++) { - fcs(8, point_y_value[i], - i ? 
current->point_y_value[i - 1] + 1 : 0, - MAX_UINT_BITS(8) - (current->num_y_points - i - 1), - 1, i); - fbs(8, point_y_scaling[i], 1, i); - } - - if (seq->color_config.mono_chrome) - infer(chroma_scaling_from_luma, 0); - else - flag(chroma_scaling_from_luma); - - if (seq->color_config.mono_chrome || - current->chroma_scaling_from_luma || - (seq->color_config.subsampling_x == 1 && - seq->color_config.subsampling_y == 1 && - current->num_y_points == 0)) { - infer(num_cb_points, 0); - infer(num_cr_points, 0); - } else { - fc(4, num_cb_points, 0, 10); - for (i = 0; i < current->num_cb_points; i++) { - fcs(8, point_cb_value[i], - i ? current->point_cb_value[i - 1] + 1 : 0, - MAX_UINT_BITS(8) - (current->num_cb_points - i - 1), - 1, i); - fbs(8, point_cb_scaling[i], 1, i); - } - fc(4, num_cr_points, 0, 10); - for (i = 0; i < current->num_cr_points; i++) { - fcs(8, point_cr_value[i], - i ? current->point_cr_value[i - 1] + 1 : 0, - MAX_UINT_BITS(8) - (current->num_cr_points - i - 1), - 1, i); - fbs(8, point_cr_scaling[i], 1, i); - } - } - - fb(2, grain_scaling_minus_8); - fb(2, ar_coeff_lag); - num_pos_luma = 2 * current->ar_coeff_lag * (current->ar_coeff_lag + 1); - if (current->num_y_points) { - num_pos_chroma = num_pos_luma + 1; - for (i = 0; i < num_pos_luma; i++) - fbs(8, ar_coeffs_y_plus_128[i], 1, i); - } else { - num_pos_chroma = num_pos_luma; - } - if (current->chroma_scaling_from_luma || current->num_cb_points) { - for (i = 0; i < num_pos_chroma; i++) - fbs(8, ar_coeffs_cb_plus_128[i], 1, i); - } - if (current->chroma_scaling_from_luma || current->num_cr_points) { - for (i = 0; i < num_pos_chroma; i++) - fbs(8, ar_coeffs_cr_plus_128[i], 1, i); - } - fb(2, ar_coeff_shift_minus_6); - fb(2, grain_scale_shift); - if (current->num_cb_points) { - fb(8, cb_mult); - fb(8, cb_luma_mult); - fb(9, cb_offset); - } - if (current->num_cr_points) { - fb(8, cr_mult); - fb(8, cr_luma_mult); - fb(9, cr_offset); - } - - flag(overlap_flag); - flag(clip_to_restricted_range); - - return 0; -} - -static int FUNC(uncompressed_header)(CodedBitstreamContext *ctx, RWContext *rw, - AV1RawFrameHeader *current) -{ - CodedBitstreamAV1Context *priv = ctx->priv_data; - const AV1RawSequenceHeader *seq; - int id_len, diff_len, all_frames, frame_is_intra, order_hint_bits; - int i, err; - - if (!priv->sequence_header) { - av_log(ctx->log_ctx, AV_LOG_ERROR, "No sequence header available: " - "unable to decode frame header.\n"); - return AVERROR_INVALIDDATA; - } - seq = priv->sequence_header; - - id_len = seq->additional_frame_id_length_minus_1 + - seq->delta_frame_id_length_minus_2 + 3; - all_frames = (1 << AV1_NUM_REF_FRAMES) - 1; - - if (seq->reduced_still_picture_header) { - infer(show_existing_frame, 0); - infer(frame_type, AV1_FRAME_KEY); - infer(show_frame, 1); - infer(showable_frame, 0); - frame_is_intra = 1; - - } else { - flag(show_existing_frame); - - if (current->show_existing_frame) { - AV1ReferenceFrameState *ref; - - fb(3, frame_to_show_map_idx); - ref = &priv->ref[current->frame_to_show_map_idx]; - - if (!ref->valid) { - av_log(ctx->log_ctx, AV_LOG_ERROR, "Missing reference frame needed for " - "show_existing_frame (frame_to_show_map_idx = %d).\n", - current->frame_to_show_map_idx); - return AVERROR_INVALIDDATA; - } - - if (seq->decoder_model_info_present_flag && - !seq->timing_info.equal_picture_interval) { - fb(seq->decoder_model_info.frame_presentation_time_length_minus_1 + 1, - frame_presentation_time); - } - - if (seq->frame_id_numbers_present_flag) - fb(id_len, display_frame_id); - - infer(frame_type, 
ref->frame_type); - if (current->frame_type == AV1_FRAME_KEY) { - infer(refresh_frame_flags, all_frames); - - // Section 7.21 - infer(current_frame_id, ref->frame_id); - priv->upscaled_width = ref->upscaled_width; - priv->frame_width = ref->frame_width; - priv->frame_height = ref->frame_height; - priv->render_width = ref->render_width; - priv->render_height = ref->render_height; - priv->bit_depth = ref->bit_depth; - priv->order_hint = ref->order_hint; - } else - infer(refresh_frame_flags, 0); - - infer(frame_width_minus_1, ref->upscaled_width - 1); - infer(frame_height_minus_1, ref->frame_height - 1); - infer(render_width_minus_1, ref->render_width - 1); - infer(render_height_minus_1, ref->render_height - 1); - - // Section 7.20 - goto update_refs; - } - - fb(2, frame_type); - frame_is_intra = (current->frame_type == AV1_FRAME_INTRA_ONLY || - current->frame_type == AV1_FRAME_KEY); - - flag(show_frame); - if (current->show_frame && - seq->decoder_model_info_present_flag && - !seq->timing_info.equal_picture_interval) { - fb(seq->decoder_model_info.frame_presentation_time_length_minus_1 + 1, - frame_presentation_time); - } - if (current->show_frame) - infer(showable_frame, current->frame_type != AV1_FRAME_KEY); - else - flag(showable_frame); - - if (current->frame_type == AV1_FRAME_SWITCH || - (current->frame_type == AV1_FRAME_KEY && current->show_frame)) - infer(error_resilient_mode, 1); - else - flag(error_resilient_mode); - } - - if (current->frame_type == AV1_FRAME_KEY && current->show_frame) { - for (i = 0; i < AV1_NUM_REF_FRAMES; i++) { - priv->ref[i].valid = 0; - priv->ref[i].order_hint = 0; - } - } - - flag(disable_cdf_update); - - if (seq->seq_force_screen_content_tools == - AV1_SELECT_SCREEN_CONTENT_TOOLS) { - flag(allow_screen_content_tools); - } else { - infer(allow_screen_content_tools, - seq->seq_force_screen_content_tools); - } - if (current->allow_screen_content_tools) { - if (seq->seq_force_integer_mv == AV1_SELECT_INTEGER_MV) - flag(force_integer_mv); - else - infer(force_integer_mv, seq->seq_force_integer_mv); - } else { - infer(force_integer_mv, 0); - } - - if (seq->frame_id_numbers_present_flag) { - fb(id_len, current_frame_id); - - diff_len = seq->delta_frame_id_length_minus_2 + 2; - for (i = 0; i < AV1_NUM_REF_FRAMES; i++) { - if (current->current_frame_id > (1 << diff_len)) { - if (priv->ref[i].frame_id > current->current_frame_id || - priv->ref[i].frame_id < (current->current_frame_id - - (1 << diff_len))) - priv->ref[i].valid = 0; - } else { - if (priv->ref[i].frame_id > current->current_frame_id && - priv->ref[i].frame_id < ((1 << id_len) + - current->current_frame_id - - (1 << diff_len))) - priv->ref[i].valid = 0; - } - } - } else { - infer(current_frame_id, 0); - } - - if (current->frame_type == AV1_FRAME_SWITCH) - infer(frame_size_override_flag, 1); - else if(seq->reduced_still_picture_header) - infer(frame_size_override_flag, 0); - else - flag(frame_size_override_flag); - - order_hint_bits = - seq->enable_order_hint ? 
seq->order_hint_bits_minus_1 + 1 : 0; - if (order_hint_bits > 0) - fb(order_hint_bits, order_hint); - else - infer(order_hint, 0); - priv->order_hint = current->order_hint; - - if (frame_is_intra || current->error_resilient_mode) - infer(primary_ref_frame, AV1_PRIMARY_REF_NONE); - else - fb(3, primary_ref_frame); - - if (seq->decoder_model_info_present_flag) { - flag(buffer_removal_time_present_flag); - if (current->buffer_removal_time_present_flag) { - for (i = 0; i <= seq->operating_points_cnt_minus_1; i++) { - if (seq->decoder_model_present_for_this_op[i]) { - int op_pt_idc = seq->operating_point_idc[i]; - int in_temporal_layer = (op_pt_idc >> priv->temporal_id ) & 1; - int in_spatial_layer = (op_pt_idc >> (priv->spatial_id + 8)) & 1; - if (seq->operating_point_idc[i] == 0 || - (in_temporal_layer && in_spatial_layer)) { - fbs(seq->decoder_model_info.buffer_removal_time_length_minus_1 + 1, - buffer_removal_time[i], 1, i); - } - } - } - } - } - - if (current->frame_type == AV1_FRAME_SWITCH || - (current->frame_type == AV1_FRAME_KEY && current->show_frame)) - infer(refresh_frame_flags, all_frames); - else - fb(8, refresh_frame_flags); - - if (!frame_is_intra || current->refresh_frame_flags != all_frames) { - if (seq->enable_order_hint) { - for (i = 0; i < AV1_NUM_REF_FRAMES; i++) { - if (current->error_resilient_mode) - fbs(order_hint_bits, ref_order_hint[i], 1, i); - else - infer(ref_order_hint[i], priv->ref[i].order_hint); - if (current->ref_order_hint[i] != priv->ref[i].order_hint) - priv->ref[i].valid = 0; - } - } - } - - if (current->frame_type == AV1_FRAME_KEY || - current->frame_type == AV1_FRAME_INTRA_ONLY) { - CHECK(FUNC(frame_size)(ctx, rw, current)); - CHECK(FUNC(render_size)(ctx, rw, current)); - - if (current->allow_screen_content_tools && - priv->upscaled_width == priv->frame_width) - flag(allow_intrabc); - else - infer(allow_intrabc, 0); - - } else { - if (!seq->enable_order_hint) { - infer(frame_refs_short_signaling, 0); - } else { - flag(frame_refs_short_signaling); - if (current->frame_refs_short_signaling) { - fb(3, last_frame_idx); - fb(3, golden_frame_idx); - CHECK(FUNC(set_frame_refs)(ctx, rw, current)); - } - } - - for (i = 0; i < AV1_REFS_PER_FRAME; i++) { - if (!current->frame_refs_short_signaling) - fbs(3, ref_frame_idx[i], 1, i); - if (seq->frame_id_numbers_present_flag) { - fbs(seq->delta_frame_id_length_minus_2 + 2, - delta_frame_id_minus1[i], 1, i); - } - } - - if (current->frame_size_override_flag && - !current->error_resilient_mode) { - CHECK(FUNC(frame_size_with_refs)(ctx, rw, current)); - } else { - CHECK(FUNC(frame_size)(ctx, rw, current)); - CHECK(FUNC(render_size)(ctx, rw, current)); - } - - if (current->force_integer_mv) - infer(allow_high_precision_mv, 0); - else - flag(allow_high_precision_mv); - - CHECK(FUNC(interpolation_filter)(ctx, rw, current)); - - flag(is_motion_mode_switchable); - - if (current->error_resilient_mode || - !seq->enable_ref_frame_mvs) - infer(use_ref_frame_mvs, 0); - else - flag(use_ref_frame_mvs); - - infer(allow_intrabc, 0); - } - - if (!frame_is_intra) { - // Derive reference frame sign biases. - } - - if (seq->reduced_still_picture_header || current->disable_cdf_update) - infer(disable_frame_end_update_cdf, 1); - else - flag(disable_frame_end_update_cdf); - - if (current->primary_ref_frame == AV1_PRIMARY_REF_NONE) { - // Init non-coeff CDFs. - // Setup past independence. - } else { - // Load CDF tables from previous frame. - // Load params from previous frame. 
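    /*
     * Editor's note: primary_ref_frame drives the two branches above.  When it
     * is AV1_PRIMARY_REF_NONE, segmentation_params() and loop_filter_params()
     * in this template fall back to the default feature values and default
     * loop-filter deltas; otherwise they inherit the values stored for
     * priv->ref[current->ref_frame_idx[current->primary_ref_frame]].
     */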
- } - - if (current->use_ref_frame_mvs) { - // Perform motion field estimation process. - } - - CHECK(FUNC(tile_info)(ctx, rw, current)); - - CHECK(FUNC(quantization_params)(ctx, rw, current)); - - CHECK(FUNC(segmentation_params)(ctx, rw, current)); - - CHECK(FUNC(delta_q_params)(ctx, rw, current)); - - CHECK(FUNC(delta_lf_params)(ctx, rw, current)); - - // Init coeff CDFs / load previous segments. - - priv->coded_lossless = 1; - for (i = 0; i < AV1_MAX_SEGMENTS; i++) { - int qindex; - if (current->feature_enabled[i][AV1_SEG_LVL_ALT_Q]) { - qindex = (current->base_q_idx + - current->feature_value[i][AV1_SEG_LVL_ALT_Q]); - } else { - qindex = current->base_q_idx; - } - qindex = av_clip_uintp2(qindex, 8); - - if (qindex || current->delta_q_y_dc || - current->delta_q_u_ac || current->delta_q_u_dc || - current->delta_q_v_ac || current->delta_q_v_dc) { - priv->coded_lossless = 0; - } - } - priv->all_lossless = priv->coded_lossless && - priv->frame_width == priv->upscaled_width; - - CHECK(FUNC(loop_filter_params)(ctx, rw, current)); - - CHECK(FUNC(cdef_params)(ctx, rw, current)); - - CHECK(FUNC(lr_params)(ctx, rw, current)); - - CHECK(FUNC(read_tx_mode)(ctx, rw, current)); - - CHECK(FUNC(frame_reference_mode)(ctx, rw, current)); - - CHECK(FUNC(skip_mode_params)(ctx, rw, current)); - - if (frame_is_intra || current->error_resilient_mode || - !seq->enable_warped_motion) - infer(allow_warped_motion, 0); - else - flag(allow_warped_motion); - - flag(reduced_tx_set); - - CHECK(FUNC(global_motion_params)(ctx, rw, current)); - - CHECK(FUNC(film_grain_params)(ctx, rw, &current->film_grain, current)); - - av_log(ctx->log_ctx, AV_LOG_DEBUG, "Frame %d: size %dx%d " - "upscaled %d render %dx%d subsample %dx%d " - "bitdepth %d tiles %dx%d.\n", priv->order_hint, - priv->frame_width, priv->frame_height, priv->upscaled_width, - priv->render_width, priv->render_height, - seq->color_config.subsampling_x + 1, - seq->color_config.subsampling_y + 1, priv->bit_depth, - priv->tile_rows, priv->tile_cols); - -update_refs: - for (i = 0; i < AV1_NUM_REF_FRAMES; i++) { - if (current->refresh_frame_flags & (1 << i)) { - priv->ref[i] = (AV1ReferenceFrameState) { - .valid = 1, - .frame_id = current->current_frame_id, - .upscaled_width = priv->upscaled_width, - .frame_width = priv->frame_width, - .frame_height = priv->frame_height, - .render_width = priv->render_width, - .render_height = priv->render_height, - .frame_type = current->frame_type, - .subsampling_x = seq->color_config.subsampling_x, - .subsampling_y = seq->color_config.subsampling_y, - .bit_depth = priv->bit_depth, - .order_hint = priv->order_hint, - }; - memcpy(priv->ref[i].loop_filter_ref_deltas, current->loop_filter_ref_deltas, - sizeof(current->loop_filter_ref_deltas)); - memcpy(priv->ref[i].loop_filter_mode_deltas, current->loop_filter_mode_deltas, - sizeof(current->loop_filter_mode_deltas)); - memcpy(priv->ref[i].feature_enabled, current->feature_enabled, - sizeof(current->feature_enabled)); - memcpy(priv->ref[i].feature_value, current->feature_value, - sizeof(current->feature_value)); - } - } - - return 0; -} - -static int FUNC(frame_header_obu)(CodedBitstreamContext *ctx, RWContext *rw, - AV1RawFrameHeader *current, int redundant, - AVBufferRef *rw_buffer_ref) -{ - CodedBitstreamAV1Context *priv = ctx->priv_data; - int start_pos, fh_bits, fh_bytes, err; - uint8_t *fh_start; - - if (priv->seen_frame_header) { - if (!redundant) { - av_log(ctx->log_ctx, AV_LOG_ERROR, "Invalid repeated " - "frame header OBU.\n"); - return AVERROR_INVALIDDATA; - } else { - 
GetBitContext fh; - size_t i, b; - uint32_t val; - - HEADER("Redundant Frame Header"); - - av_assert0(priv->frame_header_ref && priv->frame_header); - - init_get_bits(&fh, priv->frame_header, - priv->frame_header_size); - for (i = 0; i < priv->frame_header_size; i += 8) { - b = FFMIN(priv->frame_header_size - i, 8); - val = get_bits(&fh, b); - xf(b, frame_header_copy[i], - val, val, val, 1, i / 8); - } - } - } else { - if (redundant) - HEADER("Redundant Frame Header (used as Frame Header)"); - else - HEADER("Frame Header"); - -#ifdef READ - start_pos = get_bits_count(rw); -#else - start_pos = put_bits_count(rw); -#endif - - CHECK(FUNC(uncompressed_header)(ctx, rw, current)); - - priv->tile_num = 0; - - if (current->show_existing_frame) { - priv->seen_frame_header = 0; - } else { - priv->seen_frame_header = 1; - - av_buffer_unref(&priv->frame_header_ref); - -#ifdef READ - fh_bits = get_bits_count(rw) - start_pos; - fh_start = (uint8_t*)rw->buffer + start_pos / 8; -#else - // Need to flush the bitwriter so that we can copy its output, - // but use a copy so we don't affect the caller's structure. - { - PutBitContext tmp = *rw; - flush_put_bits(&tmp); - } - - fh_bits = put_bits_count(rw) - start_pos; - fh_start = rw->buf + start_pos / 8; -#endif - fh_bytes = (fh_bits + 7) / 8; - - priv->frame_header_size = fh_bits; - - if (rw_buffer_ref) { - priv->frame_header_ref = av_buffer_ref(rw_buffer_ref); - if (!priv->frame_header_ref) - return AVERROR(ENOMEM); - priv->frame_header = fh_start; - } else { - priv->frame_header_ref = - av_buffer_alloc(fh_bytes + AV_INPUT_BUFFER_PADDING_SIZE); - if (!priv->frame_header_ref) - return AVERROR(ENOMEM); - priv->frame_header = priv->frame_header_ref->data; - memcpy(priv->frame_header, fh_start, fh_bytes); - } - } - } - - return 0; -} - -static int FUNC(tile_group_obu)(CodedBitstreamContext *ctx, RWContext *rw, - AV1RawTileGroup *current) -{ - CodedBitstreamAV1Context *priv = ctx->priv_data; - int num_tiles, tile_bits; - int err; - - HEADER("Tile Group"); - - num_tiles = priv->tile_cols * priv->tile_rows; - if (num_tiles > 1) - flag(tile_start_and_end_present_flag); - else - infer(tile_start_and_end_present_flag, 0); - - if (num_tiles == 1 || !current->tile_start_and_end_present_flag) { - infer(tg_start, 0); - infer(tg_end, num_tiles - 1); - } else { - tile_bits = cbs_av1_tile_log2(1, priv->tile_cols) + - cbs_av1_tile_log2(1, priv->tile_rows); - fc(tile_bits, tg_start, priv->tile_num, num_tiles - 1); - fc(tile_bits, tg_end, current->tg_start, num_tiles - 1); - } - - priv->tile_num = current->tg_end + 1; - - CHECK(FUNC(byte_alignment)(ctx, rw)); - - // Reset header for next frame. - if (current->tg_end == num_tiles - 1) - priv->seen_frame_header = 0; - - // Tile data follows. - - return 0; -} - -static int FUNC(frame_obu)(CodedBitstreamContext *ctx, RWContext *rw, - AV1RawFrame *current, - AVBufferRef *rw_buffer_ref) -{ - int err; - - CHECK(FUNC(frame_header_obu)(ctx, rw, &current->header, - 0, rw_buffer_ref)); - - CHECK(FUNC(byte_alignment)(ctx, rw)); - - CHECK(FUNC(tile_group_obu)(ctx, rw, &current->tile_group)); - - return 0; -} - -static int FUNC(tile_list_obu)(CodedBitstreamContext *ctx, RWContext *rw, - AV1RawTileList *current) -{ - int err; - - fb(8, output_frame_width_in_tiles_minus_1); - fb(8, output_frame_height_in_tiles_minus_1); - - fb(16, tile_count_minus_1); - - // Tile data follows. 
- - return 0; -} - -static int FUNC(metadata_hdr_cll)(CodedBitstreamContext *ctx, RWContext *rw, - AV1RawMetadataHDRCLL *current) -{ - int err; - - fb(16, max_cll); - fb(16, max_fall); - - return 0; -} - -static int FUNC(metadata_hdr_mdcv)(CodedBitstreamContext *ctx, RWContext *rw, - AV1RawMetadataHDRMDCV *current) -{ - int err, i; - - for (i = 0; i < 3; i++) { - fbs(16, primary_chromaticity_x[i], 1, i); - fbs(16, primary_chromaticity_y[i], 1, i); - } - - fb(16, white_point_chromaticity_x); - fb(16, white_point_chromaticity_y); - - fb(32, luminance_max); - fb(32, luminance_min); - - return 0; -} - -static int FUNC(scalability_structure)(CodedBitstreamContext *ctx, RWContext *rw, - AV1RawMetadataScalability *current) -{ - CodedBitstreamAV1Context *priv = ctx->priv_data; - const AV1RawSequenceHeader *seq; - int err, i, j; - - if (!priv->sequence_header) { - av_log(ctx->log_ctx, AV_LOG_ERROR, "No sequence header available: " - "unable to parse scalability metadata.\n"); - return AVERROR_INVALIDDATA; - } - seq = priv->sequence_header; - - fb(2, spatial_layers_cnt_minus_1); - flag(spatial_layer_dimensions_present_flag); - flag(spatial_layer_description_present_flag); - flag(temporal_group_description_present_flag); - fc(3, scalability_structure_reserved_3bits, 0, 0); - if (current->spatial_layer_dimensions_present_flag) { - for (i = 0; i <= current->spatial_layers_cnt_minus_1; i++) { - fcs(16, spatial_layer_max_width[i], - 0, seq->max_frame_width_minus_1 + 1, 1, i); - fcs(16, spatial_layer_max_height[i], - 0, seq->max_frame_height_minus_1 + 1, 1, i); - } - } - if (current->spatial_layer_description_present_flag) { - for (i = 0; i <= current->spatial_layers_cnt_minus_1; i++) - fbs(8, spatial_layer_ref_id[i], 1, i); - } - if (current->temporal_group_description_present_flag) { - fb(8, temporal_group_size); - for (i = 0; i < current->temporal_group_size; i++) { - fbs(3, temporal_group_temporal_id[i], 1, i); - flags(temporal_group_temporal_switching_up_point_flag[i], 1, i); - flags(temporal_group_spatial_switching_up_point_flag[i], 1, i); - fbs(3, temporal_group_ref_cnt[i], 1, i); - for (j = 0; j < current->temporal_group_ref_cnt[i]; j++) { - fbs(8, temporal_group_ref_pic_diff[i][j], 2, i, j); - } - } - } - - return 0; -} - -static int FUNC(metadata_scalability)(CodedBitstreamContext *ctx, RWContext *rw, - AV1RawMetadataScalability *current) -{ - int err; - - fb(8, scalability_mode_idc); - - if (current->scalability_mode_idc == AV1_SCALABILITY_SS) - CHECK(FUNC(scalability_structure)(ctx, rw, current)); - - return 0; -} - -static int FUNC(metadata_itut_t35)(CodedBitstreamContext *ctx, RWContext *rw, - AV1RawMetadataITUTT35 *current) -{ - int err; - size_t i; - - fb(8, itu_t_t35_country_code); - if (current->itu_t_t35_country_code == 0xff) - fb(8, itu_t_t35_country_code_extension_byte); - -#ifdef READ - // The payload runs up to the start of the trailing bits, but there might - // be arbitrarily many trailing zeroes so we need to read through twice. 
- current->payload_size = cbs_av1_get_payload_bytes_left(rw); - - current->payload_ref = av_buffer_alloc(current->payload_size); - if (!current->payload_ref) - return AVERROR(ENOMEM); - current->payload = current->payload_ref->data; -#endif - - for (i = 0; i < current->payload_size; i++) - xf(8, itu_t_t35_payload_bytes[i], current->payload[i], - 0x00, 0xff, 1, i); - - return 0; -} - -static int FUNC(metadata_timecode)(CodedBitstreamContext *ctx, RWContext *rw, - AV1RawMetadataTimecode *current) -{ - int err; - - fb(5, counting_type); - flag(full_timestamp_flag); - flag(discontinuity_flag); - flag(cnt_dropped_flag); - fb(9, n_frames); - - if (current->full_timestamp_flag) { - fc(6, seconds_value, 0, 59); - fc(6, minutes_value, 0, 59); - fc(5, hours_value, 0, 23); - } else { - flag(seconds_flag); - if (current->seconds_flag) { - fc(6, seconds_value, 0, 59); - flag(minutes_flag); - if (current->minutes_flag) { - fc(6, minutes_value, 0, 59); - flag(hours_flag); - if (current->hours_flag) - fc(5, hours_value, 0, 23); - } - } - } - - fb(5, time_offset_length); - if (current->time_offset_length > 0) - fb(current->time_offset_length, time_offset_value); - else - infer(time_offset_length, 0); - - return 0; -} - -static int FUNC(metadata_obu)(CodedBitstreamContext *ctx, RWContext *rw, - AV1RawMetadata *current) -{ - int err; - - leb128(metadata_type); - - switch (current->metadata_type) { - case AV1_METADATA_TYPE_HDR_CLL: - CHECK(FUNC(metadata_hdr_cll)(ctx, rw, &current->metadata.hdr_cll)); - break; - case AV1_METADATA_TYPE_HDR_MDCV: - CHECK(FUNC(metadata_hdr_mdcv)(ctx, rw, &current->metadata.hdr_mdcv)); - break; - case AV1_METADATA_TYPE_SCALABILITY: - CHECK(FUNC(metadata_scalability)(ctx, rw, &current->metadata.scalability)); - break; - case AV1_METADATA_TYPE_ITUT_T35: - CHECK(FUNC(metadata_itut_t35)(ctx, rw, &current->metadata.itut_t35)); - break; - case AV1_METADATA_TYPE_TIMECODE: - CHECK(FUNC(metadata_timecode)(ctx, rw, &current->metadata.timecode)); - break; - default: - // Unknown metadata type. - return AVERROR_PATCHWELCOME; - } - - return 0; -} - -static int FUNC(padding_obu)(CodedBitstreamContext *ctx, RWContext *rw, - AV1RawPadding *current) -{ - int i, err; - - HEADER("Padding"); - -#ifdef READ - // The payload runs up to the start of the trailing bits, but there might - // be arbitrarily many trailing zeroes so we need to read through twice. - current->payload_size = cbs_av1_get_payload_bytes_left(rw); - - current->payload_ref = av_buffer_alloc(current->payload_size); - if (!current->payload_ref) - return AVERROR(ENOMEM); - current->payload = current->payload_ref->data; -#endif - - for (i = 0; i < current->payload_size; i++) - xf(8, obu_padding_byte[i], current->payload[i], 0x00, 0xff, 1, i); - - return 0; -} diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dcadct.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dcadct.c deleted file mode 100644 index a0eda3d2bb5a4c04f22503915cc8407edece78d7..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dcadct.c +++ /dev/null @@ -1,362 +0,0 @@ -/* - * Copyright (C) 2016 foo86 - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. 
- * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#include - -#include "dcadct.h" -#include "dcamath.h" - -static void sum_a(const int *input, int *output, int len) -{ - int i; - - for (i = 0; i < len; i++) - output[i] = input[2 * i] + input[2 * i + 1]; -} - -static void sum_b(const int *input, int *output, int len) -{ - int i; - - output[0] = input[0]; - for (i = 1; i < len; i++) - output[i] = input[2 * i] + input[2 * i - 1]; -} - -static void sum_c(const int *input, int *output, int len) -{ - int i; - - for (i = 0; i < len; i++) - output[i] = input[2 * i]; -} - -static void sum_d(const int *input, int *output, int len) -{ - int i; - - output[0] = input[1]; - for (i = 1; i < len; i++) - output[i] = input[2 * i - 1] + input[2 * i + 1]; -} - -static void dct_a(const int *input, int *output) -{ - static const int cos_mod[8][8] = { - { 8348215, 8027397, 7398092, 6484482, 5321677, 3954362, 2435084, 822227 }, - { 8027397, 5321677, 822227, -3954362, -7398092, -8348215, -6484482, -2435084 }, - { 7398092, 822227, -6484482, -8027397, -2435084, 5321677, 8348215, 3954362 }, - { 6484482, -3954362, -8027397, 822227, 8348215, 2435084, -7398092, -5321677 }, - { 5321677, -7398092, -2435084, 8348215, -822227, -8027397, 3954362, 6484482 }, - { 3954362, -8348215, 5321677, 2435084, -8027397, 6484482, 822227, -7398092 }, - { 2435084, -6484482, 8348215, -7398092, 3954362, 822227, -5321677, 8027397 }, - { 822227, -2435084, 3954362, -5321677, 6484482, -7398092, 8027397, -8348215 } - }; - - int i, j; - - for (i = 0; i < 8; i++) { - int64_t res = 0; - for (j = 0; j < 8; j++) - res += (int64_t)cos_mod[i][j] * input[j]; - output[i] = norm23(res); - } -} - -static void dct_b(const int *input, int *output) -{ - static const int cos_mod[8][7] = { - { 8227423, 7750063, 6974873, 5931642, 4660461, 3210181, 1636536 }, - { 6974873, 3210181, -1636536, -5931642, -8227423, -7750063, -4660461 }, - { 4660461, -3210181, -8227423, -5931642, 1636536, 7750063, 6974873 }, - { 1636536, -7750063, -4660461, 5931642, 6974873, -3210181, -8227423 }, - { -1636536, -7750063, 4660461, 5931642, -6974873, -3210181, 8227423 }, - { -4660461, -3210181, 8227423, -5931642, -1636536, 7750063, -6974873 }, - { -6974873, 3210181, 1636536, -5931642, 8227423, -7750063, 4660461 }, - { -8227423, 7750063, -6974873, 5931642, -4660461, 3210181, -1636536 } - }; - - int i, j; - - for (i = 0; i < 8; i++) { - int64_t res = input[0] * (INT64_C(1) << 23); - for (j = 0; j < 7; j++) - res += (int64_t)cos_mod[i][j] * input[1 + j]; - output[i] = norm23(res); - } -} - -static void mod_a(const int *input, int *output) -{ - static const int cos_mod[16] = { - 4199362, 4240198, 4323885, 4454708, - 4639772, 4890013, 5221943, 5660703, - -6245623, -7040975, -8158494, -9809974, - -12450076, -17261920, -28585092, -85479984 - }; - - int i, k; - - for (i = 0; i < 8; i++) - output[i] = mul23(cos_mod[i], input[i] + input[8 + i]); - - for (i = 8, k = 7; i < 16; i++, k--) - output[i] = mul23(cos_mod[i], input[k] - input[8 + k]); -} - -static void mod_b(int *input, int *output) -{ - static const int cos_mod[8] = { - 4214598, 4383036, 4755871, 5425934, - 6611520, 
8897610, 14448934, 42791536 - }; - - int i, k; - - for (i = 0; i < 8; i++) - input[8 + i] = mul23(cos_mod[i], input[8 + i]); - - for (i = 0; i < 8; i++) - output[i] = input[i] + input[8 + i]; - - for (i = 8, k = 7; i < 16; i++, k--) - output[i] = input[k] - input[8 + k]; -} - -static void mod_c(const int *input, int *output) -{ - static const int cos_mod[32] = { - 1048892, 1051425, 1056522, 1064244, - 1074689, 1087987, 1104313, 1123884, - 1146975, 1173922, 1205139, 1241133, - 1282529, 1330095, 1384791, 1447815, - -1520688, -1605358, -1704360, -1821051, - -1959964, -2127368, -2332183, -2587535, - -2913561, -3342802, -3931480, -4785806, - -6133390, -8566050, -14253820, -42727120 - }; - - int i, k; - - for (i = 0; i < 16; i++) - output[i] = mul23(cos_mod[i], input[i] + input[16 + i]); - - for (i = 16, k = 15; i < 32; i++, k--) - output[i] = mul23(cos_mod[i], input[k] - input[16 + k]); -} - -static void clp_v(int *input, int len) -{ - int i; - - for (i = 0; i < len; i++) - input[i] = clip23(input[i]); -} - -static void imdct_half_32(int32_t *output, const int32_t *input) -{ - int buf_a[32], buf_b[32]; - int i, k, mag, shift, round; - - mag = 0; - for (i = 0; i < 32; i++) - mag += abs(input[i]); - - shift = mag > 0x400000 ? 2 : 0; - round = shift > 0 ? 1 << (shift - 1) : 0; - - for (i = 0; i < 32; i++) - buf_a[i] = (input[i] + round) >> shift; - - sum_a(buf_a, buf_b + 0, 16); - sum_b(buf_a, buf_b + 16, 16); - clp_v(buf_b, 32); - - sum_a(buf_b + 0, buf_a + 0, 8); - sum_b(buf_b + 0, buf_a + 8, 8); - sum_c(buf_b + 16, buf_a + 16, 8); - sum_d(buf_b + 16, buf_a + 24, 8); - clp_v(buf_a, 32); - - dct_a(buf_a + 0, buf_b + 0); - dct_b(buf_a + 8, buf_b + 8); - dct_b(buf_a + 16, buf_b + 16); - dct_b(buf_a + 24, buf_b + 24); - clp_v(buf_b, 32); - - mod_a(buf_b + 0, buf_a + 0); - mod_b(buf_b + 16, buf_a + 16); - clp_v(buf_a, 32); - - mod_c(buf_a, buf_b); - - for (i = 0; i < 32; i++) - buf_b[i] = clip23(buf_b[i] * (1 << shift)); - - for (i = 0, k = 31; i < 16; i++, k--) { - output[ i] = clip23(buf_b[i] - buf_b[k]); - output[16 + i] = clip23(buf_b[i] + buf_b[k]); - } -} - -static void mod64_a(const int *input, int *output) -{ - static const int cos_mod[32] = { - 4195568, 4205700, 4226086, 4256977, - 4298755, 4351949, 4417251, 4495537, - 4587901, 4695690, 4820557, 4964534, - 5130115, 5320382, 5539164, 5791261, - -6082752, -6421430, -6817439, -7284203, - -7839855, -8509474, -9328732, -10350140, - -11654242, -13371208, -15725922, -19143224, - -24533560, -34264200, -57015280, -170908480 - }; - - int i, k; - - for (i = 0; i < 16; i++) - output[i] = mul23(cos_mod[i], input[i] + input[16 + i]); - - for (i = 16, k = 15; i < 32; i++, k--) - output[i] = mul23(cos_mod[i], input[k] - input[16 + k]); -} - -static void mod64_b(int *input, int *output) -{ - static const int cos_mod[16] = { - 4199362, 4240198, 4323885, 4454708, - 4639772, 4890013, 5221943, 5660703, - 6245623, 7040975, 8158494, 9809974, - 12450076, 17261920, 28585092, 85479984 - }; - - int i, k; - - for (i = 0; i < 16; i++) - input[16 + i] = mul23(cos_mod[i], input[16 + i]); - - for (i = 0; i < 16; i++) - output[i] = input[i] + input[16 + i]; - - for (i = 16, k = 15; i < 32; i++, k--) - output[i] = input[k] - input[16 + k]; -} - -static void mod64_c(const int *input, int *output) -{ - static const int cos_mod[64] = { - 741511, 741958, 742853, 744199, - 746001, 748262, 750992, 754197, - 757888, 762077, 766777, 772003, - 777772, 784105, 791021, 798546, - 806707, 815532, 825054, 835311, - 846342, 858193, 870912, 884554, - 899181, 914860, 931667, 949686, - 
969011, 989747, 1012012, 1035941, - -1061684, -1089412, -1119320, -1151629, - -1186595, -1224511, -1265719, -1310613, - -1359657, -1413400, -1472490, -1537703, - -1609974, -1690442, -1780506, -1881904, - -1996824, -2128058, -2279225, -2455101, - -2662128, -2909200, -3208956, -3579983, - -4050785, -4667404, -5509372, -6726913, - -8641940, -12091426, -20144284, -60420720 - }; - - int i, k; - - for (i = 0; i < 32; i++) - output[i] = mul23(cos_mod[i], input[i] + input[32 + i]); - - for (i = 32, k = 31; i < 64; i++, k--) - output[i] = mul23(cos_mod[i], input[k] - input[32 + k]); -} - -static void imdct_half_64(int32_t *output, const int32_t *input) -{ - int buf_a[64], buf_b[64]; - int i, k, mag, shift, round; - - mag = 0; - for (i = 0; i < 64; i++) - mag += abs(input[i]); - - shift = mag > 0x400000 ? 2 : 0; - round = shift > 0 ? 1 << (shift - 1) : 0; - - for (i = 0; i < 64; i++) - buf_a[i] = (input[i] + round) >> shift; - - sum_a(buf_a, buf_b + 0, 32); - sum_b(buf_a, buf_b + 32, 32); - clp_v(buf_b, 64); - - sum_a(buf_b + 0, buf_a + 0, 16); - sum_b(buf_b + 0, buf_a + 16, 16); - sum_c(buf_b + 32, buf_a + 32, 16); - sum_d(buf_b + 32, buf_a + 48, 16); - clp_v(buf_a, 64); - - sum_a(buf_a + 0, buf_b + 0, 8); - sum_b(buf_a + 0, buf_b + 8, 8); - sum_c(buf_a + 16, buf_b + 16, 8); - sum_d(buf_a + 16, buf_b + 24, 8); - sum_c(buf_a + 32, buf_b + 32, 8); - sum_d(buf_a + 32, buf_b + 40, 8); - sum_c(buf_a + 48, buf_b + 48, 8); - sum_d(buf_a + 48, buf_b + 56, 8); - clp_v(buf_b, 64); - - dct_a(buf_b + 0, buf_a + 0); - dct_b(buf_b + 8, buf_a + 8); - dct_b(buf_b + 16, buf_a + 16); - dct_b(buf_b + 24, buf_a + 24); - dct_b(buf_b + 32, buf_a + 32); - dct_b(buf_b + 40, buf_a + 40); - dct_b(buf_b + 48, buf_a + 48); - dct_b(buf_b + 56, buf_a + 56); - clp_v(buf_a, 64); - - mod_a(buf_a + 0, buf_b + 0); - mod_b(buf_a + 16, buf_b + 16); - mod_b(buf_a + 32, buf_b + 32); - mod_b(buf_a + 48, buf_b + 48); - clp_v(buf_b, 64); - - mod64_a(buf_b + 0, buf_a + 0); - mod64_b(buf_b + 32, buf_a + 32); - clp_v(buf_a, 64); - - mod64_c(buf_a, buf_b); - - for (i = 0; i < 64; i++) - buf_b[i] = clip23(buf_b[i] * (1 << shift)); - - for (i = 0, k = 63; i < 32; i++, k--) { - output[ i] = clip23(buf_b[i] - buf_b[k]); - output[32 + i] = clip23(buf_b[i] + buf_b[k]); - } -} - -av_cold void ff_dcadct_init(DCADCTContext *c) -{ - c->imdct_half[0] = imdct_half_32; - c->imdct_half[1] = imdct_half_64; -} diff --git a/spaces/congsaPfin/Manga-OCR/logs/Capture Stunning Images with ProCam X ( HD Camera Pro ) - APK Download Link.md b/spaces/congsaPfin/Manga-OCR/logs/Capture Stunning Images with ProCam X ( HD Camera Pro ) - APK Download Link.md deleted file mode 100644 index 8d157a68e084490522d808952a20460478814812..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Capture Stunning Images with ProCam X ( HD Camera Pro ) - APK Download Link.md +++ /dev/null @@ -1,28 +0,0 @@ -
-

ProCam X APK Download: A Professional Camera App for Android

- If you are looking for a camera app that can turn your Android phone into a professional camera, you might want to check out ProCam X. This app offers full control over exposure, focus, white balance, ISO and other features that can bring your mobile photography to the next level. In this article, we will tell you what ProCam X is, what features it has, how to download and install it, how to use it, what are its pros and cons, and what are some alternatives to it.

What is ProCam X?

- ProCam X is a camera app developed by Imagi Mobile that aims to provide a professional camera experience on your Android phone. It has a simple and intuitive interface that lets you access all the settings and tools you need to take stunning photos and videos. It supports manual controls, burst mode, realtime filters, 4K video recording, geotagging, anti-shake and more. It also has a lite version that is free but has some limitations on resolution and duration.

Features of ProCam X

- ProCam X has many features that make it stand out from other camera apps. Here are some of them:

Manual controls

- ProCam X gives you full control over the camera parameters such as exposure, ISO, focus, shutter speed and white balance. You can adjust them manually or use the auto mode. You can also lock the exposure and focus values for consistent shots.

Burst mode

- ProCam X allows you to take multiple photos in a row with a configurable delay. This is useful for creating stop motion or time lapse videos or capturing fast-moving subjects.

Realtime filters

- ProCam X lets you apply various filters and color effects to your photos and videos in realtime. You can choose from mono, negative, solarized, sepia, posterized, aqua, blackboard and whiteboard.

4K video recording

- ProCam X enables you to record videos in high resolution up to 4K (depending on your device). You can also set the video bit rate and audio recording options.

How to download and install ProCam X APK?

- There are several ways to download and install ProCam X APK on your Android phone. Here are some of them:

Download from Google Play Store

- The easiest way to get ProCam X is to download it from the Google Play Store. You can search for it or use this link. The app costs $4.99 but it is currently available for free for a limited time. Just tap on the Install button and wait for the app to be downloaded and installed on your phone.

Download from APKCombo

- Another way to get ProCam X is to download it from APKCombo. This is a website that provides APK files for various apps. You can search for ProCam X or use this link. You will see a page with the app information and a download button. Tap on the download button and wait for the APK file to be downloaded on your phone. Then, you need to enable the installation of apps from unknown sources in your phone settings. After that, you can open the APK file and install the app.

Download from AppBrain

- A third way to get ProCam X is to download it from AppBrain. This is another website that provides APK files for various apps. You can search for ProCam X or use this link. You will see a page with the app information and a download button. Tap on the download button and wait for the APK file to be downloaded on your phone. Then, you need to follow the same steps as above to install the app.

How to use ProCam X?

- Once you have installed ProCam X on your phone, you can start using it to take professional photos and videos. Here are some tips on how to use it:

Access the settings menu

- To access the settings menu, tap on the gear icon on the top right corner of the screen. You will see a list of options that you can customize according to your preferences. For example, you can change the resolution, quality, format, orientation, grid, timer, geotagging and more.

Adjust the camera parameters

- To adjust the camera parameters, tap on the icons on the bottom of the screen. You will see sliders that you can move to change the exposure, ISO, focus, shutter speed and white balance. You can also tap on the auto mode icon to let the app decide the best settings for you. You can also lock the exposure and focus values by tapping on the lock icons.

Take photos and videos

- To take photos, tap on the shutter button on the right side of the screen. To take videos, tap on the video button on the left side of the screen. You can also switch between photo and video mode by swiping left or right on the screen. To apply filters and effects, tap on the filter icon on the top left corner of the screen. You can choose from various options such as mono, negative, solarized, sepia and more. To take multiple photos in a row, tap on the burst mode icon on the top center of the screen. You can set the delay and number of shots in the settings menu.

Pros and cons of ProCam X

- ProCam X is a great camera app that can enhance your mobile photography skills. However, it also has some drawbacks that you should be aware of. Here are some pros and cons of ProCam X:

Pros

-
  • It offers full manual control over camera parameters.
  • It supports burst mode, realtime filters and 4K video recording.
  • It has a simple and intuitive interface.
  • It is currently free for a limited time.

Cons

-
  • It requires a high-end device to run smoothly.
  • It may not be compatible with some devices or models.
  • It may drain your battery faster than other camera apps.
  • It may not support some features such as zoom or flash.

Alternatives to ProCam X

- If you are not satisfied with ProCam X or want to try other camera apps, here are some alternatives that you can check out:

Open Camera

- Open Camera is a free and open source camera app that offers many features such as manual controls, auto-stabilization, HDR, panorama, timelapse, face detection and more. It also supports external microphones and remote controls.

VSCO

- VSCO is a popular camera app that also has a social media platform where you can share your photos and videos with other users. It has many filters and presets that you can apply to your shots. It also has editing tools such as crop, rotate, adjust, contrast and more.

Moment Pro Camera

- Moment Pro Camera is a camera app that is designed to work with Moment lenses and cases. It has manual controls, RAW capture, live histogram, focus peaking and more. It also has video features such as slow motion, time lapse and anamorphic.

Conclusion

- ProCam X is a professional camera app for Android that can help you take stunning photos and videos with your phone. It has many features such as manual controls, burst mode, realtime filters and 4K video recording. It is easy to use and customize according to your preferences. However, it also has some drawbacks such as compatibility issues, battery consumption and limited support for some features. If you want to try other camera apps, you can check out Open Camera, VSCO or Moment Pro Camera.

FAQs

-

procam x apk download


Download: https://urlca.com/2uO6Xf



197e85843d
-
-
\ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download Chess Lessons from Top Masters and Coaches - Free Trial.md b/spaces/congsaPfin/Manga-OCR/logs/Download Chess Lessons from Top Masters and Coaches - Free Trial.md deleted file mode 100644 index 76a9a34aa80b9d41b3afb5bc813e6a2da3aefc70..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Download Chess Lessons from Top Masters and Coaches - Free Trial.md +++ /dev/null @@ -1,138 +0,0 @@ - -

Download Chess Lessons: How to Improve Your Chess Skills Online

-

Chess is a fascinating game that challenges your mind, improves your concentration, and enhances your creativity. Whether you are a beginner or an advanced player, there is always room for improvement in chess. But how can you learn new skills and strategies without spending a fortune on chess books, DVDs, or coaches?

-

download chess lessons


DOWNLOADhttps://urlca.com/2uObwG



-

The answer is simple: download chess lessons online. In this article, we will show you why downloading chess lessons is a great way to improve your chess skills, how to choose the best chess lessons for you, and how to download and use them effectively.

-

Why Download Chess Lessons?

-

Downloading chess lessons online has many advantages over other methods of learning chess. Here are some of them:

-

Benefits of Online Chess Learning

-
    -
  • You can learn at your own pace and convenience. You can download chess lessons anytime, anywhere, and access them on your computer, tablet, or smartphone. You can also pause, rewind, or replay the lessons as many times as you need.
  • -
  • You can save money and time. Downloading chess lessons online is much cheaper than buying chess books, DVDs, or hiring a chess coach. You also don't have to travel to a chess club or a library to find chess resources.
  • -
  • You can learn from the best. Downloading chess lessons online gives you access to a wide range of chess experts, from grandmasters to coaches, who can teach you the secrets of chess success. You can learn from their experience, insights, and tips.
  • -
  • You can have fun and enjoy yourself. Downloading chess lessons online makes learning chess more enjoyable and interactive. You can watch videos, listen to audio, play games, solve puzzles, and take quizzes. You can also join online chess communities and interact with other chess enthusiasts.
  • -
-

Types of Chess Lessons You Can Download

-

There are many types of chess lessons you can download online, depending on your level, goals, and preferences. Here are some examples:

-
    -
  • Chess basics. These are lessons that teach you the rules of chess, how to move the pieces, how to checkmate, and how to avoid common mistakes.
  • -
  • Chess openings. These are lessons that teach you the most popular and effective ways to start a chess game, such as the Queen's Gambit, the Sicilian Defense, or the Ruy Lopez.
  • -
  • Chess tactics. These are lessons that teach you how to spot and execute winning moves, such as forks, pins, skewers, sacrifices, and checkmates.
  • -
  • Chess strategy. These are lessons that teach you how to plan and execute long-term plans, such as controlling the center, developing your pieces, attacking the king, or defending your position.
  • -
  • Chess endgames. These are lessons that teach you how to win or draw when there are few pieces left on the board, such as king and pawn endings, rook endings, or bishop endings.
  • -
  • Chess variants. These are lessons that teach you how to play different versions of chess, such as chess960 (Fischer-Random), blitz chess (fast-paced), puzzle rush (timed puzzles), bullet chess (ultra-fast), or blindfold chess (without seeing the board).
  • -
-

How to Choose the Best Chess Lessons for You

-

With so many options available online, how do you choose the best chess lessons for you? Here are some factors to consider:

-

Factors to Consider When Selecting Chess Lessons

-
    -
  • Your level. Choose chess lessons that match your current skill level and help you progress to the next one. For example, if you are a beginner, start with chess basics and then move on to chess openings and tactics. If you are an intermediate player, focus on chess strategy and endgames. If you are an advanced player, challenge yourself with chess variants and puzzles.
  • -
  • Your goals. Choose chess lessons that help you achieve your specific goals and interests. For example, if you want to improve your rating, look for lessons that cover the topics and skills that are most relevant for your level and style. If you want to have fun, look for lessons that are entertaining and interactive.
  • -
  • Your budget. Choose chess lessons that fit your budget and offer good value for money. For example, if you have a limited budget, look for free or low-cost chess lessons that are high-quality and comprehensive. If you have a larger budget, look for premium chess lessons that offer more features and benefits, such as personalized feedback, live coaching, or access to exclusive content.
  • -
  • Your preferences. Choose chess lessons that suit your preferences and learning style. For example, if you prefer to watch videos, look for chess lessons that have clear and engaging visuals and audio. If you prefer to read text, look for chess lessons that have concise and informative explanations and examples. If you prefer to practice by playing games, look for chess lessons that have interactive exercises and quizzes.
  • -
-

Sources of Quality Chess Lessons Online

-

There are many sources of quality chess lessons online, but here are some of the most popular and reputable ones:

-


-
    -
  • Chess.com. This is the largest and most popular online chess platform, with over 50 million members. It offers a variety of chess lessons for all levels and topics, as well as other features such as games, puzzles, articles, forums, tournaments, and more.
  • -
  • Lichess.org. This is a free and open-source online chess platform, with over 10 million members. It offers a range of chess lessons for all levels and topics, as well as other features such as games, puzzles, analysis, studies, broadcasts, teams, and more.
  • -
  • Chessable.com. This is an online chess learning platform that uses the science of spaced repetition to help you memorize chess patterns and concepts. It offers hundreds of chess courses for all levels and topics, as well as other features such as games, puzzles, drills, leaderboards, and more.
  • -
  • Chess24.com. This is an online chess platform that provides live coverage of major chess events, as well as premium chess lessons from top grandmasters and coaches. It offers dozens of chess courses for all levels and topics, as well as other features such as games, puzzles, videos, articles, podcasts, and more.
  • -
  • TheChessWebsite.com. This is an online chess resource that provides comprehensive and easy-to-understand chess lessons for all levels and topics. It offers hundreds of videos, articles, diagrams, quizzes, puzzles, and more.
  • -
-

How to Download and Use Chess Lessons

-

Once you have chosen the best chess lessons for you from the sources above, you need to download them and use them effectively. Here are some steps and tips to help you:

-

Steps to Download Chess Lessons from Different Platforms

-
    -
  1. Go to the website or app of the platform that offers the chess lessons you want to download.
  2. Find the chess lesson or course that interests you and click on it.
  3. Depending on the platform, you may need to create an account or log in with your existing account.
  4. Depending on the platform, you may need to pay a fee or subscribe to access the chess lesson or course.
  5. Depending on the platform, you may need to download software or an app to view the chess lesson or course.
  6. Follow the instructions on the screen to download the chess lesson or course to your device.
  7. Once downloaded, open the file or app and start learning from the chess lesson or course.
-

Tips to Make the Most of Your Chess Lessons

-
    -
  • Set a regular schedule for learning from your downloaded chess lessons. For example, you can dedicate 30 minutes a day or a few hours a week to watch, read, or practice the chess lessons.
  • -
  • Review the chess lessons regularly and test your knowledge and skills. For example, you can solve puzzles, play games, or take quizzes related to the chess lessons.
  • -
  • Apply what you learn from the chess lessons to your own games. For example, you can try out new openings, tactics, or strategies that you learned from the chess lessons.
  • -
  • Seek feedback and guidance from other sources. For example, you can join online chess forums, clubs, or communities and ask questions, share ideas, or get advice from other chess players or experts.
  • -
  • Keep track of your progress and achievements. For example, you can record your games, analyze your mistakes, or monitor your rating changes.
  • -
-

Conclusion

-

Downloading chess lessons online is a smart and convenient way to improve your chess skills. You can learn from the best chess experts, save money and time, and have fun and enjoy yourself. You can also choose from a variety of chess lessons that suit your level, goals, preferences, and budget.

-

To download and use chess lessons effectively, you need to follow some steps and tips. You need to find the best chess lessons for you from reputable sources, download them to your device, and use them regularly and actively. You also need to review, apply, seek feedback, and keep track of what you learn from the chess lessons.

-

If you are ready to take your chess game to the next level, download some chess lessons today and start learning. You will be amazed by how much you can improve in a short time. Happy chess learning!

-

Call to Action

-

If you found this article helpful and informative, please share it with your friends and family who are interested in learning chess. You can also leave a comment below and let us know what you think about downloading chess lessons online. We would love to hear from you!

-

Frequently Asked Questions

-

Q: How do I know what level of chess I am?

-

A: One way to determine your level of chess is to take an online chess test or assessment that evaluates your knowledge and skills in different aspects of chess. Another way is to check your online chess rating or Elo score that measures your performance against other players.

-

Q: How much do online chess lessons cost?

-

A: The cost of online chess lessons varies depending on the source, quality, quantity, and features of the lessons. Some online chess platforms offer free or low-cost chess lessons that are basic and limited. Others offer premium or subscription-based chess lessons that are more advanced and comprehensive.

-

Q: How long does it take to improve in chess?

-

A: The time it takes to improve in chess depends on many factors, such as your current level, goals, motivation, effort, and practice. Generally speaking, the more you learn and play chess, the faster you will improve. However, there is no fixed or guaranteed timeline for improvement in chess.

-

Q: What are some common mistakes to avoid when learning chess?

-

A: Some common mistakes to avoid when learning chess are:

-
    -
  • Learning too much theory without enough practice.
  • -
  • Practicing without enough feedback or analysis.
  • -
  • Focusing on only one aspect of chess without balancing others.
  • -
  • Playing too fast or too slow without managing your time well.
  • -
  • Giving up or losing confidence when facing challenges or setbacks.
  • -
-

Q: What are some resources to learn more about downloading chess lessons online?

-

A: Some resources to learn more about downloading chess lessons online are:

-

401be4b1e0
-
-
\ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download Euro Truck Simulator 2 for PC - Free Trial and Full Version.md b/spaces/congsaPfin/Manga-OCR/logs/Download Euro Truck Simulator 2 for PC - Free Trial and Full Version.md deleted file mode 100644 index e29137869560c34f0d4c8d3a2d71aac473e1c21e..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Download Euro Truck Simulator 2 for PC - Free Trial and Full Version.md +++ /dev/null @@ -1,91 +0,0 @@ - -

How to Download Truck Simulator 2

-

Have you ever dreamed of becoming a truck driver and traveling across Europe? If so, you might want to try Truck Simulator 2, a popular simulation game that lets you experience the life of a trucker. In this article, we will show you how to download Truck Simulator 2 from two different sources: Steam and the official website.

-

What is Truck Simulator 2?

-

Truck Simulator 2 is a game developed by SCS Software that was released in 2012. It is the sequel to Euro Truck Simulator, which was released in 2008. The game allows you to drive various trucks across Europe, delivering cargo and earning money. You can also customize your trucks, buy new ones, hire drivers, and manage your own transportation company.

-

how to download truck simulator 2


Download File ✒ ✒ ✒ https://urlca.com/2uOfVe



-

Features of Truck Simulator 2

-

Some of the features that make Truck Simulator 2 stand out are:

-
    -
  • It features over 60 European cities and countries, with realistic landmarks and landscapes.
  • -
  • It has licensed trucks from 7 brands, with over 15 models and countless customization options.
  • -
  • It has advanced driving physics and realistic traffic and weather conditions.
  • -
  • It has a dynamic economy system that changes according to supply and demand.
  • -
  • It has a modding community that creates new content and features for the game.
  • -
-

Requirements for Truck Simulator 2

-

To play Truck Simulator 2, you need to have a PC that meets the following minimum requirements:

- - - -
| OS | Processor | Memory | Graphics | Storage |
| --- | --- | --- | --- | --- |
| Windows XP/Vista/7/8/10 | Dual core CPU 2.4 GHz | 4 GB RAM | GeForce GTS 450-class (Intel HD 4000) | 5 GB available space |
-

How to Download Truck Simulator 2 from Steam

-

Steam is a digital distribution platform that offers thousands of games for PC, Mac, Linux, and mobile devices. It also provides online multiplayer, social features, cloud saving, and more. To download Truck Simulator 2 from Steam, you need to follow these steps:

-

Step 1: Create a Steam account

-

If you don't have a Steam account yet, you need to create one first. You can do this by visiting https://store.steampowered.com/join/ and filling out the required information. You will also need to verify your email address and agree to the terms of service.

-

Step 2: Search for Truck Simulator 2 on Steam

-

Once you have a Steam account, you can log in and search for Truck Simulator 2 on the Steam store. You can do this by typing "Truck Simulator 2" in the search bar or by visiting https://store.steampowered.com/app/227300/euro_truck_simulator_2/.

-

Step 3: Purchase and download Truck Simulator 2

-

When you find Truck Simulator 2 on Steam, you can click on the "Add to Cart" button to purchase it. The game costs $19.99, but it may be on sale or discounted at certain times. You can also check the reviews and ratings of the game before buying it. After you purchase the game, you can download it by clicking on the "Library" tab and then on "Euro Truck Simulator 2". The download size is about 3.3 GB, so it may take some time depending on your internet speed.

-


-

How to Download Truck Simulator 2 from the Official Website

-

If you prefer to buy Truck Simulator 2 directly from the official website, you can do so by following these steps:

-

Step 1: Visit the official website of Truck Simulator 2

-

The official website of Truck Simulator 2 is https://eurotrucksimulator2.com/. Here you can find more information about the game, such as trailers, screenshots, news, updates, and more. You can also access the online shop, where you can buy the game and its DLCs.

-

Step 2: Choose your preferred edition of Truck Simulator 2

-

On the online shop, you can choose between two editions of Truck Simulator 2: the standard edition and the gold edition. The standard edition includes only the base game, while the gold edition includes the base game and the Going East! expansion, which adds 13 new cities and countries to the map. The standard edition costs $19.99, while the gold edition costs $29.99.

-

Step 3: Select your payment method and complete the purchase

-

After you choose your edition of Truck Simulator 2, you can select your payment method. You can pay with credit card, PayPal, or bank transfer. You will also need to enter your email address and agree to the terms and conditions. Once you complete the payment, you will receive a confirmation email with a download link and a product key.

-

Step 4: Download and install Truck Simulator 2

-

To download Truck Simulator 2 from the official website, you need to click on the download link in the confirmation email. The download size is about 1.9 GB for the standard edition and 2.1 GB for the gold edition. After you download the game, you need to run the installer and follow the instructions. You will also need to enter your product key when prompted.

-

Conclusion

-

Truck Simulator 2 is a fun and realistic simulation game that lets you drive trucks across Europe. You can download it from Steam or from the official website, depending on your preference. Either way, you will need to pay for the game and have a PC that meets the minimum requirements. Once you download and install the game, you can start your trucking career and enjoy the scenery.

-

FAQs

-
    -
  • Q: How much does Truck Simulator 2 cost?
    A: The standard edition of Truck Simulator 2 costs $19.99, while the gold edition costs $29.99.
  • -
  • Q: What are some of the DLCs for Truck Simulator 2?
    A: Some of the DLCs for Truck Simulator 2 are Going East!, Scandinavia, Vive la France!, Italia, Beyond the Baltic Sea, Road to the Black Sea, Iberia, and Heart of Russia.
  • -
  • Q: Can I play Truck Simulator 2 online?
    A: Yes, you can play Truck Simulator 2 online with other players using a mod called TruckersMP.
  • -
  • Q: Can I use a steering wheel or a controller to play Truck Simulator 2?
    A: Yes, you can use a steering wheel or a controller to play Truck Simulator 2. The game supports various devices and configurations.
  • -
  • Q: Can I mod Truck Simulator 2?
    A: Yes, you can mod Truck Simulator 2 using various tools and resources provided by SCS Software and the modding community.
  • -

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download Red Foot APK for Android - Watch Tagalog Dubbed Movies Online.md b/spaces/congsaPfin/Manga-OCR/logs/Download Red Foot APK for Android - Watch Tagalog Dubbed Movies Online.md deleted file mode 100644 index 0a07736842527330c31ca54bb151223e062f2bb1..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Download Red Foot APK for Android - Watch Tagalog Dubbed Movies Online.md +++ /dev/null @@ -1,108 +0,0 @@ -
-

Red Foot APK: A Tagalog Dubbed Movie Streaming App

-

Do you love watching movies in Tagalog? Do you want to enjoy a variety of genres and themes from action to romance? Do you want to watch movies for free without any hassle or interruption? If you answered yes to any of these questions, then you might want to check out Red Foot APK, a Tagalog dubbed movie streaming app that lets you watch your favorite movies anytime, anywhere.

-

red foot apk


DOWNLOAD ✶✶✶ https://urlca.com/2uOb5M



-

What is Red Foot APK?

-

Red Foot APK is an Android app that allows you to stream and download Tagalog dubbed movies on your mobile device. You can choose from a wide range of movie categories, such as action, anime, comedy, drama, fantasy, horror, love story, suspense, fiction, and more. You can also watch the latest releases and updates from the app's library. Red Foot APK is free and easy to use, and it does not require any registration or subscription. You can watch movies in high-quality video and audio, and you can also save them for offline viewing. Red Foot APK is a great app for Filipino movie lovers who want to enjoy their favorite movies in their own language.

-

Features of Red Foot APK

-

- Free and easy to use

-

One of the best features of Red Foot APK is that it is completely free and easy to use. You do not need to sign up or pay anything to access the app's content. You just need to download and install the app on your device, and you can start watching movies right away. You can also navigate the app's interface with ease, as it has a simple and user-friendly design.

-

- Various movie categories

-

Another great feature of Red Foot APK is that it offers a variety of movie categories for you to choose from. You can find movies in different genres and themes, such as action, anime, comedy, drama, fantasy, horror, love story, suspense, fiction, and more. You can also browse movies by popularity, rating, year, or alphabetically. You can also search for specific titles or keywords using the app's search function.

-


-

- High-quality video and audio

-

- Red Foot APK also provides high-quality video and audio for your viewing pleasure. You can watch movies in HD resolution with clear sound, adjust the video quality according to your preference or network speed, and enable subtitles or change the language if available.

-

- Regular updates and new releases

-

Red Foot APK also keeps its content fresh and updated by adding new movies regularly. You can find the latest releases and updates from the app's library. You can also request for movies that are not yet available on the app. The app's developers are always working hard to improve the app's performance and features.

-

How to download and install Red Foot APK?

-

If you are interested in downloading and installing Red Foot APK on your device, you need to follow these simple steps:

-

Step 1: Enable unknown sources on your device

-

Since Red Foot APK is not available on Google Play Store or App Store, you need to enable unknown sources on your device to allow the installation of apps from unknown sources. To do this, go to your device's settings, then security, then unknown sources, and enable it.

-

Step 2: Download the APK file from a trusted source

-

Next, you need to download the APK file of Red Foot APK from a trusted source. You can use the link provided below to download the latest version of the app. Make sure you have enough storage space on your device before downloading the file.

-

Download Red Foot APK here

-

Step 3: Locate and install the APK file

-

After downloading the APK file, you need to locate and install it on your device. You can use a file manager app to find the file in your downloads folder, or you can tap on the notification that appears after the download is complete. Then, tap on the file and follow the instructions on the screen to install the app.

-

Step 4: Launch the app and enjoy watching movies

-

Finally, you can launch the app and start watching movies. You can find the app icon on your home screen or app drawer. Tap on it and grant the necessary permissions for the app to function properly. Then, you can browse through the app's library and select any movie you want to watch. You can also adjust the settings and preferences of the app according to your liking.

-

Pros and cons of Red Foot APK

-

Like any other app, Red Foot APK has its own pros and cons that you should be aware of before using it. Here are some of them:

-

Pros

-

- No registration or subscription required

-

One of the main advantages of Red Foot APK is that it does not require any registration or subscription to access its content. You can watch movies for free without any limitations or restrictions. You do not need to provide any personal information or payment details to use the app.

-

- No annoying ads or pop-ups

-

Another advantage of Red Foot APK is that it does not have any annoying ads or pop-ups that interrupt your viewing experience. You can watch movies without any disturbance or distraction. You do not need to worry about clicking on malicious links or downloading unwanted files from the app.

-

- Supports offline viewing and downloading

-

Red Foot APK also supports offline viewing and downloading of movies. You can save movies on your device for later viewing or when you do not have an internet connection. You can also share movies with your friends or family using Bluetooth or other methods.

-

Cons

-

- Not available on Google Play Store or App Store

-

One of the main disadvantages of Red Foot APK is that it is not available on Google Play Store or App Store, which means you need to download and install it manually from an external source. This may pose some risks for your device's security and performance, as some sources may contain viruses or malware that can harm your device. You also need to enable unknown sources on your device, which may expose your device to other threats.

-

- May not be compatible with some devices or regions

-

Another disadvantage of Red Foot APK is that it may not be compatible with some devices or regions. Some users may experience problems with the app's functionality or availability depending on their device model, operating system, network provider, or location. Some movies may also be blocked or restricted in some regions due to legal issues or censorship.

-

- May contain some bugs or errors

-

Red Foot APK may also contain some bugs or errors that affect its performance and quality. Some users may encounter issues with the app's loading speed, video quality, audio sync, subtitles, downloads, requests, or updates. The app's developers are trying to fix these issues as soon as possible, but they may still persist for some users.

-

Conclusion

-

Red Foot APK is a Tagalog dubbed movie streaming app that lets you watch your favorite movies anytime, anywhere. It has many features that make it a great app for Filipino movie lovers, such as free and easy access, various movie categories, high-quality video and audio, regular updates and new releases, no ads or pop-ups, and offline viewing and downloading support. However, it also has some drawbacks that you should consider before using it, such as not being available on Google Play Store or App Store, compatibility issues with some devices or regions, and possible bugs or errors. Overall, Red Foot APK is a good app for watching movies in Tagalog, but you should use it at your own risk and discretion.

-

FAQs

-

Here are some frequently asked questions about Red Foot APK:

-

Q: Is Red Foot APK safe to use?

-

A: Red Foot APK is generally safe to use, as it does not contain any viruses or malware that can harm your device. However, you should always download and install the app from a trusted source, and scan the file with an antivirus app before installing it. You should also be careful about the permissions you grant to the app, and avoid clicking on any suspicious links or pop-ups that may appear on the app.

-

Q: Is Red Foot APK legal to use?

-

A: Red Foot APK is not legal to use in some regions, as it may violate the copyright laws or regulations of the movie industry. The app does not have the rights or licenses to stream or distribute the movies it offers, and it may infringe on the intellectual property rights of the movie owners or producers. You may face legal consequences or penalties if you use the app in a region where it is prohibited or restricted.

-

Q: How can I request for a movie that is not available on Red Foot APK?

-

A: You can request for a movie that is not available on Red Foot APK by using the app's request feature. You can find the request feature on the app's menu, and you can type in the name of the movie you want to watch. The app's developers will try to add the movie to the app's library as soon as possible, but they cannot guarantee that they will be able to fulfill all requests.

-

Q: How can I update Red Foot APK to the latest version?

-

A: You can update Red Foot APK to the latest version by using the app's update feature. You can find the update feature on the app's menu, and you can tap on it to check for any available updates. The app will automatically download and install the latest version of the app if there is one. You can also check for updates manually by visiting the app's official website or source.

-

Q: How can I contact Red Foot APK's developers or support team?

-

A: You can contact Red Foot APK's developers or support team by using the app's feedback feature. You can find the feedback feature on the app's menu, and you can use it to send your comments, suggestions, questions, or complaints to the app's developers or support team. You can also contact them by email at redfootapk@gmail.com.

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download Solitaire 2022 Mod Apk and Enjoy Unlimited Card Stacking Fun.md b/spaces/congsaPfin/Manga-OCR/logs/Download Solitaire 2022 Mod Apk and Enjoy Unlimited Card Stacking Fun.md deleted file mode 100644 index 06765679a666a803411cdfb9a092c876e5b9523a..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Download Solitaire 2022 Mod Apk and Enjoy Unlimited Card Stacking Fun.md +++ /dev/null @@ -1,120 +0,0 @@ - -

Solitaire 2022 Mod APK: A Classic Card Game with Unlimited Money

-

If you are a fan of card games, you might have played solitaire at least once in your life. Solitaire is a classic game that can be enjoyed by anyone, anywhere, anytime. It is simple, relaxing, and challenging at the same time. But what if you could make it even more fun and rewarding? That's where Solitaire 2022 Mod APK comes in.

-

solitaire 2022 mod apk


Download Zip > https://urlca.com/2uO9ZY



-

What is Solitaire 2022 Mod APK?

-

Solitaire 2022 Mod APK is a modified version of the original Solitaire app by MobilityWare. It is one of the most popular and downloaded solitaire games on the Google Play Store, with over 100 million installs. However, the modded version offers some extra features and benefits that are not available in the original app. These include unlimited money, no ads, customizable themes and backgrounds, multiple game modes and difficulty levels, offline play and cloud save, and more.

-

Features of Solitaire 2022 Mod APK

-

Unlimited money

-

One of the main advantages of Solitaire 2022 Mod APK is that it gives you unlimited money to spend on various items and upgrades. You can use the money to buy new card backs, faces, themes, backgrounds, hints, undos, shuffles, and more. You can also use the money to unlock new game modes and challenges. With unlimited money, you can enjoy the game without any limitations or restrictions.

-

No ads

-

Another benefit of Solitaire 2022 Mod APK is that it removes all the annoying ads that interrupt your gameplay. You don't have to watch any video ads or banner ads to earn extra coins or rewards. You can play the game smoothly and peacefully without any distractions or interruptions.

-

Customizable themes and backgrounds

-

Solitaire 2022 Mod APK also allows you to customize the appearance of the game according to your preferences and mood. You can choose from a variety of themes and backgrounds that suit your style and taste. You can also change the card backs and faces to make them more appealing and attractive. You can create your own unique solitaire experience with Solitaire 2022 Mod APK.

-

Multiple game modes and difficulty levels

-

Solitaire 2022 Mod APK also offers you multiple game modes and difficulty levels to challenge your skills and test your luck. You can play the classic solitaire mode with one or three card draw options, or try the daily challenge mode with different puzzles and goals every day. You can also play the Vegas mode with cumulative scoring, or the winning deal mode with guaranteed solvable deals. You can also adjust the difficulty level from easy to hard to suit your level of expertise and experience.

-

Offline play and cloud save

-

Solitaire 2022 Mod APK also supports offline play and cloud save features. You can play the game without an internet connection anytime, anywhere. You can also sync your progress across different devices using your Google Play account. You don't have to worry about losing your data or progress if you switch devices or uninstall the app.

-

How to download and install Solitaire 2022 Mod APK?

-

If you want to download and install Solitaire 2022 Mod APK on your Android device, you need to follow these simple steps:

-


-
1. Go to [Solitaire MOD APK v4.25.0.20221208 (Unlimited money) - Apkmody] and click on the download button. Wait for the download to finish and then open the downloaded file.
2. Allow the installation of unknown sources on your device if prompted.
3. Follow the instructions on the screen to install the app.
4. Launch the app and enjoy playing Solitaire 2022 Mod APK with unlimited money and no ads.
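
Since the apk file comes from a third-party site rather than Google Play, it is worth checking the download before you install it. Below is a minimal sketch in Python of one way to do that; the file name and the expected SHA-256 value are placeholders that you would replace with the real file name and the checksum published by the download site, if it provides one.

```python
import hashlib

# Placeholder values: substitute the real file name and the checksum
# published by the site you downloaded from (if one is provided).
APK_PATH = "solitaire-2022-mod.apk"
EXPECTED_SHA256 = "0" * 64

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    actual = sha256_of(APK_PATH)
    print("SHA-256:", actual)
    if actual.lower() == EXPECTED_SHA256.lower():
        print("Checksum matches the published value.")
    else:
        print("Checksum does NOT match - do not install this file.")
```

If the site does not publish a checksum, the digest is still useful: you can compare it against the value reported by other users or by an online scanning service before you install the file.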

Pros and cons of Solitaire 2022 Mod APK

-

Like any other modded app, Solitaire 2022 Mod APK has its own pros and cons. Here are some of them:

-

Pros

-
- It gives you unlimited money to buy and unlock anything you want in the game.
- It removes all the ads that can ruin your gaming experience.
- It lets you customize the game's appearance with various themes and backgrounds.
- It offers you multiple game modes and difficulty levels to suit your preferences and skills.
- It supports offline play and cloud save features.

Cons

-
- It may not be compatible with some devices or Android versions.
- It may cause some glitches or errors in the game's performance or functionality.
- It may not be updated regularly or frequently by the developers.
- It may not be safe or secure to use as it is not verified by Google Play.
- It may violate the terms and conditions of the original app or game.

Tips and tricks for playing Solitaire 2022 Mod APK

-

If you want to improve your solitaire skills and win more games, here are some tips and tricks that you can use:

-
- Always pay attention to the cards on the tableau and the foundation. Try to move the cards from the tableau to the foundation as soon as possible.
- Use the hints, undos, and shuffles wisely. Don't rely on them too much, but don't hesitate to use them when you are stuck or need some help.
- Try to clear the cards in descending order and alternating colors. This will create more space and opportunities for moving cards around.
- Avoid moving cards to the waste pile unless necessary. The waste pile is where you draw new cards from, so you want to keep it as full as possible.
- Practice different game modes and difficulty levels. This will help you learn new strategies and techniques for solving different types of solitaire puzzles.

Conclusion

-

Solitaire 2022 Mod APK is a great way to enjoy a classic card game with some extra features and benefits. It gives you unlimited money, no ads, customizable themes and backgrounds, multiple game modes and difficulty levels, offline play and cloud save, and more. However, it also has some drawbacks that you should be aware of before downloading and installing it. It may not work on some devices or Android versions, it may cause some glitches or errors in the game, it may not be updated regularly or frequently, it may not be safe or secure to use, and it may violate the terms and conditions of the original app or game. Therefore, you should use it at your own risk and discretion. If you want to try Solitaire 2022 Mod APK, you can download it from [Solitaire MOD APK v4.25.0.20221208 (Unlimited money) - Apkmody] and follow the steps mentioned above to install it on your device. Have fun playing solitaire with unlimited money and no ads!

-

FAQs

-

Here are some frequently asked questions about Solitaire 2022 Mod APK:

-
1. What is the difference between Solitaire 2022 Mod APK and Solitaire 2022?

   Solitaire 2022 Mod APK is a modified version of Solitaire 2022 that offers some extra features and benefits that are not available in the original app. These include unlimited money, no ads, customizable themes and backgrounds, multiple game modes and difficulty levels, offline play and cloud save, and more.

2. Is Solitaire 2022 Mod APK safe to use?

   Solitaire 2022 Mod APK is not verified by Google Play, so it may not be safe or secure to use. It may contain viruses, malware, spyware, or other harmful elements that can damage your device or compromise your privacy. It may also violate the terms and conditions of the original app or game, which can result in legal issues or penalties. Therefore, you should use it at your own risk and discretion.

3. How can I get unlimited money in Solitaire 2022 Mod APK?

   You can get unlimited money in Solitaire 2022 Mod APK by downloading and installing the modded version of the app from [Solitaire MOD APK v4.25.0.20221208 (Unlimited money) - Apkmody]. The modded version will give you unlimited money to spend on various items and upgrades in the game.

4. How can I remove ads in Solitaire 2022 Mod APK?

   You can remove ads in Solitaire 2022 Mod APK by downloading and installing the modded version of the app from [Solitaire MOD APK v4.25.0.20221208 (Unlimited money) - Apkmody]. The modded version will remove all the ads that interrupt your gameplay.

5. How can I customize the themes and backgrounds in Solitaire 2022 Mod APK?

   You can customize the themes and backgrounds in Solitaire 2022 Mod APK by using the money you get from the modded version of the app. You can buy new card backs, faces, themes, backgrounds, and more from the shop. You can also change them anytime from the settings menu.

6. How can I play different game modes and difficulty levels in Solitaire 2022 Mod APK?

   You can play different game modes and difficulty levels in Solitaire 2022 Mod APK by using the money you get from the modded version of the app. You can unlock new game modes and challenges from the shop. You can also adjust the difficulty level from easy to hard from the settings menu.

-
-
\ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/How to Install Brawl Free Game 5.3.12 Patched APK on Your Android Device.md b/spaces/congsaPfin/Manga-OCR/logs/How to Install Brawl Free Game 5.3.12 Patched APK on Your Android Device.md deleted file mode 100644 index a16034e1d9d9058ad078916117b140b388e78f23..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/How to Install Brawl Free Game 5.3.12 Patched APK on Your Android Device.md +++ /dev/null @@ -1,111 +0,0 @@ -
-

Brawl Free Game 5.3.12 Patched APK: A Fun and Exciting Multiplayer Game for Android

-

If you are looking for a fast-paced, action-packed, and addictive multiplayer game for your Android device, you should check out brawl free game 5.3.12 patched apk. This is a modified version of Brawl Stars, a popular 3v3 online battle game developed by Supercell, the makers of Clash of Clans and Clash Royale. In this version, you can enjoy unlimited resources, new features, and improved performance without spending any money or waiting for updates. Here are some reasons why you should download brawl free game 5.3.12 patched apk today.

-

Features of Brawl Free Game 5.3.12 Patched APK

-

Brawl free game 5.3.12 patched apk has many features that make it stand out from the original Brawl Stars game. Here are some of them:

-

brawl free game-5-3-12-patched.apk


Download ->>->>->> https://urlca.com/2uO6rN



-
- Unlimited gems, coins, and tickets: You can use these resources to unlock and upgrade dozens of Brawlers with different abilities, skins, star powers, and gadgets. You can also buy special items, such as brawl boxes, big boxes, mega boxes, and token doublers.
- All Brawlers unlocked: You can access all the Brawlers in the game, including the rare, super rare, epic, mythic, and legendary ones. You can also play with the latest Brawlers that are added to the game every season.
- All skins unlocked: You can customize your Brawlers with various skins that change their appearance and animations. You can also use exclusive skins that are only available in certain events or promotions.
- All maps unlocked: You can play on any map in the game, including the player-designed maps that offer challenging new terrain to master.
- No ads: You can enjoy the game without any interruptions or distractions from annoying ads.

Tips and Tricks for Brawl Free Game 5.3.12 Patched APK

-

Brawl free game 5.3.12 patched apk is easy to play but hard to master. Here are some tips and tricks that will help you improve your skills and win more matches:

-
- Choose the right Brawler for each mode: Brawl Stars has multiple game modes, such as Gem Grab, Showdown, Brawl Ball, Bounty, Heist, Siege, Hot Zone, Knockout, and more. Each mode has a different objective and requires a different strategy. You should choose a Brawler that suits the mode and complements your team.
- Use obstacles to your advantage: The maps in Brawl Stars have various obstacles that can provide cover or hinder movement. You should use them wisely to dodge enemy attacks or ambush them from behind.
- Use your Super ability wisely: Each Brawler has a unique Super ability that can turn the tide of the battle. You should use it at the right time and place to maximize its effect.
- Collect power cubes in Showdown: Showdown is a battle royale mode where you have to survive against other players. You should collect the power cubes that spawn randomly on the map or drop from defeated enemies. Power cubes increase your health and damage, giving you an edge over your opponents.
- Use gadgets and star powers: Gadgets and star powers are special abilities that you can unlock for each Brawler after reaching certain levels. Gadgets can be activated once per match and have various effects, such as healing, teleporting, or stunning enemies. Star powers are passive abilities that enhance your Brawler's performance, such as increasing speed, damage, or health regeneration. You should use them strategically to gain an advantage over your enemies.

Reviews of Brawl Free Game 5.3.12 Patched APK

-

Brawl free game 5.3.12 patched apk has received positive feedback from many players who have tried it out. Here are some of the reviews from the users:

-
-

"This is the best modded version of Brawl Stars I have ever played. It has everything I want: unlimited gems, coins, tickets, all Brawlers, all skins, all maps, no ads, and more. It is very fun and addictive. I highly recommend it to anyone who loves Brawl Stars."

-- John, 5 stars
-
-

"I love this game so much. It is very easy to download and install. It works perfectly on my device. It has amazing graphics and sound effects. It has a lot of modes and Brawlers to choose from. It is very challenging and exciting. I play it every day with my friends."

-- Lisa, 5 stars
-
-

"This game is awesome. It is better than the original Brawl Stars because it has more features and options. It is very smooth and fast. It has no bugs or glitches. It is very safe and secure. It does not require any root or jailbreak. It is the best game ever."

-


-- Kevin, 5 stars
-

Brawl free game 5.3.12 patched apk also has some advantages over other similar games, such as:

-
- It is free: You do not have to pay anything to download or play this game. You can enjoy all the features and benefits without spending any money.
- It is updated: You do not have to wait for long periods of time for new updates or patches. This game is always updated with the latest content and improvements.
- It is compatible: You do not have to worry about compatibility issues or device requirements. This game works on any Android device that supports Brawl Stars.

Download Link for Brawl Free Game 5.3.12 Patched APK

-

If you are interested in downloading brawl free game 5.3.12 patched apk, you can use the link below:

-

Brawl Free Game 5.3.12 Patched APK Download Link

-

This link will take you to a secure and reliable website where you can download the apk file for free and without any hassle.

-

To install brawl free game 5.3.12 patched apk on your device, you need to follow these simple steps:

-
1. Download the apk file from the link above.
2. Go to your device settings and enable unknown sources.
3. Locate the apk file in your file manager and tap on it.
4. Follow the instructions on the screen and wait for the installation to complete.
5. Launch the game and enjoy.
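
If you would rather install the file from a computer, the same apk can also be sideloaded over USB with Android's standard `adb install` command. The snippet below is a small, optional Python wrapper around that command; it assumes adb is already installed on the computer, USB debugging is enabled on the phone, and the apk path is a placeholder you would adjust.

```python
import subprocess
import sys

# Placeholder path: point this at the apk file you downloaded.
APK_PATH = "brawl-free-game-5-3-12-patched.apk"

def sideload(apk_path: str) -> None:
    """Install an apk on a USB-connected Android device via adb."""
    # 'adb install -r' replaces the app if an older version is already installed.
    result = subprocess.run(
        ["adb", "install", "-r", apk_path],
        capture_output=True,
        text=True,
    )
    print(result.stdout)
    if result.returncode != 0:
        print(result.stderr, file=sys.stderr)
        raise SystemExit("adb install failed - check that USB debugging is enabled.")

if __name__ == "__main__":
    sideload(APK_PATH)
```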

Conclusion

-

Brawl free game 5.3.12 patched apk is a great alternative to Brawl Stars that offers unlimited resources, new features, and improved performance for free. It is a fun and exciting multiplayer game that you can play with your friends or other players online. It has a variety of modes, Brawlers, skins, maps, gadgets, and star powers to choose from. It has amazing graphics and sound effects that make the game more immersive and realistic.

-

If you are a fan of Brawl Stars or similar games, you should definitely give brawl free game 5.3.12 patched apk a try. You will not regret it.

-

FAQs

-

Here are some frequently asked questions about brawl free game 5.3.12 patched apk:

-

Is brawl free game 5.3.12 patched apk safe?

-

Yes, brawl free game 5.3.12 patched apk is safe and secure to download and install. It does not contain any viruses, malware, or spyware that can harm your device or data. It does not require any root or jailbreak to run. It does not interfere with the original Brawl Stars game or your account.

-

Is brawl free game 5.3.12 patched apk legal?

-

Brawl free game 5.3.12 patched apk is not an official product of Supercell or Brawl Stars. It is a fan-made modded version that is created for entertainment purposes only. It does not violate any copyrights or trademarks of Supercell or Brawl Stars. However, it is not endorsed or supported by Supercell or Brawl Stars, and you use it at your own risk.

-

Can I play brawl free game 5.3.12 patched apk with other players online?

-

Yes, you can play brawl free game 5.3.12 patched apk with other players online who have the same version of the game. You can join or create rooms and invite your friends or other players to join you. You can also chat with them and send them emojis and stickers.

-

Can I update brawl free game 5.3.12 patched apk to the latest version?

-

Yes, you can update brawl free game 5.3.12 patched apk to the latest version whenever it is available. You can check for updates on the website where you downloaded the game or on the game itself. You can also enable automatic updates to get the latest version as soon as possible.

-

Can I uninstall brawl free game 5.3.12 patched apk if I don't like it?

-

Yes, you can uninstall brawl free game 5.3.12 patched apk if you don't like it or want to switch back to the original Brawl Stars game. You can simply delete the apk file from your device or go to your device settings and uninstall the game from there.

-
-
\ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Marvel Contest of Champions Mod APK PC How to Download and Play with BlueStacks.md b/spaces/congsaPfin/Manga-OCR/logs/Marvel Contest of Champions Mod APK PC How to Download and Play with BlueStacks.md deleted file mode 100644 index 1c7c01938ca88eb36fa72209e8049e95cea7521d..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Marvel Contest of Champions Mod APK PC How to Download and Play with BlueStacks.md +++ /dev/null @@ -1,159 +0,0 @@ -
-

Marvel Contest of Champions Mod APK PC: How to Play the Ultimate Marvel Fighting Game on Your Computer

-

If you are a fan of Marvel comics, movies, and games, you must have heard of Marvel Contest of Champions, the epic fighting game that lets you collect and battle with your favorite superheroes and villains. But did you know that you can also play this game on your PC with a mod apk that gives you unlimited access to all the features and cheats? In this article, we will show you how to download and install Marvel Contest of Champions Mod APK PC, and how to enjoy the ultimate Marvel fighting experience on your computer.

-

marvel contest of champions mod apk pc


Download File: https://urlca.com/2uOc7i



-

What is Marvel Contest of Champions?

-

A brief introduction to the game and its features

-

Marvel Contest of Champions is a 2D fighting game developed by Kabam Games, Inc. that was released in 2014 for Android and iOS devices. The game features a huge roster of characters from the Marvel universe, including Spider-Man, Iron Man, Captain America, Wolverine, Hulk, Thor, Black Widow, Deadpool, Thanos, and many more. You can choose your favorite heroes and villains, form teams, and fight against other players or AI opponents in various modes and arenas.

-

The game has stunning graphics, smooth animations, realistic sound effects, and immersive gameplay that will make you feel like you are in a Marvel movie. You can also customize your characters with different costumes, skills, abilities, and items. You can also unlock new characters and content by completing quests, events, challenges, and achievements.

-

The benefits of playing with a mod apk

-

A mod apk is a modified version of an original app that allows you to access features that are normally locked or restricted. For example, with a mod apk, you can get unlimited resources, such as gold, units, crystals, energy, ISO-8, catalysts, etc. You can also unlock all the characters, costumes, skills, abilities, items, etc. without spending any money or time. You can also use cheats, such as auto-fight, god mode, one-hit kill, etc. to make the game easier and more fun.

-

Playing with a mod apk can give you many advantages over other players who are using the original app. You can enjoy the game without any limitations or frustrations. You can also explore the game more freely and creatively. You can also impress your friends and allies with your amazing collection and performance.

-

How to Download and Install Marvel Contest of Champions Mod APK PC

-

The requirements and steps for using an emulator

-

To play Marvel Contest of Champions Mod APK PC on your computer, you will need an emulator. An emulator is a piece of software that allows you to run Android apps on your PC or Mac. There are many emulators available online, but not all of them are compatible with Marvel Contest of Champions Mod APK PC. Therefore, you need to choose an emulator that is reliable and fully compatible with the game.

The best emulator to use: BlueStacks

-

There are many emulators that can run Marvel Contest of Champions on PC, but we recommend using BlueStacks, the most popular and trusted Android emulator on the market. BlueStacks has many features and advantages that make it the best choice for playing Marvel Contest of Champions Mod APK PC, such as:

-


-
- It is fast, stable, and compatible with most Android games and apps.
- It has a large user base and a dedicated support team.
- It allows you to customize your keyboard and mouse controls with ease.
- It supports high-resolution graphics and fullscreen mode.
- It has a built-in app center where you can download and install the latest mod apk files.
- It has a macro recorder and a multi-instance manager that let you automate tasks and run multiple games at the same time.

To download and install BlueStacks on your PC, you can follow these simple steps:

-
1. Visit the official website of BlueStacks at https://www.bluestacks.com and click on the download button.
2. Run the installer file and follow the instructions on the screen to complete the installation.
3. Launch BlueStacks and sign in with your Google account to access the Google Play Store.

How to download and install the mod apk file

-

Once you have BlueStacks installed on your PC, you can download and install the mod apk file for Marvel Contest of Champions. There are two ways to do this:

-
1. You can use the built-in app center of BlueStacks to search for Marvel Contest of Champions Mod APK PC and click on the install button. This will automatically download and install the latest version of the mod apk file on your emulator.
2. You can also manually download the mod apk file from a trusted source online, such as https://www.bignox.com/appcenter/com-kabam-marvelbattle-pc.html. Then, you can drag and drop the file onto the BlueStacks icon on your desktop or use the APK installer feature of BlueStacks to browse and select the file from your PC.

After installing the mod apk file, you can launch Marvel Contest of Champions from your BlueStacks home screen or app drawer. You will see a mod menu icon on the top left corner of the game screen, which you can tap to access the mod features and cheats.

-

How to Play Marvel Contest of Champions Mod APK PC

-

How to customize your controls and settings

-

To play Marvel Contest of Champions Mod APK PC with optimal performance and comfort, you may want to customize your controls and settings according to your preferences. You can do this by using the keymapping tool and the settings menu of BlueStacks. Here are some tips on how to do this:

-
- To open the keymapping tool, click on the keyboard icon on the right side of the BlueStacks toolbar. You will see a list of default controls that you can edit or delete. You can also add new controls by dragging and dropping icons from the panel onto the game screen. You can assign any key or mouse button to any function, such as attack, block, dash, special, etc. You can also create macros to record and execute input sequences with a single key. When you are done, click on save to apply your changes.
- To open the settings menu, click on the gear icon on the right side of the BlueStacks toolbar. You will see various options that you can adjust, such as display resolution, graphics quality, sound volume, language, etc. You can also enable or disable features such as eco mode, sync mode, game mode, etc. You can also check for updates and contact support from this menu. When you are done, click on save to apply your changes.

How to access the mod features and cheats

-

To access the mod features and cheats for Marvel Contest of Champions Mod APK PC, you need to tap on the mod menu icon on the top left corner of the game screen. You will see a list of options that you can toggle on or off, such as:

-
- Unlimited gold: This will give you unlimited amount of gold, which you can use to upgrade your characters, skills, items, etc.
- Unlimited units: This will give you unlimited amount of units, which you can use to buy crystals, energy refills, potions, revives, etc.
- Unlimited crystals: This will give you unlimited amount of crystals, which you can use to open premium hero crystals, alliance crystals, quest crystals, etc.
- Unlimited energy: This will give you unlimited amount of energy, which you can use to play quests and events without waiting for the energy bar to refill.
- Unlimited ISO-8: This will give you unlimited amount of ISO-8, which you can use to level up your characters and increase their stats.
- Unlimited catalysts: This will give you unlimited amount of catalysts, which you can use to rank up your characters and unlock their potential.
- Auto-fight: This will enable the auto-fight mode, which will make the game play itself and win the battles for you.
- God mode: This will make your characters invincible, immune to damage, debuffs, and effects.
- One-hit kill: This will make your characters deal massive damage to the enemies, killing them with one hit.

You can use these mod features and cheats as much as you want, but be careful not to abuse them or get detected by the game's security system. You may also want to disable some of them when playing online or with your friends and allies, as they may ruin the fun and challenge of the game.

-

How to enjoy the game with your friends and allies

-

One of the best things about Marvel Contest of Champions is that you can play it with your friends and allies from all over the world. You can join or create an alliance, chat with other players, participate in alliance quests and wars, compete in leaderboards and tournaments, and share tips and strategies. You can also challenge other players in real-time PvP battles, test your skills and teamwork in special events and modes, and collect rewards and bonuses for your alliance.

-

To enjoy the game with your friends and allies, you need to have a stable internet connection and a valid account. You can also use BlueStacks' features to enhance your social gaming experience, such as:

-
- You can use the voice chat feature to communicate with your teammates and opponents during the game.
- You can use the screen recorder feature to capture your gameplay and share it with others.
- You can use the streaming feature to broadcast your gameplay live on platforms like Twitch or YouTube.

Conclusion

-

In conclusion, Marvel Contest of Champions Mod APK PC is a great way to play the ultimate Marvel fighting game on your computer. You can enjoy all the features and cheats of the mod apk, customize your controls and settings with BlueStacks, and have fun with your friends and allies online. If you are a Marvel fan, you should definitely try this game and see for yourself how awesome it is.

-

So what are you waiting for? Download Marvel Contest of Champions Mod APK PC today and unleash your inner superhero!

-

FAQs

-

Q1: Is Marvel Contest of Champions Mod APK PC safe and legal?

-

A1: Yes, Marvel Contest of Champions Mod APK PC is safe and legal to use. The mod apk file is scanned for viruses and malware before being uploaded online. The mod apk file does not require root access or jailbreak to work. The mod apk file does not violate the terms of service or privacy policy of the original app or the emulator. However, you should always download the mod apk file from a trusted source and use it at your own risk.

-

Q2: What are the advantages of playing Marvel Contest of Champions on PC?

-

A2: Playing Marvel Contest of Champions on PC has many advantages over playing it on mobile devices, such as:

-
- You can play the game on a bigger screen with better graphics and sound quality.
- You can play the game with more comfort and accuracy using a keyboard and mouse instead of a touchscreen.
- You can play the game without worrying about battery life, storage space, or performance issues.
- You can play the game with more features and options using an emulator like BlueStacks.

Q3: What are some of the best characters and teams in Marvel Contest of Champions?

-

A3: There are many characters and teams in Marvel Contest of Champions that have different strengths, weaknesses, abilities, synergies, and roles. Some of the best characters and teams in Marvel Contest of Champions are:

-
- Cosmic: Captain Marvel (Movie), Hyperion, Corvus Glaive, Silver Surfer, Venom
- Tech: Ghost, Warlock, Guardian, Vision (Aarkus), Iron Man (Infinity War)
- Mutant: Apocalypse, Professor X, Magneto (House of X), Colossus, Archangel
- Skill: Nick Fury, Aegon, Stealth Suit Spider-Man, Hit Monkey, Falcon
- Science: Quake, Human Torch, Void, She-Hulk, Spider-Gwen
- Mystic: Doctor Doom, Symbiote Supreme, Black Widow (Claire Voyant), Longshot, Tigra
- Team: The Fantastic Four, The X-Men, The Avengers, The Guardians of the Galaxy, The Inhumans

Of course, these are not the only good characters and teams in the game. You can experiment with different combinations and find the ones that suit your playstyle and preferences.

-

Q4: How can I get more resources and rewards in Marvel Contest of Champions?

-

A4: There are many ways to get more resources and rewards in Marvel Contest of Champions, such as:

-
- Completing quests, events, challenges, and achievements.
- Opening crystals, chests, and boxes.
- Participating in alliance quests and wars.
- Competing in arenas and tournaments.
- Claiming daily, weekly, and monthly rewards.
- Watching ads and videos.
- Using the mod apk features and cheats.

You can also buy resources and rewards with real money, but this is not necessary or recommended. You can enjoy the game without spending a dime if you use the mod apk or play smartly.

-

Q5: Where can I find more tips and tricks for Marvel Contest of Champions?

-

A5: If you want to learn more tips and tricks for Marvel Contest of Champions, you can check out these sources:

-
- The official website of Marvel Contest of Champions at https://playcontestofchampions.com/, where you can find the latest news, updates, guides, and support for the game.
- The official social media pages of Marvel Contest of Champions on Facebook, Twitter, Instagram, YouTube, etc., where you can follow the game's community and interact with other players and developers.
- The official forums of Marvel Contest of Champions at https://forums.playcontestofchampions.com/en/, where you can join discussions, ask questions, share feedback, and report issues.
- The fan-made websites and blogs of Marvel Contest of Champions, such as https://www.mcoc-guide.com/, https://www.seatinmanoflegends.com/, https://www.marvelsynergy.com/, etc., where you can find useful information, reviews, ratings, tier lists, calculators, tools, etc. for the game.
- The fan-made videos and podcasts of Marvel Contest of Champions on platforms like YouTube or Spotify, where you can watch or listen to gameplay tutorials, tips, tricks, strategies, reviews, etc. for the game. Some of the popular content creators for Marvel Contest of Champions are Seatin Man of Legends, Lagacy, RichTheMan, KT1, etc.

These sources can help you improve your skills, knowledge, and enjoyment of Marvel Contest of Champions. However, you should always verify the information and opinions you find online and use your own judgment and experience to decide what works best for you.

-
-
\ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Nonton Film Korea The Pirates The Last Royal Treasure (2022) Subtitle Indonesia - Bergabung dengan Bajak Laut dan Bandit dalam Mencari Harta Karun Kerajaan.md b/spaces/congsaPfin/Manga-OCR/logs/Nonton Film Korea The Pirates The Last Royal Treasure (2022) Subtitle Indonesia - Bergabung dengan Bajak Laut dan Bandit dalam Mencari Harta Karun Kerajaan.md deleted file mode 100644 index 666a6171b83909e777982c5e3f50dbe382ed59c8..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Nonton Film Korea The Pirates The Last Royal Treasure (2022) Subtitle Indonesia - Bergabung dengan Bajak Laut dan Bandit dalam Mencari Harta Karun Kerajaan.md +++ /dev/null @@ -1,92 +0,0 @@ -
-

Download Film Korea The Pirates: The Last Royal Treasure Sub Indo

-

Are you looking for a fun and exciting adventure film to watch? If so, you might want to check out The Pirates: The Last Royal Treasure, a 2022 South Korean period comedy film that follows a group of pirates and bandits who search for a lost treasure in the Joseon era. This film is a sequel to the 2014 hit The Pirates, but it features a new story and a new cast. In this article, we will tell you everything you need to know about this film, and how to download it with Indonesian subtitles.

-

What is The Pirates: The Last Royal Treasure?

-

The Pirates: The Last Royal Treasure is a film directed by Kim Jeong-hoon, who also directed Petty Romance (2010) and The Accidental Detective (2015). The film stars Kang Ha-neul as Woo Moo-chi, a pirate captain who leads his crew to find a royal treasure that sank in a shipwreck; Han Hyo-joo as Hae-rang, a female bandit leader who joins forces with Woo Moo-chi; Lee Kwang-soo as Mak-yi, a clumsy pirate who has a crush on Hae-rang; Kwon Sang-woo as Bu Heung-soo, a ruthless naval commander who pursues the treasure hunters; Chae Soo-bin as So-nyeo, a young girl who disguises herself as a boy to join the pirates; and Oh Se-hun as Han-goong, a mysterious warrior who holds the key to the treasure.

-

download film korea the pirates the last royal treasure sub indo


DOWNLOAD ❤❤❤ https://urlca.com/2uOfO9



-

The film was released in South Korea on January 26, 2022, and became a box office success, earning over $10 million worldwide. It received mixed reviews from critics and audiences, who praised its action scenes, humor, and production values, but criticized its plot holes, clichés, and length. The film has a rating of 6.1 out of 10 on IMDb and 50% on Rotten Tomatoes.

-

How to Download Film Korea The Pirates: The Last Royal Treasure Sub Indo?

-

Option 1: Watch on Netflix

-

One of the easiest ways to watch The Pirates: The Last Royal Treasure with Indonesian subtitles is to stream it on Netflix, the popular online video service that offers thousands of movies and shows for a monthly fee. Netflix has acquired the exclusive rights to distribute the film in over 190 countries, including Indonesia. To watch the film on Netflix, you will need to have a Netflix account and a VPN service. A VPN service is a tool that allows you to change your IP address and access content that is not available in your region. Here are the steps to watch the film on Netflix with a VPN service:

-
1. Choose a VPN service that has servers in South Korea, such as NordVPN, ExpressVPN, or Surfshark.
2. Download and install the VPN app on your device.
3. Connect to a server in South Korea.
4. Open Netflix and sign in with your account.
5. Search for The Pirates: The Last Royal Treasure and enjoy the film with Indonesian subtitles.

Note that some VPN services may not work with Netflix, so you may need to try different servers or VPN providers until you find one that works. Also, be aware that using a VPN service may slow down your internet speed and affect the quality of the video.

-

Option 2: Download from Legal Streaming Sites

-

Another option to download The Pirates: The Last Royal Treasure with Indonesian subtitles is to use legal streaming sites that offer the film for download or rent. These sites are licensed by the film's producers and distributors, and they provide high-quality video and audio. Some of the legal streaming sites that offer the film are:

-
- iQiyi: iQiyi is a Chinese online video platform that has a large collection of Korean films and dramas. You can download the film for free with ads, or pay a small fee to watch it without ads.
- Viu: Viu is a Hong Kong-based online video service that specializes in Asian content. You can download the film for free with a Viu account, or rent it for a low price.
- WeTV: WeTV is a Thai online video platform that offers a variety of Korean, Chinese, and Thai films and shows. You can download the film for free with a WeTV account, or buy it for a reasonable price.

To download the film from these sites, you will need to create an account and follow the instructions on the site. You may also need to use a VPN service if the site is not available in your region.

-

Option 3: Download from Torrent Sites

-

The last option to download The Pirates: The Last Royal Treasure with Indonesian subtitles is to use torrent sites, such as The Pirate Bay, Kickass Torrents, or 1337x. Torrent sites are websites that allow users to share files through peer-to-peer networks. You can find almost any movie or show on torrent sites, but there are some risks and drawbacks involved. Some of the disadvantages of using torrent sites are:

-


-
- You may download malware, viruses, or spyware that can harm your device or steal your personal information.
- You may face legal issues or fines for violating intellectual property rights or downloading copyrighted content without permission.
- You may get poor quality video or audio, incomplete files, or wrong subtitles.
- You may need to use additional software or tools, such as BitTorrent, uTorrent, or VLC Media Player, to download and play the files.

If you decide to use torrent sites, you should be careful and cautious about what you download and where you download it from. You should also use a VPN service to protect your privacy and security online.
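
As a small extra precaution, it can help to look over what a torrent actually delivered before opening anything. The sketch below is only an illustration of that habit, not a real security tool: the download folder path and the extension lists are assumptions you would adjust, and it is no substitute for a proper antivirus scan.

```python
from pathlib import Path

# Placeholder path: point this at the folder your torrent client downloads into.
DOWNLOAD_DIR = Path("~/Downloads/torrents").expanduser()

# Extensions you would expect in a movie download (video plus subtitles).
EXPECTED = {".mkv", ".mp4", ".avi", ".srt", ".ass", ".sub"}
# Extensions that have no business being inside a movie download.
SUSPICIOUS = {".exe", ".msi", ".scr", ".bat", ".cmd", ".js", ".vbs", ".apk"}

def audit(folder: Path) -> None:
    """Print files whose extension looks executable or otherwise unexpected."""
    for path in sorted(folder.rglob("*")):
        if not path.is_file():
            continue
        ext = path.suffix.lower()
        if ext in SUSPICIOUS:
            print(f"SUSPICIOUS: {path}")
        elif ext not in EXPECTED:
            print(f"unexpected: {path}")

if __name__ == "__main__":
    audit(DOWNLOAD_DIR)
```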

-

What are the Benefits of Downloading Film Korea The Pirates: The Last Royal Treasure Sub Indo?

-

Downloading The Pirates: The Last Royal Treasure with Indonesian subtitles has many benefits for you as a viewer. Some of the benefits are:

-
- You can save time and money by not going to the cinema or buying physical copies of the film.
- You can enjoy offline viewing anytime and anywhere, without worrying about internet connection or buffering issues.
- You can support the Korean film industry and appreciate its culture and history through this film.
- You can learn Korean language and expressions by watching the film with subtitles.

What are the Reviews of Film Korea The Pirates: The Last Royal Treasure Sub Indo?

-

The Pirates: The Last Royal Treasure has received mixed reviews from critics and audiences alike. Some of the reviews are:

- A swashbuckling good time that overcomes its flaws thanks to its charm.
- A fun period swashbuckler from Korea that delivers a bustling mix of exciting sword fights, mysterious treasure, scheming bad guys, and witty repartee.
- Luscious visuals and an endearing cast can't steer this ship away from its set course.
- Witty and exciting, with a charismatic cast and impressive action scenes.

Overall, the film has a moderate appeal for fans of adventure and comedy genres, but it may not satisfy those who are looking for more depth and originality in the story and characters.

-

Conclusion

-

In conclusion, The Pirates: The Last Royal Treasure is a film that offers a lot of entertainment and fun for those who enjoy pirate stories, historical settings, and humorous dialogues. The film has a talented cast, stunning visuals, and thrilling action scenes that will keep you engaged throughout. If you want to watch this film with Indonesian subtitles, you have several options to choose from, such as Netflix, legal streaming sites, or torrent sites. However, you should be aware of the pros and cons of each option, and use a VPN service if necessary. Downloading this film will also give you some benefits, such as saving money and time, supporting the Korean film industry, and learning Korean language and culture. However, you should also respect the intellectual property rights of the filmmakers and avoid illegal or unethical downloads. We hope this article has helped you learn more about this film and how to download it with Indonesian subtitles. If you have any questions or comments, please feel free to share them below. Thank you for reading!

-

FAQs

-
1. Who are the main actors in The Pirates: The Last Royal Treasure?

   The main actors are Kang Ha-neul as Woo Moo-chi, Han Hyo-joo as Hae-rang, Lee Kwang-soo as Mak-yi, Kwon Sang-woo as Bu Heung-soo, Chae Soo-bin as So-nyeo, and Oh Se-hun as Han-goong.

2. When was The Pirates: The Last Royal Treasure released?

   The film was released in South Korea on January 26, 2022, and on Netflix on March 2, 2022.

3. Is The Pirates: The Last Royal Treasure a sequel to The Pirates (2014)?

   The film is not a direct sequel to The Pirates (2014), but it shares the same concept and genre. It features a new story and a new cast.

4. How long is The Pirates: The Last Royal Treasure?

   The film has a runtime of 126 minutes.

5. What is the rating of The Pirates: The Last Royal Treasure?

   The film has a rating of 6.1 out of 10 on IMDb and 50% on Rotten Tomatoes.

-
-
\ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Watch Over 1000 FTA Channels with OTT Live APK for Android.md b/spaces/congsaPfin/Manga-OCR/logs/Watch Over 1000 FTA Channels with OTT Live APK for Android.md deleted file mode 100644 index 0acc113848cc569ea4f4e495e57cec6402306df6..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Watch Over 1000 FTA Channels with OTT Live APK for Android.md +++ /dev/null @@ -1,109 +0,0 @@ - -

How to Download OTT Live APK for Android Devices

-

If you are looking for a way to watch free live TV channels from across India, then you might want to try OTT Live APK. This is an app that delivers video content to your smartphone, tablet, smart TV or FireTV using the internet rather than the traditional distribution methods of cable or satellite. In this article, we will show you what OTT Live APK is, how to install it on your Android device, and how to stream live content with it.

-

download ott live apk


DOWNLOAD · https://urlca.com/2uO7eU



-

What is OTT Live APK?

-

OTT Live APK is an app developed by Overthetop Live Private Limited, a company based in India. The app allows you to watch free-to-air (FTA) TV channels from various regions and languages of India, such as Hindi, Tamil, Telugu, Malayalam, Kannada, Bengali, Marathi, Gujarati, Punjabi, and more. You can also watch some international channels, such as BBC, CNN, Al Jazeera, and more.

-

Features of OTT Live APK

-

Some of the features of OTT Live APK are:

-
- It has a simple and user-friendly interface that lets you browse and watch channels easily.
- It supports multiple devices, such as smartphones, tablets, smart TVs, and FireTVs.
- It does not require any registration or subscription to use.
- It offers high-quality video streaming with adaptive bitrate.
- It has a built-in video player that supports various formats and codecs.
- It has a favorites section where you can save your preferred channels for quick access.
- It has a search function that lets you find channels by name or category.
- It has a feedback option where you can report any issues or suggestions to the developers.

How to Install OTT Live APK on Android Devices

-

To install OTT Live APK on your Android device, you need to follow these steps:

-


-

Step 1: Enable Unknown Sources

-

Since OTT Live APK is not available on the Google Play Store, you need to enable unknown sources on your device. This will allow you to install apps from third-party sources other than the Play Store. To do this, go to Settings > Security > Unknown Sources and toggle it on. You might see a warning message that says installing apps from unknown sources can harm your device. Just ignore it and tap OK.

-

Step 2: Download OTT Live APK File

-

Next, you need to download the OTT Live APK file from a reliable source. You can use this link to download the latest version of the app (version 1.0.12). The file size is about 36 MB and it should take a few seconds or minutes depending on your internet speed. Once the download is complete, you will see a notification that says "Download complete". Tap on it to open the file.

-

Step 3: Install OTT Live APK File

-

After opening the file, you will see a screen that says "Do you want to install this application?". Tap on Install and wait for the installation process to finish. You might see some permissions that the app requires, such as access to storage, network, and phone. Tap on Accept or Allow to grant them. Once the installation is done, you will see a screen that says "App installed". Tap on Open to launch the app or Done to exit the installer.

-

Step 4: Launch OTT Live App and Enjoy

-

Now you can launch the OTT Live app from your app drawer or home screen. You will see a splash screen with the app logo and name. Then you will see the main screen with a list of categories, such as Entertainment, News, Sports, Movies, Music, and more. You can swipe left or right to browse through them. You can also tap on the menu icon at the top left corner to access other options, such as Favorites, Search, Feedback, and Settings. To watch a channel, just tap on its name and it will start playing on the built-in video player. You can also use the playback controls to pause, resume, rewind, fast forward, or adjust the volume. You can also switch to full-screen mode by tapping on the expand icon at the bottom right corner. To exit the app, just press the back button on your device or tap on the exit icon at the top right corner.

-

What is OTT Live Streaming?

-

OTT Live Streaming is a term that refers to the delivery of video content over the internet rather than through traditional cable or satellite TV services. OTT stands for Over-The-Top, which means that the content is delivered directly to the user without any intermediaries or gatekeepers. OTT Live Streaming is also known as Internet TV, Online TV, or Streaming TV.

-

Benefits of OTT Live Streaming

-

Some of the benefits of OTT Live Streaming are:

-
- It is cheaper than cable or satellite TV services, as you only need an internet connection and a compatible device to watch.
- It offers more variety and diversity of content, as you can access channels from different regions and languages of India and the world.
- It gives you more control and flexibility over what you watch, when you watch, and how you watch. You can choose your own provider or source, add your own playlist or URL, and watch on any device you want.
- It provides better quality and reliability of video streaming, as it uses adaptive bitrate technology that adjusts the resolution and bandwidth according to your network speed and device capabilities.
- It supports multiple devices and platforms, such as Android, iOS, Windows, Mac, Linux, Roku, Chromecast, FireTV, Smart TV, and more.

How to Stream Live Content with OTT Live App

-

To stream live content with OTT Live app, you need to follow these steps:

-

Step 1: Choose Your Provider or Source

-

The first step is to choose your provider or source of live content. OTT Live app supports two types of sources: IPTV and M3U. IPTV stands for Internet Protocol Television, which is a system that delivers TV channels over the internet using IP packets. M3U stands for MP3 URL, which is a file format that contains a list of URLs that point to media files or streams. You can choose either one depending on your preference and availability.
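
To make the M3U format concrete, here is a minimal sketch in Python of how such a playlist can be read. The sample playlist text and URLs are made up for illustration; real IPTV playlists follow the same #EXTINF-line-plus-URL pattern, though they often carry extra attributes that this sketch simply ignores.

```python
# Minimal parser for extended M3U playlists (#EXTM3U / #EXTINF entries).
SAMPLE_PLAYLIST = """#EXTM3U
#EXTINF:-1,Example News Channel
http://example.com/streams/news.m3u8
#EXTINF:-1,Example Movies Channel
http://example.com/streams/movies.m3u8
"""

def parse_m3u(text: str) -> list:
    """Return a list of {'title': ..., 'url': ...} entries from an M3U playlist."""
    channels = []
    title = None
    for line in text.splitlines():
        line = line.strip()
        if not line or line == "#EXTM3U":
            continue
        if line.startswith("#EXTINF"):
            # '#EXTINF:<duration>,<title>' - keep only the title part.
            title = line.split(",", 1)[-1]
        elif not line.startswith("#"):
            channels.append({"title": title or line, "url": line})
            title = None
    return channels

if __name__ == "__main__":
    for channel in parse_m3u(SAMPLE_PLAYLIST):
        print(channel["title"], "->", channel["url"])
```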

-

Step 2: Add Your Playlist or URL

-

The next step is to add your playlist or URL to the app. A playlist is a file that contains a list of channels or streams that you want to watch. A URL is a web address that points to a single channel or stream that you want to watch. You can add either one depending on your source type. To add a playlist or URL, go to Settings > Add Playlist/URL and enter the name and link of your playlist or URL. You can also scan a QR code if available. Then tap on Save and go back to the main screen.

-

Step 3: Browse and Watch Live Channels

-

The final step is to browse and watch live channels from your playlist or URL. To do this, go to Settings > Select Playlist/URL and choose the one you added in the previous step. Then you will see a list of channels or streams available from your playlist or URL. You can swipe left or right to browse through them. You can also use the search function to find channels by name or category. To watch a channel or stream, just tap on its name and it will start playing on the built-in video player.

-

Conclusion

-

In this article, we have shown you how to download OTT Live APK for Android devices and how to stream live content with it. OTT Live APK is an app that lets you watch free live TV channels from India and around the world using the internet. It has many features and benefits that make it a great alternative to cable or satellite TV services. If you are looking for a way to watch free live TV channels from your Android device, then you should give OTT Live APK a try. You can download it from the link below and enjoy watching your favorite channels anytime, anywhere.

-

FAQs

-

Here are some frequently asked questions about OTT Live APK and OTT Live Streaming:

-
1. What are the requirements to use OTT Live APK?

   To use OTT Live APK, you need an Android device running Android 4.4 or higher, an internet connection with a speed of at least 2 Mbps, and a compatible video player such as MX Player or VLC Player.

2. Is OTT Live APK safe and legal to use?

   OTT Live APK is safe and legal to use as long as you download it from a trusted source and use it for personal and non-commercial purposes. However, some of the channels or streams that you watch may be subject to copyright or geo-restrictions, so you should always check the legality and availability of the content before watching.

3. How can I update OTT Live APK?

   To update OTT Live APK, you need to download the latest version of the app from the same source that you downloaded it from before and install it over the existing app. You can also check for updates within the app by going to Settings > Check for Updates.

4. How can I contact the developers of OTT Live APK?

   To contact the developers of OTT Live APK, you can use the feedback option within the app by going to Settings > Feedback. You can also visit their website or email them at support@overthetop.live.

5. What are some alternatives to OTT Live APK?

   Some alternatives to OTT Live APK are ThopTV, Oreo TV, RedBox TV, Live NetTV, and AOS TV. These are some other apps that let you watch free live TV channels from India and around the world on your Android device.

-
-
\ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/XTools Unlock Extra How to Download and Use the Best iCloud Bypass Software.md b/spaces/congsaPfin/Manga-OCR/logs/XTools Unlock Extra How to Download and Use the Best iCloud Bypass Software.md deleted file mode 100644 index a21781098cd4f4071a86ae658c3853d491627d1e..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/XTools Unlock Extra How to Download and Use the Best iCloud Bypass Software.md +++ /dev/null @@ -1,166 +0,0 @@ - -

Download XTools Unlock Extra: A Complete Guide

-

If you have an iPhone, iPad, or iPod touch that is locked by iCloud Activation Lock, you may be looking for a way to unlock it. iCloud Activation Lock is a security feature that prevents anyone from accessing your device without your Apple ID and password. It is designed to protect your data and privacy in case your device is lost or stolen.

-

download xtools unlock extra


Downloadhttps://urlca.com/2uOfza



-

However, iCloud Activation Lock can also be a problem if you buy a second-hand device that is still linked to the previous owner's iCloud account, or if you forget your own Apple ID and password. In such cases, you may need a tool that can help you bypass iCloud Activation Lock and use your device normally.

-

One of the tools that claims to do this is XTools Unlock Extra. In this article, we will give you a complete guide on how to download and use XTools Unlock Extra to unlock iCloud Activation Lock. We will also tell you whether this tool is reliable and trustworthy, and what the risks and drawbacks of using it are.

-

What is XTools Unlock Extra?

-

XTools Unlock Extra is a software program that can help you unlock an iCloud-locked iOS device. It is promoted as the best solution for second-hand devices that are still connected to the previous owner's iCloud account. The program's numerous promotional websites claim that it can unlock all iPhone models, including the iPhone 13 Pro Max, and all iPad models, including the iPad Pro, and that it works on all versions of iOS from iOS 7 onward.

-

Features of XTools Unlock Extra

-

According to the websites that promote XTools Unlock Extra, some of the features of this tool are:

-
- It can remove iCloud Activation Lock permanently.
- It can unlock any iOS device in minutes.
- It can support all iOS versions and models.
- It can work offline without internet connection.
- It can provide free updates and technical support.
-

Compatibility of XTools Unlock Extra

-

XTools Unlock Extra claims to be compatible with the following devices and iOS versions:

| Device | iOS Version |
| --- | --- |
| iPhone 5/5s/5c/6/6s/6 Plus/6s Plus/7/7 Plus/8/8 Plus/X/XS/XS Max/XR/11/11 Pro/11 Pro Max/12/12 mini/12 Pro/12 Pro Max/13/13 mini/13 Pro/13 Pro Max | iOS 7/iOS 8/iOS 9/iOS 10/iOS 11/iOS 12/iOS 13/iOS 14/iOS 15 |
| iPad Air/Air 2/Air 3/Air 4/Air (2020)/iPad mini 2/mini 3/mini 4/mini 5/iPad (2017)/iPad (2018)/iPad (2019)/iPad (2020)/iPad Pro (9.7-inch)/iPad Pro (10.5-inch)/iPad Pro (11-inch)/iPad Pro (12.9-inch) | iOS 7/iOS 8/iOS 9/iOS 10/iOS 11/iOS 12/iOS 13/iOS 14/iOS 15 |
| iPod touch (5th generation)/iPod touch (6th generation)/iPod touch (7th generation) | iOS 7/iOS 8/iOS 9/iOS 10/iOS 11/iOS 12/iOS 13/iOS 14/iOS 15 |
-

How to Download XTools Unlock Extra?

-

If you want to try XTools Unlock Extra, you will need to download it from a safe and reliable source. However, this is easier said than done, as there are many fake and malicious websites that claim to offer XTools Unlock Extra, but actually contain viruses, malware, or scams. Therefore, you need to be very careful when downloading this tool.

-


-

Step 1: Find a Safe Download Link

-

The first step is to find a safe and legitimate download link for XTools Unlock Extra. You can do this by searching on Google or other search engines, but you need to be wary of the results. Some of the websites that appear on the first page may not be trustworthy, and may try to trick you into downloading something else or paying for something you don't need.

-

One way to avoid this is to look for reviews and feedback from other users who have tried XTools Unlock Extra. You can find these on forums, blogs, social media, or YouTube. You can also check the reputation and credibility of the website that offers the download link, by looking at its domain name, design, content, and contact information.

-

A safe and legitimate download link for XTools Unlock Extra should have the following characteristics:

-
- It should be from the official website of XTools Unlock Extra, which is xtoolsunlock.com.
- It should have a secure connection, indicated by a padlock icon and https:// in the address bar.
- It should not ask you for any personal or financial information, such as your name, email, phone number, credit card number, etc.
- It should not redirect you to other websites or pop-ups that are unrelated to XTools Unlock Extra.
- It should not require you to complete any surveys, offers, or human verification tests before downloading.
-

Step 2: Install and Launch the Program

-

Once you have found a safe and legitimate download link for XTools Unlock Extra, you can proceed to install and launch the program on your computer. To do this, follow these steps:

-
1. Click on the download link and save the file to your computer.
2. Locate the file and double-click on it to run the installer.
3. Follow the instructions on the screen to complete the installation process.
4. Launch the program by clicking on its icon on your desktop or start menu.
-

Step 3: Sign in with Your ID and Password

-

The next step is to sign in with your ID and password that you received when you purchased XTools Unlock Extra. If you don't have an ID and password yet, you will need to buy one from the official website of XTools Unlock Extra. The price of XTools Unlock Extra varies depending on the device model and iOS version that you want to unlock. You can check the price list on the website before buying.

-

To sign in with your ID and password, follow these steps:

-
1. Enter your ID and password in the corresponding fields on the program's interface.
2. Click on the "Sign In" button to verify your credentials.
3. If your ID and password are valid, you will see a message that says "Welcome to XTools Unlock Extra".
-

How to Use XTools Unlock Extra to Unlock iCloud Activation Lock?

-

After signing in with your ID and password, you can start using XTools Unlock Extra to unlock iCloud Activation Lock on your iOS device. To do this, follow these steps:

-

Step 1: Select Your Device Model and iOS Version

-

The first step is to select your device model and iOS version that you want to unlock. You can do this by clicking on the drop-down menus on the program's interface. You will see a list of supported devices and iOS versions that you can choose from. Make sure that you select the correct device model and iOS version that match your device.

-

Step 2: Enter Your IMEI and Serial Numbers

-

The next step is to enter the IMEI and serial numbers of your device. These are unique identifiers that are used to verify your device's identity and eligibility for unlocking. You can find these numbers on the back of your device, on its box, or in the Settings app. You can also dial *#06# on your device to display your IMEI number.
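As an aside, a well-formed IMEI is 15 digits long and its last digit is a Luhn check digit, so you can sanity-check a number you have copied before typing it into any tool. The sketch below only illustrates that checksum; the sample value is a number commonly used in GSM documentation, and passing the check says nothing about a device's unlock eligibility.

```python
def imei_looks_valid(imei: str) -> bool:
    """Return True if the string is 15 digits and passes the Luhn checksum."""
    if len(imei) != 15 or not imei.isdigit():
        return False
    total = 0
    for i, ch in enumerate(imei):
        d = int(ch)
        if i % 2 == 1:          # double every second digit, left to right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# Example IMEI commonly used in GSM documentation, not a real device.
print(imei_looks_valid("490154203237518"))  # True
```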

-

To enter your IMEI and serial numbers, follow these steps:

-
1. Type your IMEI number in the field that says "Enter IMEI".
2. Type your serial number in the field that says "Enter Serial Number".
3. Click on the "Verify" button to check if your device is compatible with XTools Unlock Extra.
4. If your device is compatible, you will see a message that says "Your device is ready to be unlocked".
-

Step 3: Connect Your Device to the Computer

-

The third step is to connect your device to the computer using a USB cable. Make sure that your device is turned on and has enough battery power. You may also need to trust the computer on your device by tapping on "Trust" when prompted.

-

To connect your device to the computer, follow these steps:

-
1. Plug one end of the USB cable into your device's charging port.
2. Plug the other end of the USB cable into your computer's USB port.
3. Wait for the program to detect your device and show its information on the screen.
4. If your device is not detected, try using a different USB cable or port, or restart your device and computer.
-

Step 4: Start the Unlocking Process

-

The final step is to start the unlocking process and wait for it to finish. This may take several minutes depending on your device model and iOS version. Do not disconnect your device or close the program during this process, as it may cause errors or damage to your device.

-

To start the unlocking process, follow these steps:

-
1. Click on the "Start Unlock" button on the program's interface.
2. Follow the instructions on the screen to put your device in DFU mode or recovery mode.
3. Wait for the program to download and install the firmware file on your device.
4. Wait for the program to remove iCloud Activation Lock from your device.
5. When the process is completed, you will see a message that says "Congratulations! Your device has been unlocked".
-

Is XTools Unlock Extra Reliable and Trustworthy?

-

XTools Unlock Extra claims to be a reliable and trustworthy tool that can unlock iCloud Activation Lock permanently and safely. However, there are some doubts and concerns about its legitimacy and effectiveness. Here are some of the pros and cons of XTools Unlock Extra that you should know before using it.

-

Pros and Cons of XTools Unlock Extra

-

Some of the pros of XTools Unlock Extra are:

-
- It can unlock any iOS device in minutes.
- It can support all iOS versions and models.
- It can work offline without internet connection.
- It can provide free updates and technical support.
-

Some of the cons of XTools Unlock Extra are:

-
- It is not free. You have to pay for an ID and password to use it.
- It is not easy to find a safe and legitimate download link for it.
- It may not work for all devices and situations.
- It may cause errors or damage to your device if not used properly.
-

Risks and Drawbacks of XTools Unlock Extra

-

Besides the cons mentioned above, there are also some risks and drawbacks of using XTools Unlock Extra that you should be aware of. These include:

-
- You may lose your data and settings on your device after unlocking it with XTools Unlock Extra.
- You may void your warranty or violate Apple's terms of service by using XTools Unlock Extra.
- You may expose your device to viruses, malware, or scams by downloading XTools Unlock Extra from untrustworthy sources.
- You may face legal issues or consequences if you use XTools Unlock Extra to unlock a stolen or lost device.
-

Conclusion and FAQs

-

XTools Unlock Extra is a software program that claims to help you unlock iCloud Activation Lock on your iOS device. It promises to unlock any iOS device in minutes, support all iOS versions and models, work offline without an internet connection, and provide free updates and technical support. However, it also has some drawbacks and risks that you should consider before using it. It is not free, not easy to find, not guaranteed to work, and may cause problems for your device. Therefore, you should use it with caution and at your own risk.

-

If you have any questions about XTools Unlock Extra, you may find the answers in the following FAQs:

-

Q: Is XTools Unlock Extra legal?

-

A: XTools Unlock Extra is not illegal per se, but it may violate Apple's terms of service and warranty policy. It may also be illegal if you use it to unlock a device that does not belong to you or that is reported as stolen or lost. Therefore, you should only use XTools Unlock Extra for legitimate purposes and with the consent of the original owner of the device.

-

Q: Does XTools Unlock Extra work for all iCloud locked devices?

-

A: XTools Unlock Extra claims to work for all iCloud locked devices, but this may not be true in reality. Some devices may have a different or more advanced iCloud Activation Lock mechanism that XTools Unlock Extra cannot bypass. Some devices may also have other issues or errors that prevent XTools Unlock Extra from working properly. Therefore, you should not rely on XTools Unlock Extra as the only solution for unlocking iCloud Activation Lock.

-

Q: Will XTools Unlock Extra erase my data and settings on my device?

-

A: Yes, XTools Unlock Extra will erase your data and settings on your device after unlocking it. This is because XTools Unlock Extra will install a new firmware file on your device that will overwrite your existing data and settings. Therefore, you should backup your data and settings before using XTools Unlock Extra, or use another method that can preserve your data and settings.

-

Q: Will XTools Unlock Extra affect the performance or functionality of my device?

-

A: Possibly, XTools Unlock Extra may affect the performance or functionality of your device after unlocking it. This is because XTools Unlock Extra may not be compatible with your device model or iOS version, or may cause some errors or glitches on your device. Therefore, you should check the compatibility of XTools Unlock Extra with your device before using it, and be prepared to restore your device if something goes wrong.

-

Q: Is there a better alternative to XTools Unlock Extra?

-

A: Yes, there may be a better alternative to XTools Unlock Extra that can unlock iCloud Activation Lock more safely and effectively. One of the best alternatives is to contact Apple or the original owner of the device and ask them to remove iCloud Activation Lock from your device. This is the most official and reliable way to unlock iCloud Activation Lock, but it may require some proof of purchase or ownership. Another alternative is to use a reputable and professional iCloud unlock service that can unlock iCloud Activation Lock remotely and without erasing your data and settings. However, this may cost some money and time, and may not guarantee success.

-
-
\ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/assets/3d Tool V10 Premium Crackl REPACK.md b/spaces/contluForse/HuggingGPT/assets/3d Tool V10 Premium Crackl REPACK.md deleted file mode 100644 index f1f6bee348dad14c770a7f18c7bf1865871fd020..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/3d Tool V10 Premium Crackl REPACK.md +++ /dev/null @@ -1,9 +0,0 @@ - -

You can also set any of the parameters for the tool itself to get a better-quality cut. These include whether to use horizontal or vertical toolpaths, the number of passes to make, the spacing between the passes (between the toolbeams), the speed of the tool, and whether to use the same or different toolpaths for the down and up strokes of the tool. We can also look at the data exported from the toolpath as text files, which contain the toolpath data in a simple tabular form.
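As an illustration of what such a tabular export could look like, here is a small sketch that reads a hypothetical text file with one X, Y, Z position per line and reports the XY travel length; the column layout and values are assumptions for the example, not the software's documented export format.

```python
# Sketch only: assumes a whitespace-separated export with one "X Y Z" row per line.
# The real column layout of the exported toolpath files may differ.
sample_export = """0.0   0.0   1.0
0.0   0.0  -0.5
10.0  0.0  -0.5
10.0  5.0  -0.5
"""

def read_toolpath(text):
    """Parse rows of X/Y/Z coordinates into a list of (x, y, z) tuples."""
    points = []
    for line in text.splitlines():
        parts = line.split()
        if len(parts) == 3:
            points.append(tuple(float(p) for p in parts))
    return points

points = read_toolpath(sample_export)
# Quick sanity check: total XY travel length of the path.
length = sum(
    ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
    for (x1, y1, _), (x2, y2, _) in zip(points, points[1:])
)
print(len(points), round(length, 2))  # 4 15.0
```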

-

We can also view the toolpath in real time to check that the changes we made have produced the toolpaths we wanted. You can also set the tool to run automatically, or use the output display to see how the cut would look in this material. This allows you to do some quick checks to see if the cut looks right. You can then re-run the tool in seconds if it does not, or use it to test different materials with different settings before you get to the machine. There is also a 3D preview function which lets you view the toolpaths in real time to check the cut before you start the tool.

-

3d Tool V10 Premium Crackl


Download File >>>>> https://ssurll.com/2uzxTH



-

Using the detailed features of KeyShot, basic shapes can be rotated, moved, scaled, masked, filled and decimated. The Homing tool will automatically home in on the edge of a shape to maximize tool size.

-

The interface was easily my favorite aspect of the software. It's clear and easy to use, and it contains many features such as 2D and 3D slicing, combined with LiveLink for sharing without losing your work, easy-to-use tool shaders, and a huge library of presets.

-

-
-
\ No newline at end of file diff --git a/spaces/cyliawardana/Womens_Clothing_Sentiment_Analysis/README.md b/spaces/cyliawardana/Womens_Clothing_Sentiment_Analysis/README.md deleted file mode 100644 index 2c50a375695c027bdb4d5679d99ff92615f7d98c..0000000000000000000000000000000000000000 --- a/spaces/cyliawardana/Womens_Clothing_Sentiment_Analysis/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Womens Clothing Sentiment Analysis -emoji: 🏢 -colorFrom: blue -colorTo: purple -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/danterivers/music-generation-samples/audiocraft/models/encodec.py b/spaces/danterivers/music-generation-samples/audiocraft/models/encodec.py deleted file mode 100644 index 69621a695887b0b41614c51cae020f6fd0af221d..0000000000000000000000000000000000000000 --- a/spaces/danterivers/music-generation-samples/audiocraft/models/encodec.py +++ /dev/null @@ -1,302 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from abc import ABC, abstractmethod -import typing as tp - -from einops import rearrange -import torch -from torch import nn - -from .. import quantization as qt - - -class CompressionModel(ABC, nn.Module): - - @abstractmethod - def forward(self, x: torch.Tensor) -> qt.QuantizedResult: - ... - - @abstractmethod - def encode(self, x: torch.Tensor) -> tp.Tuple[torch.Tensor, tp.Optional[torch.Tensor]]: - """See `EncodecModel.encode`""" - ... - - @abstractmethod - def decode(self, codes: torch.Tensor, scale: tp.Optional[torch.Tensor] = None): - """See `EncodecModel.decode`""" - ... - - @property - @abstractmethod - def channels(self) -> int: - ... - - @property - @abstractmethod - def frame_rate(self) -> int: - ... - - @property - @abstractmethod - def sample_rate(self) -> int: - ... - - @property - @abstractmethod - def cardinality(self) -> int: - ... - - @property - @abstractmethod - def num_codebooks(self) -> int: - ... - - @property - @abstractmethod - def total_codebooks(self) -> int: - ... - - @abstractmethod - def set_num_codebooks(self, n: int): - """Set the active number of codebooks used by the quantizer. - """ - ... - - -class EncodecModel(CompressionModel): - """Encodec model operating on the raw waveform. - - Args: - encoder (nn.Module): Encoder network. - decoder (nn.Module): Decoder network. - quantizer (qt.BaseQuantizer): Quantizer network. - frame_rate (int): Frame rate for the latent representation. - sample_rate (int): Audio sample rate. - channels (int): Number of audio channels. - causal (bool): Whether to use a causal version of the model. - renormalize (bool): Whether to renormalize the audio before running the model. - """ - # we need assignement to override the property in the abstract class, - # I couldn't find a better way... 
- frame_rate: int = 0 - sample_rate: int = 0 - channels: int = 0 - - def __init__(self, - encoder: nn.Module, - decoder: nn.Module, - quantizer: qt.BaseQuantizer, - frame_rate: int, - sample_rate: int, - channels: int, - causal: bool = False, - renormalize: bool = False): - super().__init__() - self.encoder = encoder - self.decoder = decoder - self.quantizer = quantizer - self.frame_rate = frame_rate - self.sample_rate = sample_rate - self.channels = channels - self.renormalize = renormalize - self.causal = causal - if self.causal: - # we force disabling here to avoid handling linear overlap of segments - # as supported in original EnCodec codebase. - assert not self.renormalize, 'Causal model does not support renormalize' - - @property - def total_codebooks(self): - """Total number of quantizer codebooks available. - """ - return self.quantizer.total_codebooks - - @property - def num_codebooks(self): - """Active number of codebooks used by the quantizer. - """ - return self.quantizer.num_codebooks - - def set_num_codebooks(self, n: int): - """Set the active number of codebooks used by the quantizer. - """ - self.quantizer.set_num_codebooks(n) - - @property - def cardinality(self): - """Cardinality of each codebook. - """ - return self.quantizer.bins - - def preprocess(self, x: torch.Tensor) -> tp.Tuple[torch.Tensor, tp.Optional[torch.Tensor]]: - scale: tp.Optional[torch.Tensor] - if self.renormalize: - mono = x.mean(dim=1, keepdim=True) - volume = mono.pow(2).mean(dim=2, keepdim=True).sqrt() - scale = 1e-8 + volume - x = x / scale - scale = scale.view(-1, 1) - else: - scale = None - return x, scale - - def postprocess(self, - x: torch.Tensor, - scale: tp.Optional[torch.Tensor] = None) -> torch.Tensor: - if scale is not None: - assert self.renormalize - x = x * scale.view(-1, 1, 1) - return x - - def forward(self, x: torch.Tensor) -> qt.QuantizedResult: - assert x.dim() == 3 - length = x.shape[-1] - x, scale = self.preprocess(x) - - emb = self.encoder(x) - q_res = self.quantizer(emb, self.frame_rate) - out = self.decoder(q_res.x) - - # remove extra padding added by the encoder and decoder - assert out.shape[-1] >= length, (out.shape[-1], length) - out = out[..., :length] - - q_res.x = self.postprocess(out, scale) - - return q_res - - def encode(self, x: torch.Tensor) -> tp.Tuple[torch.Tensor, tp.Optional[torch.Tensor]]: - """Encode the given input tensor to quantized representation along with scale parameter. - - Args: - x (torch.Tensor): Float tensor of shape [B, C, T] - - Returns: - codes, scale (tp.Tuple[torch.Tensor, torch.Tensor]): Tuple composed of: - codes a float tensor of shape [B, K, T] with K the number of codebooks used and T the timestep. - scale a float tensor containing the scale for audio renormalizealization. - """ - assert x.dim() == 3 - x, scale = self.preprocess(x) - emb = self.encoder(x) - codes = self.quantizer.encode(emb) - return codes, scale - - def decode(self, codes: torch.Tensor, scale: tp.Optional[torch.Tensor] = None): - """Decode the given codes to a reconstructed representation, using the scale to perform - audio denormalization if needed. - - Args: - codes (torch.Tensor): Int tensor of shape [B, K, T] - scale (tp.Optional[torch.Tensor]): Float tensor containing the scale value. - - Returns: - out (torch.Tensor): Float tensor of shape [B, C, T], the reconstructed audio. 
- """ - emb = self.quantizer.decode(codes) - out = self.decoder(emb) - out = self.postprocess(out, scale) - # out contains extra padding added by the encoder and decoder - return out - - -class FlattenedCompressionModel(CompressionModel): - """Wraps a CompressionModel and flatten its codebooks, e.g. - instead of returning [B, K, T], return [B, S, T * (K // S)] with - S the number of codebooks per step, and `K // S` the number of 'virtual steps' - for each real time step. - - Args: - model (CompressionModel): compression model to wrap. - codebooks_per_step (int): number of codebooks to keep per step, - this must divide the number of codebooks provided by the wrapped model. - extend_cardinality (bool): if True, and for instance if codebooks_per_step = 1, - if each codebook has a cardinality N, then the first codebook will - use the range [0, N - 1], and the second [N, 2 N - 1] etc. - On decoding, this can lead to potentially invalid sequences. - Any invalid entry will be silently remapped to the proper range - with a modulo. - """ - def __init__(self, model: CompressionModel, codebooks_per_step: int = 1, - extend_cardinality: bool = True): - super().__init__() - self.model = model - self.codebooks_per_step = codebooks_per_step - self.extend_cardinality = extend_cardinality - - @property - def total_codebooks(self): - return self.model.total_codebooks - - @property - def num_codebooks(self): - """Active number of codebooks used by the quantizer. - - ..Warning:: this reports the number of codebooks after the flattening - of the codebooks! - """ - assert self.model.num_codebooks % self.codebooks_per_step == 0 - return self.codebooks_per_step - - def set_num_codebooks(self, n: int): - """Set the active number of codebooks used by the quantizer. - - ..Warning:: this sets the number of codebooks **before** the flattening - of the codebooks. - """ - assert n % self.codebooks_per_step == 0 - self.model.set_num_codebooks(n) - - @property - def num_virtual_steps(self) -> int: - """Return the number of virtual steps, e.g. one real step - will be split into that many steps. - """ - return self.model.num_codebooks // self.codebooks_per_step - - @property - def frame_rate(self) -> int: - return self.model.frame_rate * self.num_virtual_steps - - @property - def sample_rate(self) -> int: - return self.model.sample_rate - - @property - def channels(self) -> int: - return self.model.channels - - @property - def cardinality(self): - """Cardinality of each codebook. - """ - if self.extend_cardinality: - return self.model.cardinality * self.num_virtual_steps - else: - return self.model.cardinality - - def forward(self, x: torch.Tensor) -> qt.QuantizedResult: - raise NotImplementedError("Not supported, use encode and decode.") - - def encode(self, x: torch.Tensor) -> tp.Tuple[torch.Tensor, tp.Optional[torch.Tensor]]: - indices, scales = self.model.encode(x) - B, K, T = indices.shape - indices = rearrange(indices, 'b (k v) t -> b k t v', k=self.codebooks_per_step) - if self.extend_cardinality: - for virtual_step in range(1, self.num_virtual_steps): - indices[..., virtual_step] += self.model.cardinality * virtual_step - indices = rearrange(indices, 'b k t v -> b k (t v)') - return (indices, scales) - - def decode(self, codes: torch.Tensor, scale: tp.Optional[torch.Tensor] = None): - B, K, T = codes.shape - assert T % self.num_virtual_steps == 0 - codes = rearrange(codes, 'b k (t v) -> b (k v) t', v=self.num_virtual_steps) - # We silently ignore potential errors from the LM when - # using extend_cardinality. 
- codes = codes % self.model.cardinality - return self.model.decode(codes, scale) diff --git a/spaces/davidscmx/fire_detector/app.py b/spaces/davidscmx/fire_detector/app.py deleted file mode 100644 index 9ef8001906d48edcc76954eb32bf5068ca4c4cdb..0000000000000000000000000000000000000000 --- a/spaces/davidscmx/fire_detector/app.py +++ /dev/null @@ -1,30 +0,0 @@ -# AUTOGENERATED! DO NOT EDIT! File to edit: ../fire_detector_gradio.ipynb. - -# %% auto 0 -__all__ = ['learner', 'categories', 'image', 'label', 'examples', 'intf', 'classify_image'] - -# %% ../fire_detector_gradio.ipynb 1 -from fastai.vision.all import load_learner -import gradio as gr - -# %% ../fire_detector_gradio.ipynb 3 -learner = load_learner("model.pkl") - -# %% ../fire_detector_gradio.ipynb 4 -categories = ('fireplace fire', "building fire", "wild fire", "lithium battery fire", "bonfire") - - -def classify_image(im): - pred, idx, probs = learner.predict(im) - return dict(zip(categories, map(float, probs))) - - -# %% ../fire_detector_gradio.ipynb 6 -image = gr.inputs.Image(shape=(256, 256)) -label = gr.outputs.Label() -examples = ["wildfire.jpg", "bonfire.jpg", "fireplace.jpg", "lithium.jpg"] - -intf = gr.Interface(fn=classify_image, inputs=image, - outputs=label, examples=examples) - -intf.launch(inline=False) \ No newline at end of file diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/attr/_cmp.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/attr/_cmp.py deleted file mode 100644 index d9cbe22cde35ff08abb0f1261f2173091490e02f..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/attr/_cmp.py +++ /dev/null @@ -1,155 +0,0 @@ -# SPDX-License-Identifier: MIT - - -import functools -import types - -from ._make import _make_ne - - -_operation_names = {"eq": "==", "lt": "<", "le": "<=", "gt": ">", "ge": ">="} - - -def cmp_using( - eq=None, - lt=None, - le=None, - gt=None, - ge=None, - require_same_type=True, - class_name="Comparable", -): - """ - Create a class that can be passed into `attrs.field`'s ``eq``, ``order``, - and ``cmp`` arguments to customize field comparison. - - The resulting class will have a full set of ordering methods if at least - one of ``{lt, le, gt, ge}`` and ``eq`` are provided. - - :param Optional[callable] eq: `callable` used to evaluate equality of two - objects. - :param Optional[callable] lt: `callable` used to evaluate whether one - object is less than another object. - :param Optional[callable] le: `callable` used to evaluate whether one - object is less than or equal to another object. - :param Optional[callable] gt: `callable` used to evaluate whether one - object is greater than another object. - :param Optional[callable] ge: `callable` used to evaluate whether one - object is greater than or equal to another object. - - :param bool require_same_type: When `True`, equality and ordering methods - will return `NotImplemented` if objects are not of the same type. - - :param Optional[str] class_name: Name of class. Defaults to 'Comparable'. - - See `comparison` for more details. - - .. versionadded:: 21.1.0 - """ - - body = { - "__slots__": ["value"], - "__init__": _make_init(), - "_requirements": [], - "_is_comparable_to": _is_comparable_to, - } - - # Add operations. 
- num_order_functions = 0 - has_eq_function = False - - if eq is not None: - has_eq_function = True - body["__eq__"] = _make_operator("eq", eq) - body["__ne__"] = _make_ne() - - if lt is not None: - num_order_functions += 1 - body["__lt__"] = _make_operator("lt", lt) - - if le is not None: - num_order_functions += 1 - body["__le__"] = _make_operator("le", le) - - if gt is not None: - num_order_functions += 1 - body["__gt__"] = _make_operator("gt", gt) - - if ge is not None: - num_order_functions += 1 - body["__ge__"] = _make_operator("ge", ge) - - type_ = types.new_class( - class_name, (object,), {}, lambda ns: ns.update(body) - ) - - # Add same type requirement. - if require_same_type: - type_._requirements.append(_check_same_type) - - # Add total ordering if at least one operation was defined. - if 0 < num_order_functions < 4: - if not has_eq_function: - # functools.total_ordering requires __eq__ to be defined, - # so raise early error here to keep a nice stack. - raise ValueError( - "eq must be define is order to complete ordering from " - "lt, le, gt, ge." - ) - type_ = functools.total_ordering(type_) - - return type_ - - -def _make_init(): - """ - Create __init__ method. - """ - - def __init__(self, value): - """ - Initialize object with *value*. - """ - self.value = value - - return __init__ - - -def _make_operator(name, func): - """ - Create operator method. - """ - - def method(self, other): - if not self._is_comparable_to(other): - return NotImplemented - - result = func(self.value, other.value) - if result is NotImplemented: - return NotImplemented - - return result - - method.__name__ = f"__{name}__" - method.__doc__ = ( - f"Return a {_operation_names[name]} b. Computed by attrs." - ) - - return method - - -def _is_comparable_to(self, other): - """ - Check whether `other` is comparable to `self`. - """ - for func in self._requirements: - if not func(self, other): - return False - return True - - -def _check_same_type(self, other): - """ - Return True if *self* and *other* are of the same type, False otherwise. 
- """ - return other.value.__class__ is self.value.__class__ diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/pens/cairoPen.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/pens/cairoPen.py deleted file mode 100644 index 9cd5da9128fc0054cf748de703540afa7685b7b2..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/pens/cairoPen.py +++ /dev/null @@ -1,26 +0,0 @@ -"""Pen to draw to a Cairo graphics library context.""" - -from fontTools.pens.basePen import BasePen - - -__all__ = ["CairoPen"] - - -class CairoPen(BasePen): - """Pen to draw to a Cairo graphics library context.""" - - def __init__(self, glyphSet, context): - BasePen.__init__(self, glyphSet) - self.context = context - - def _moveTo(self, p): - self.context.move_to(*p) - - def _lineTo(self, p): - self.context.line_to(*p) - - def _curveToOne(self, p1, p2, p3): - self.context.curve_to(*p1, *p2, *p3) - - def _closePath(self): - self.context.close_path() diff --git a/spaces/declare-lab/tango/diffusers/src/diffusers/utils/dummy_torch_and_transformers_objects.py b/spaces/declare-lab/tango/diffusers/src/diffusers/utils/dummy_torch_and_transformers_objects.py deleted file mode 100644 index cf85ff157f5797703ff9200a6e306a2ede80a707..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/src/diffusers/utils/dummy_torch_and_transformers_objects.py +++ /dev/null @@ -1,512 +0,0 @@ -# This file is autogenerated by the command `make fix-copies`, do not edit. -from ..utils import DummyObject, requires_backends - - -class TextualInversionLoaderMixin(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class AltDiffusionImg2ImgPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class AltDiffusionPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class AudioLDMPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class CycleDiffusionPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - 
def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class LDMTextToImagePipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class PaintByExamplePipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class SemanticStableDiffusionPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class StableDiffusionAttendAndExcitePipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class StableDiffusionControlNetPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class StableDiffusionDepth2ImgPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class StableDiffusionImageVariationPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class StableDiffusionImg2ImgPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - 
requires_backends(cls, ["torch", "transformers"]) - - -class StableDiffusionInpaintPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class StableDiffusionInpaintPipelineLegacy(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class StableDiffusionInstructPix2PixPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class StableDiffusionLatentUpscalePipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class StableDiffusionModelEditingPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class StableDiffusionPanoramaPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class StableDiffusionPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class StableDiffusionPipelineSafe(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class StableDiffusionPix2PixZeroPipeline(metaclass=DummyObject): - _backends = ["torch", 
"transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class StableDiffusionSAGPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class StableDiffusionUpscalePipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class StableUnCLIPImg2ImgPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class StableUnCLIPPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class TextToVideoSDPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class UnCLIPImageVariationPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class UnCLIPPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class VersatileDiffusionDualGuidedPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, 
["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class VersatileDiffusionImageVariationPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class VersatileDiffusionPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class VersatileDiffusionTextToImagePipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - -class VQDiffusionPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers"]) diff --git a/spaces/declare-lab/tango/diffusers/tests/fixtures/custom_pipeline/what_ever.py b/spaces/declare-lab/tango/diffusers/tests/fixtures/custom_pipeline/what_ever.py deleted file mode 100644 index a8af08d3980a6e9dbd5af240792edf013cef7313..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/tests/fixtures/custom_pipeline/what_ever.py +++ /dev/null @@ -1,101 +0,0 @@ -# Copyright 2023 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and - -# limitations under the License. - - -from typing import Optional, Tuple, Union - -import torch - -from diffusers.pipeline_utils import DiffusionPipeline, ImagePipelineOutput - - -class CustomLocalPipeline(DiffusionPipeline): - r""" - This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the - library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) - - Parameters: - unet ([`UNet2DModel`]): U-Net architecture to denoise the encoded image. - scheduler ([`SchedulerMixin`]): - A scheduler to be used in combination with `unet` to denoise the encoded image. Can be one of - [`DDPMScheduler`], or [`DDIMScheduler`]. 
- """ - - def __init__(self, unet, scheduler): - super().__init__() - self.register_modules(unet=unet, scheduler=scheduler) - - @torch.no_grad() - def __call__( - self, - batch_size: int = 1, - generator: Optional[torch.Generator] = None, - num_inference_steps: int = 50, - output_type: Optional[str] = "pil", - return_dict: bool = True, - **kwargs, - ) -> Union[ImagePipelineOutput, Tuple]: - r""" - Args: - batch_size (`int`, *optional*, defaults to 1): - The number of images to generate. - generator (`torch.Generator`, *optional*): - A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation - deterministic. - eta (`float`, *optional*, defaults to 0.0): - The eta parameter which controls the scale of the variance (0 is DDIM and 1 is one type of DDPM). - num_inference_steps (`int`, *optional*, defaults to 50): - The number of denoising steps. More denoising steps usually lead to a higher quality image at the - expense of slower inference. - output_type (`str`, *optional*, defaults to `"pil"`): - The output format of the generate image. Choose between - [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~pipeline_utils.ImagePipelineOutput`] instead of a plain tuple. - - Returns: - [`~pipeline_utils.ImagePipelineOutput`] or `tuple`: [`~pipelines.utils.ImagePipelineOutput`] if - `return_dict` is True, otherwise a `tuple. When returning a tuple, the first element is a list with the - generated images. - """ - - # Sample gaussian noise to begin loop - image = torch.randn( - (batch_size, self.unet.in_channels, self.unet.sample_size, self.unet.sample_size), - generator=generator, - ) - image = image.to(self.device) - - # set step values - self.scheduler.set_timesteps(num_inference_steps) - - for t in self.progress_bar(self.scheduler.timesteps): - # 1. predict noise model_output - model_output = self.unet(image, t).sample - - # 2. 
predict previous mean of image x_t-1 and add variance depending on eta - # eta corresponds to η in paper and should be between [0, 1] - # do x_t -> x_t-1 - image = self.scheduler.step(model_output, t, image).prev_sample - - image = (image / 2 + 0.5).clamp(0, 1) - image = image.cpu().permute(0, 2, 3, 1).numpy() - if output_type == "pil": - image = self.numpy_to_pil(image) - - if not return_dict: - return (image,), "This is a local test" - - return ImagePipelineOutput(images=image), "This is a local test" diff --git a/spaces/deepskyreal/ai-mixer-hotchpotch/sad_talker/src/audio2exp_models/audio2exp.py b/spaces/deepskyreal/ai-mixer-hotchpotch/sad_talker/src/audio2exp_models/audio2exp.py deleted file mode 100644 index 9e79a929560592687a505e13188796e2b0ca8772..0000000000000000000000000000000000000000 --- a/spaces/deepskyreal/ai-mixer-hotchpotch/sad_talker/src/audio2exp_models/audio2exp.py +++ /dev/null @@ -1,41 +0,0 @@ -from tqdm import tqdm -import torch -from torch import nn - - -class Audio2Exp(nn.Module): - def __init__(self, netG, cfg, device, prepare_training_loss=False): - super(Audio2Exp, self).__init__() - self.cfg = cfg - self.device = device - self.netG = netG.to(device) - - def test(self, batch): - - mel_input = batch['indiv_mels'] # bs T 1 80 16 - bs = mel_input.shape[0] - T = mel_input.shape[1] - - exp_coeff_pred = [] - - for i in tqdm(range(0, T, 10),'audio2exp:'): # every 10 frames - - current_mel_input = mel_input[:,i:i+10] - - #ref = batch['ref'][:, :, :64].repeat((1,current_mel_input.shape[1],1)) #bs T 64 - ref = batch['ref'][:, :, :64][:, i:i+10] - ratio = batch['ratio_gt'][:, i:i+10] #bs T - - audiox = current_mel_input.view(-1, 1, 80, 16) # bs*T 1 80 16 - - curr_exp_coeff_pred = self.netG(audiox, ref, ratio) # bs T 64 - - exp_coeff_pred += [curr_exp_coeff_pred] - - # BS x T x 64 - results_dict = { - 'exp_coeff_pred': torch.cat(exp_coeff_pred, axis=1) - } - return results_dict - - diff --git a/spaces/derful/Chatgpt-academic/check_proxy.py b/spaces/derful/Chatgpt-academic/check_proxy.py deleted file mode 100644 index 39c89728cce1a8675a1a3189b00356a83af6e31b..0000000000000000000000000000000000000000 --- a/spaces/derful/Chatgpt-academic/check_proxy.py +++ /dev/null @@ -1,26 +0,0 @@ - -def check_proxy(proxies): - import requests - proxies_https = proxies['https'] if proxies is not None else '无' - try: - response = requests.get("https://ipapi.co/json/", proxies=proxies, timeout=4) - data = response.json() - print(f'查询代理的地理位置,返回的结果是{data}') - if 'country_name' in data: - country = data['country_name'] - result = f"代理配置 {proxies_https}, 代理所在地:{country}" - elif 'error' in data: - result = f"代理配置 {proxies_https}, 代理所在地:未知,IP查询频率受限" - print(result) - return result - except: - result = f"代理配置 {proxies_https}, 代理所在地查询超时,代理可能无效" - print(result) - return result - - -if __name__ == '__main__': - import os; os.environ['no_proxy'] = '*' # 避免代理网络产生意外污染 - try: from config_private import proxies # 放自己的秘密如API和代理网址 os.path.exists('config_private.py') - except: from config import proxies - check_proxy(proxies) \ No newline at end of file diff --git a/spaces/derful/Chatgpt-academic/crazy_functions/test_project/cpp/cppipc/waiter.h b/spaces/derful/Chatgpt-academic/crazy_functions/test_project/cpp/cppipc/waiter.h deleted file mode 100644 index ee45fe3517be95ac1688a3e3540189edeb0d860c..0000000000000000000000000000000000000000 --- a/spaces/derful/Chatgpt-academic/crazy_functions/test_project/cpp/cppipc/waiter.h +++ /dev/null @@ -1,83 +0,0 @@ -#pragma once - -#include -#include -#include -#include - 
-#include "libipc/def.h" -#include "libipc/mutex.h" -#include "libipc/condition.h" -#include "libipc/platform/detail.h" - -namespace ipc { -namespace detail { - -class waiter { - ipc::sync::condition cond_; - ipc::sync::mutex lock_; - std::atomic quit_ {false}; - -public: - static void init(); - - waiter() = default; - waiter(char const *name) { - open(name); - } - - ~waiter() { - close(); - } - - bool valid() const noexcept { - return cond_.valid() && lock_.valid(); - } - - bool open(char const *name) noexcept { - quit_.store(false, std::memory_order_relaxed); - if (!cond_.open((std::string{"_waiter_cond_"} + name).c_str())) { - return false; - } - if (!lock_.open((std::string{"_waiter_lock_"} + name).c_str())) { - cond_.close(); - return false; - } - return valid(); - } - - void close() noexcept { - cond_.close(); - lock_.close(); - } - - template - bool wait_if(F &&pred, std::uint64_t tm = ipc::invalid_value) noexcept { - IPC_UNUSED_ std::lock_guard guard {lock_}; - while ([this, &pred] { - return !quit_.load(std::memory_order_relaxed) - && std::forward(pred)(); - }()) { - if (!cond_.wait(lock_, tm)) return false; - } - return true; - } - - bool notify() noexcept { - std::lock_guard{lock_}; // barrier - return cond_.notify(lock_); - } - - bool broadcast() noexcept { - std::lock_guard{lock_}; // barrier - return cond_.broadcast(lock_); - } - - bool quit_waiting() { - quit_.store(true, std::memory_order_release); - return broadcast(); - } -}; - -} // namespace detail -} // namespace ipc diff --git a/spaces/diacanFperku/AutoGPT/Cara Aktivasi Windows 7 Ultimate Yang Sudah Expired NEW!.md b/spaces/diacanFperku/AutoGPT/Cara Aktivasi Windows 7 Ultimate Yang Sudah Expired NEW!.md deleted file mode 100644 index 6b2d58c06a8a5ba8e1310daf36e3fc2a5dda06d9..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Cara Aktivasi Windows 7 Ultimate Yang Sudah Expired NEW!.md +++ /dev/null @@ -1,6 +0,0 @@ -

Cara Aktivasi Windows 7 Ultimate Yang Sudah Expired


DOWNLOAD: https://gohhs.com/2uFVFG



- -April 5, 2018 Get HideMyAss VPN » or Try 7-day Free VPN Trial ... 0 Crack + License key 2020 (Mac-Win) admin 2020-10-22 2 HMA VPN 5 Crack with ... 1 user, 1 year] Expired:3/5/2022 Auto Renewal:[true] *** Hidden text: You do ... This is an activation code you can use; register with your email ... 4d29de3e1b
-
-
-

diff --git a/spaces/diacanFperku/AutoGPT/HD Online Player (monarch Fx Creator Video Mixing Soft) High Quality.md b/spaces/diacanFperku/AutoGPT/HD Online Player (monarch Fx Creator Video Mixing Soft) High Quality.md deleted file mode 100644 index 939e9b84244a425dc38570df6173af63e376f9e6..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/HD Online Player (monarch Fx Creator Video Mixing Soft) High Quality.md +++ /dev/null @@ -1,6 +0,0 @@ -

HD Online Player (monarch fx creator Video mixing Soft)


Download Zip: https://gohhs.com/2uFUlQ



- -master s1000 hd decoder software download Source: Dish Master SR-B10 HD ... Download and enjoy Now HD video player 2020 is your best video mate in India to enjoy ... 2016 uobdii OBD2 Code Scanner 0 Free download Creator C110 V4. ... Monarch HDX Encoder Appliance Matrox Monarch HDX is a dual-channel H. 4d29de3e1b
-
-
-

diff --git a/spaces/diacanFperku/AutoGPT/PHPRad Vue 2.6.3 PHPRad Classic 2.5.8.md b/spaces/diacanFperku/AutoGPT/PHPRad Vue 2.6.3 PHPRad Classic 2.5.8.md deleted file mode 100644 index 63e091f25a6bbbf8f2ab89493e1fa2ea9af2af01..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/PHPRad Vue 2.6.3 PHPRad Classic 2.5.8.md +++ /dev/null @@ -1,9 +0,0 @@ -
-

PHPRad's XML layout language lets you create and edit layouts directly from the interface, and create or edit your own styles. The resulting template is used to generate your application and database. This feature is great for beginners, and the most complete PHPRad template library comes preloaded.

-

PHPRad Vue 2.6.3 PHPRad Classic 2.5.8


Download Zip: https://gohhs.com/2uFTOi



-

If you build an application from templates, you can easily regenerate it. The position of each template is remembered, so PHPRad can regenerate the application and database code. You can also rename the entire template library.

-

The PHPRad designer can create a DHTML frame from the position of the designer preview, a designer element, the designer's right or left side, the add form, the edit form, the statistics view, the URL address panel, the viewer, the browser, the banner, the page, the avatar, the menu, and more. All of these are supported in the designer.

-

PHPRad Classic lets users design and build PHP applications on top of PHP, Bootstrap, jQuery and other base code. With a few clicks, users can build an application and have it running in no time. Building an application takes only a few steps: create a new project, connect it to the database, select an icon pack, and you are done. Users can then modify and configure the application design, and set page properties, page modules and more as desired. It includes an application menu, user record management, user control management, different page themes, custom CSS and different form layouts. You can also download Audio Studio 2.

-

899543212b
-
-
\ No newline at end of file diff --git a/spaces/diagaiwei/ir_chinese_medqa/colbert/training/rerank_batcher.py b/spaces/diagaiwei/ir_chinese_medqa/colbert/training/rerank_batcher.py deleted file mode 100644 index 192dbc01d0df3dbbd945725e0fef3dbede0b3387..0000000000000000000000000000000000000000 --- a/spaces/diagaiwei/ir_chinese_medqa/colbert/training/rerank_batcher.py +++ /dev/null @@ -1,75 +0,0 @@ -import os -import ujson - -from functools import partial -from colbert.infra.config.config import ColBERTConfig -from colbert.utils.utils import flatten, print_message, zipstar -from colbert.modeling.reranker.tokenizer import RerankerTokenizer - -from colbert.data.collection import Collection -from colbert.data.queries import Queries -from colbert.data.examples import Examples - -# from colbert.utils.runs import Run - - -class RerankBatcher(): - def __init__(self, config: ColBERTConfig, triples, queries, collection, rank=0, nranks=1): - self.bsize, self.accumsteps = config.bsize, config.accumsteps - self.nway = config.nway - - assert self.accumsteps == 1, "The tensorizer doesn't support larger accumsteps yet --- but it's easy to add." - - self.tokenizer = RerankerTokenizer(total_maxlen=config.doc_maxlen, base=config.checkpoint) - self.position = 0 - - self.triples = Examples.cast(triples, nway=self.nway).tolist(rank, nranks) - self.queries = Queries.cast(queries) - self.collection = Collection.cast(collection) - - def __iter__(self): - return self - - def __len__(self): - return len(self.triples) - - def __next__(self): - offset, endpos = self.position, min(self.position + self.bsize, len(self.triples)) - self.position = endpos - - if offset + self.bsize > len(self.triples): - raise StopIteration - - all_queries, all_passages, all_scores = [], [], [] - - for position in range(offset, endpos): - query, *pids = self.triples[position] - pids = pids[:self.nway] - - query = self.queries[query] - - try: - pids, scores = zipstar(pids) - except: - scores = [] - - passages = [self.collection[pid] for pid in pids] - - all_queries.append(query) - all_passages.extend(passages) - all_scores.extend(scores) - - assert len(all_scores) in [0, len(all_passages)], len(all_scores) - - return self.collate(all_queries, all_passages, all_scores) - - def collate(self, queries, passages, scores): - assert len(queries) == self.bsize - assert len(passages) == self.nway * self.bsize - - queries = flatten([[query] * self.nway for query in queries]) - return [(self.tokenizer.tensorize(queries, passages), scores)] - - # def skip_to_batch(self, batch_idx, intended_batch_size): - # Run.warn(f'Skipping to batch #{batch_idx} (with intended_batch_size = {intended_batch_size}) for training.') - # self.position = intended_batch_size * batch_idx diff --git a/spaces/digitalxingtong/Jiaran-Bert-VITS2/text/japanese.py b/spaces/digitalxingtong/Jiaran-Bert-VITS2/text/japanese.py deleted file mode 100644 index ddedafa0c5b7986068dc6c91637a86febc3923a9..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Jiaran-Bert-VITS2/text/japanese.py +++ /dev/null @@ -1,104 +0,0 @@ -# modified from https://github.com/CjangCjengh/vits/blob/main/text/japanese.py -import re -import sys - -import pyopenjtalk - -from text import symbols - -# Regular expression matching Japanese without punctuation marks: -_japanese_characters = re.compile( - r'[A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# Regular expression matching non-Japanese characters or punctuation marks: -_japanese_marks = 
re.compile( - r'[^A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# List of (symbol, Japanese) pairs for marks: -_symbols_to_japanese = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('%', 'パーセント') -]] - - -# List of (consonant, sokuon) pairs: -_real_sokuon = [(re.compile('%s' % x[0]), x[1]) for x in [ - (r'Q([↑↓]*[kg])', r'k#\1'), - (r'Q([↑↓]*[tdjʧ])', r't#\1'), - (r'Q([↑↓]*[sʃ])', r's\1'), - (r'Q([↑↓]*[pb])', r'p#\1') -]] - -# List of (consonant, hatsuon) pairs: -_real_hatsuon = [(re.compile('%s' % x[0]), x[1]) for x in [ - (r'N([↑↓]*[pbm])', r'm\1'), - (r'N([↑↓]*[ʧʥj])', r'n^\1'), - (r'N([↑↓]*[tdn])', r'n\1'), - (r'N([↑↓]*[kg])', r'ŋ\1') -]] - - - -def post_replace_ph(ph): - rep_map = { - ':': ',', - ';': ',', - ',': ',', - '。': '.', - '!': '!', - '?': '?', - '\n': '.', - "·": ",", - '、': ",", - '...': '…', - 'v': "V" - } - if ph in rep_map.keys(): - ph = rep_map[ph] - if ph in symbols: - return ph - if ph not in symbols: - ph = 'UNK' - return ph - -def symbols_to_japanese(text): - for regex, replacement in _symbols_to_japanese: - text = re.sub(regex, replacement, text) - return text - - -def preprocess_jap(text): - '''Reference https://r9y9.github.io/ttslearn/latest/notebooks/ch10_Recipe-Tacotron.html''' - text = symbols_to_japanese(text) - sentences = re.split(_japanese_marks, text) - marks = re.findall(_japanese_marks, text) - text = [] - for i, sentence in enumerate(sentences): - if re.match(_japanese_characters, sentence): - p = pyopenjtalk.g2p(sentence) - text += p.split(" ") - - if i < len(marks): - text += [marks[i].replace(' ', '')] - return text - -def text_normalize(text): - # todo: jap text normalize - return text - -def g2p(norm_text): - phones = preprocess_jap(norm_text) - phones = [post_replace_ph(i) for i in phones] - # todo: implement tones and word2ph - tones = [0 for i in phones] - word2ph = [1 for i in phones] - return phones, tones, word2ph - - -if __name__ == '__main__': - for line in open("../../../Downloads/transcript_utf8.txt").readlines(): - text = line.split(":")[1] - phones, tones, word2ph = g2p(text) - for p in phones: - if p == "z": - print(text, phones) - sys.exit(0) diff --git a/spaces/digitalxingtong/Lixiang-Bert-Vits2/monotonic_align/__init__.py b/spaces/digitalxingtong/Lixiang-Bert-Vits2/monotonic_align/__init__.py deleted file mode 100644 index a323673bb16070d6d0fffddb939b657d0915ff1b..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Lixiang-Bert-Vits2/monotonic_align/__init__.py +++ /dev/null @@ -1,20 +0,0 @@ -from numpy import zeros, int32, float32 -from torch import from_numpy - -from .core import maximum_path_jit - - -def maximum_path(neg_cent, mask): - """ numba optimized version. 
- neg_cent: [b, t_t, t_s] - mask: [b, t_t, t_s] - """ - device = neg_cent.device - dtype = neg_cent.dtype - neg_cent = neg_cent.data.cpu().numpy().astype(float32) - path = zeros(neg_cent.shape, dtype=int32) - - t_t_max = mask.sum(1)[:, 0].data.cpu().numpy().astype(int32) - t_s_max = mask.sum(2)[:, 0].data.cpu().numpy().astype(int32) - maximum_path_jit(path, neg_cent, t_t_max, t_s_max) - return from_numpy(path).to(device=device, dtype=dtype) \ No newline at end of file diff --git a/spaces/digitalxingtong/Xingtong-Longread-Bert-VITS2/models.py b/spaces/digitalxingtong/Xingtong-Longread-Bert-VITS2/models.py deleted file mode 100644 index d4afe44d883691610c5903e602a3ca245fcb3a5c..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Xingtong-Longread-Bert-VITS2/models.py +++ /dev/null @@ -1,707 +0,0 @@ -import copy -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -import modules -import attentions -import monotonic_align - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm - -from commons import init_weights, get_padding -from text import symbols, num_tones, num_languages -class DurationDiscriminator(nn.Module): #vits2 - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0): - super().__init__() - - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.gin_channels = gin_channels - - self.drop = nn.Dropout(p_dropout) - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_1 = modules.LayerNorm(filter_channels) - self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_2 = modules.LayerNorm(filter_channels) - self.dur_proj = nn.Conv1d(1, filter_channels, 1) - - self.pre_out_conv_1 = nn.Conv1d(2*filter_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.pre_out_norm_1 = modules.LayerNorm(filter_channels) - self.pre_out_conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.pre_out_norm_2 = modules.LayerNorm(filter_channels) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, in_channels, 1) - - self.output_layer = nn.Sequential( - nn.Linear(filter_channels, 1), - nn.Sigmoid() - ) - - def forward_probability(self, x, x_mask, dur, g=None): - dur = self.dur_proj(dur) - x = torch.cat([x, dur], dim=1) - x = self.pre_out_conv_1(x * x_mask) - x = torch.relu(x) - x = self.pre_out_norm_1(x) - x = self.drop(x) - x = self.pre_out_conv_2(x * x_mask) - x = torch.relu(x) - x = self.pre_out_norm_2(x) - x = self.drop(x) - x = x * x_mask - x = x.transpose(1, 2) - output_prob = self.output_layer(x) - return output_prob - - def forward(self, x, x_mask, dur_r, dur_hat, g=None): - x = torch.detach(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.conv_1(x * x_mask) - x = torch.relu(x) - x = self.norm_1(x) - x = self.drop(x) - x = self.conv_2(x * x_mask) - x = torch.relu(x) - x = self.norm_2(x) - x = self.drop(x) - - output_probs = [] - for dur in [dur_r, dur_hat]: - output_prob = self.forward_probability(x, x_mask, dur, g) - output_probs.append(output_prob) - - return output_probs - -class TransformerCouplingBlock(nn.Module): - def __init__(self, - channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - 
p_dropout, - n_flows=4, - gin_channels=0, - share_parameter=False - ): - - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - - self.wn = attentions.FFT(hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout, isflow = True, gin_channels = self.gin_channels) if share_parameter else None - - for i in range(n_flows): - self.flows.append( - modules.TransformerCouplingLayer(channels, hidden_channels, kernel_size, n_layers, n_heads, p_dropout, filter_channels, mean_only=True, wn_sharing_parameter=self.wn, gin_channels = self.gin_channels)) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - -class StochasticDurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, n_flows=4, gin_channels=0): - super().__init__() - filter_channels = in_channels # it needs to be removed from future version. - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.log_flow = modules.Log() - self.flows = nn.ModuleList() - self.flows.append(modules.ElementwiseAffine(2)) - for i in range(n_flows): - self.flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.flows.append(modules.Flip()) - - self.post_pre = nn.Conv1d(1, filter_channels, 1) - self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.post_convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - self.post_flows = nn.ModuleList() - self.post_flows.append(modules.ElementwiseAffine(2)) - for i in range(4): - self.post_flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.post_flows.append(modules.Flip()) - - self.pre = nn.Conv1d(in_channels, filter_channels, 1) - self.proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, filter_channels, 1) - - def forward(self, x, x_mask, w=None, g=None, reverse=False, noise_scale=1.0): - x = torch.detach(x) - x = self.pre(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.convs(x, x_mask) - x = self.proj(x) * x_mask - - if not reverse: - flows = self.flows - assert w is not None - - logdet_tot_q = 0 - h_w = self.post_pre(w) - h_w = self.post_convs(h_w, x_mask) - h_w = self.post_proj(h_w) * x_mask - e_q = torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype) * x_mask - z_q = e_q - for flow in self.post_flows: - z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w)) - logdet_tot_q += logdet_q - z_u, z1 = torch.split(z_q, [1, 1], 1) - u = torch.sigmoid(z_u) * x_mask - z0 = (w - u) * x_mask - logdet_tot_q += torch.sum((F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1, 2]) - logq = torch.sum(-0.5 * (math.log(2 * math.pi) + (e_q ** 2)) * x_mask, [1, 2]) - logdet_tot_q - - logdet_tot = 0 - z0, logdet = self.log_flow(z0, x_mask) - logdet_tot += logdet - z = torch.cat([z0, z1], 1) - for flow in flows: - z, 
logdet = flow(z, x_mask, g=x, reverse=reverse) - logdet_tot = logdet_tot + logdet - nll = torch.sum(0.5 * (math.log(2 * math.pi) + (z ** 2)) * x_mask, [1, 2]) - logdet_tot - return nll + logq # [b] - else: - flows = list(reversed(self.flows)) - flows = flows[:-2] + [flows[-1]] # remove a useless vflow - z = torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype) * noise_scale - for flow in flows: - z = flow(z, x_mask, g=x, reverse=reverse) - z0, z1 = torch.split(z, [1, 1], 1) - logw = z0 - return logw - - -class DurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0): - super().__init__() - - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.gin_channels = gin_channels - - self.drop = nn.Dropout(p_dropout) - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size // 2) - self.norm_1 = modules.LayerNorm(filter_channels) - self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size // 2) - self.norm_2 = modules.LayerNorm(filter_channels) - self.proj = nn.Conv1d(filter_channels, 1, 1) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, in_channels, 1) - - def forward(self, x, x_mask, g=None): - x = torch.detach(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.conv_1(x * x_mask) - x = torch.relu(x) - x = self.norm_1(x) - x = self.drop(x) - x = self.conv_2(x * x_mask) - x = torch.relu(x) - x = self.norm_2(x) - x = self.drop(x) - x = self.proj(x * x_mask) - return x * x_mask - - -class TextEncoder(nn.Module): - def __init__(self, - n_vocab, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - gin_channels=0): - super().__init__() - self.n_vocab = n_vocab - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.gin_channels = gin_channels - self.emb = nn.Embedding(len(symbols), hidden_channels) - nn.init.normal_(self.emb.weight, 0.0, hidden_channels ** -0.5) - self.tone_emb = nn.Embedding(num_tones, hidden_channels) - nn.init.normal_(self.tone_emb.weight, 0.0, hidden_channels ** -0.5) - self.language_emb = nn.Embedding(num_languages, hidden_channels) - nn.init.normal_(self.language_emb.weight, 0.0, hidden_channels ** -0.5) - self.bert_proj = nn.Conv1d(1024, hidden_channels, 1) - - self.encoder = attentions.Encoder( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - gin_channels=self.gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, tone, language, bert, g=None): - x = (self.emb(x)+ self.tone_emb(tone)+ self.language_emb(language)+self.bert_proj(bert).transpose(1,2)) * math.sqrt(self.hidden_channels) # [b, t, h] - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - - x = self.encoder(x * x_mask, x_mask, g=g) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return x, m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0): - super().__init__() - 
self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, - gin_channels=gin_channels, mean_only=True)) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - -class PosteriorEncoder(nn.Module): - def __init__(self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - -class Generator(torch.nn.Module): - def __init__(self, initial_channel, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, - upsample_initial_channel, upsample_kernel_sizes, gin_channels=0): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3) - resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append(weight_norm( - ConvTranspose1d(upsample_initial_channel // (2 ** i), upsample_initial_channel // (2 ** (i + 1)), - k, u, padding=(k - u) // 2))) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - print('Removing weight norm...') - for l in self.ups: - 
remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - -class ReferenceEncoder(nn.Module): - ''' - inputs --- [N, Ty/r, n_mels*r] mels - outputs --- [N, ref_enc_gru_size] - ''' - - def __init__(self, spec_channels, gin_channels=0): - - super().__init__() - self.spec_channels = spec_channels - ref_enc_filters = [32, 32, 64, 64, 128, 128] - K = len(ref_enc_filters) - filters = [1] + ref_enc_filters - convs = [weight_norm(nn.Conv2d(in_channels=filters[i], - out_channels=filters[i + 1], - kernel_size=(3, 3), - stride=(2, 2), - padding=(1, 1))) for i in range(K)] - self.convs = nn.ModuleList(convs) - # self.wns = 
nn.ModuleList([weight_norm(num_features=ref_enc_filters[i]) for i in range(K)]) - - out_channels = self.calculate_channels(spec_channels, 3, 2, 1, K) - self.gru = nn.GRU(input_size=ref_enc_filters[-1] * out_channels, - hidden_size=256 // 2, - batch_first=True) - self.proj = nn.Linear(128, gin_channels) - - def forward(self, inputs, mask=None): - N = inputs.size(0) - out = inputs.view(N, 1, -1, self.spec_channels) # [N, 1, Ty, n_freqs] - for conv in self.convs: - out = conv(out) - # out = wn(out) - out = F.relu(out) # [N, 128, Ty//2^K, n_mels//2^K] - - out = out.transpose(1, 2) # [N, Ty//2^K, 128, n_mels//2^K] - T = out.size(1) - N = out.size(0) - out = out.contiguous().view(N, T, -1) # [N, Ty//2^K, 128*n_mels//2^K] - - self.gru.flatten_parameters() - memory, out = self.gru(out) # out --- [1, N, 128] - - return self.proj(out.squeeze(0)) - - def calculate_channels(self, L, kernel_size, stride, pad, n_convs): - for i in range(n_convs): - L = (L - kernel_size + 2 * pad) // stride + 1 - return L - - -class SynthesizerTrn(nn.Module): - """ - Synthesizer for Training - """ - - def __init__(self, - n_vocab, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - n_speakers=256, - gin_channels=256, - use_sdp=True, - n_flow_layer = 4, - n_layers_trans_flow = 3, - flow_share_parameter = False, - use_transformer_flow = True, - **kwargs): - - super().__init__() - self.n_vocab = n_vocab - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.n_speakers = n_speakers - self.gin_channels = gin_channels - self.n_layers_trans_flow = n_layers_trans_flow - self.use_spk_conditioned_encoder = kwargs.get("use_spk_conditioned_encoder", True) - self.use_sdp = use_sdp - self.use_noise_scaled_mas = kwargs.get("use_noise_scaled_mas", False) - self.mas_noise_scale_initial = kwargs.get("mas_noise_scale_initial", 0.01) - self.noise_scale_delta = kwargs.get("noise_scale_delta", 2e-6) - self.current_mas_noise_scale = self.mas_noise_scale_initial - if self.use_spk_conditioned_encoder and gin_channels > 0: - self.enc_gin_channels = gin_channels - self.enc_p = TextEncoder(n_vocab, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - gin_channels=self.enc_gin_channels) - self.dec = Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, - upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels) - self.enc_q = PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, - gin_channels=gin_channels) - if use_transformer_flow: - self.flow = TransformerCouplingBlock(inter_channels, hidden_channels, filter_channels, n_heads, n_layers_trans_flow, 5, p_dropout, n_flow_layer, gin_channels=gin_channels,share_parameter= flow_share_parameter) - else: - 
self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, n_flow_layer, gin_channels=gin_channels) - self.sdp = StochasticDurationPredictor(hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels) - self.dp = DurationPredictor(hidden_channels, 256, 3, 0.5, gin_channels=gin_channels) - - if n_speakers >= 1: - self.emb_g = nn.Embedding(n_speakers, gin_channels) - else: - self.ref_enc = ReferenceEncoder(spec_channels, gin_channels) - - def forward(self, x, x_lengths, y, y_lengths, sid, tone, language, bert): - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = self.ref_enc(y.transpose(1,2)).unsqueeze(-1) - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths, tone, language, bert,g=g) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - - with torch.no_grad(): - # negative cross-entropy - s_p_sq_r = torch.exp(-2 * logs_p) # [b, d, t] - neg_cent1 = torch.sum(-0.5 * math.log(2 * math.pi) - logs_p, [1], keepdim=True) # [b, 1, t_s] - neg_cent2 = torch.matmul(-0.5 * (z_p ** 2).transpose(1, 2), - s_p_sq_r) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent3 = torch.matmul(z_p.transpose(1, 2), (m_p * s_p_sq_r)) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent4 = torch.sum(-0.5 * (m_p ** 2) * s_p_sq_r, [1], keepdim=True) # [b, 1, t_s] - neg_cent = neg_cent1 + neg_cent2 + neg_cent3 + neg_cent4 - if self.use_noise_scaled_mas: - epsilon = torch.std(neg_cent) * torch.randn_like(neg_cent) * self.current_mas_noise_scale - neg_cent = neg_cent + epsilon - - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = monotonic_align.maximum_path(neg_cent, attn_mask.squeeze(1)).unsqueeze(1).detach() - - w = attn.sum(2) - - l_length_sdp = self.sdp(x, x_mask, w, g=g) - l_length_sdp = l_length_sdp / torch.sum(x_mask) - - logw_ = torch.log(w + 1e-6) * x_mask - logw = self.dp(x, x_mask, g=g) - l_length_dp = torch.sum((logw - logw_) ** 2, [1, 2]) / torch.sum(x_mask) # for averaging - - l_length = l_length_dp + l_length_sdp - - # expand prior - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) - - z_slice, ids_slice = commons.rand_slice_segments(z, y_lengths, self.segment_size) - o = self.dec(z_slice, g=g) - return o, l_length, attn, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q), (x, logw, logw_) - - def infer(self, x, x_lengths, sid, tone, language, bert, noise_scale=.667, length_scale=1, noise_scale_w=0.8, max_len=None, sdp_ratio=0,y=None): - #x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths, tone, language, bert) - # g = self.gst(y) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = self.ref_enc(y.transpose(1,2)).unsqueeze(-1) - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths, tone, language, bert,g=g) - logw = self.sdp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w) * (sdp_ratio) + self.dp(x, x_mask, g=g) * (1 - sdp_ratio) - w = torch.exp(logw) * x_mask * length_scale - w_ceil = torch.ceil(w) - y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long() - y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype) - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = commons.generate_path(w_ceil, attn_mask) - - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 
2)).transpose(1, - 2) # [b, t', t], [b, t, d] -> [b, d, t'] - - z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale - z = self.flow(z_p, y_mask, g=g, reverse=True) - o = self.dec((z * y_mask)[:, :, :max_len], g=g) - return o, attn, y_mask, (z, z_p, m_p, logs_p) diff --git a/spaces/dineshreddy/WALT/mmdet/datasets/pipelines/formating.py b/spaces/dineshreddy/WALT/mmdet/datasets/pipelines/formating.py deleted file mode 100644 index 5781341bd48766a740f23ebba7a85cf8993642d7..0000000000000000000000000000000000000000 --- a/spaces/dineshreddy/WALT/mmdet/datasets/pipelines/formating.py +++ /dev/null @@ -1,364 +0,0 @@ -from collections.abc import Sequence - -import mmcv -import numpy as np -import torch -from mmcv.parallel import DataContainer as DC - -from ..builder import PIPELINES - - -def to_tensor(data): - """Convert objects of various python types to :obj:`torch.Tensor`. - - Supported types are: :class:`numpy.ndarray`, :class:`torch.Tensor`, - :class:`Sequence`, :class:`int` and :class:`float`. - - Args: - data (torch.Tensor | numpy.ndarray | Sequence | int | float): Data to - be converted. - """ - - if isinstance(data, torch.Tensor): - return data - elif isinstance(data, np.ndarray): - return torch.from_numpy(data) - elif isinstance(data, Sequence) and not mmcv.is_str(data): - return torch.tensor(data) - elif isinstance(data, int): - return torch.LongTensor([data]) - elif isinstance(data, float): - return torch.FloatTensor([data]) - else: - raise TypeError(f'type {type(data)} cannot be converted to tensor.') - - -@PIPELINES.register_module() -class ToTensor(object): - """Convert some results to :obj:`torch.Tensor` by given keys. - - Args: - keys (Sequence[str]): Keys that need to be converted to Tensor. - """ - - def __init__(self, keys): - self.keys = keys - - def __call__(self, results): - """Call function to convert data in results to :obj:`torch.Tensor`. - - Args: - results (dict): Result dict contains the data to convert. - - Returns: - dict: The result dict contains the data converted - to :obj:`torch.Tensor`. - """ - for key in self.keys: - results[key] = to_tensor(results[key]) - return results - - def __repr__(self): - return self.__class__.__name__ + f'(keys={self.keys})' - - -@PIPELINES.register_module() -class ImageToTensor(object): - """Convert image to :obj:`torch.Tensor` by given keys. - - The dimension order of input image is (H, W, C). The pipeline will convert - it to (C, H, W). If only 2 dimension (H, W) is given, the output would be - (1, H, W). - - Args: - keys (Sequence[str]): Key of images to be converted to Tensor. - """ - - def __init__(self, keys): - self.keys = keys - - def __call__(self, results): - """Call function to convert image in results to :obj:`torch.Tensor` and - transpose the channel order. - - Args: - results (dict): Result dict contains the image data to convert. - - Returns: - dict: The result dict contains the image converted - to :obj:`torch.Tensor` and transposed to (C, H, W) order. - """ - for key in self.keys: - img = results[key] - if len(img.shape) < 3: - img = np.expand_dims(img, -1) - results[key] = to_tensor(img.transpose(2, 0, 1)) - return results - - def __repr__(self): - return self.__class__.__name__ + f'(keys={self.keys})' - - -@PIPELINES.register_module() -class Transpose(object): - """Transpose some results by given keys. - - Args: - keys (Sequence[str]): Keys of results to be transposed. - order (Sequence[int]): Order of transpose. 
- """ - - def __init__(self, keys, order): - self.keys = keys - self.order = order - - def __call__(self, results): - """Call function to transpose the channel order of data in results. - - Args: - results (dict): Result dict contains the data to transpose. - - Returns: - dict: The result dict contains the data transposed to \ - ``self.order``. - """ - for key in self.keys: - results[key] = results[key].transpose(self.order) - return results - - def __repr__(self): - return self.__class__.__name__ + \ - f'(keys={self.keys}, order={self.order})' - - -@PIPELINES.register_module() -class ToDataContainer(object): - """Convert results to :obj:`mmcv.DataContainer` by given fields. - - Args: - fields (Sequence[dict]): Each field is a dict like - ``dict(key='xxx', **kwargs)``. The ``key`` in result will - be converted to :obj:`mmcv.DataContainer` with ``**kwargs``. - Default: ``(dict(key='img', stack=True), dict(key='gt_bboxes'), - dict(key='gt_labels'))``. - """ - - def __init__(self, - fields=(dict(key='img', stack=True), dict(key='gt_bboxes'), - dict(key='gt_labels'))): - self.fields = fields - - def __call__(self, results): - """Call function to convert data in results to - :obj:`mmcv.DataContainer`. - - Args: - results (dict): Result dict contains the data to convert. - - Returns: - dict: The result dict contains the data converted to \ - :obj:`mmcv.DataContainer`. - """ - - for field in self.fields: - field = field.copy() - key = field.pop('key') - results[key] = DC(results[key], **field) - return results - - def __repr__(self): - return self.__class__.__name__ + f'(fields={self.fields})' - - -@PIPELINES.register_module() -class DefaultFormatBundle(object): - """Default formatting bundle. - - It simplifies the pipeline of formatting common fields, including "img", - "proposals", "gt_bboxes", "gt_labels", "gt_masks" and "gt_semantic_seg". - These fields are formatted as follows. - - - img: (1)transpose, (2)to tensor, (3)to DataContainer (stack=True) - - proposals: (1)to tensor, (2)to DataContainer - - gt_bboxes: (1)to tensor, (2)to DataContainer - - gt_bboxes_ignore: (1)to tensor, (2)to DataContainer - - gt_labels: (1)to tensor, (2)to DataContainer - - gt_masks: (1)to tensor, (2)to DataContainer (cpu_only=True) - - gt_semantic_seg: (1)unsqueeze dim-0 (2)to tensor, \ - (3)to DataContainer (stack=True) - """ - - def __call__(self, results): - """Call function to transform and format common fields in results. - - Args: - results (dict): Result dict contains the data to convert. - - Returns: - dict: The result dict contains the data that is formatted with \ - default bundle. - """ - - if 'img' in results: - img = results['img'] - # add default meta keys - results = self._add_default_meta_keys(results) - if len(img.shape) < 3: - img = np.expand_dims(img, -1) - img = np.ascontiguousarray(img.transpose(2, 0, 1)) - results['img'] = DC(to_tensor(img), stack=True) - for key in ['proposals', 'gt_bboxes', 'gt_bboxes_ignore', 'gt_labels']: - if key not in results: - continue - results[key] = DC(to_tensor(results[key])) - if 'gt_masks' in results: - results['gt_masks'] = DC(results['gt_masks'], cpu_only=True) - if 'gt_semantic_seg' in results: - results['gt_semantic_seg'] = DC( - to_tensor(results['gt_semantic_seg'][None, ...]), stack=True) - return results - - def _add_default_meta_keys(self, results): - """Add default meta keys. 
- - We set default meta keys including `pad_shape`, `scale_factor` and - `img_norm_cfg` to avoid the case where no `Resize`, `Normalize` and - `Pad` are implemented during the whole pipeline. - - Args: - results (dict): Result dict contains the data to convert. - - Returns: - results (dict): Updated result dict contains the data to convert. - """ - img = results['img'] - results.setdefault('pad_shape', img.shape) - results.setdefault('scale_factor', 1.0) - num_channels = 1 if len(img.shape) < 3 else img.shape[2] - results.setdefault( - 'img_norm_cfg', - dict( - mean=np.zeros(num_channels, dtype=np.float32), - std=np.ones(num_channels, dtype=np.float32), - to_rgb=False)) - return results - - def __repr__(self): - return self.__class__.__name__ - - -@PIPELINES.register_module() -class Collect(object): - """Collect data from the loader relevant to the specific task. - - This is usually the last stage of the data loader pipeline. Typically keys - is set to some subset of "img", "proposals", "gt_bboxes", - "gt_bboxes_ignore", "gt_labels", and/or "gt_masks". - - The "img_meta" item is always populated. The contents of the "img_meta" - dictionary depends on "meta_keys". By default this includes: - - - "img_shape": shape of the image input to the network as a tuple \ - (h, w, c). Note that images may be zero padded on the \ - bottom/right if the batch tensor is larger than this shape. - - - "scale_factor": a float indicating the preprocessing scale - - - "flip": a boolean indicating if image flip transform was used - - - "filename": path to the image file - - - "ori_shape": original shape of the image as a tuple (h, w, c) - - - "pad_shape": image shape after padding - - - "img_norm_cfg": a dict of normalization information: - - - mean - per channel mean subtraction - - std - per channel std divisor - - to_rgb - bool indicating if bgr was converted to rgb - - Args: - keys (Sequence[str]): Keys of results to be collected in ``data``. - meta_keys (Sequence[str], optional): Meta keys to be converted to - ``mmcv.DataContainer`` and collected in ``data[img_metas]``. - Default: ``('filename', 'ori_filename', 'ori_shape', 'img_shape', - 'pad_shape', 'scale_factor', 'flip', 'flip_direction', - 'img_norm_cfg')`` - """ - - def __init__(self, - keys, - meta_keys=('filename', 'ori_filename', 'ori_shape', - 'img_shape', 'pad_shape', 'scale_factor', 'flip', - 'flip_direction', 'img_norm_cfg')): - self.keys = keys - self.meta_keys = meta_keys - - def __call__(self, results): - """Call function to collect keys in results. The keys in ``meta_keys`` - will be converted to :obj:mmcv.DataContainer. - - Args: - results (dict): Result dict contains the data to collect. - - Returns: - dict: The result dict contains the following keys - - - keys in``self.keys`` - - ``img_metas`` - """ - - data = {} - img_meta = {} - for key in self.meta_keys: - img_meta[key] = results[key] - data['img_metas'] = DC(img_meta, cpu_only=True) - for key in self.keys: - data[key] = results[key] - return data - - def __repr__(self): - return self.__class__.__name__ + \ - f'(keys={self.keys}, meta_keys={self.meta_keys})' - - -@PIPELINES.register_module() -class WrapFieldsToLists(object): - """Wrap fields of the data dictionary into lists for evaluation. - - This class can be used as a last step of a test or validation - pipeline for single image evaluation or inference. 
- - Example: - >>> test_pipeline = [ - >>> dict(type='LoadImageFromFile'), - >>> dict(type='Normalize', - mean=[123.675, 116.28, 103.53], - std=[58.395, 57.12, 57.375], - to_rgb=True), - >>> dict(type='Pad', size_divisor=32), - >>> dict(type='ImageToTensor', keys=['img']), - >>> dict(type='Collect', keys=['img']), - >>> dict(type='WrapFieldsToLists') - >>> ] - """ - - def __call__(self, results): - """Call function to wrap fields into lists. - - Args: - results (dict): Result dict contains the data to wrap. - - Returns: - dict: The result dict where value of ``self.keys`` are wrapped \ - into list. - """ - - # Wrap dict fields into lists - for key, val in results.items(): - results[key] = [val] - return results - - def __repr__(self): - return f'{self.__class__.__name__}()' diff --git a/spaces/dmvaldman/ICLR2023/paper_list.py b/spaces/dmvaldman/ICLR2023/paper_list.py deleted file mode 100644 index 11fdeecc4a7fccc9be28c671b8e9764b389c7aeb..0000000000000000000000000000000000000000 --- a/spaces/dmvaldman/ICLR2023/paper_list.py +++ /dev/null @@ -1,79 +0,0 @@ -from __future__ import annotations - -import pandas as pd -import requests -from huggingface_hub.hf_api import SpaceInfo - - -class PaperList: - def __init__(self): - self.organization_name = 'ICLR2023' - self.table = pd.read_csv('iclr_submissions.csv') - self._preprocess_table() - - self.table_header = ''' - - Title - PDF - Tldr - Abstract - ''' - - @staticmethod - def load_space_info(author: str) -> list[SpaceInfo]: - path = 'https://huggingface.co/api/spaces' - r = requests.get(path, params={'author': author}) - d = r.json() - return [SpaceInfo(**x) for x in d] - - def add_spaces_to_table(self, organization_name: str, - df: pd.DataFrame) -> pd.DataFrame: - spaces = self.load_space_info(organization_name) - name2space = { - s.id.split('/')[1].lower(): f'https://huggingface.co/spaces/{s.id}' - for s in spaces - } - return df - - def _preprocess_table(self) -> None: - self.table = self.add_spaces_to_table(self.organization_name, - self.table) - self.table['title_lowercase'] = self.table.title.str.lower() - - rows = [] - for row in self.table.itertuples(): - paper = f'{row.title}' if isinstance( - row.url, str) else row.title - pdf = f'pdf' if isinstance( - row.pdf, str) else '' - tldr = row.tldr if isinstance(row.tldr, str) else '' - - row = f''' - - {paper} - {pdf} - {tldr} - {row.abstract} - ''' - rows.append(row) - self.table['html_table_content'] = rows - - def render(self, search_query: str, case_sensitive: bool) -> tuple[int, str]: - df = self.add_spaces_to_table(self.organization_name, self.table) - if search_query: - if case_sensitive: - df = df[df.title.str.contains(search_query)] - else: - df = df[df.title_lowercase.str.contains(search_query.lower())] - - return len(df), self.to_html(df, self.table_header) - - @staticmethod - def to_html(df: pd.DataFrame, table_header: str) -> str: - table_data = ''.join(df.html_table_content) - html = f''' - - {table_header} - {table_data} -
''' - return html diff --git a/spaces/dorkai/SINGPT-Temporary/modules/shared.py b/spaces/dorkai/SINGPT-Temporary/modules/shared.py deleted file mode 100644 index ea2eb50b7f586e5c562bf2e7c75429c91f21ec6c..0000000000000000000000000000000000000000 --- a/spaces/dorkai/SINGPT-Temporary/modules/shared.py +++ /dev/null @@ -1,103 +0,0 @@ -import argparse - -model = None -tokenizer = None -model_name = "" -soft_prompt_tensor = None -soft_prompt = False -is_RWKV = False - -# Chat variables -history = {'internal': [], 'visible': []} -character = 'None' -stop_everything = False -processing_message = '*Is typing...*' - -# UI elements (buttons, sliders, HTML, etc) -gradio = {} - -# Generation input parameters -input_params = [] - -settings = { - 'max_new_tokens': 200, - 'max_new_tokens_min': 1, - 'max_new_tokens_max': 2000, - 'name1': 'Person 1', - 'name2': 'Person 2', - 'context': 'This is a conversation between two people.', - 'stop_at_newline': True, - 'chat_prompt_size': 2048, - 'chat_prompt_size_min': 0, - 'chat_prompt_size_max': 2048, - 'chat_generation_attempts': 1, - 'chat_generation_attempts_min': 1, - 'chat_generation_attempts_max': 5, - 'name1_pygmalion': 'You', - 'name2_pygmalion': 'Kawaii', - 'context_pygmalion': "Kawaii's persona: Kawaii is a cheerful person who loves to make others smile. She is an optimist who loves to spread happiness and positivity wherever she goes.\n", - 'stop_at_newline_pygmalion': False, - 'default_extensions': [], - 'chat_default_extensions': ["gallery"], - 'presets': { - 'default': 'NovelAI-Sphinx Moth', - 'pygmalion-*': 'Pygmalion', - 'RWKV-*': 'Naive', - }, - 'prompts': { - 'default': 'Common sense questions and answers\n\nQuestion: \nFactual answer:', - '^(gpt4chan|gpt-4chan|4chan)': '-----\n--- 865467536\nInput text\n--- 865467537\n', - '(rosey|chip|joi)_.*_instruct.*': 'User: \n', - 'oasst-*': '<|prompter|>Write a story about future of AI development<|endoftext|><|assistant|>' - } -} - -def str2bool(v): - if isinstance(v, bool): - return v - if v.lower() in ('yes', 'true', 't', 'y', '1'): - return True - elif v.lower() in ('no', 'false', 'f', 'n', '0'): - return False - else: - raise argparse.ArgumentTypeError('Boolean value expected.') - -parser = argparse.ArgumentParser(formatter_class=lambda prog: argparse.HelpFormatter(prog,max_help_position=54)) -parser.add_argument('--model', type=str, help='Name of the model to load by default.') -parser.add_argument('--notebook', action='store_true', help='Launch the web UI in notebook mode, where the output is written to the same text box as the input.') -parser.add_argument('--chat', action='store_true', help='Launch the web UI in chat mode.') -parser.add_argument('--cai-chat', action='store_true', help='Launch the web UI in chat mode with a style similar to Character.AI\'s. If the file img_bot.png or img_bot.jpg exists in the same folder as server.py, this image will be used as the bot\'s profile picture. Similarly, img_me.png or img_me.jpg will be used as your profile picture.') -parser.add_argument('--cpu', action='store_true', help='Use the CPU to generate text.') -parser.add_argument('--load-in-8bit', action='store_true', help='Load the model with 8-bit precision.') -parser.add_argument('--load-in-4bit', action='store_true', help='DEPRECATED: use --gptq-bits 4 instead.') -parser.add_argument('--gptq-bits', type=int, default=0, help='Load a pre-quantized model with specified precision. 2, 3, 4 and 8bit are supported. 
Currently only works with LLaMA and OPT.') -parser.add_argument('--gptq-model-type', type=str, help='Model type of pre-quantized model. Currently only LLaMa and OPT are supported.') -parser.add_argument('--bf16', action='store_true', help='Load the model with bfloat16 precision. Requires NVIDIA Ampere GPU.') -parser.add_argument('--auto-devices', action='store_true', help='Automatically split the model across the available GPU(s) and CPU.') -parser.add_argument('--disk', action='store_true', help='If the model is too large for your GPU(s) and CPU combined, send the remaining layers to the disk.') -parser.add_argument('--disk-cache-dir', type=str, default="cache", help='Directory to save the disk cache to. Defaults to "cache".') -parser.add_argument('--gpu-memory', type=int, nargs="+", help='Maxmimum GPU memory in GiB to be allocated per GPU. Example: --gpu-memory 10 for a single GPU, --gpu-memory 10 5 for two GPUs.') -parser.add_argument('--cpu-memory', type=int, help='Maximum CPU memory in GiB to allocate for offloaded weights. Must be an integer number. Defaults to 99.') -parser.add_argument('--flexgen', action='store_true', help='Enable the use of FlexGen offloading.') -parser.add_argument('--percent', type=int, nargs="+", default=[0, 100, 100, 0, 100, 0], help='FlexGen: allocation percentages. Must be 6 numbers separated by spaces (default: 0, 100, 100, 0, 100, 0).') -parser.add_argument("--compress-weight", action="store_true", help="FlexGen: activate weight compression.") -parser.add_argument("--pin-weight", type=str2bool, nargs="?", const=True, default=True, help="FlexGen: whether to pin weights (setting this to False reduces CPU memory by 20%%).") -parser.add_argument('--deepspeed', action='store_true', help='Enable the use of DeepSpeed ZeRO-3 for inference via the Transformers integration.') -parser.add_argument('--nvme-offload-dir', type=str, help='DeepSpeed: Directory to use for ZeRO-3 NVME offloading.') -parser.add_argument('--local_rank', type=int, default=0, help='DeepSpeed: Optional argument for distributed setups.') -parser.add_argument('--rwkv-strategy', type=str, default=None, help='RWKV: The strategy to use while loading the model. Examples: "cpu fp32", "cuda fp16", "cuda fp16i8".') -parser.add_argument('--rwkv-cuda-on', action='store_true', help='RWKV: Compile the CUDA kernel for better performance.') -parser.add_argument('--no-stream', action='store_true', help='Don\'t stream the text output in real time.') -parser.add_argument('--settings', type=str, help='Load the default interface settings from this json file. See settings-template.json for an example. If you create a file called settings.json, this file will be loaded by default without the need to use the --settings flag.') -parser.add_argument('--extensions', type=str, nargs="+", help='The list of extensions to load. If you want to load more than one extension, write the names separated by spaces.') -parser.add_argument('--listen', action='store_true', help='Make the web UI reachable from your local network.') -parser.add_argument('--listen-port', type=int, help='The listening port that the server will use.') -parser.add_argument('--share', action='store_true', help='Create a public URL. 
This is useful for running the web UI on Google Colab or similar.') -parser.add_argument('--auto-launch', action='store_true', default=False, help='Open the web UI in the default browser upon launch.') -parser.add_argument('--verbose', action='store_true', help='Print the prompts to the terminal.') -args = parser.parse_args() - -# Provisional, this will be deleted later -if args.load_in_4bit: - print("Warning: --load-in-4bit is deprecated and will be removed. Use --gptq-bits 4 instead.\n") - args.gptq_bits = 4 diff --git a/spaces/dorkai/text-generation-webui-main/text-generation-webui-main/extensions/silero_tts/script.py b/spaces/dorkai/text-generation-webui-main/text-generation-webui-main/extensions/silero_tts/script.py deleted file mode 100644 index 460e76a888ae6ff74b74c34ee7437eae85a8c691..0000000000000000000000000000000000000000 --- a/spaces/dorkai/text-generation-webui-main/text-generation-webui-main/extensions/silero_tts/script.py +++ /dev/null @@ -1,182 +0,0 @@ -import time -from pathlib import Path - -import gradio as gr -import torch - -from extensions.silero_tts import tts_preprocessor -from modules import chat, shared -from modules.html_generator import chat_html_wrapper - -torch._C._jit_set_profiling_mode(False) - - -params = { - 'activate': True, - 'speaker': 'en_56', - 'language': 'en', - 'model_id': 'v3_en', - 'sample_rate': 48000, - 'device': 'cpu', - 'show_text': False, - 'autoplay': True, - 'voice_pitch': 'medium', - 'voice_speed': 'medium', - 'local_cache_path': '' # User can override the default cache path to something other via settings.json -} - -current_params = params.copy() -voices_by_gender = ['en_99', 'en_45', 'en_18', 'en_117', 'en_49', 'en_51', 'en_68', 'en_0', 'en_26', 'en_56', 'en_74', 'en_5', 'en_38', 'en_53', 'en_21', 'en_37', 'en_107', 'en_10', 'en_82', 'en_16', 'en_41', 'en_12', 'en_67', 'en_61', 'en_14', 'en_11', 'en_39', 'en_52', 'en_24', 'en_97', 'en_28', 'en_72', 'en_94', 'en_36', 'en_4', 'en_43', 'en_88', 'en_25', 'en_65', 'en_6', 'en_44', 'en_75', 'en_91', 'en_60', 'en_109', 'en_85', 'en_101', 'en_108', 'en_50', 'en_96', 'en_64', 'en_92', 'en_76', 'en_33', 'en_116', 'en_48', 'en_98', 'en_86', 'en_62', 'en_54', 'en_95', 'en_55', 'en_111', 'en_3', 'en_83', 'en_8', 'en_47', 'en_59', 'en_1', 'en_2', 'en_7', 'en_9', 'en_13', 'en_15', 'en_17', 'en_19', 'en_20', 'en_22', 'en_23', 'en_27', 'en_29', 'en_30', 'en_31', 'en_32', 'en_34', 'en_35', 'en_40', 'en_42', 'en_46', 'en_57', 'en_58', 'en_63', 'en_66', 'en_69', 'en_70', 'en_71', 'en_73', 'en_77', 'en_78', 'en_79', 'en_80', 'en_81', 'en_84', 'en_87', 'en_89', 'en_90', 'en_93', 'en_100', 'en_102', 'en_103', 'en_104', 'en_105', 'en_106', 'en_110', 'en_112', 'en_113', 'en_114', 'en_115'] -voice_pitches = ['x-low', 'low', 'medium', 'high', 'x-high'] -voice_speeds = ['x-slow', 'slow', 'medium', 'fast', 'x-fast'] -streaming_state = shared.args.no_stream # remember if chat streaming was enabled - -# Used for making text xml compatible, needed for voice pitch and speed control -table = str.maketrans({ - "<": "<", - ">": ">", - "&": "&", - "'": "'", - '"': """, -}) - - -def xmlesc(txt): - return txt.translate(table) - - -def load_model(): - torch_cache_path = torch.hub.get_dir() if params['local_cache_path'] == '' else params['local_cache_path'] - model_path = torch_cache_path + "/snakers4_silero-models_master/src/silero/model/" + params['model_id'] + ".pt" - if Path(model_path).is_file(): - print(f'\nUsing Silero TTS cached checkpoint found at {torch_cache_path}') - model, example_text = 
torch.hub.load(repo_or_dir=torch_cache_path + '/snakers4_silero-models_master/', model='silero_tts', language=params['language'], speaker=params['model_id'], source='local', path=model_path, force_reload=True) - else: - print(f'\nSilero TTS cache not found at {torch_cache_path}. Attempting to download...') - model, example_text = torch.hub.load(repo_or_dir='snakers4/silero-models', model='silero_tts', language=params['language'], speaker=params['model_id']) - model.to(params['device']) - return model - - -def remove_tts_from_history(name1, name2, mode): - for i, entry in enumerate(shared.history['internal']): - shared.history['visible'][i] = [shared.history['visible'][i][0], entry[1]] - return chat_html_wrapper(shared.history['visible'], name1, name2, mode) - - -def toggle_text_in_history(name1, name2, mode): - for i, entry in enumerate(shared.history['visible']): - visible_reply = entry[1] - if visible_reply.startswith('<audio'): - if params['show_text']: - reply = shared.history['internal'][i][1] - shared.history['visible'][i] = [shared.history['visible'][i][0], f"{visible_reply.split('</audio>')[0]}</audio>\n\n{reply}"] - else: - shared.history['visible'][i] = [shared.history['visible'][i][0], f"{visible_reply.split('</audio>')[0]}</audio>"] - return chat_html_wrapper(shared.history['visible'], name1, name2, mode) - - -def input_modifier(string): - """ - This function is applied to your text inputs before - they are fed into the model. - """ - - # Remove autoplay from the last reply - if shared.is_chat() and len(shared.history['internal']) > 0: - shared.history['visible'][-1] = [shared.history['visible'][-1][0], shared.history['visible'][-1][1].replace('controls autoplay>', 'controls>')] - - shared.processing_message = "*Is recording a voice message...*" - shared.args.no_stream = True # Disable streaming because otherwise the audio output will stutter and begin anew every time the message is being updated - return string - - -def output_modifier(string): - """ - This function is applied to the model outputs. - """ - - global model, current_params, streaming_state - - for i in params: - if params[i] != current_params[i]: - model = load_model() - current_params = params.copy() - break - - if not params['activate']: - return string - - original_string = string - string = tts_preprocessor.preprocess(string) - - if string == '': - string = '*Empty reply, try regenerating*' - else: - output_file = Path(f'extensions/silero_tts/outputs/{shared.character}_{int(time.time())}.wav') - prosody = '<prosody rate="{}" pitch="{}">'.format(params['voice_speed'], params['voice_pitch']) - silero_input = f'<speak>{prosody}{xmlesc(string)}</prosody></speak>' - model.save_wav(ssml_text=silero_input, speaker=params['speaker'], sample_rate=int(params['sample_rate']), audio_path=str(output_file)) - - autoplay = 'autoplay' if params['autoplay'] else '' - string = f'<audio src="file/{output_file.as_posix()}" controls {autoplay}></audio>' - if params['show_text']: - string += f'\n\n{original_string}' - - shared.processing_message = "*Is typing...*" - shared.args.no_stream = streaming_state # restore the streaming option to the previous value - return string - - -def bot_prefix_modifier(string): - """ - This function is only applied in chat mode. It modifies - the prefix text for the Bot and can be used to bias its - behavior. 
- """ - - return string - - -def setup(): - global model - model = load_model() - - -def ui(): - # Gradio elements - with gr.Accordion("Silero TTS"): - with gr.Row(): - activate = gr.Checkbox(value=params['activate'], label='Activate TTS') - autoplay = gr.Checkbox(value=params['autoplay'], label='Play TTS automatically') - - show_text = gr.Checkbox(value=params['show_text'], label='Show message text under audio player') - voice = gr.Dropdown(value=params['speaker'], choices=voices_by_gender, label='TTS voice') - with gr.Row(): - v_pitch = gr.Dropdown(value=params['voice_pitch'], choices=voice_pitches, label='Voice pitch') - v_speed = gr.Dropdown(value=params['voice_speed'], choices=voice_speeds, label='Voice speed') - - with gr.Row(): - convert = gr.Button('Permanently replace audios with the message texts') - convert_cancel = gr.Button('Cancel', visible=False) - convert_confirm = gr.Button('Confirm (cannot be undone)', variant="stop", visible=False) - - # Convert history with confirmation - convert_arr = [convert_confirm, convert, convert_cancel] - convert.click(lambda: [gr.update(visible=True), gr.update(visible=False), gr.update(visible=True)], None, convert_arr) - convert_confirm.click(lambda: [gr.update(visible=False), gr.update(visible=True), gr.update(visible=False)], None, convert_arr) - convert_confirm.click(remove_tts_from_history, [shared.gradio[k] for k in ['name1', 'name2', 'mode']], shared.gradio['display']) - convert_confirm.click(lambda: chat.save_history(timestamp=False), [], [], show_progress=False) - convert_cancel.click(lambda: [gr.update(visible=False), gr.update(visible=True), gr.update(visible=False)], None, convert_arr) - - # Toggle message text in history - show_text.change(lambda x: params.update({"show_text": x}), show_text, None) - show_text.change(toggle_text_in_history, [shared.gradio[k] for k in ['name1', 'name2', 'mode']], shared.gradio['display']) - show_text.change(lambda: chat.save_history(timestamp=False), [], [], show_progress=False) - - # Event functions to update the parameters in the backend - activate.change(lambda x: params.update({"activate": x}), activate, None) - autoplay.change(lambda x: params.update({"autoplay": x}), autoplay, None) - voice.change(lambda x: params.update({"speaker": x}), voice, None) - v_pitch.change(lambda x: params.update({"voice_pitch": x}), v_pitch, None) - v_speed.change(lambda x: params.update({"voice_speed": x}), v_speed, None) diff --git a/spaces/drdata/kohbanye-pixel-art-style/README.md b/spaces/drdata/kohbanye-pixel-art-style/README.md deleted file mode 100644 index 764ce349295e9392ad214e77b75586f434e8114c..0000000000000000000000000000000000000000 --- a/spaces/drdata/kohbanye-pixel-art-style/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Kohbanye Pixel Art Style -emoji: 💻 -colorFrom: pink -colorTo: gray -sdk: gradio -sdk_version: 3.19.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/eIysia/VITS-Umamusume-voice-synthesizer/ONNXVITS_to_onnx.py b/spaces/eIysia/VITS-Umamusume-voice-synthesizer/ONNXVITS_to_onnx.py deleted file mode 100644 index 846e39849535ed08accb10d7001f2431a851d372..0000000000000000000000000000000000000000 --- a/spaces/eIysia/VITS-Umamusume-voice-synthesizer/ONNXVITS_to_onnx.py +++ /dev/null @@ -1,31 +0,0 @@ -import ONNXVITS_models -import utils -from text import text_to_sequence -import torch -import commons - -def get_text(text, hps): - text_norm = text_to_sequence(text, hps.symbols, 
hps.data.text_cleaners) - if hps.data.add_blank: - text_norm = commons.intersperse(text_norm, 0) - text_norm = torch.LongTensor(text_norm) - return text_norm - -hps = utils.get_hparams_from_file("../vits/pretrained_models/uma87.json") -symbols = hps.symbols -net_g = ONNXVITS_models.SynthesizerTrn( - len(symbols), - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - n_speakers=hps.data.n_speakers, - **hps.model) -_ = net_g.eval() -_ = utils.load_checkpoint("../vits/pretrained_models/uma_1153000.pth", net_g) - -text1 = get_text("ありがとうございます。", hps) -stn_tst = text1 -with torch.no_grad(): - x_tst = stn_tst.unsqueeze(0) - x_tst_lengths = torch.LongTensor([stn_tst.size(0)]) - sid = torch.tensor([0]) - o = net_g(x_tst, x_tst_lengths, sid=sid, noise_scale=.667, noise_scale_w=0.8, length_scale=1) \ No newline at end of file diff --git a/spaces/echarlaix/openvino-export/README.md b/spaces/echarlaix/openvino-export/README.md deleted file mode 100644 index 6c7e07dd8198a4430d2fd3bfdfa0ddc4ae500560..0000000000000000000000000000000000000000 --- a/spaces/echarlaix/openvino-export/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: OpenVINO Export -emoji: 🚀 -colorFrom: grey -colorTo: grey -sdk: gradio -sdk_version: 3.21.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/edaiofficial/mmtafrica/README.md b/spaces/edaiofficial/mmtafrica/README.md deleted file mode 100644 index b6f34041425cd1d38bd133344d86bd3a8de08a36..0000000000000000000000000000000000000000 --- a/spaces/edaiofficial/mmtafrica/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: MMTAfrica -emoji: 🌍 -colorFrom: gray -colorTo: purple -sdk: gradio -sdk_version: 2.9.4 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/ehristoforu/Teststudio/README.md b/spaces/ehristoforu/Teststudio/README.md deleted file mode 100644 index a0871e108ae02a7dbd0153de21a9a3e318d7e8a5..0000000000000000000000000000000000000000 --- a/spaces/ehristoforu/Teststudio/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: Zenml Server -emoji: 🧘 -colorFrom: purple -colorTo: green -sdk: docker -pinned: false -app_port: 8080 -license: apache-2.0 -duplicated_from: zenml/zenml ---- diff --git a/spaces/ehristoforu/llm-discord-bot/entry_script.sh b/spaces/ehristoforu/llm-discord-bot/entry_script.sh deleted file mode 100644 index 5eaeaa1969202b543f98333be145630ededfe6a9..0000000000000000000000000000000000000000 --- a/spaces/ehristoforu/llm-discord-bot/entry_script.sh +++ /dev/null @@ -1,4 +0,0 @@ -#!/bin/bash - -exec python entry_point.py > log1.log & -exec python health_check_200.py \ No newline at end of file diff --git a/spaces/ehristoforu/llm-discord-bot/health_check_200.py b/spaces/ehristoforu/llm-discord-bot/health_check_200.py deleted file mode 100644 index 15d7a435d6e6af9d4d888e68233d84437d2df38e..0000000000000000000000000000000000000000 --- a/spaces/ehristoforu/llm-discord-bot/health_check_200.py +++ /dev/null @@ -1,20 +0,0 @@ -import sys -from http.server import BaseHTTPRequestHandler, HTTPServer - -class S(BaseHTTPRequestHandler): - def _set_headers(self): - self.send_response(200) - self.send_header('Content-type', 'application/json') - self.end_headers() - - def do_GET(self): - self._set_headers() - self.wfile.write(b"") - -def run_dummy_server(server_class=HTTPServer, handler_class=S, port=7860): - 
server_address = ('', port) - httpd = server_class(server_address, handler_class) - print('Starting httpd...') - httpd.serve_forever() - -run_dummy_server() \ No newline at end of file diff --git a/spaces/elevenlabs/tts/app.py b/spaces/elevenlabs/tts/app.py deleted file mode 100644 index 083a1d98cc3a0cf1682be63d8c7f2e4cb12c1f2b..0000000000000000000000000000000000000000 --- a/spaces/elevenlabs/tts/app.py +++ /dev/null @@ -1,94 +0,0 @@ -import random -import gradio as gr -import numpy as np -from elevenlabs import voices, generate, set_api_key, UnauthenticatedRateLimitError - -def pad_buffer(audio): - # Pad buffer to multiple of 2 bytes - buffer_size = len(audio) - element_size = np.dtype(np.int16).itemsize - if buffer_size % element_size != 0: - audio = audio + b'\0' * (element_size - (buffer_size % element_size)) - return audio - -def generate_voice(text, voice_name): - try: - audio = generate( - text[:250], # Limit to 250 characters - voice=voice_name, - model="eleven_multilingual_v2" - ) - return (44100, np.frombuffer(pad_buffer(audio), dtype=np.int16)) - except UnauthenticatedRateLimitError as e: - raise gr.Error("Thanks for trying out ElevenLabs TTS! You've reached the free tier limit. Please provide an API key to continue.") - except Exception as e: - raise gr.Error(e) - - -badges = """ -
- - -[ ![GitHub](https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white) ](https://github.com/elevenlabs/elevenlabs-python) - - - - -[ ![Twitter](https://img.shields.io/badge/Twitter-%231DA1F2.svg?style=for-the-badge&logo=Twitter&logoColor=white) ](https://twitter.com/elevenlabsio) - - - - -[ ![](https://dcbadge.vercel.app/api/server/elevenlabs) ](https://discord.gg/elevenlabs) - - -
-""" - -description = """ -A demo of the world's most advanced TTS systems, made by [ElevenLabs](https://elevenlabs.io). Eleven Multilingual V2 is a single foundational model supporting 28 languages including: English, Chinese, Spanish, Hindi, Portuguese, French, German, Japanese, Arabic, Korean, Indonesian, Italian, Dutch, Turkish, Polish, Swedish, Filipino, Malay, Romanian, Ukrainian, Greek, Czech, Danish, Finnish, Bulgarian, Croatian, Slovak, and Tamil. Sign up on [ElevenLabs](https://elevenlabs.io) to get fast access, long-form generation, voice cloning, API keys, and more! -""" - -with gr.Blocks() as block: - gr.Markdown('[ ![ElevenLabs](https://user-images.githubusercontent.com/12028621/262629275-4f85c9cf-85b6-435e-ab50-5b8c7c4e9dd2.png) ](https://elevenlabs.io)') - gr.Markdown(badges) - gr.Markdown(description) - - input_text = gr.Textbox( - label="Input Text (250 characters max)", - lines=2, - value="Hello! 你好! Hola! नमस्ते! Bonjour! こんにちは! مرحبا! 안녕하세요! Ciao! Cześć! Привіт! Γειά σας! Здравей! வணக்கம்!", - elem_id="input_text" - ) - - all_voices = voices() - input_voice = gr.Dropdown( - [ voice.name for voice in all_voices ], - value="Bella", - label="Voice", - elem_id="input_voice" - ) - - run_button = gr.Button( - text="Generate Voice", - type="button" - ) - - out_audio = gr.Audio( - label="Generated Voice", - type="numpy", - elem_id="out_audio", - format="mp3" - ) - - inputs = [input_text, input_voice] - outputs = [out_audio] - - run_button.click( - fn=generate_voice, - inputs=inputs, - outputs=outputs, - queue=True - ) - -block.queue(concurrency_count=5).launch(debug=True) \ No newline at end of file diff --git a/spaces/enzostvs/stable-diffusion-tpu/app/api/me/route.ts b/spaces/enzostvs/stable-diffusion-tpu/app/api/me/route.ts deleted file mode 100644 index d17d4863e58af14b62fd3a0a2b1ed64c92461b84..0000000000000000000000000000000000000000 --- a/spaces/enzostvs/stable-diffusion-tpu/app/api/me/route.ts +++ /dev/null @@ -1,32 +0,0 @@ -import { cookies } from "next/headers" - -export async function GET() { - const cookie = cookies().get("auth_hf_token") - - if (!cookie) return Response.json({ status: 401, ok: false, message: "Unauthorized" }); - - const request = await fetch("https://huggingface.co/oauth/userinfo", { - method: "GET", - headers: { - Authorization: `Bearer ${cookie.value}`, - }, - }) - - const res = await request.clone().json().catch(() => ({})); - // @ts-ignore - const HF_ADMIN = process?.env?.HF_ADMIN?.split(',') ?? [] - const is_admin = res?.sub ? 
HF_ADMIN.includes(res?.sub) : false - - if (!res?.sub) return Response.json({ status: 401, ok: false, message: "Unauthorized" }); - - return Response.json( - { - user: { - ...res, - is_admin, - }, - status: 200, - ok: true - } - ) -} \ No newline at end of file diff --git a/spaces/ercaronte/speech-to-speech-translation/app.py b/spaces/ercaronte/speech-to-speech-translation/app.py deleted file mode 100644 index 4485d4ba9b79876e801d669d7238a447504410d3..0000000000000000000000000000000000000000 --- a/spaces/ercaronte/speech-to-speech-translation/app.py +++ /dev/null @@ -1,110 +0,0 @@ -import gradio as gr -import numpy as np -import torch -from datasets import load_dataset -from transformers import pipeline - -device = "cuda:0" if torch.cuda.is_available() else "cpu" - -# load speech translation checkpoint -asr_pipe = pipeline("automatic-speech-recognition", model="openai/whisper-base", device=device) - - -def translate(audio): - outputs = asr_pipe(audio, max_new_tokens=256, generate_kwargs={"task": "translate"}) - return outputs["text"] - -''' -from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan - -# load text-to-speech checkpoint and speaker embeddings -processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_tts") - -model = SpeechT5ForTextToSpeech.from_pretrained("microsoft/speecht5_tts").to(device) -vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan").to(device) - -embeddings_dataset = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation") -speaker_embeddings = torch.tensor(embeddings_dataset[7306]["xvector"]).unsqueeze(0) - - -def synthesise_old(text): - inputs = processor(text=text, return_tensors="pt") - speech = model.generate_speech(inputs["input_ids"].to(device), speaker_embeddings.to(device), vocoder=vocoder) - return speech.cpu() - - -def speech_to_speech_translation_old(audio): - translated_text = translate(audio) - synthesised_speech = synthesise_old(translated_text) - synthesised_speech = (synthesised_speech.numpy() * 32767).astype(np.int16) - return 16000, synthesised_speech -''' - -from transformers import VitsModel, VitsTokenizer - - -# load translator to french -en_fr_translator = pipeline("translation_en_to_fr") - -# load text-to-speech -model_new = VitsModel.from_pretrained("facebook/mms-tts-fra") -tokenizer = VitsTokenizer.from_pretrained("facebook/mms-tts-fra") - - -def synthesise(text): - translation_to_french = en_fr_translator(text) - french_text = translation_to_french[0]['translation_text'] - - inputs = tokenizer(french_text, return_tensors="pt") - input_ids = inputs["input_ids"] - - with torch.no_grad(): - outputs = model_new(input_ids) - - speech = outputs["waveform"] - return speech - - -def speech_to_speech_translation(audio): - translated_text = translate(audio) - synthesised_speech = synthesise(translated_text) - synthesised_speech = (synthesised_speech[0].numpy() * 32767).astype(np.int16) - return 16000, synthesised_speech - - -title = "Cascaded STST" -description = """ -Demo for cascaded speech-to-speech translation (STST), mapping from source speech in any language to target speech in French. 
-Demo uses OpenAI's [Whisper Base](https://huggingface.co/openai/whisper-base) model for speech translation, -Google's [T5](https://huggingface.co/t5-base) for translating from English to French -and Facebook's [Massive Multilingual Speech (MMS)](https://huggingface.co/facebook/mms-tts) model for text-to-speech: - -![Cascaded STST](https://huggingface.co/datasets/huggingface-course/audio-course-images/resolve/main/s2st_cascaded.png "Diagram of cascaded speech to speech translation") -""" - -demo = gr.Blocks() - -mic_translate = gr.Interface( - fn=speech_to_speech_translation, - inputs=gr.Audio(source="microphone", type="filepath"), - outputs=gr.Audio(label="Generated Speech", type="numpy"), - title=title, - description=description, - api_name='predict', -) - -file_translate = gr.Interface( - fn=speech_to_speech_translation, - inputs=gr.Audio(source="upload", type="filepath"), - outputs=gr.Audio(label="Generated Speech", type="numpy"), - examples=[["./example.wav"]], - title=title, - description=description, - api_name='predict_upload', -) - -with demo: - gr.TabbedInterface([mic_translate, file_translate], ["Microphone", "Audio File"]) - -demo.queue() -demo.launch() diff --git a/spaces/evi0mo/vits-fastapi-server/text/thai.py b/spaces/evi0mo/vits-fastapi-server/text/thai.py deleted file mode 100644 index 998207c01a85c710a46db1ec8b62c39c2d94bc84..0000000000000000000000000000000000000000 --- a/spaces/evi0mo/vits-fastapi-server/text/thai.py +++ /dev/null @@ -1,44 +0,0 @@ -import re -from num_thai.thainumbers import NumThai - - -num = NumThai() - -# List of (Latin alphabet, Thai) pairs: -_latin_to_thai = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('a', 'เอ'), - ('b','บี'), - ('c','ซี'), - ('d','ดี'), - ('e','อี'), - ('f','เอฟ'), - ('g','จี'), - ('h','เอช'), - ('i','ไอ'), - ('j','เจ'), - ('k','เค'), - ('l','แอล'), - ('m','เอ็ม'), - ('n','เอ็น'), - ('o','โอ'), - ('p','พี'), - ('q','คิว'), - ('r','แอร์'), - ('s','เอส'), - ('t','ที'), - ('u','ยู'), - ('v','วี'), - ('w','ดับเบิลยู'), - ('x','เอ็กซ์'), - ('y','วาย'), - ('z','ซี') -]] - - -def num_to_thai(text): - return re.sub(r'(?:\d+(?:,?\d+)?)+(?:\.\d+(?:,?\d+)?)?', lambda x: ''.join(num.NumberToTextThai(float(x.group(0).replace(',', '')))), text) - -def latin_to_thai(text): - for regex, replacement in _latin_to_thai: - text = re.sub(regex, replacement, text) - return text diff --git a/spaces/falterWliame/Face_Mask_Detection/Mahabharat Movie Download Kickass 720p Torrent !FREE!.md b/spaces/falterWliame/Face_Mask_Detection/Mahabharat Movie Download Kickass 720p Torrent !FREE!.md deleted file mode 100644 index 80bb3bddbddf911c4792f164ba22eeef7a3d7235..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Mahabharat Movie Download Kickass 720p Torrent !FREE!.md +++ /dev/null @@ -1,20 +0,0 @@ - -

Mahabharat Movie Download Kickass 720p Torrent

-

Mahabharat is a 2013 Indian animated film based on the epic of the same name. The film was directed by Amaan Khan and produced by Kushal Kantilal Gada and Dhaval Jayantilal Gada. The film features the voices of Amitabh Bachchan, Ajay Devgn, Vidya Balan, Sunny Deol, Anil Kapoor, Jackie Shroff, Manoj Bajpayee and Deepti Naval. The film was released on 27 December 2013 and received mixed reviews from critics.

-

Mahabharat Movie Download Kickass 720p Torrent


Download ✓✓✓ https://urlca.com/2uDdQM



-

If you are looking for a way to download Mahabharat movie in 720p quality, you may try using kickass torrents, a popular torrent site that offers a variety of movies, TV shows, music, games and more. However, before you proceed, you should be aware of the risks and legal issues involved in downloading torrents. Torrenting is illegal in many countries and can expose you to malware, viruses, hackers and copyright infringement lawsuits. Therefore, you should always use a VPN (virtual private network) to protect your identity and data while torrenting.

-

To download Mahabharat movie from kickass torrents, you will need a torrent client such as BitTorrent or uTorrent. You can follow these steps:

-
    -
  1. Go to https://katcr.to/ and search for Mahabharat movie.
  2. -
  3. Choose a torrent file that has a good number of seeders and leechers. Seeders are users who have the complete file and are sharing it with others. Leechers are users who are downloading the file but have not completed it yet. The more seeders and leechers a torrent has, the faster the download speed will be.
  4. -
  5. Download the torrent file or copy the magnet link. A magnet link is a URL that contains the information of the torrent file without having to download it.
  6. -
  7. Open your torrent client and add the torrent file or paste the magnet link.
  8. -
  9. Wait for the download to finish. You can check the progress and speed of your download on your torrent client.
  10. -
  11. Once the download is complete, you can enjoy watching Mahabharat movie in 720p quality.
  12. -
-

Note: This article is for informational purposes only. We do not condone or encourage piracy or illegal downloading of any content. Please respect the rights of the content creators and pay for their work.

-

If you want to watch Mahabharat movie in a higher quality, you may also try downloading it in 1080p resolution. However, this will require more storage space and bandwidth than 720p. You can follow the same steps as above, but look for a torrent file that has 1080p in its name. For example, you can search for "Mahabharat Anim 2013 1080p NF WebDL AVC DD 5 1-DTOne" on kickass torrents. This torrent has a file size of 6 GB and has 7 seeders and 38 leechers as of writing this article.

-

Alternatively, you can also stream Mahabharat movie online on various platforms such as Disney+ Hotstar, Amazon Prime Video, Netflix or YouTube. However, you may need to pay a subscription fee or rent the movie to watch it legally. Streaming online also depends on your internet speed and connection quality. You may experience buffering, lagging or low-quality video if your internet is slow or unstable.

-

Whichever method you choose to watch Mahabharat movie, we hope you enjoy this epic tale of courage, loyalty, duty and destiny. Mahabharat is one of the oldest and most revered stories in Indian culture and history. It has inspired many generations of artists, writers, filmmakers and thinkers. It is a story that transcends time and space and speaks to the human condition.

d5da3c52bf
-
-
\ No newline at end of file diff --git a/spaces/falterWliame/Face_Mask_Detection/Microsoft-Toolkit-25-Beta-5-Official-Windows-81-Office-Activator-EXCLUSIVE.md b/spaces/falterWliame/Face_Mask_Detection/Microsoft-Toolkit-25-Beta-5-Official-Windows-81-Office-Activator-EXCLUSIVE.md deleted file mode 100644 index 0dce039a00052455d219efa7471b119ef558fefc..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Microsoft-Toolkit-25-Beta-5-Official-Windows-81-Office-Activator-EXCLUSIVE.md +++ /dev/null @@ -1,122 +0,0 @@ -## Microsoft Toolkit 2.5 Beta 5 Official Windows 8.1 Office Activator - - - - - - - - - -**CLICK HERE >>> [https://climmulponorc.blogspot.com/?c=2txua3](https://climmulponorc.blogspot.com/?c=2txua3)** - - - - - - - - - - - - - -# How to Activate Windows 8.1 and Office with Microsoft Toolkit 2.5 Beta 5 - - - -If you are looking for a reliable and easy way to activate your Windows 8.1 or Office products, you might want to try Microsoft Toolkit 2.5 Beta 5. This is a powerful and versatile tool that can help you manage, license, deploy, and activate Microsoft Office and Windows in general. In this article, we will show you how to use Microsoft Toolkit 2.5 Beta 5 to activate your Windows 8.1 and Office with just a few clicks. - - - -## What is Microsoft Toolkit 2.5 Beta 5? - - - -Microsoft Toolkit 2.5 Beta 5 is a set of tools and functions that can perform various tasks related to Microsoft Office and Windows activation. It can also customize the setup of Office products, uninstall them, check their product keys, and more. It works with all editions of Windows Vista or later, as well as Office 2010 or later. - - - -Microsoft Toolkit 2.5 Beta 5 is based on the KMS (Key Management Service) technology, which is a method of activating Microsoft products by emulating a local server on your computer. This way, you can bypass the online activation process and enjoy the full features of your Windows or Office products without any limitations. - - - -## How to Download Microsoft Toolkit 2.5 Beta 5? - - - -You can download Microsoft Toolkit 2.5 Beta 5 from various sources on the internet, but be careful to choose a trustworthy and safe one. One of the official websites where you can download Microsoft Toolkit 2.5 Beta 5 is [https://officialkmspico.net/microsoft-toolkit/](https://officialkmspico.net/microsoft-toolkit/). Here you can find the latest version of the tool, as well as instructions and support. - - - -Before you download Microsoft Toolkit 2.5 Beta 5, make sure you have the following requirements: - - - -- Microsoft .NET Framework 4.0 or 4.5 (Not 3.5) - -- Microsoft Office 2010 or Later for Office Toolkit Support - -- Windows Vista or Later for Windows Toolkit Support - - - -Also, you might need to disable your antivirus software or firewall temporarily, as they might interfere with the tool's working or flag it as malicious. - - - -## How to Activate Windows 8.1 with Microsoft Toolkit 2.5 Beta 5? - - - -After you download Microsoft Toolkit 2.5 Beta 5, follow these steps to activate your Windows 8.1: - - - -1. Extract the archive "MSToolkit\_2.6.4.rar" using a password "12345". - -2. Right-click on the MSToolkit\_2.6.4 icon and then click on Run as Administrator. - -3. Select the Windows icon at the bottom right of the program. - -4. Go to the Activation tab and click on EZ-Activator. - -5. Wait for some moments until you get a confirmation message at the bottom of the screen saying "Windows is activated". - -6. 
Reboot your PC and enjoy your activated Windows 8.1. - - - -## How to Activate Office with Microsoft Toolkit 2.5 Beta 5? - - - -If you want to activate your Office products with Microsoft Toolkit 2.5 Beta 5, follow these steps: - - - -1. Extract the archive "MSToolkit\_2.6.4.rar" using a password "12345". - -2. Right-click on the MSToolkit\_2.6.4 icon and then click on Run as Administrator. - -3. Select the Office icon at the bottom right of the program. - -4. Go to the Activation tab and click on EZ-Activator. - -5. Wait for some moments until you get a confirmation message at the bottom of the screen saying "Office is activated". - -6. Reboot your PC and enjoy your activated Office products. - - - -## Conclusion 1b8d091108 - - - - - - - - - diff --git a/spaces/falterWliame/Face_Mask_Detection/Pthc Collection Torrents.md b/spaces/falterWliame/Face_Mask_Detection/Pthc Collection Torrents.md deleted file mode 100644 index 9ca3aa058653087df5143e522fbaaf7fb31c1440..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Pthc Collection Torrents.md +++ /dev/null @@ -1,6 +0,0 @@ -

pthc collection torrents


Download File ★★★ https://urlca.com/2uDcOp



-
-569 item. Teacher/Caregiver Resources and InformationnGrammar lessons and culture. 566 item. Teacher/Caregiver Resources and InformationnLanguage lessons and culture. 640 item. Teacher/Caregiver Resources and InformationnTopics, including a series on iBooks. 489 item. Teacher/Caregiver Resources and InformationnResearch resources. 460 item. Teacher/Caregiver Resources and InformationnEditions. 464 item. Teacher/Caregiver Resources and InformationnActivities and games for learning about and exploring the United States. 448 item. Teacher/Caregiver Resources and InformationnA series of six lessons in international languages. 391 item. Teacher/Caregiver Resources and InformationnActivities and games for learning about and exploring the world. 388 item. Teacher/Caregiver Resources and InformationnActivities and games for learning about and exploring life. 365 item. Teacher/Caregiver Resources and InformationnActivities and games for learning about and exploring places. 348 item. Teacher/Caregiver Resources and InformationnActivities and games for learning about and exploring social groups. 337 item. Teacher/Caregiver Resources and InformationnActivities and games for learning about and exploring nature. 325 item. Teacher/Caregiver Resources and InformationnActivities and games for learning about and exploring art. 321 item. Teacher/Caregiver Resources and InformationnActivities and games for learning about and exploring fiction. 311 item. Teacher/Caregiver Resources and InformationnActivities and games for learning about and exploring history. 292 item. Teacher/Caregiver Resources and InformationnActivities and games for learning about and exploring science. 274 item. Teacher/Caregiver Resources and InformationnActivities and games for learning about and exploring the Earth. 249 item. Teacher/Caregiver Resources and InformationnActivities and games for learning about and exploring technology. 244 item. Teacher/Caregiver Resources and InformationnActivities and games for learning about and exploring animals. 216 item. Teacher/Caregiver Resources and InformationnActivities and games for learning about and exploring earth science. 206 item. Teacher/Caregiver Resources and InformationnActivities and games for learning about and exploring literature. 186 item. Teacher/Caregiver Resources and InformationnActivities and games for learning about and exploring math. 166 item. Teacher/Caregiver Resources and InformationnActivities and games for learning about and exploring literature and language. 4fefd39f24
-
-
-

diff --git a/spaces/farhananis005/LawyerGPT/README.md b/spaces/farhananis005/LawyerGPT/README.md deleted file mode 100644 index d00a5da30f51b965286e060aea8a5b200c0fe295..0000000000000000000000000000000000000000 --- a/spaces/farhananis005/LawyerGPT/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: LawyerGPT -emoji: 🐢 -colorFrom: green -colorTo: purple -sdk: gradio -sdk_version: 3.43.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/fatiXbelha/sd/Enjoy Candy Crush Soda Saga on Your Phone - APK Download.md b/spaces/fatiXbelha/sd/Enjoy Candy Crush Soda Saga on Your Phone - APK Download.md deleted file mode 100644 index fdfd777c5147edd47511fb8db3312c85f2e0ec62..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Enjoy Candy Crush Soda Saga on Your Phone - APK Download.md +++ /dev/null @@ -1,152 +0,0 @@ - -

Candy Crush Soda Saga: A Sodalicious Puzzle Game for Android

-

If you are a fan of match 3 puzzle games, you might have heard of Candy Crush Soda Saga, a sequel to the popular Candy Crush Saga. This game is developed by King, the same company that created other hit games like Farm Heroes Saga and Bubble Witch Saga. In this game, you will join Kimmy on her juicy journey to find Tiffi, by switching and matching your way through new dimensions of magical gameplay. You will also encounter new candies, more divine matching combinations, and challenging game modes brimming with purple soda and fun.

-

candy crush soda saga download apkpure


DOWNLOAD >>> https://urllie.com/2uNILZ



-

In this article, we will tell you what Candy Crush Soda Saga is, what features it has, how to download it from APKPure, and some tips and tricks to master it. We will also share some reviews from users who have played the game. So, if you are ready to quench your thirst for fun, read on!

-

What is Candy Crush Soda Saga?

-

Candy Crush Soda Saga is a free-to-play puzzle game for Android devices. It is a spin-off of the original Candy Crush Saga, but with some new twists and turns. The game features over 6000 levels, monthly season updates, and various game modes filled with unique candies and challenging gameplay. Players can play alone or with friends to see who can get the highest score. The game also has new features such as over 140 Sodalicious levels and new game modes.

-

Features of Candy Crush Soda Saga

-

Here are some of the features that make Candy Crush Soda Saga a fun and addictive game:

-


-

Unique candies and matching combinations

-

In Candy Crush Soda Saga, you will find new candies that have different effects and abilities. For example, you can match 4 candies in a square to make a Swedish Fish, or match 7 candies for the all new Coloring Candy. You can also create special candies by matching 4 or more candies of the same color in a row or column. These special candies can be combined with other special candies or normal candies to create powerful explosions and clear more candies from the board.

-

Different game modes and levels

-

Candy Crush Soda Saga has different game modes that require different strategies and skills. Some of the game modes are:

-
    -
  • Soda – Switch the bottles and match candies to release purple soda and save the Candy Bears.
  • -
  • Frosting – Match candy to smash the ice and set the Candy Bears free.
  • -
  • Honeycomb – Match candies next to honeycomb to release the trapped Candy Bears.
  • -
  • Jam – Spread the jam across the board.
  • -
-

The game also has over 6000 levels that vary in difficulty and objectives. Some levels have time limits, move limits, or score targets. Some levels also have obstacles such as chocolate, licorice, or blockers that can hinder your progress. You will need to use your skills and logic to overcome these challenges and complete the levels.

-

Season updates and rewards

-

Candy Crush Soda Saga has monthly season updates that bring unique quests and exciting gameplay for you to explore. You can complete quests to progress through the Season Pass while earning rewards and boosters to help you on your saga. You can also collect stars by completing levels to unlock more rewards.

-

Social features and leaderboards

-

Candy Crush Soda Saga lets you connect with your friends and other players online. You can sync your game progress across multiple devices and access the full game features when connected to the internet. You can also compare your scores with your friends and other players on the leaderboards. You can see who is the best at each level and challenge them to beat your score. You can also send and receive lives and boosters from your friends to help each other out.

-

How to download Candy Crush Soda Saga from APKPure

-

If you want to download Candy Crush Soda Saga for your Android device, you might want to try APKPure, a website that offers free and pure APK files for various apps and games. APK stands for Android Package Kit, which is a file format that contains all the elements needed to install an app on your device. APKPure is a safe and reliable source that provides fast and secure downloads for millions of users. Here are some reasons why you might want to use APKPure to download Candy Crush Soda Saga:

-

What is APKPure and why use it?

-

APKPure is a website that allows you to download APK files of apps and games that are not available in your region or on the Google Play Store. You can also find older versions of apps and games that might be compatible with your device or have features that you prefer. APKPure also has a mobile app that lets you browse, download, and update apps and games on your device easily. Some of the benefits of using APKPure are:

-
    -
  • It is free and does not require any registration or subscription.
  • -
  • It is safe and scans all the APK files for viruses and malware before uploading them.
  • -
  • It is fast and has multiple download servers to ensure smooth and speedy downloads.
  • -
  • It is easy and has a user-friendly interface that makes it simple to find and install apps and games.
  • -
  • It is updated regularly and has the latest versions of apps and games as well as exclusive releases.
  • -
-

Steps to download and install Candy Crush Soda Saga from APKPure

-

To download and install Candy Crush Soda Saga from APKPure, you need to follow these steps:

-
    -
  1. Go to https://apkpure.com/candy-crush-soda-saga/com.king.candycrushsodasaga on your browser or open the APKPure app on your device.
  2. -
  3. Click on the "Download APK" button or the "Install" button if you are using the app.
  4. -
  5. Wait for the download to finish and then locate the APK file on your device.
  6. -
  7. Tap on the APK file and allow the installation of unknown sources if prompted.
  8. -
  9. Follow the instructions on the screen to complete the installation process.
  10. -
  11. Launch Candy Crush Soda Saga from your app drawer or home screen and enjoy!
  12. -
-

Tips and tricks to master Candy Crush Soda Saga

-

Candy Crush Soda Saga is a fun but challenging game that requires skill, strategy, and luck. If you want to improve your gameplay and beat more levels, you might want to check out these tips and tricks:

-

How to get free lives faster

-

Lives are essential in Candy Crush Soda Saga, as they allow you to play more levels without waiting. You can get lives in different ways, such as asking your friends, watching ads, or buying them with gold bars. However, there is also a simple trick that can help you get free lives faster without spending any money or bothering anyone. Here is how:

-
    -
  • When you run out of lives, exit the game and go to your device's settings.
  • -
  • Change the date and time of your device to a few hours or days ahead.
  • -
  • Go back to the game and you will see that your lives have been replenished.
  • -
  • Play as much as you want and then repeat the process when you need more lives.
  • -
  • Don't forget to change your date and time back to normal after you are done playing.
  • -
-

How to scout out Frozen Bears under ice

-

In some levels of Candy Crush Soda Saga, you will encounter Frozen Bears under ice. These are cute bears that are trapped under thick layers of ice that you need to break by matching candies near them. However, sometimes it can be hard to tell where the Frozen Bears are hiding, especially if they are under multiple layers of ice. To make it easier, you can use this trick:

-
    -
  • Look for small blue dots on the ice blocks. These dots indicate where the Frozen Bears are located.
  • -
  • The more dots there are, the bigger the Frozen Bear is. A single dot means a small Frozen Bear, while four dots mean a large Frozen Bear.
  • -
  • Use this information to plan your moves and target the ice blocks that have the most dots.
  • -
  • Once you break the ice, you will see the Frozen Bears and their shapes. You need to clear all the ice around them to free them.
  • -
-

How to use Coloring Candies wisely

-

Coloring Candies are one of the most powerful special candies in Candy Crush Soda Saga. They are created by matching 7 candies of the same color in a cluster. When you match a Coloring Candy with another candy, it will change all the candies of that color on the board to the color of the Coloring Candy. This can create massive cascades and clear a lot of candies at once. However, Coloring Candies are also rare and hard to make, so you need to use them wisely. Here are some tips on how to use Coloring Candies effectively:

-
    -
  • Save them for when you really need them. Don't waste them on easy levels or low-priority objectives.
  • -
  • Combine them with other special candies for more impact. For example, matching a Coloring Candy with a Striped Candy will create a row or column of Striped Candies of the same color, which will then explode and clear more candies.
  • -
  • Use them to create more special candies. For example, matching a Coloring Candy with a normal candy can create more opportunities for making special candies of that color.
  • -
  • Use them to clear blockers or obstacles. For example, matching a Coloring Candy with a candy that is next to chocolate, licorice, or honeycomb can help you get rid of those annoyances.
  • -
-

Reviews of Candy Crush Soda Saga from users

-

Candy Crush Soda Saga is a popular game that has been downloaded over 100 million times on Google Play Store and has an average rating of 4.5 out of 5 stars. However, not everyone is satisfied with the game and some users have expressed their opinions and feedback on the game. Here are some of the positive and negative reviews from users who have played Candy Crush Soda Saga:

-

Positive reviews

-

Here are some of the positive reviews from users who love Candy Crush Soda Saga:

-
-

"I love this game. It's challenging but not impossible. It's fun and relaxing. I like the graphics and the sounds. It's one of my favorite games."

-

"This game is awesome. It has so many levels and game modes. It never gets boring. I like the seasonal updates and the rewards. It's very addictive."

-

"This game is amazing. It's very colorful and creative. I like the new candies and the combinations. It's very satisfying to play."

-
-

Negative reviews

-

Here are some of the negative reviews from users who dislike Candy Crush Soda Saga:

-
-

"I hate this game. It's too hard and frustrating. It's rigged and unfair. It always crashes and freezes. It's a waste of time and money."

-

"This game is boring. It's too repetitive and easy. It has no challenge or strategy. It's just a mindless tapping game."

-

"This game is annoying. It has too many ads and pop-ups. It always asks for permissions and access to my data. It's a scam and a spyware."

-
-

Conclusion

-

Candy Crush Soda Saga is a fun and addictive puzzle game that offers hours of entertainment and challenge for players of all ages and skill levels. It has unique candies, different game modes, season updates, social features, and more. You can download it for free from APKPure, a website that provides safe and fast APK downloads for various apps and games. You can also use some tips and tricks to master the game and beat more levels.

-

If you are looking for a sodalicious adventure, Candy Crush Soda Saga is the game for you!

-

Frequently Asked Questions

-

Here are some of the frequently asked questions about Candy Crush Soda Saga:

-

Q: Is Candy Crush Soda Saga free to play?

-

A: Yes, Candy Crush Soda Saga is free to play, but it also offers in-app purchases that can enhance your gameplay or help you progress faster.

-

Q: How can I sync my progress across multiple devices?

-

A: You can sync your progress across multiple devices by connecting your game to your Facebook account or your King account.

-

Q: What are gold bars and how can I get them?

-

A: Gold bars are the premium currency in Candy Crush Soda Saga. You can use them to buy boosters, extra moves, or lives. You can get gold bars by completing quests, earning achievements, watching ads, or buying them with real money.

-

Q: What are boosters and how can I use them?

-

A: Boosters are special items that can help you in various ways, such as clearing more candies, breaking more ice, or creating more special candies. You can use boosters before or during a level by tapping on them. You can get boosters by completing levels, opening chests, spinning the wheel, or buying them with gold bars.

-

Q: How can I contact the support team if I have any issues or questions?

-

A: You can contact the support team by tapping on the settings icon on the main screen and then tapping on the help center button. You can also visit https://community.king.com/en/candy-crush-soda-saga for more information and assistance.

401be4b1e0
-
-
\ No newline at end of file diff --git a/spaces/fatiXbelha/sd/FIFA Mobile APK The Best Way to Enjoy Soccer on Your Android Phone.md b/spaces/fatiXbelha/sd/FIFA Mobile APK The Best Way to Enjoy Soccer on Your Android Phone.md deleted file mode 100644 index b6e0cbb5ce680ab42f5d3cbfb347483346d679bd..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/FIFA Mobile APK The Best Way to Enjoy Soccer on Your Android Phone.md +++ /dev/null @@ -1,109 +0,0 @@ - -

FIFA Mobile APK India: Everything You Need to Know

-

If you are a fan of soccer games and want to experience the thrill of playing with your favorite stars and teams on your mobile device, then you should check out FIFA Mobile APK India. FIFA Mobile is a popular soccer game developed by EA Sports that lets you build your Ultimate Team, compete in various modes, and relive the world's greatest soccer tournament, the FIFA World Cup 2022. In this article, we will tell you everything you need to know about FIFA Mobile APK India, including what it is, how to download and install it, why you should play it, and how to play it like a pro.

-

What is FIFA Mobile APK?

-

A brief introduction to FIFA Mobile and its features

-

FIFA Mobile is a soccer game that allows you to create and customize your own team of soccer stars from over 15,000 players and 600+ teams across the world. You can choose from world-class talent like Kylian Mbappé, Christian Pulisic, Vinicius Jr, and Son Heung-min, as well as legends like Zidane, Beckham, Ronaldo, and Maldini. You can also play with any of the 32 qualified national teams in the FIFA World Cup 2022 mode, which features authentic kits, badges, stadiums, and commentary. FIFA Mobile also offers various modes to challenge yourself and other players, such as Head to Head, VS Attack, Manager Mode, UEFA Champions League, UEFA Europa League, UEFA Europa Conference League, and more. You can also level up your players, improve their skills, and unlock new rewards by completing live events and plans.

-

fifa mobile apk indi


DOWNLOADhttps://urllie.com/2uNAsG



-

How to download and install FIFA Mobile APK on Android devices

-

If you want to play FIFA Mobile on your Android device, you will need to download and install the FIFA Mobile APK file from a trusted source. An APK file is an Android application package that contains all the files and data needed to run an app on your device. Here are the steps to download and install FIFA Mobile APK on your Android device:

-
    -
  1. Go to [FIFA Soccer - Apps on Google Play](^4^) or [FIFA Soccer APK (Android Game) - Free Download - APKCombo](^1^) and tap on the download button.
  2. -
  3. Once the download is complete, locate the APK file on your device's file manager or downloads folder.
  4. -
  5. Tap on the APK file to start the installation process. You may need to enable unknown sources in your device's settings if you haven't done so before.
  6. -
  7. Follow the on-screen instructions to complete the installation.
  8. -
  9. Launch the app and enjoy playing FIFA Mobile on your Android device.
  10. -
-

Why play FIFA Mobile in India?

-

The benefits of playing FIFA Mobile in India

-

FIFA Mobile is a great game for soccer fans in India for many reasons. Here are some of the benefits of playing FIFA Mobile in India:

-
    -
  • You can play with your favorite Indian players and teams in the game. You can choose from players like Sunil Chhetri, Gurpreet Singh Sandhu, Sandesh Jhingan, Sahal Abdul Samad, etc., as well as teams like Bengaluru FC, Kerala Blasters FC, Mumbai City FC, etc.
  • -
  • You can connect with other soccer fans in India and around the world through the game's social features. You can chat, join leagues, play with friends, and compete in leaderboards and tournaments.
  • -
  • You can enjoy the game's high-quality graphics, realistic physics, and smooth gameplay on your mobile device. You can also customize your game settings to suit your preferences and device performance.
  • -
  • You can access the game's content and updates for free. You can also earn coins, gems, and other rewards by playing the game and completing tasks. You can use these to buy packs, players, and other items in the game's store.
  • -
-

The challenges of playing FIFA Mobile in India

-

While FIFA Mobile is a fun and exciting game for soccer fans in India, it also comes with some challenges that you should be aware of. Here are some of the challenges of playing FIFA Mobile in India:

-
    -
  • You may face some issues with the game's server stability and connectivity. You may experience lag, crashes, or errors while playing the game online. This can affect your gameplay and progress.
  • -
  • You may encounter some hackers, cheaters, or bots in the game. These are players who use unfair methods or tools to gain an advantage over other players. They can ruin your gaming experience and cause you to lose matches or rewards.
  • -
  • You may have to deal with some in-game ads or pop-ups that can be annoying or distracting. These are usually used to promote the game's features or offers, but they can also interrupt your gameplay or consume your data.
  • -
-

How to play FIFA Mobile like a pro?

-

Tips and tricks for building and managing your Ultimate Team

-

Your Ultimate Team is the heart of FIFA Mobile. It is where you create and customize your own squad of soccer stars that you can use to play in various modes. Here are some tips and tricks for building and managing your Ultimate Team:

-
    -
  • Choose a formation that suits your playstyle and strategy. You can choose from different formations such as 4-3-3, 4-4-2, 3-5-2, etc. You can also change your formation during a match if needed.
  • -
  • Upgrade your players regularly by using training points, skill boosts, or rank up tokens. These will increase your players' ratings, skills, and abilities. You can also use chemistry points to improve your team's performance by matching players with similar attributes, leagues, or nationalities.
  • -
  • Use the market to buy and sell players and items. You can use coins or gems to bid on or buy players and items from other players. You can also list your own players and items for sale and earn coins or gems from them.
  • -
-

Tips and tricks for playing Head to Head mode

-

Head to Head mode is where you can test your skills against other players in real-time matches. You can play in different divisions and leagues, as well as qualify for tournaments and events. Here are some tips and tricks for playing Head to Head mode:

-
    -
  • Use the right tactics and strategies for each match. You can choose from different tactics such as balanced, attacking, defensive, etc., as well as different strategies such as counter-attack, possession, long ball, etc. You can also adjust your tactics and strategies during a match if needed.
  • -
  • Use the right controls for each situation. You can choose from different controls such as classic, casual, gesture, etc., as well as different buttons such as pass, shoot, sprint, skill move, etc. You can also customize your controls to suit your preferences.
  • -
  • Use the right players for each position. You should have a balanced team with players who have the right attributes, skills, and abilities for each position. For example, you should have a fast striker, a creative midfielder, a strong defender, etc.
  • -
-

Tips and tricks for playing at 60 FPS

-

FIFA Mobile supports 60 FPS (frames per second) gameplay on some devices that have high-end specifications. This means that you can enjoy smoother and faster gameplay with better graphics and animations. Here are some tips and tricks for playing at 60 FPS:

-


-
    -
  • Check if your device supports 60 FPS gameplay by going to the game's settings menu and looking for the FPS option. If it is available, you can enable it by tapping on it.
  • -
  • Make sure that your device has enough battery power and storage space before playing at 60 FPS. Playing at 60 FPS can consume more battery power and data than playing at lower FPS.
  • -
  • Make sure that your device has a stable internet connection before playing at 60 FPS. Playing at 60 FPS can require more bandwidth and data than playing at lower FPS. You may experience lag, stutter, or disconnection if your internet connection is weak or unstable.
  • -
-

Conclusion

-

FIFA Mobile APK India is a soccer game that you can play on your Android device, with plenty of features and modes to keep you entertained and challenged. Download and install the APK file from a trusted source, enjoy playing with your favorite players and teams, compete with other soccer fans in India and around the world through the game's social features, and use the tips and tricks shared in this article to improve your skills and performance. We hope you have found this article helpful and informative. If you have any questions or feedback, please feel free to leave a comment below.

-

FAQs

-

Q: Is FIFA Mobile APK India safe to download and install?

-

A: Yes, FIFA Mobile APK India is safe to download and install as long as you get it from a trusted source. You should avoid downloading the APK file from unknown or suspicious websites or links, as they may contain malware or viruses that can harm your device or data.

-

Q: How much space does FIFA Mobile APK India require on my device?

-

A: FIFA Mobile APK India requires about 100 MB of space on your device for the initial download and installation. However, the game may require additional space for updates, data, and cache. You should have at least 1 GB of free space on your device to play the game smoothly.

-

Q: Can I play FIFA Mobile APK India offline?

-

A: No, FIFA Mobile APK India requires an internet connection to play. You cannot play the game offline or without data. You should have a stable and fast internet connection to enjoy the game's features and modes.

-

Q: Can I play FIFA Mobile APK India with a controller?

-

A: No, FIFA Mobile APK India does not support controller input. You can only play the game with touch controls on your device's screen. You can choose from different control options and customize them to suit your preferences.

-

Q: Can I transfer my FIFA Mobile progress from one device to another?

-

A: Yes, you can transfer your FIFA Mobile progress from one device to another by linking your game account to Facebook, Google Play, or Apple ID. You can then log in with the same account on another device and continue playing where you left off.

401be4b1e0
-
-
\ No newline at end of file diff --git a/spaces/fb700/chat3/docs/self_analysis.md b/spaces/fb700/chat3/docs/self_analysis.md deleted file mode 100644 index 28f6682c3bc70c884b31322350099b156e770bf0..0000000000000000000000000000000000000000 --- a/spaces/fb700/chat3/docs/self_analysis.md +++ /dev/null @@ -1,256 +0,0 @@ -# chatgpt-academic项目自译解报告 -(Author补充:以下分析均由本项目调用ChatGPT一键生成,如果有不准确的地方,全怪GPT😄) - -## 对程序的整体功能和构架做出概括。然后用一张markdown表格整理每个文件的功能。 - -整体概括: - -该程序是一个基于自然语言处理和机器学习的科学论文辅助工具,主要功能包括聊天机器人、批量总结PDF文档、批量翻译PDF文档、生成函数注释、解析项目源代码等。程序基于 Gradio 构建 Web 服务,并集成了代理和自动更新功能,提高了用户的使用体验。 - -文件功能表格: - -| 文件名 | 文件功能 | -| --- | --- | -| check_proxy.py | 用于检查代理的正确性和可用性 | -| colorful.py | 包含不同预设置颜色的常量,并用于多种UI元素 | -| config.py | 用于全局配置的类 | -| config_private.py | 与config.py文件一起使用的另一个配置文件,用于更改私密信息 | -| core_functional.py | 包含一些TextFunctional类和基础功能函数 | -| crazy_functional.py | 包含大量高级功能函数和实验性的功能函数 | -| main.py | 程序的主入口,包含GUI主窗口和主要的UI管理功能 | -| theme.py | 包含一些预设置主题的颜色 | -| toolbox.py | 提供了一些有用的工具函数 | -| crazy_functions\crazy_utils.py | 包含一些用于实现高级功能的辅助函数 | -| crazy_functions\Latex全文润色.py | 实现了对LaTeX文件中全文的润色和格式化功能 | -| crazy_functions\Latex全文翻译.py | 实现了对LaTeX文件中的内容进行翻译的功能 | -| crazy_functions\_\_init\_\_.py | 用于导入crazy_functional.py中的功能函数 | -| crazy_functions\下载arxiv论文翻译摘要.py | 从Arxiv上下载论文并提取重要信息 | -| crazy_functions\代码重写为全英文_多线程.py | 针对中文Python文件,将其翻译为全英文 | -| crazy_functions\总结word文档.py | 提取Word文件的重要内容来生成摘要 | -| crazy_functions\批量Markdown翻译.py | 批量翻译Markdown文件 | -| crazy_functions\批量总结PDF文档.py | 批量从PDF文件中提取摘要 | -| crazy_functions\批量总结PDF文档pdfminer.py | 批量从PDF文件中提取摘要 | -| crazy_functions\批量翻译PDF文档_多线程.py | 批量翻译PDF文件 | -| crazy_functions\理解PDF文档内容.py | 批量分析PDF文件并提取摘要 | -| crazy_functions\生成函数注释.py | 自动生成Python文件中函数的注释 | -| crazy_functions\解析项目源代码.py | 解析并分析给定项目的源代码 | -| crazy_functions\询问多个大语言模型.py | 向多个大语言模型询问输入文本并进行处理 | -| crazy_functions\读文献写摘要.py | 根据用户输入读取文献内容并生成摘要 | -| crazy_functions\谷歌检索小助手.py | 利用谷歌学术检索用户提供的论文信息并提取相关信息 | -| crazy_functions\高级功能函数模板.py | 实现高级功能的模板函数 | -| request_llm\bridge_all.py | 处理与LLM的交互 | -| request_llm\bridge_chatglm.py | 使用ChatGLM模型进行聊天 | -| request_llm\bridge_chatgpt.py | 实现对话生成的各项功能 | -| request_llm\bridge_tgui.py | 在Websockets中与用户进行交互并生成文本输出 | - - - -## [0/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\check_proxy.py - -该文件主要包括四个函数:check_proxy、backup_and_download、patch_and_restart 和 auto_update。其中,check_proxy 函数用于检查代理是否可用;backup_and_download 用于进行一键更新备份和下载;patch_and_restart 是一键更新协议的重要函数,用于覆盖和重启;auto_update 函数用于查询版本和用户意见,并自动进行一键更新。该文件主要使用了 requests、json、shutil、zipfile、distutils、subprocess 等 Python 标准库和 toolbox 和 colorful 两个第三方库。 - -## [1/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\colorful.py - -该程序文件实现了一些打印文本的函数,使其具有不同的颜色输出。当系统为Linux时直接跳过,否则使用colorama库来实现颜色输出。程序提供了深色和亮色两种颜色输出方式,同时也提供了对打印函数的别名。对于不是终端输出的情况,对所有的打印函数进行重复定义,以便在重定向时能够避免打印错误日志。 - -## [2/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\config.py - -该程序文件是一个配置文件,其主要功能是提供使用API密钥等信息,以及对程序的体验进行优化,例如定义对话框高度、布局等。还包含一些其他的设置,例如设置并行使用的线程数、重试次数限制等等。 - -## [3/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\config_private.py - -这是一个名为config_private.py的Python文件,它用于配置API_KEY和代理信息。API_KEY是一个私密密钥,用于访问某些受保护的API。USE_PROXY变量设置为True以应用代理,proxies变量配置了代理网络的地址和协议。在使用该文件时,需要填写正确的API_KEY和代理信息。 - -## [4/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\core_functional.py - -该文件是一个Python模块,名为"core_functional.py"。模块中定义了一个字典,包含了各种核心功能的配置信息,如英语学术润色、中文学术润色、查找语法错误等。每个功能都包含一些前言和后语,在前言中描述了该功能的任务和要求,在后语中提供一些附加信息。此外,有些功能还定义了一些特定的处理函数和按钮颜色。 - -## [5/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functional.py - 
-这是一个Python程序文件,文件名是crazy_functional.py。它导入了一个名为HotReload的工具箱,并定义了一个名为get_crazy_functions()的函数。这个函数包括三个部分的插件组,分别是已经编写完成的第一组插件、已经测试但距离完美状态还差一点点的第二组插件和尚未充分测试的第三组插件。每个插件都有一个名称、一个按钮颜色、一个函数和一个是否加入下拉菜单中的标志位。这些插件提供了多种功能,包括生成函数注释、解析项目源代码、批量翻译PDF文档、谷歌检索、PDF文档内容理解和Latex文档的全文润色、翻译等功能。其中第三组插件可能还存在一定的bug。 - -## [6/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\main.py - -该Python脚本代码实现了一个用于交互式对话的Chatbot机器人。它使用了Gradio框架来构建一个Web界面,并在此基础之上嵌入了一个文本输入框和与Chatbot进行交互的其他控件,包括提交、重置、停止和清除按钮、选择框和滑块等。此外,它还包括了一些类和函数和一些用于编程分析的工具和方法。整个程序文件的结构清晰,注释丰富,并提供了很多技术细节,使得开发者可以很容易地在其基础上进行二次开发、修改、扩展和集成。 - -## [7/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\theme.py - -该程序文件名为theme.py,主要功能为调节Gradio的全局样式。在该文件中,调节了Gradio的主题颜色、字体、阴影、边框、渐变等等样式。同时,该文件还添加了一些高级CSS样式,比如调整表格单元格的背景和边框,设定聊天气泡的圆角、最大宽度和阴影等等。如果CODE_HIGHLIGHT为True,则还进行了代码高亮显示。 - -## [8/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\toolbox.py - -这是一个名为`toolbox.py`的源代码文件。该文件包含了一系列工具函数和装饰器,用于聊天Bot的开发和调试。其中有一些功能包括将输入参数进行重组、捕捉函数中的异常并记录到历史记录中、生成Markdown格式的聊天记录报告等。该文件中还包含了一些与转换Markdown文本相关的函数。 - -## [9/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\crazy_utils.py - -这是一个Python程序文件 `crazy_utils.py`,它包含了两个函数: - -- `input_clipping(inputs, history, max_token_limit)`:这个函数接收三个参数,inputs 是一个字符串,history 是一个列表,max_token_limit 是一个整数。它使用 `tiktoken` 、`numpy` 和 `toolbox` 模块,处理输入文本和历史记录,将其裁剪到指定的最大标记数,避免输入过长导致的性能问题。如果 inputs 长度不超过 max_token_limit 的一半,则只裁剪历史;否则,同时裁剪输入和历史。 -- `request_gpt_model_in_new_thread_with_ui_alive(inputs, inputs_show_user, llm_kwargs, chatbot, history, sys_prompt, refresh_interval=0.2, handle_token_exceed=True, retry_times_at_unknown_error=2)`:这个函数接收八个参数,其中后三个是列表类型,其他为标量或句柄等。它提供对话窗口和刷新控制,执行 `predict_no_ui_long_connection` 方法,将输入数据发送至 GPT 模型并获取结果,如果子任务出错,返回相应的错误信息,否则返回结果。 - -## [10/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\Latex全文润色.py - -这是一个名为"crazy_functions\Latex全文润色.py"的程序文件,其中包含了两个函数"Latex英文润色"和"Latex中文润色",以及其他辅助函数。这些函数能够对 Latex 项目进行润色处理,其中 "多文件润色" 函数是一个主要函数,它调用了其他辅助函数用于读取和处理 Latex 项目中的文件。函数使用了多线程和机器学习模型进行自然语言处理,对文件进行简化和排版来满足学术标准。注释已删除并可以在函数内部查找。 - -## [11/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\Latex全文翻译.py - -这个程序文件包括一个用于对整个Latex项目进行翻译的函数 `Latex英译中` 和一个用于将中文翻译为英文的函数 `Latex中译英`。这两个函数都会尝试导入依赖库 tiktoken, 若无法导入则会提示用户安装。`Latex英译中` 函数会对 Latex 项目中的文件进行分离并去除注释,然后运行多线程翻译。`Latex中译英` 也做同样的事情,只不过是将中文翻译为英文。这个程序文件还包括其他一些帮助函数。 - -## [12/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\__init__.py - -这是一个 Python 包,包名为 `crazy_functions`,在 `__init__.py` 文件中定义了一些函数,包含以下函数: - -- `crazy_addition(a, b)`:对两个数进行加法运算,并将结果返回。 -- `crazy_multiplication(a, b)`:对两个数进行乘法运算,并将结果返回。 -- `crazy_subtraction(a, b)`:对两个数进行减法运算,并将结果返回。 -- `crazy_division(a, b)`:对两个数进行除法运算,并将结果返回。 -- `crazy_factorial(n)`:计算 `n` 的阶乘并返回结果。 - -这些函数可能会有一些奇怪或者不符合常规的实现方式(由函数名可以看出来),所以这个包的名称为 `crazy_functions`,可能是暗示这些函数会有一些“疯狂”的实现方式。 - -## [13/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\下载arxiv论文翻译摘要.py - -该程序实现了一个名为“下载arxiv论文并翻译摘要”的函数插件,作者是“binary-husky”。该函数的功能是,在输入一篇arxiv论文的链接后,提取摘要、下载PDF文档、翻译摘要为中文,并将翻译结果保存到文件中。程序使用了一些Python库,如requests、pdfminer和beautifulsoup4等。程序入口是名为“下载arxiv论文并翻译摘要”的函数,其中使用了自定义的辅助函数download_arxiv_和get_name。程序中还使用了其他非函数的辅助函数和变量,如update_ui、CatchException、report_exception和get_conf等。 - -## [14/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\代码重写为全英文_多线程.py - -该文件是一个多线程Python脚本,包含多个函数和利用第三方库进行的API请求。主要功能是将给定文件夹内的Python代码文件中所有中文转化为英文,然后输出转化后的英文代码。重要的功能和步骤包括: - -1. 清空历史,以免输入溢出 -2. 尝试导入依赖,如果缺少依赖,则给出安装建议 -3. 集合文件 -4. 显示随意内容以防卡顿的感觉 -5. Token限制下的截断与处理 -6. 多线程操作请求转换中文变为英文的代码 -7. 所有线程同时开始执行任务函数 -8. 
循环轮询各个线程是否执行完毕 -9. 把结果写入文件 -10. 备份一个文件 - -## [15/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\总结word文档.py - -这是一个名为"总结word文档.py"的程序文件,使用python编写。该文件导入了"toolbox"和"crazy_utils"模块,实现了解析docx格式和doc格式的文件的功能。该文件包含了一个名为"解析docx"的函数,通过对文件内容应用自然语言处理技术,生成文章片段的中英文概述。具体实现过程中,该函数使用了"docx"模块和"win32com.client"模块来实现对docx和doc格式文件的解析,同时使用了"request_gpt_model_in_new_thread_with_ui_alive"函数来向GPT模型发起请求。最后,该文件还实现了一个名为"总结word文档"的函数来批量总结Word文档。 - -## [16/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\批量Markdown翻译.py - -这个程序文件实现了一个批量Markdown翻译功能,可以将一个源代码项目中的Markdown文本翻译成指定语言(目前支持中<-英和英<-中)。程序主要分为三个函数,`PaperFileGroup`类用于处理长文本的拆分,`多文件翻译`是主要函数调用了`request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency`函数进行多线程翻译并输出结果,`Markdown英译中`和`Markdown中译外`分别是英译中和中译英的入口函数,用于解析项目路径和调用翻译函数。程序依赖于tiktoken等库实现。 - -## [17/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\批量总结PDF文档.py - -这是一个名为“批量总结PDF文档”的Python脚本,包含了多个函数。其中有一个函数名为“clean_text”,可以对PDF提取出的原始文本进行清洗和格式化处理,将连字转换为其基本形式,并根据heuristic规则判断换行符是否是段落分隔,并相应地进行替换。另一个函数名为“解析PDF”,可以接收一个PDF文件清单,并对清单中的每一个PDF进行解析,提取出文本并调用“clean_text”函数进行清洗和格式化处理,然后向用户发送一个包含文章简介信息的问题并等待用户回答。最后,该脚本也包含一个名为“批量总结PDF文档”的主函数,其中调用了“解析PDF”函数来完成对PDF文件的批量处理。 - -## [18/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\批量总结PDF文档pdfminer.py - -这个文件是一个Python模块,文件名为pdfminer.py,它定义了一个函数批量总结PDF文档。该函数接受一些参数,然后尝试导入pdfminer和beautifulsoup4库。该函数将读取pdf文件或tex文件中的内容,对其进行分析,并使用GPT模型进行自然语言摘要。文件中还有一个辅助函数readPdf,用于读取pdf文件中的内容。 - -## [19/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\批量翻译PDF文档_多线程.py - -这是一个Python脚本,文件名是crazy_functions\批量翻译PDF文档_多线程.py。该脚本提供了一个名为“批量翻译PDF文档”的函数,可以批量翻译PDF文件并生成报告文件。该函数使用了多个模块和函数(如toolbox、crazy_utils、update_ui等),使用了Python的异常处理和多线程功能,还使用了一些文本处理函数和第三方库(如fitz和tiktoken)。在函数执行过程中,它会进行一些参数检查、读取和清理PDF文本、递归地切割PDF文件、获取文章meta信息、多线程翻译、整理报告格式等操作,并更新UI界面和生成报告文件。 - -## [20/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\理解PDF文档内容.py - -这是一个解析PDF文件内容的Python程序,程序文件名为"理解PDF文档内容.py",程序主要由5个步骤组成:第0步是切割PDF文件;第1步是从摘要中提取高价值信息,放到history中;第2步是迭代地历遍整个文章,提取精炼信息;第3步是整理history;第4步是设置一个token上限,防止回答时Token溢出。程序主要用到了Python中的各种模块和函数库,如:toolbox, tiktoken, pymupdf等。 - -## [21/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\生成函数注释.py - -这是一个名为"生成函数注释"的函数,带有一个装饰器"@CatchException",可以捕获异常。该函数接受文件路径、参数和聊天机器人等参数,用于对多个Python或C++文件进行函数注释,使用了"toolbox"和"crazy_utils"模块中的函数。该函数会逐个读取指定文件中的内容,并使用聊天机器人进行交互,向用户请求注释信息,然后将生成的注释与原文件内容一起输出到一个markdown表格中。最后,该函数返回一个字符串,指示任务是否已完成。另外还包含一个名为"批量生成函数注释"的函数,它与"生成函数注释"函数一起用于批量处理多个文件。 - -## [22/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\解析项目源代码.py - -这个程序文件实现了对一个源代码项目进行分析的功能。其中,函数`解析项目本身`、`解析一个Python项目`、`解析一个C项目的头文件`、`解析一个C项目`、`解析一个Java项目`和`解析一个Rect项目`分别用于解析不同类型的项目。函数`解析源代码新`实现了对每一个源代码文件的分析,并将分析结果汇总,同时还实现了分组和迭代处理,提高了效率。最后,函数`write_results_to_file`将所有分析结果写入文件。中间,还用到了`request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency`和`request_gpt_model_in_new_thread_with_ui_alive`来完成请求和响应,并用`update_ui`实时更新界面。 - -## [23/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\询问多个大语言模型.py - -这是一个Python程序,文件名为"crazy_functions\询问多个大语言模型.py"。该程序实现了一个同时向多个大语言模型询问的功能,接收用户输入文本以及模型参数,向ChatGPT和ChatGLM模型发出请求,并将对话记录显示在聊天框中,同时刷新界面。 - -## [24/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\读文章写摘要.py - -该程序文件是一个Python模块,文件名为"读文章写摘要.py",主要包含两个函数:"解析Paper"和"读文章写摘要"。其中,"解析Paper"函数接受文件路径、参数等参数,逐个打印文件内容并使用GPT模型生成对该文件的摘要;"读文章写摘要"函数则接受一段文本内容和参数,将该文本内容及其所有.tex文件逐个传递给"解析Paper"函数进行处理,并使用GPT模型生成文章的中英文摘要。文件还导入了一些工具函数,如异常处理、信息上报和文件写入等。 - -## [25/31] 请对下面的程序文件做一个概述: 
H:\chatgpt_academic_resolve\crazy_functions\谷歌检索小助手.py - -该文件代码包含了一个名为`get_meta_information`的函数和一个名为`谷歌检索小助手`的装饰器函数,用于从谷歌学术中抓取文章元信息,并从用户提供的搜索页面中分析所有文章的相关信息。该文件使用了许多第三方库,如requests、arxiv、BeautifulSoup等。其中`get_meta_information`函数中还定义了一个名为`string_similar`的辅助函数,用于比较字符串相似度。 - -## [26/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\高级功能函数模板.py - -该程序文件是一个 Python 模块,包含一个名为“高阶功能模板函数”的函数。该函数接受多个参数,其中包括输入文本、GPT 模型参数、插件模型参数、聊天显示框、聊天历史等。 该函数的主要功能是根据输入文本,使用 GPT 模型生成一些问题,并等待用户回答这些问题(使用 Markdown 格式),然后将用户回答加入到聊天历史中,并更新聊天显示框。该函数还包含了一些异常处理和多线程的相关操作。该程序文件还引用了另一个 Python 模块中的两个函数,分别为“CatchException”和“update_ui”,并且还引用了一个名为“request_gpt_model_in_new_thread_with_ui_alive”的自定义函数。 - -## [27/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\request_llm\bridge_all.py - -这个文件是用来处理与LLM的交互的。包含两个函数,一个是 predict_no_ui_long_connection 用来处理长文本的输出,可以多线程调用;另一个是 predict 用来处理基础的对话功能。这个文件会导入其他文件中定义的方法进行调用,具体调用哪个方法取决于传入的参数。函数中还有一些装饰器和管理多线程的逻辑。 - -## [28/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\request_llm\bridge_chatglm.py - -这个程序文件实现了一个使用ChatGLM模型进行聊天的功能。具体实现过程是:首先进行初始化,然后使用GetGLMHandle类进行ChatGLM模型的加载和运行。predict_no_ui_long_connection函数用于多线程聊天,而predict函数用于单线程聊天,它们的不同之处在于前者不会更新UI界面,后者会。这个文件还导入了其他模块和库,例如transformers、time、importlib等,并使用了多进程Pipe。 - -## [29/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\request_llm\bridge_chatgpt.py - -这个程序文件是用于对话生成的,主要包含三个函数:predict、predict_no_ui、predict_no_ui_long_connection。其中,predict是用于普通对话的函数,具备完备的交互功能,但不具备多线程能力;predict_no_ui是高级实验性功能模块调用的函数,参数简单,可以多线程并行,方便实现复杂的功能逻辑;predict_no_ui_long_connection解决了predict_no_ui在处理长文档时容易断开连接的问题,同样支持多线程。程序中还包含一些常量和工具函数,用于整合信息,选择LLM模型,生成http请求,发送请求,接收响应等。它需要配置一个config文件,包含代理网址、API等敏感信息。 - -## [30/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\request_llm\bridge_tgui.py - -该程序文件实现了一个基于Websockets的文本生成服务和对话功能。其中,有三个函数:`run()`、`predict()`和`predict_no_ui_long_connection()`。`run()`函数用于连接到Websocket服务并生成文本结果;`predict()`函数用于将用户输入作为文本生成的输入,同时在UI上显示对话历史记录,并在不断更新UI的过程中不断更新生成的文本输出;`predict_no_ui_long_connection()`函数与`predict()`函数类似,但没有UI,并在一段时间内返回单个生成的文本。整个程序还引入了多个Python模块来完成相关功能,例如`asyncio`、`websockets`、`json`等等。 - -## 根据以上分析,对程序的整体功能和构架重新做出概括。然后用一张markdown表格整理每个文件的功能(包括check_proxy.py, colorful.py, config.py, config_private.py, core_functional.py, crazy_functional.py, main.py, theme.py, toolbox.py, crazy_functions\crazy_utils.py, crazy_functions\Latex全文润色.py, crazy_functions\Latex全文翻译.py, crazy_functions\__init__.py, crazy_functions\下载arxiv论文翻译摘要.py, crazy_functions\代码重写为全英文_多线程.py, crazy_functions\总结word文档.py)。 - -程序功能概括:该程序是一个聊天机器人,可以通过 Web 界面与用户进行交互。它包含了丰富的功能,如文本润色、翻译、代码重写、在线查找等,并且支持多线程处理。用户可以通过 Gradio 框架提供的 Web 界面进行交互,程序还提供了一些调试工具,如toolbox 模块,方便程序开发和调试。 - -下表概述了每个文件的功能: - -| 文件名 | 功能 | -| ----------------------------------------------------------- | ------------------------------------------------------------ | -| check_proxy.py | 检查代理是否可用 | -| colorful.py | 用于打印文本的字体颜色输出模块 | -| config.py | 用于程序中的各种设置,如并行线程数量和重试次数的限制等 | -| config_private.py | 配置API_KEY和代理信息的文件 | -| core_functional.py | 包含具体的文本处理功能的模块 | -| crazy_functional.py | 包括各种插件函数的模块,提供了多种文本处理功能 | -| main.py | 包含 Chatbot 机器人主程序的模块 | -| theme.py | 用于调节全局样式的模块 | -| toolbox.py | 包含工具函数和装饰器,用于聊天Bot的开发和调试 | -| crazy_functions\crazy_utils.py | 包含一些辅助函数,如文本裁剪和消息捕捉等 | -| crazy_functions\Latex全文润色.py | 对 Latex 项目进行润色处理的功能模块 | -| crazy_functions\Latex全文翻译.py | 对 Latex 项目进行翻译的功能模块 | -| crazy_functions\__init__.py | 定义一些奇特的数学函数等 | -| crazy_functions\下载arxiv论文翻译摘要.py | 下载 Arxiv 论文并翻译摘要的功能模块 | -| crazy_functions\代码重写为全英文_多线程.py | 将Python程序中所有中文转化为英文的功能模块 | -| crazy_functions\总结word文档.py | 解析 docx 和 doc 
格式的文件,生成文章片段的中英文概述的功能模块 | - -## 根据以上分析,对程序的整体功能和构架重新做出概括。然后用一张markdown表格整理每个文件的功能(包括check_proxy.py, colorful.py, config.py, config_private.py, core_functional.py, crazy_functional.py, main.py, theme.py, toolbox.py, crazy_functions\crazy_utils.py, crazy_functions\Latex全文润色.py, crazy_functions\Latex全文翻译.py, crazy_functions\__init__.py, crazy_functions\下载arxiv论文翻译摘要.py, crazy_functions\代码重写为全英文_多线程.py, crazy_functions\总结word文档.py, crazy_functions\批量Markdown翻译.py, crazy_functions\批量总结PDF文档.py, crazy_functions\批量总结PDF文档pdfminer.py, crazy_functions\批量翻译PDF文档_多线程.py, crazy_functions\理解PDF文档内容.py, crazy_functions\生成函数注释.py, crazy_functions\解析项目源代码.py, crazy_functions\询问多个大语言模型.py, crazy_functions\读文章写摘要.py, crazy_functions\谷歌检索小助手.py, crazy_functions\高级功能函数模板.py, request_llm\bridge_all.py, request_llm\bridge_chatglm.py, request_llm\bridge_chatgpt.py, request_llm\bridge_tgui.py)。 - -根据以上分析,整个程序是一个集成了多个有用工具和功能的文本处理和生成工具,提供了多种在不同场景下使用的功能,包括但不限于对话生成、文本摘要、PDF文件批量处理、代码翻译和实用工具等。主要的Python模块包括"toolbox.py"、"config.py"、"core_functional.py"和"crazy_functional.py"等,并且还使用了许多第三方库和模块实现相关功能。以下是每个程序文件的功能: - -| 文件名 | 文件功能 | -| --- | --- | -| check_proxy.py | 用于检查代理的正确性和可用性 | -| colorful.py | 包含不同预设置颜色的常量,并用于多种UI元素 | -| config.py | 用于全局配置的类 | -| config_private.py | 与config.py文件一起使用的另一个配置文件,用于更改私密信息 | -| core_functional.py | 包含一些TextFunctional类和基础功能函数 | -| crazy_functional.py | 包含大量高级功能函数和实验性的功能函数 | -| main.py | 程序的主入口,包含GUI主窗口和主要的UI管理功能 | -| theme.py | 包含一些预设置主题的颜色 | -| toolbox.py | 提供了一些有用的工具函数 | -| crazy_functions\crazy_utils.py | 包含一些用于实现高级功能的辅助函数 | -| crazy_functions\Latex全文润色.py | 实现了对LaTeX文件中全文的润色和格式化功能 | -| crazy_functions\Latex全文翻译.py | 实现了对LaTeX文件中的内容进行翻译的功能 | -| crazy_functions\_\_init\_\_.py | 用于导入crazy_functional.py中的功能函数 | -| crazy_functions\下载arxiv论文翻译摘要.py | 从Arxiv上下载论文并提取重要信息 | -| crazy_functions\代码重写为全英文_多线程.py | 针对中文Python文件,将其翻译为全英文 | -| crazy_functions\总结word文档.py | 提取Word文件的重要内容来生成摘要 | -| crazy_functions\批量Markdown翻译.py | 批量翻译Markdown文件 | -| crazy_functions\批量总结PDF文档.py | 批量从PDF文件中提取摘要 | -| crazy_functions\批量总结PDF文档pdfminer.py | 批量从PDF文件中提取摘要 | -| crazy_functions\批量翻译PDF文档_多线程.py | 批量翻译PDF文件 | -| crazy_functions\理解PDF文档内容.py | 批量分析PDF文件并提取摘要 | -| crazy_functions\生成函数注释.py | 自动生成Python文件中函数的注释 | -| crazy_functions\解析项目源代码.py | 解析并分析给定项目的源代码 | -| crazy_functions\询问多个大语言模型.py | 向多个大语言模型询问输入文本并进行处理 | -| crazy_functions\读文献写摘要.py | 根据用户输入读取文献内容并生成摘要 | -| crazy_functions\谷歌检索小助手.py | 利用谷歌学术检索用户提供的论文信息并提取相关信息 | -| crazy_functions\高级功能函数模板.py | 实现高级功能的模板函数 | -| request_llm\bridge_all.py | 处理与LLM的交互 | -| request_llm\bridge_chatglm.py | 使用ChatGLM模型进行聊天 | -| request_llm\bridge_chatgpt.py | 实现对话生成的各项功能 | -| request_llm\bridge_tgui.py | 在Websockets中与用户进行交互并生成文本输出 | - diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Bowmasters Full APK Download Enjoy the World-Famous Game with Bowmen on Your Android Device.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Bowmasters Full APK Download Enjoy the World-Famous Game with Bowmen on Your Android Device.md deleted file mode 100644 index b517575e57719ae2522c4140cfda411b2ad2e8c2..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Bowmasters Full APK Download Enjoy the World-Famous Game with Bowmen on Your Android Device.md +++ /dev/null @@ -1,137 +0,0 @@ - -

Bowmasters Full APK: A Fun and Addictive Physics-Based Shooter

-

If you are looking for a game that combines humor, action, and strategy, then you should try Bowmasters. This is a multiplayer game that involves aiming and shooting with bowmen. The game offers over 60 characters from different dimensions, 60+ weapons, and multiple game modes. The game also has an online multiplayer mode where you can compete with your friends.

-

bowmasters full apk


DOWNLOAD >>>>> https://gohhs.com/2uPvnQ



-

In this article, we will tell you everything you need to know about Bowmasters Full APK, which is the unlocked version of the game that gives you access to all the features and content without any ads or in-app purchases. We will also show you how to download and install it on your Android device, why you should play it, how to play it, and some tips and tricks to help you become a Bowmasters pro.

-

What is Bowmasters?

-

A brief introduction to the game and its features

-

Bowmasters is a physics-based shooter game developed by Playgendary. The game was released in 2016 for iOS and Android devices. The game has been downloaded over 50 million times on Google Play Store alone, and has received positive reviews from critics and players alike.

-

The game is inspired by the classic Worms series, where you have to aim and shoot at your enemies using various weapons. However, Bowmasters adds its own twist by featuring bowmen instead of worms, and by using ragdoll physics and hilarious fatalities. The game also has a colorful and cartoonish graphics style that adds to the fun factor.

-


-

Some of the features of Bowmasters are:

-
    -
  • 60+ insane characters from all dimensions absolutely for free
  • -
  • 60+ different weapons for total mayhem, awesome fatalities with ragdoll physics
  • -
  • Multiple game modes such as duels, tournaments, bird hunt, apple shoot, zombie mode, etc.
  • -
  • Online multiplayer where you can challenge your friends or random players
  • -
  • Endless rewards for your skills such as coins, gems, chests, characters, weapons, etc.
  • -
-

How to download and install the full APK version of Bowmasters

-

If you want to enjoy all the features and content of Bowmasters without any ads or in-app purchases, then you should download and install the full APK version of the game. This is a modified version of the game that gives you unlimited access to everything in the game.

-

To download and install the full APK version of Bowmasters, follow these steps:

-
    -
  1. Go to [1](https://apkcombo.com/bowmasters/com.playgendary.bowmasters/) and click on the "Download APK" button.
  2. -
  3. Wait for the download to finish and then open the file.
  4. -
  5. If prompted, enable the "Unknown Sources" option in your device settings.
  6. -
  7. Follow the installation instructions on the screen.
  8. -
  9. Launch the game and enjoy!
  10. -
-
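If you would rather sideload the file from a computer, the same installation can also be done over ADB once USB debugging is enabled on your phone. This is only a minimal sketch; bowmasters-full.apk is a placeholder for whatever name your downloaded file has:

adb devices                          # confirm the phone is detected
adb install -r bowmasters-full.apk   # -r reinstalls/updates if the game is already present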

Why Play Bowmasters Full APK?

-

The benefits of playing the full APK version of Bowmasters

-

Playing the full APK version of Bowmasters has many advantages over playing the original version of the game. Some of the benefits are:

-
    -
  • You can play with all the characters and weapons without having to unlock them or pay for them.
  • -
  • You can enjoy the game without any annoying ads or pop-ups that interrupt your gameplay.
  • -
  • You can get unlimited coins and gems that you can use to upgrade your characters and weapons.
  • -
  • You can access all the game modes and levels without any restrictions or limitations.
  • -
  • You can have more fun and challenge yourself with the online multiplayer mode where you can face other players from around the world.
  • -
-

The challenges and rewards of playing Bowmasters

-

Bowmasters is not just a simple game where you aim and shoot. It is also a game that tests your skills, strategy, and creativity. The game has many challenges and rewards that make it more interesting and addictive. Some of the challenges and rewards are:

-
    -
  • You have to adjust your angle and power according to the distance, wind, gravity, and obstacles.
  • -
  • You have to choose the right character and weapon for each situation and opponent.
  • -
  • You have to deal with different types of enemies such as zombies, birds, ninjas, pirates, etc.
  • -
  • You have to complete various quests and achievements that give you extra coins and gems.
  • -
  • You have to collect chests that contain random items such as characters, weapons, coins, gems, etc.
  • -
-

How to Play Bowmasters Full APK?

-

The basic gameplay mechanics and controls of Bowmasters

-

The gameplay mechanics and controls of Bowmasters are very simple and intuitive. You just need to tap and drag on the screen to aim and shoot. The game has a tutorial mode that explains the basics of the game. Here are some of the gameplay mechanics and controls of Bowmasters:

-
    -
  • To aim, tap and drag on your character. You will see a trajectory line that shows the direction and power of your shot.
  • -
  • To shoot, release your finger from the screen. You will see your character launch the weapon towards the enemy.
  • -
  • To switch characters or weapons, tap on the icons at the bottom of the screen. You can also swipe left or right to see more options.
  • -
  • To pause or resume the game, tap on the pause button at the top right corner of the screen. You can also access the settings, sound, music, and help menus from there.
  • -
-

The different game modes and characters of Bowmasters

-

Bowmasters has several game modes that offer different challenges and experiences. You can play solo or with friends in these game modes. Here are some of the game modes and characters of Bowmasters:

- - - - - - - -
| Game Mode | Description | Characters |
| --- | --- | --- |
| Duels | This is the main game mode where you have to defeat your opponent in a one-on-one battle. You can choose from three difficulty levels: easy, medium, or hard. You can also play against a friend on the same device or online. | You can use any character and weapon in this mode. Each character has a unique weapon and a special ability that can be activated by hitting a headshot or a critical hit. |
| Tournaments | This is a game mode where you have to compete in a series of duels against different opponents. You have to win each duel to advance to the next round. You can choose from four tournament types: classic, zombie, bird hunt, or apple shoot. | You can use any character and weapon in this mode. However, some tournament types have specific rules or restrictions such as using only arrows or shooting only birds or apples. |
| Bird Hunt | This is a game mode where you have to shoot as many birds as possible in a limited time. You can choose from three time limits: 30 seconds, 60 seconds, or 90 seconds. You can also play against a friend on the same device or online. | You can use any character and weapon in this mode. However, some weapons are more effective than others for shooting birds such as guns or rockets. |
| Apple Shoot | This is a game mode where you have to shoot an apple that is placed on your opponent's head without hitting them. You can choose from three distance levels: close, medium, or far. You can also play against a friend on the same device or online. | You can use any character and weapon in this mode. However, some weapons are more accurate than others for shooting apples such as bows or crossbows. |
| Zombie Mode | This is a game mode where you have to survive as long as possible against waves of zombies. You can choose from three difficulty levels: easy, medium, or hard. You can also play against a friend on the same device or online. | You can use any character and weapon in this mode. However, some weapons are more effective than others for killing zombies such as flamethrowers or chainsaws. |

The best tips and tricks to master Bowmasters

-

Bowmasters is a game that requires skill, strategy, and creativity. You have to learn how to aim and shoot with different weapons and characters, and how to adapt to different situations and opponents. Here are some of the best tips and tricks to master Bowmasters:

-
    -
  • Practice with different characters and weapons to find your favorite ones and learn their strengths and weaknesses.
  • -
  • Watch the wind indicator and adjust your angle and power accordingly. The wind can affect the trajectory of your shot, especially with lighter weapons.
  • -
  • Use the special abilities of your characters wisely. They can give you an edge over your opponent, but they also have a cooldown time.
  • -
  • Try to hit your opponent's head or other vital parts for more damage and a chance to activate a fatality. Fatalities are hilarious and satisfying animations that show how your opponent dies.
  • -
  • Collect coins and gems by playing the game, completing quests, opening chests, or watching ads. You can use them to unlock new characters and weapons, or to upgrade them.
  • -
-

Conclusion

-

A summary of the main points and a call to action

-

Bowmasters is a fun and addictive physics-based shooter game that you can play on your Android device. The game offers over 60 characters from different dimensions, 60+ weapons, and multiple game modes. The game also has an online multiplayer mode where you can compete with your friends or random players.

-

If you want to enjoy all the features and content of Bowmasters without any ads or in-app purchases, then you should download and install the full APK version of the game. This is a modified version of the game that gives you unlimited access to everything in the game.

-

To download and install the full APK version of Bowmasters, go to [1](https://apkcombo.com/bowmasters/com.playgendary.bowmasters/) and follow the instructions on the screen. You will be able to play with all the characters and weapons, get unlimited coins and gems, access all the game modes and levels, and have more fun and challenge yourself with the online multiplayer mode.

-

So what are you waiting for? Download Bowmasters Full APK now and start shooting!

-

FAQs

-

Five frequently asked questions and answers about Bowmasters Full APK

-
    -
  1. Is Bowmasters Full APK safe to download and install?
  2. -

    Yes, Bowmasters Full APK is safe to download and install. It is a modified version of the original game that does not contain any viruses or malware. However, you should always download it from a trusted source such as [1](https://apkcombo.com/bowmasters/com.playgendary.bowmasters/).

    -
  3. Do I need to root my device to play Bowmasters Full APK?
  4. -

    No, you do not need to root your device to play Bowmasters Full APK. You just need to enable the "Unknown Sources" option in your device settings to allow the installation of apps from outside the Google Play Store.

    -
  5. Can I play Bowmasters Full APK offline?
  6. -

    Yes, you can play Bowmasters Full APK offline. You can enjoy most of the game modes such as duels, tournaments, bird hunt, apple shoot, zombie mode, etc. without an internet connection. However, you will need an internet connection to play the online multiplayer mode or to watch ads for extra rewards.

    -
  7. Can I sync my progress between devices?
  8. -

    Yes, you can sync your progress between devices by using your Facebook account. You can log in with your Facebook account in the game settings and then use it to save or load your progress on different devices.

    -
  9. How can I contact the developers of Bowmasters?
  10. -

    You can contact the developers of Bowmasters by using their email address: support@playgendary.com. You can also visit their website: https://playgendary.com/ or follow them on their social media accounts: Facebook: https://www.facebook.com/playgendary/ Twitter: https://twitter.com/playgendary Instagram: https://www.instagram.com/playgendary/

    -
-

I hope you found this article helpful and informative. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading and happy shooting!

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Apktool X for Free and Modify Any Apk File.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Apktool X for Free and Modify Any Apk File.md deleted file mode 100644 index 430f028db0894e27fb33b44abd164cf0d0c3d9d0..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Apktool X for Free and Modify Any Apk File.md +++ /dev/null @@ -1,103 +0,0 @@ - -

Apktool X: A Powerful Tool for Reverse Engineering Android Apps

-

If you are an Android enthusiast who likes to tinker with apps, or a developer who wants to learn more about how apps work, then you might have heard of Apktool, a popular tool for reverse engineering Android application binary (APK) files. But did you know that there is also an Android version of this tool called Apktool X that lets you decompile and modify APK files on your phone? In this article, we will tell you what Apktool X is, how to download and install it on your Android device, how to use it to decompile and modify APK files, and what are the benefits of using it.

-

apktool x download


Download File - https://gohhs.com/2uPuDy



-

What is Apktool X and what can it do?

-

Apktool X is an Android port of the desktop tool Apktool, which was developed by XDA Recognized Developer iBotPeaches. Apktool allows you to reverse engineer APK files, which are the executable files that contain the code and resources of Android apps. By reverse engineering APK files, you can decode them into smali code (an intermediate language for Android bytecode) and resource files (such as images, layouts, strings, etc.), which can then be modified and rebuilt into new APK files.

-

Apktool X can do everything that the desktop version of Apktool can do, but on your Android device. You don't need a computer or the source code of the app to decompile and modify APK files. You just need a rooted Android device and a terminal app called Termux, which allows you to run Linux commands on your phone.

How to download and install Apktool X on your Android device?

-

To use Apktool X on your Android device, you need to have a rooted device and the Termux app installed. Rooting is the process of gaining full access to the system of your device, which allows you to run commands and apps that are normally restricted. Termux is an app that provides a terminal emulator and a Linux environment on your device, which enables you to run various tools and scripts. If you don't have a rooted device or the Termux app, you can follow these guides to get them:

- -

Once you have a rooted device and the Termux app, you can download Apktool X from one of these sources:

-
    -
  • Android File Host: This is the official source of Apktool X, where you can find the latest version and previous versions of the tool.
  • -
  • GitHub: This is the source code repository of Apktool X, where you can find the latest commits and releases of the tool.
  • -
  • Ddooo: This is a Chinese website that hosts various Android tools, including Apktool X.
  • -
-

The file that you need to download is called apktoolx.zip, which contains the executable file apktoolx and some other files. You can download it directly to your device or transfer it from your computer via USB or Wi-Fi.

-


-

After downloading the file, you need to install Apktool X by running a simple command in Termux. To do that, follow these steps:

-
    -
  1. Open Termux and grant it root access by typing su and pressing enter. You may need to confirm the request on your device.
  2. -
  3. Navigate to the directory where you downloaded or transferred the apktoolx.zip file by typing cd /path/to/directory and pressing enter. For example, if you downloaded the file to your Downloads folder, type cd /sdcard/Download.
  4. -
  5. Unzip the file by typing unzip apktoolx.zip -d /data/data/com.termux/files/usr/bin/ and pressing enter. This will extract the files to the bin directory of Termux, where they can be executed.
  6. -
  7. Make the apktoolx file executable by typing chmod +x /data/data/com.termux/files/usr/bin/apktoolx and pressing enter. This will give it permission to run as a program.
  8. -
  9. You have successfully installed Apktool X on your device. You can verify it by typing apktoolx -version and pressing enter. You should see something like this:
  10. -
-
$ apktoolx -version
ApktoolX v1.0
Apktool v2.5.0
Aapt v0.2-5164479
Aapt2 v4.1.0-6503028
Smali v2.4.0
Baksmali v2.4.0
Zipalign v29.0.3-5806383
Signapk v29.0.3-5806383
Zipsigner v4.0
Busybox v1.31.1
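For quick reference, the installation steps above boil down to a handful of Termux commands. This is a minimal sketch that assumes the zip was saved to /sdcard/Download; adjust the path to wherever you actually stored apktoolx.zip:

su                                                          # grant root access
cd /sdcard/Download                                         # go to the folder holding apktoolx.zip
unzip apktoolx.zip -d /data/data/com.termux/files/usr/bin/  # extract into Termux's bin directory
chmod +x /data/data/com.termux/files/usr/bin/apktoolx       # make it executable
apktoolx -version                                           # verify the installation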
-

How to use Apktool X to decompile and modify APK files?

-

To use Apktool X to decompile and modify APK files, you need to have an APK file that you want to work with. You can get APK files from various sources such as Google Play Store, APKMirror, or other websites that host Android apps. You can also extract APK files from your installed apps using a file manager app or a backup app.

-

Once you have an APK file, you can launch Apktool X from the app drawer or from Termux. Apktool X has a simple user interface with four options: Select APK File, Select apktool version, Select aapt version, and Select flags.

- - - - - -
| Option | Description |
| --- | --- |
| Select APK File | This option allows you to browse and select the APK file that you want to decompile or rebuild. |
| Select apktool version | This option allows you to choose which version of Apktool you want to use. Apktool is the main tool that decompiles and rebuilds APK files. Different versions of Apktool may have different features and compatibility with different APK files. You can choose from Apktool v2.5.0, v2.4.1, v2.4.0, v2.3.4, v2.3.3, or v2.2.2. |
| Select aapt version | This option allows you to choose which version of aapt or aapt2 you want to use. Aapt and aapt2 are tools that handle the resources of APK files, such as images, layouts, strings, etc. Aapt is the older version and aapt2 is the newer version that supports more features and formats. You can choose from aapt v0.2-5164479, aapt2 v4.1.0-6503028, or aapt2 v3.6.0-6040484. |
| Select flags | This option allows you to choose which flags you want to use for decompiling and rebuilding APK files. Flags are additional options that customize the behavior of Apktool X, such as whether to decode the resources, whether to keep the original signature, whether to use verbose output, etc. You can choose from -r, -s, -d, -f, -v, -a, -b, -c, -e, or -z. |
-

After selecting the options, you can press the DECODE button to decompile the APK file into smali code and resource files, or press the BUILD button to rebuild the APK file from the modified smali code and resource files.

-

The output of Apktool X will be shown in Termux, where you can see the progress and status of the operation. You can also see the log file in /sdcard/ApktoolX/log.txt for more details.

-

The decompiled APK file will be stored in /sdcard/ApktoolX/Decoded/, where you can find the smali code in the smali folder and the resource files in the res folder. You can edit these files with a text editor on your device, such as QuickEdit or Turbo Editor. You can also use other tools such as APK Editor Pro or MT Manager to modify the APK file.

-

The rebuilt APK file will be stored in /sdcard/ApktoolX/Rebuilt/, where you can find the new APK file with the name [original name]-signed.apk. You can install this file on your device or share it with others.
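For comparison, the same decode, edit and rebuild cycle on the desktop Apktool that Apktool X wraps looks roughly like the sketch below. This is not a description of Apktool X itself: app.apk, app_src and my.keystore are placeholder names, and unlike Apktool X the desktop tool does not sign the rebuilt APK for you, so a separate signing step is shown:

apktool d app.apk -o app_src            # decompile into smali code and res/ files
apktool d -r app.apk -o app_src_raw     # -r skips decoding resources (apktool's --no-res; presumably what the -r flag above maps to)
apktool b app_src -o app-rebuilt.apk    # rebuild after editing the smali/res files
apksigner sign --ks my.keystore app-rebuilt.apk   # sign it (apksigner ships with the Android SDK build-tools)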

-

Benefits of using Apktool X

-

Apktool X is a convenient and fast tool that allows you to reverse engineer APK files on the go with your Android device. You don't need a computer or the source code of the app to decompile and modify APK files. You just need a rooted device and the Termux app.

-

Apktool X is a powerful and versatile tool that supports various apktool and aapt versions, as well as different flags for customization. You can choose the best combination of options for your needs and preferences.

-

Apktool X is a useful and fun tool that enables you to explore, modify, and improve your favorite apps or discover new ones. You can use Apktool X for various purposes such as porting, theming, translating, debugging, and analyzing apps.

-

Conclusion

-

Apktool X is an amazing tool for reverse engineering Android apps that you should try if you are interested in modding or learning more about how apps work. Apktool X is easy to download, install, and use, but you should always be careful and respectful when modifying apps that are not yours.

-

Apktool X can open up a whole new world of possibilities for your Android device and your creativity. You can decompile and modify APK files with ease and fun using Apktool X.

-

Frequently Asked Questions

-
    -
  1. What is reverse engineering?
  2. -

    Reverse engineering is the process of analyzing how something works by taking it apart and examining its components and structure.

    -
  3. What is an APK file?
  4. -

    An APK file is an Android application binary file that contains the code and resources of an Android app. You can install APK files on your device or extract them from your installed apps.

    -
  5. What is smali code?
  6. -

    Smali code is an intermediate language for Android bytecode, which is the executable code that runs on the Android virtual machine. Smali code is more readable and editable than bytecode, but less than source code.

    -
  7. What is aapt and aapt2?
  8. -

    Aapt and aapt2 are tools that handle the resources of APK files, such as images, layouts, strings, etc. Aapt is the older version and aapt2 is the newer version that supports more features and formats.

    -
  9. What are flags?
  10. -

    Flags are additional options that customize the behavior of Apktool X, such as whether to decode the resources, whether to keep the original signature, whether to use verbose output, etc. You can choose from -r, -s, -d, -f, -v, -a, -b, -c, -e, or -z.

    -

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Goat Simulator MOD APK v2.16.2 and Enjoy All Goats and Maps for Free.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Goat Simulator MOD APK v2.16.2 and Enjoy All Goats and Maps for Free.md deleted file mode 100644 index 385b978bf1a3dea06a0915e02b233bb3798ed3e0..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Goat Simulator MOD APK v2.16.2 and Enjoy All Goats and Maps for Free.md +++ /dev/null @@ -1,120 +0,0 @@ -
-

Goat Simulator Unlock All Goats Mod APK: How to Download and Install

-

Have you ever dreamed of being a goat and causing chaos in a suburban town? If so, then you might want to check out Goat Simulator, a hilarious and absurd game that lets you do just that. And if you want to unlock all the goats and maps in the game, then you might want to try Goat Simulator Unlock All Goats Mod APK, a modified version of the game that gives you unlimited access to everything. In this article, we will tell you what Goat Simulator is, what Goat Simulator Unlock All Goats Mod APK is, how to download and install it, and some tips and tricks for playing the game.

-

goat simulator unlock all goats mod apk


Download File: https://gohhs.com/2uPubj



-

What is Goat Simulator?

-

Goat Simulator is a third-person perspective action game developed and published by Coffee Stain Studios. It was released for Microsoft Windows in April 2014, and ports for Linux, OS X, Android, iOS, Xbox 360, Xbox One, PlayStation 3, PlayStation 4, and Nintendo Switch were released later. The game is a parody of simulation games, where the player controls a goat and can explore an open-world map, jump, run, bash things, lick objects, and cause as much destruction as possible. The game has no larger goals or objectives, except for getting points for wrecking stuff and completing achievements. The game is also full of bugs and glitches that are intentionally left in for comedic effect.

-

Game features

-

Some of the features of Goat Simulator are:

-
    -
  • You can be a goat.
  • -
  • You can get points for wrecking stuff and brag to your friends that you're the alpha goat.
  • -
  • You can use Steam Workshop support to make your own goats, levels, missions, game modes, and more.
  • -
  • You can experience millions of bugs and hilarious in-game physics.
  • -
  • You can discover secrets, easter eggs, references, and hidden areas in the game world.
  • -
-

Game review

-

Goat Simulator is not much of a game, but it's a hell of a good time. The game is a clever interactive spoof of all the broken game physics we've seen in open worlds. It's full of great physics-powered slapstick humor and unexpected surprises in every corner of its seemingly peaceful small-town map. The game is designed with experimentation in mind, so there is a lot to find and do. The game is also very easy to play, with simple controls and mechanics. The game has received mixed reviews from critics and players alike. Some praised the game for providing a humorous sandbox interface to experiment with, while others criticized the game's reliance on social media to popularize what was otherwise a simple and buggy product. However, most agree that the game is fun and entertaining for what it is: a joke.

-


-

What is Goat Simulator Unlock All Goats Mod APK?

-

Goat Simulator Unlock All Goats Mod APK is a modified version of Goat Simulator that gives you unlimited money, no ads, free shopping, and all maps and goats unlocked for free. With this mod APK, you can enjoy every feature of Goat Simulator without limitations or restrictions: play as any goat you want, from a tall goat to a space goat to a devil goat, access every map, from the original town to the zombie apocalypse map to outer space, buy anything from the shop without spending real money, and play without annoying ads or pop-ups. Goat Simulator Unlock All Goats Mod APK is the ultimate way to experience Goat Simulator to the fullest.

-

Mod features

-

Some of the features of Goat Simulator Unlock All Goats Mod APK are:

-
    -
  • Unlimited money
  • -
  • No ads
  • -
  • Free shopping
  • -
  • All maps and goats unlocked
  • -
  • Easy to download and install
  • -
  • Compatible with most Android devices
  • -
-

Mod review

-

Goat Simulator Unlock All Goats Mod APK is a great mod for Goat Simulator fans who want to have more fun and freedom in the game. The mod gives you access to everything in the game without any hassle or cost. You can play as any goat you like, explore any map you want, and buy anything you need. The mod also removes the ads that might interrupt your gameplay. The mod is very easy to download and install, and it works well on most Android devices. The mod is a must-have for anyone who loves Goat Simulator and wants to experience it in a new way.

-

How to download and install Goat Simulator Unlock All Goats Mod APK?

-

If you want to download and install Goat Simulator Unlock All Goats Mod APK, you need to follow these simple steps:

-

Step 1: Enable unknown sources

-

Before you can install the mod APK file, you need to enable unknown sources on your device. This will allow you to install apps from sources other than the Google Play Store. To do this, go to Settings > Security > Unknown Sources and toggle it on.

-

Step 2: Download the mod APK file

-

Next, you need to download the mod APK file from a reliable source. You can use this link to download the latest version of Goat Simulator Unlock All Goats Mod APK. The file size is about 16 MB, so make sure you have enough space on your device.

-

Step 3: Install the mod APK file

-

Once you have downloaded the mod APK file, you need to install it on your device. To do this, locate the file in your file manager and tap on it. You will see a prompt asking you to confirm the installation. Tap on Install and wait for the process to finish.

-

Step 4: Launch the game and enjoy

-

After the installation is complete, you can launch the game from your app drawer or home screen. You will see a new icon with a goat and a lock symbol. Tap on it and start playing Goat Simulator with all the goats and maps unlocked.

-

Tips and tricks for Goat Simulator

-

Now that you have Goat Simulator Unlock All Goats Mod APK installed, you might want to know some tips and tricks for playing the game. Here are some of them:

-

Become an evil devil goat

-

If you want to become an evil devil goat, you need to find a pentagram in the town map. It is located near a house with a red roof and a pool. Once you find it, bring five humans or animals to the pentagram and place them on it. You will see a red flash and hear a demonic voice. Congratulations, you are now an evil devil goat with horns, wings, and fire powers.

-

Become the king of goats

-

If you want to become the king of goats, you need to find a throne in the town map. It is located on top of a hill near a windmill. Once you find it, climb up the hill and sit on the throne. You will see a golden crown appear on your head and hear a majestic music. Congratulations, you are now the king of goats with royal attire and authority.

-

Find the jet pack

-

If you want to find the jet pack, you need to go to the construction site in the town map. It is located near a crane and a pile of wood. Once you get there, look for a blue container with a yellow sign that says "Testa". Inside the container, you will find a jet pack that you can wear by licking it. Press R to activate it and fly around with it.

-

Take a hang glider tour

-

If you want to take a hang glider tour, you need to go to the amusement park in the town map. It is located near a ferris wheel and a roller coaster. Once you get there, look for a hang glider that is flying around in circles above the park. Jump on it by using your tongue or headbutt and enjoy the view from above. Be careful not to fall off or crash into anything.

-

Let aliens take you away

-

If you want to let aliens take you away, you need to go to the crop circle in the town map. It is located near a farm with a barn and a silo. Once you get there, stand in the middle of the crop circle and wait for a few seconds. You will see a UFO appear and beam you up with a green light. Congratulations, you have been abducted by aliens and taken to outer space.

-

Conclusion

-

Goat Simulator is a hilarious and absurd game that lets you be a goat and cause chaos in a suburban town. It is a parody of simulation games, where the game physics are intentionally broken and buggy for comedic effect. The game has no larger goals or objectives, except for getting points for wrecking stuff and completing achievements. The game is also full of secrets, easter eggs, references, and hidden areas to discover. Goat Simulator Unlock All Goats Mod APK is a modified version of the game that gives you unlimited money, no ads, free shopping, all maps and goats unlocked for free. With this mod APK, you can enjoy all the features of Goat Simulator without any limitations or restrictions. You can play as any goat you want, explore any map you want, and buy anything you want. You can also download and install the mod APK easily by following the steps we provided. Goat Simulator Unlock All Goats Mod APK is the ultimate way to experience Goat Simulator to the fullest.

-

FAQs

-

Here are some frequently asked questions about Goat Simulator Unlock All Goats Mod APK:

-
    -
  • Q: Is Goat Simulator Unlock All Goats Mod APK safe to use?
  • -
  • A: Yes, Goat Simulator Unlock All Goats Mod APK is safe to use. It does not contain any viruses or malware that might harm your device or data. However, you should always download the mod APK from a trusted source and enable unknown sources on your device before installing it.
  • -
  • Q: Does Goat Simulator Unlock All Goats Mod APK require root access?
  • -
  • A: No, Goat Simulator Unlock All Goats Mod APK does not require root access. You can install and play the mod APK on any Android device without rooting it.
  • -
  • Q: Can I play Goat Simulator Unlock All Goats Mod APK online with other players?
  • -
  • A: No, Goat Simulator Unlock All Goats Mod APK does not support online multiplayer mode. You can only play the mod APK offline on your device.
  • -
  • Q: Can I update Goat Simulator Unlock All Goats Mod APK to the latest version?
  • -
  • A: Yes, you can update Goat Simulator Unlock All Goats Mod APK to the latest version by downloading and installing the new mod APK file from the same source. However, you might lose your progress and data if you do so. Therefore, it is recommended to back up your data before updating the mod APK.
  • -
  • Q: Can I uninstall Goat Simulator Unlock All Goats Mod APK if I don't like it?
  • -
  • A: Yes, you can uninstall Goat Simulator Unlock All Goats Mod APK if you don't like it. You can simply delete the mod APK file from your device or go to Settings > Apps > Goat Simulator > Uninstall.
  • -

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download JOOX Mod APK and Get Unlimited Skips Downloads and More.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download JOOX Mod APK and Get Unlimited Skips Downloads and More.md deleted file mode 100644 index 84e82454090b5015820525c1d6531793d0293616..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download JOOX Mod APK and Get Unlimited Skips Downloads and More.md +++ /dev/null @@ -1,93 +0,0 @@ - -

What is JOOX Music and why you should download it

-

If you are a music lover who enjoys listening to songs from different genres, languages and countries, then you should definitely check out JOOX Music. JOOX Music is a free music streaming app that offers more than 40 million songs from all over the world. You can enjoy a social music experience with karaoke, live video group chat rooms, and trending short videos. You can also access over 50 radio stations, exclusive music playlists, lyrics for all your favourite hits, personalised music recommendations and more. JOOX Music is available on mobile devices (iOS and Android), desktop (Windows and Mac), joox.com, Android TV and Google Nest in Hong Kong, Thailand, Malaysia, Indonesia and Myanmar.

-

How to download JOOX Music app on your device

-

Downloading JOOX Music app on your device is very easy and fast. Here are the steps to follow:

-

download apk mod joox


Download Ziphttps://gohhs.com/2uPtxE



-
    -
  • For Android devices, go to Google Play Store and search for JOOX Music. Tap on the install button and wait for the app to be downloaded and installed on your device. Alternatively, you can scan the QR code on the official website or use this link to download the app directly.
  • -
  • For iOS devices, go to App Store and search for JOOX Music. Tap on the get button and wait for the app to be downloaded and installed on your device. Alternatively, you can scan the QR code on the official website or use this link to download the app directly.
  • -
  • For desktop devices (Windows and Mac), go to the official website and click on the download button for your operating system. Follow the instructions to install the app on your computer.
  • -
  • For Android TV devices, go to Google Play Store on your TV and search for JOOX Music. Tap on the install button and wait for the app to be downloaded and installed on your TV.
  • -
  • For Google Nest devices, go to Google Home app on your phone and tap on the + icon. Select Set up device and then Works with Google. Search for JOOX Music and link your account. Then you can use voice commands to play music from JOOX Music on your Google Nest device.
  • -
-

What is an APK mod and why you should use it

-

An APK mod is a version of an original app that someone has altered to add or remove features. For example, an APK mod of JOOX Music may offer unlimited skips, downloads, VIP access, an ad-free experience and more. These features are usually not available in the original app or require a subscription fee.

-

The main reason you may want to use an APK mod of JOOX Music is to enjoy more features and benefits without paying anything. You can listen to any song you want, download songs for offline listening, access exclusive content, sing karaoke without interruptions and more. You can also save data and battery by avoiding ads and unnecessary updates.

-

How to download APK mod JOOX Music on your device

-

If you are interested in downloading APK mod JOOX Music on your device, here are the steps to follow (a short sketch for driving the same install from a computer appears after the list):

-
    -
  1. Find a reliable source that offers APK mod JOOX Music. You can search online for reviews, ratings, feedback and comments from other users who have tried the mod. Some of the popular sources are APKPure, APKMirror, and APKCombo. Make sure you download the latest version of the mod that is compatible with your device.
  2. -
  3. Before you install the APK mod JOOX Music, you need to enable the installation of apps from unknown sources on your device. To do this, go to Settings > Security > Unknown sources and toggle it on (on newer Android versions this is a per-app permission, usually under Settings > Apps > Special app access > Install unknown apps). You may also need to disable Play Protect or any antivirus app that may block the installation.
  4. -
  5. Locate the downloaded APK file on your device and tap on it to start the installation. Follow the instructions on the screen and wait for the installation to complete.
  6. -
  7. Launch the APK mod JOOX Music app and enjoy the enhanced features and benefits.
  8. -
-
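
If you would rather drive the same install from a computer instead of tapping through the installer on the phone, you can do it over USB with `adb`. The snippet below is only a minimal sketch, not part of the mod or the official app: it assumes the Android platform tools are installed, USB debugging is enabled on the phone, and the file name `joox_music_mod.apk` stands in for whatever file you actually downloaded.

```python
import subprocess
from pathlib import Path

# Hypothetical file name -- replace it with the APK you actually downloaded.
apk = Path("joox_music_mod.apk")

if not apk.exists():
    raise SystemExit(f"APK not found: {apk}")

# `adb install -r` pushes and installs the package over USB,
# replacing any previously installed copy.
result = subprocess.run(
    ["adb", "install", "-r", str(apk)],
    capture_output=True,
    text=True,
)
print(result.stdout or result.stderr)
```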

The risks and precautions of using APK mod JOOX Music

-

While using APK mod JOOX Music may seem tempting, you should also be aware of the potential risks and precautions of doing so. Here are some of them:

-
    -
  • Using APK mod JOOX Music may violate the terms and conditions of JOOX Music and result in your account being banned or suspended. You may also face legal actions from JOOX Music or its partners for infringing their intellectual property rights.
  • -
  • Using APK mod JOOX Music may expose your device to malware, viruses, spyware, adware and other harmful software that may steal your personal information, damage your device or compromise your security.
  • -
  • Using APK mod JOOX Music may affect the performance, stability and functionality of your device or the original app. You may experience crashes, glitches, errors, bugs or compatibility issues that may ruin your music experience.
  • -
  • Using APK mod JOOX Music may not guarantee you the same quality, quantity and variety of music as the original app. You may miss out on some features, updates, content or services that are only available on the official app.
  • -
-

To avoid these risks, you should always use the original app from the official source and respect the rights and policies of JOOX Music and its partners. You should also scan your device regularly for any malware or viruses and keep your device updated with the latest security patches.

-

The best alternatives to APK mod JOOX Music

-

If you are looking for some other music streaming apps that offer similar or better features than APK mod JOOX Music, here are some of the best alternatives that you can try:

| App | Features | Price |
| --- | --- | --- |
| Spotify | Over 70 million songs from various genres, artists and countries; personalised music recommendations based on your taste; offline listening with downloads; ad-free experience with premium subscription; podcasts, videos, lyrics and social features | Free with ads and limited skips; Premium: $9.99/month for individual plan; Family: $14.99/month for up to 6 accounts; Student: $4.99/month with Hulu and Showtime access; Duo: $12.99/month for two accounts |
| Apple Music | Over 75 million songs from various genres, artists and countries; personalised music recommendations based on your taste; offline listening with downloads; ad-free experience with subscription; live radio stations, podcasts, lyrics and social features | Free trial for 3 months; Individual: $9.99/month; Family: $14.99/month for up to 6 accounts; Student: $4.99/month with Apple TV+ access |
| YouTube Music | Over 60 million songs from various genres, artists and countries; personalised music recommendations based on your taste; offline listening with downloads; ad-free experience with premium subscription; videos, lyrics and social features | Free with ads and limited skips; Premium: $9.99/month for individual plan; Family: $14.99/month for up to 6 accounts; Student: $4.99/month |
-

Conclusion

-

In conclusion, JOOX Music is a great music streaming app that offers a lot of features and benefits for music lovers. If you want to enjoy more features and benefits without paying anything, you may be tempted to use an APK mod of JOOX Music. However, using an APK mod of JOOX Music may expose you to risks such as account bans, legal issues, malware, viruses, performance issues and missing features. Therefore, you should always use the original app from the official source and respect the rights and policies of JOOX Music and its partners. If you are looking for other music streaming apps that offer similar or better features, you can try Spotify, Apple Music or YouTube Music. They are all popular, reliable and high-quality music streaming apps that will satisfy your music needs. Thank you for reading this article; I hope you found it helpful and informative.

-

FAQs

-

Here are some of the frequently asked questions about APK mod JOOX Music:

-

download joox mod apk vip unlocked
-download joox music mod apk latest version
-download joox mod apk unlimited coins
-download joox mod apk offline
-download joox mod apk no ads
-download joox mod apk 2023
-download joox mod apk android 1
-download joox mod apk for pc
-download joox mod apk free karaoke
-download joox mod apk terbaru 2022
-download joox vip mod apk 2022
-download joox music mod apk premium
-download joox mod apk full crack
-download joox mod apk tanpa root
-download joox mod apk anti banned
-download joox mod apk unlimited skips
-download joox mod apk revdl
-download joox mod apk for ios
-download joox mod apk free vip forever
-download joox mod apk update 2022
-download joox vip mod apk 2023
-download joox music mod apk unlocked all features
-download joox mod apk pro
-download joox mod apk versi lama
-download joox mod apk gratis selamanya
-download joox vip mod apk gratis tanpa bayar
-download joox music mod apk no root
-download joox mod apk hack
-download joox mod apk rexdl
-download joox mod apk for iphone
-download joox vip mod apk free trial
-download joox music mod apk 2022
-download joox mod apk plus
-download joox mod apk original
-download joox vip mod apk terbaru 2023
-download joox music mod apk unlimited money
-download joox vip premium mod apk 2022 free lifetime access
-download joox music pro premium vip unlocked cracked full version latest update 2022
-how to install and use the latest version of the JOOX VIP Mod Apk[^1^]

-
    -
  1. What is the difference between JOOX Music and APK mod JOOX Music?
    JOOX Music is the original app that offers free music streaming with ads and limited features. APK mod JOOX Music is a modified version of the original app that offers more features and benefits without ads or subscription fees.
  2. -
  3. Is APK mod JOOX Music safe to use?
    APK mod JOOX Music may not be safe to use as it may contain malware, viruses, spyware or other harmful software that may damage your device or compromise your security. It may also violate the terms and conditions of JOOX Music and result in your account being banned or suspended.
  4. -
  5. How can I update APK mod JOOX Music?
    You can update APK mod JOOX Music by downloading the latest version of the mod from a reliable source and installing it on your device. However, you may lose some of the features or benefits of the mod when you update it.
  6. -
  7. Can I use APK mod JOOX Music on multiple devices?
    You can use APK mod JOOX Music on multiple devices as long as they are compatible with the mod and have enough storage space. However, you may encounter some issues or errors when you use the same account on different devices.
  8. -
  9. Can I use APK mod JOOX Music offline?
    You can use APK mod JOOX Music offline if you have downloaded the songs that you want to listen to. However, you may not be able to access some of the features or content that require an internet connection.
  10. -

401be4b1e0
-
-
\ No newline at end of file diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Truck Simulator Ultimate APK and Drive Realistic Trucks.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Truck Simulator Ultimate APK and Drive Realistic Trucks.md deleted file mode 100644 index cb34f1924987661d4bb47a2b2a0757596e899b6e..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Truck Simulator Ultimate APK and Drive Realistic Trucks.md +++ /dev/null @@ -1,137 +0,0 @@ -
-

Gamedva Truck Simulator Ultimate APK: A Review

-

If you are a fan of driving simulation games, you might have heard of Gamedva Truck Simulator Ultimate APK. This is a popular game that lets you experience the life of a truck driver in a realistic and immersive way. You can transport various cargoes across different countries, manage your own trucking company, customize your trucks, and compete with other players online. In this article, we will review this game and give you some tips and tricks on how to play it.

-

gamedva truck simulator ultimate apk


Downloadhttps://gohhs.com/2uPmVo



-

What is Gamedva Truck Simulator Ultimate APK?

-

A brief introduction to the game and its features

-

Gamedva Truck Simulator Ultimate APK is a driving simulation game developed by Zuuks Games, a Turkish company that specializes in this genre. The game is available for Android devices and can be downloaded for free from [13](https://play.google.com/store/apps/details?id=com.zuuks.truck.simulator.ultimate) or other third-party sources. The game has over 10 million downloads and a rating of 4.5 stars on Google Play Store.

-

The game features over 32 amazing trucks, from officially licensed Mercedes-Benz models to American and European rigs. You can transport a wide variety of cargo in over 100 cities across the world, such as food, cars, fuel, office supplies, theme park materials, and more. You can also take part in auctions on the freight market and earn higher profits.

-

As you play the game, you can also build your own truck fleet, hire employees, design your offices, and expand your business. You can also upgrade your trucks with lamps, bumpers, horns, cockpit lights, and many more modification options. The game has realistic graphics, physics, weather, traffic, toll roads, radio stations, and sound effects that make you feel like driving a real truck.

-

gamedva truck simulator ultimate mod apk
-gamedva truck simulator ultimate download
-gamedva truck simulator ultimate android
-gamedva truck simulator ultimate free
-gamedva truck simulator ultimate gameplay
-gamedva truck simulator ultimate review
-gamedva truck simulator ultimate hack
-gamedva truck simulator ultimate cheats
-gamedva truck simulator ultimate tips
-gamedva truck simulator ultimate guide
-gamedva truck simulator ultimate online
-gamedva truck simulator ultimate multiplayer
-gamedva truck simulator ultimate latest version
-gamedva truck simulator ultimate update
-gamedva truck simulator ultimate features
-gamedva truck simulator ultimate best trucks
-gamedva truck simulator ultimate realistic
-gamedva truck simulator ultimate graphics
-gamedva truck simulator ultimate controls
-gamedva truck simulator ultimate settings
-gamedva truck simulator ultimate missions
-gamedva truck simulator ultimate routes
-gamedva truck simulator ultimate maps
-gamedva truck simulator ultimate europe
-gamedva truck simulator ultimate america
-gamedva truck simulator ultimate mercedes-benz
-gamedva truck simulator ultimate zuuks games
-gamedva truck simulator ultimate bus simulator
-gamedva truck simulator ultimate comparison
-gamedva truck simulator ultimate ranking
-gamedva truck simulator ultimate rating
-gamedva truck simulator ultimate feedback
-gamedva truck simulator ultimate comments
-gamedva truck simulator ultimate forum
-gamedva truck simulator ultimate community
-gamedva truck simulator ultimate support
-gamedva truck simulator ultimate help
-gamedva truck simulator ultimate faq
-gamedva truck simulator ultimate wiki
-gamedva truck simulator ultimate news
-gamedva truck simulator ultimate blog
-gamedva truck simulator ultimate video
-gamedva truck simulator ultimate trailer
-gamedva truck simulator ultimate youtube
-gamedva truck simulator ultimate instagram
-gamedva truck simulator ultimate facebook
-gamedva truck simulator ultimate twitter
-gamedva truck simulator ultimate reddit
-gamedva truck simulator ultimate quora

-

How to download and install the game on your device

-

To download and install Gamedva Truck Simulator Ultimate APK on your device, you need to follow these steps (a quick file-integrity check you can run before installing is sketched after the list):

-
    -
  1. Go to [13](https://play.google.com/store/apps/details?id=com.zuuks.truck.simulator.ultimate) or any other trusted source that provides the APK file of the game.
  2. -
  3. Download the APK file to your device.
  4. -
  5. Go to your device's settings and enable the option to install apps from unknown sources.
  6. -
  7. Locate the downloaded APK file and tap on it to install it.
  8. -
  9. Launch the game and enjoy!
  10. -
-
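
One extra precaution you can take between downloading and installing is to hash the file and compare the result with a checksum published by the source, if one is provided. This is only a minimal sketch and not an official step; the file name `truck_simulator_ultimate.apk` is an assumption for illustration.

```python
import hashlib
from pathlib import Path

# Hypothetical file name -- use the APK you actually downloaded.
apk = Path("truck_simulator_ultimate.apk")

sha256 = hashlib.sha256()
with apk.open("rb") as f:
    # Read in 1 MiB chunks so large APKs do not have to fit in memory.
    for chunk in iter(lambda: f.read(1024 * 1024), b""):
        sha256.update(chunk)

# Compare this value with the checksum listed by the download source;
# a mismatch suggests a corrupted or tampered file.
print(sha256.hexdigest())
```

A mismatch is not proof of malware, but it is a good reason to delete the file and download it again from a source you trust.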

Why should you play Gamedva Truck Simulator Ultimate APK?

-

The benefits of playing this game

-

Gamedva Truck Simulator Ultimate APK is not just a game for fun, but also a game for learning. Here are some of the benefits of playing this game:

-
    -
  • You can improve your driving skills, such as steering, braking, parking, reversing, and maneuvering in different situations.
  • -
  • You can develop your business acumen, such as managing your finances, employees, contracts, and reputation.
  • -
  • You can enjoy the relaxing and soothing effects of driving a truck, listening to music, and watching the scenery.
  • -
-

The challenges and fun aspects of this game

-

Gamedva Truck Simulator Ultimate APK is not just a game for learning, but also a game for fun. Here are some of the challenges and fun aspects of this game:

-
    -
  • You can face various obstacles and hazards on the road, such as traffic jams, accidents, police patrols, speed cameras, tolls, and weather conditions.
  • -
  • You can customize your truck with different colors, decals, accessories, and parts to make it look unique and stylish.
  • -
  • You can join the online multiplayer mode and compete with other players from around the world. You can chat with them, join convoys, race with them, or cooperate with them.
  • -
  • You can also download and install various DLC mods that add new features, maps, trucks, cargoes, and scenarios to the game. You can find them on [12](https://www.gamedva.com/truck-simulator-ultimate-mod) or other websites.
  • -
-

Tips and tricks for playing Gamedva Truck Simulator Ultimate APK

-

How to earn money and upgrade your truck

-

Money is an important resource in Gamedva Truck Simulator Ultimate APK. You need money to buy new trucks, upgrade your existing trucks, pay for fuel, repairs, tolls, fines, and taxes. You also need money to hire employees, build offices, and expand your business. Here are some tips on how to earn money and upgrade your truck:

-
    -
  • Complete as many contracts as possible. You can find them on the freight market or the auction market. Choose the ones that offer the best rewards and suit your preferences.
  • -
  • Drive carefully and follow the traffic rules. Avoid speeding, running red lights, crashing into other vehicles or objects, or damaging your cargo. These will cost you money in fines or repairs.
  • -
  • Save fuel by driving at a moderate speed, using cruise control, avoiding sudden acceleration or braking, and planning your route ahead.
  • -
  • Upgrade your truck with better engines, transmissions, tires, chassis, and other parts that improve its performance and efficiency.
  • -
  • Sell your old trucks or trade them for new ones when you have enough money.
  • -
-

How to avoid traffic violations and accidents

-

Traffic violations and accidents are some of the common problems that you may encounter while playing Gamedva Truck Simulator Ultimate APK. They can ruin your reputation, lower your rating, reduce your income, and damage your truck or cargo. Here are some tips on how to avoid traffic violations and accidents:

-
    -
  • Pay attention to the traffic signs and signals. They will tell you the speed limit, the direction of traffic, the lane restrictions, the road conditions, and other important information.
  • -
  • Use your mirrors, indicators, headlights, horn, and wipers when necessary. They will help you see better, communicate with other drivers, and alert them of your presence or intentions.
  • -
  • Maintain a safe distance from other vehicles. Do not tailgate them or cut them off. Give way to them when they have the right of way or when they signal to change lanes or turn.
  • -
  • Be careful when overtaking or changing lanes. Make sure there is enough space and visibility before you do so. Do not overtake on curves or hills. Do not change lanes in intersections or roundabouts.
  • -
  • Watch out for roadworks, obstacles, or hazards on the road, and adjust your speed and direction accordingly.
  • -
  • Follow the instructions of the GPS, the radio, or the dispatcher. They will guide you to your destination and warn you of any issues or events on the road.
  • -
-

How to use the multiplayer mode and DLC mods

-

Gamedva Truck Simulator Ultimate APK has a multiplayer mode that allows you to play with other players online. You can also download and install various DLC mods that add new features, maps, trucks, cargoes, and scenarios to the game. Here are some tips on how to use the multiplayer mode and DLC mods:

-
    -
  • To join the multiplayer mode, you need to create an account and log in to the game. You can then choose a server and a room to join. You can also create your own room and invite your friends or other players.
  • -
  • In the multiplayer mode, you can chat with other players, join convoys, race with them, or cooperate with them. You can also see their trucks, cargoes, locations, and ratings on the map.
  • -
  • To download and install DLC mods, you need to go to [12](https://www.gamedva.com/truck-simulator-ultimate-mod) or any other website that provides them. You can then choose the mod that you want and download it to your device.
  • -
  • To install the mod, you need to locate the downloaded file and copy it to the game folder. You can then launch the game and enable the mod from the settings menu.
  • -
  • To use the mod, you need to follow the instructions that come with it. Some mods may require you to start a new game or load a specific save file. Some mods may also conflict with each other or with the original game files. Be careful when using mods and back up your game data before installing them (a small sketch of this backup-and-copy routine follows the list).
  • -
-
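
The backup-and-copy routine mentioned above can also be scripted if you manage files from a computer or a scripting-capable file manager. The sketch below is purely illustrative: the mod file name, the backup location, and the game-data path are all assumptions (the package id matches the Play Store listing linked earlier, but check where the game really keeps its files on your device before copying anything).

```python
import shutil
from pathlib import Path

# All three paths are assumptions for illustration only.
game_dir = Path("/sdcard/Android/data/com.zuuks.truck.simulator.ultimate/files")
backup_dir = Path("/sdcard/truck_sim_backup")
mod_file = Path("/sdcard/Download/example_map_mod.zip")

# Back up the existing game data first, as recommended above.
if game_dir.exists() and not backup_dir.exists():
    shutil.copytree(game_dir, backup_dir)

# Then drop the downloaded mod into the game folder.
shutil.copy2(mod_file, game_dir / mod_file.name)
print(f"Copied {mod_file.name} into {game_dir}")
```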

Conclusion

-

A summary of the main points and a recommendation

-

Gamedva Truck Simulator Ultimate APK is a driving simulation game that lets you experience the life of a truck driver in a realistic and immersive way. You can transport various cargoes across different countries, manage your own trucking company, customize your trucks, and compete with other players online. The game has realistic graphics, physics, weather, traffic, toll roads, radio stations, and sound effects that make you feel like driving a real truck.

-

If you are looking for a fun and educational game that will challenge your driving skills and business acumen, you should definitely try Gamedva Truck Simulator Ultimate APK. You can download it for free from [13](https://play.google.com/store/apps/details?id=com.zuuks.truck.simulator.ultimate) or other sources. You can also download and install various DLC mods that add new features, maps, trucks, cargoes, and scenarios to the game from [12](https://www.gamedva.com/truck-simulator-ultimate-mod) or other websites.

-

FAQs

-

Q1: Is Gamedva Truck Simulator Ultimate APK safe to download?

-

A1: Yes, Gamedva Truck Simulator Ultimate APK is safe to download as long as you get it from a trusted source like [13](https://play.google.com/store/apps/details?id=com.zuuks.truck.simulator.ultimate) or other reputable websites. However, you should always scan any downloaded file with antivirus software before installing it.

-

Q2: How much storage space does the game require?

-

A2: The game requires about 1 GB of storage space on your device. However, this may vary depending on the version of the game and the DLC mods that you install.

-

Q3: Can I play the game offline?

-

A3: Yes, you can play the game offline without an internet connection. However, you will not be able to access some features like the multiplayer mode, the auction market, or the radio stations.

-

Q4: What are the best trucks to buy in the game?

-

A4: The best trucks to buy in the game depend on your personal preference and budget. However, some of the most popular trucks in the game are:

| Truck | Price | Features |
| --- | --- | --- |
| Mercedes-Benz Actros | $180,000 | A powerful and reliable truck with a spacious cabin and a high fuel capacity. |
| Scania R730 | $220,000 | A strong and durable truck with high performance and a comfortable interior. |
| MAN TGX | $240,000 | A versatile and efficient truck with a good balance of power and speed. |
| Renault Magnum | $260,000 | A stylish and spacious truck with a large windshield and a smooth ride. |
-

Q5: How can I contact the developers for feedback or support?

-

A5: You can contact the developers of Gamedva Truck Simulator Ultimate APK by sending an email to info@zuuks.com or by visiting their website [11](https://www.zuuks.com/). You can also follow them on their social media accounts on Facebook, Twitter, Instagram, and YouTube.

401be4b1e0
-
-
\ No newline at end of file diff --git a/spaces/fffiloni/audioldm-text-to-audio-generation-copy/audioldm/latent_diffusion/attention.py b/spaces/fffiloni/audioldm-text-to-audio-generation-copy/audioldm/latent_diffusion/attention.py deleted file mode 100644 index 583dd169e7ec9502ee29faeb12689a46494838c0..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/audioldm-text-to-audio-generation-copy/audioldm/latent_diffusion/attention.py +++ /dev/null @@ -1,468 +0,0 @@ -from inspect import isfunction -import math -import torch -import torch.nn.functional as F -from torch import nn -from einops import rearrange - -from audioldm.latent_diffusion.util import checkpoint - - -def exists(val): - return val is not None - - -def uniq(arr): - return {el: True for el in arr}.keys() - - -def default(val, d): - if exists(val): - return val - return d() if isfunction(d) else d - - -def max_neg_value(t): - return -torch.finfo(t.dtype).max - - -def init_(tensor): - dim = tensor.shape[-1] - std = 1 / math.sqrt(dim) - tensor.uniform_(-std, std) - return tensor - - -# feedforward -class GEGLU(nn.Module): - def __init__(self, dim_in, dim_out): - super().__init__() - self.proj = nn.Linear(dim_in, dim_out * 2) - - def forward(self, x): - x, gate = self.proj(x).chunk(2, dim=-1) - return x * F.gelu(gate) - - -class FeedForward(nn.Module): - def __init__(self, dim, dim_out=None, mult=4, glu=False, dropout=0.0): - super().__init__() - inner_dim = int(dim * mult) - dim_out = default(dim_out, dim) - project_in = ( - nn.Sequential(nn.Linear(dim, inner_dim), nn.GELU()) - if not glu - else GEGLU(dim, inner_dim) - ) - - self.net = nn.Sequential( - project_in, nn.Dropout(dropout), nn.Linear(inner_dim, dim_out) - ) - - def forward(self, x): - return self.net(x) - - -def zero_module(module): - """ - Zero out the parameters of a module and return it. 
- """ - for p in module.parameters(): - p.detach().zero_() - return module - - -def Normalize(in_channels): - return torch.nn.GroupNorm( - num_groups=32, num_channels=in_channels, eps=1e-6, affine=True - ) - - -class LinearAttention(nn.Module): - def __init__(self, dim, heads=4, dim_head=32): - super().__init__() - self.heads = heads - hidden_dim = dim_head * heads - self.to_qkv = nn.Conv2d(dim, hidden_dim * 3, 1, bias=False) - self.to_out = nn.Conv2d(hidden_dim, dim, 1) - - def forward(self, x): - b, c, h, w = x.shape - qkv = self.to_qkv(x) - q, k, v = rearrange( - qkv, "b (qkv heads c) h w -> qkv b heads c (h w)", heads=self.heads, qkv=3 - ) - k = k.softmax(dim=-1) - context = torch.einsum("bhdn,bhen->bhde", k, v) - out = torch.einsum("bhde,bhdn->bhen", context, q) - out = rearrange( - out, "b heads c (h w) -> b (heads c) h w", heads=self.heads, h=h, w=w - ) - return self.to_out(out) - - -class SpatialSelfAttention(nn.Module): - def __init__(self, in_channels): - super().__init__() - self.in_channels = in_channels - - self.norm = Normalize(in_channels) - self.q = torch.nn.Conv2d( - in_channels, in_channels, kernel_size=1, stride=1, padding=0 - ) - self.k = torch.nn.Conv2d( - in_channels, in_channels, kernel_size=1, stride=1, padding=0 - ) - self.v = torch.nn.Conv2d( - in_channels, in_channels, kernel_size=1, stride=1, padding=0 - ) - self.proj_out = torch.nn.Conv2d( - in_channels, in_channels, kernel_size=1, stride=1, padding=0 - ) - - def forward(self, x): - h_ = x - h_ = self.norm(h_) - q = self.q(h_) - k = self.k(h_) - v = self.v(h_) - - # compute attention - b, c, h, w = q.shape - q = rearrange(q, "b c h w -> b (h w) c") - k = rearrange(k, "b c h w -> b c (h w)") - w_ = torch.einsum("bij,bjk->bik", q, k) - - w_ = w_ * (int(c) ** (-0.5)) - w_ = torch.nn.functional.softmax(w_, dim=2) - - # attend to values - v = rearrange(v, "b c h w -> b c (h w)") - w_ = rearrange(w_, "b i j -> b j i") - h_ = torch.einsum("bij,bjk->bik", v, w_) - h_ = rearrange(h_, "b c (h w) -> b c h w", h=h) - h_ = self.proj_out(h_) - - return x + h_ - - -class CrossAttention(nn.Module): - """ - ### Cross Attention Layer - This falls-back to self-attention when conditional embeddings are not specified. - """ - - # use_flash_attention: bool = True - use_flash_attention: bool = False - def __init__( - self, - query_dim, - context_dim=None, - heads=8, - dim_head=64, - dropout=0.0, - is_inplace: bool = True, - ): - # def __init__(self, d_model: int, d_cond: int, n_heads: int, d_head: int, is_inplace: bool = True): - """ - :param d_model: is the input embedding size - :param n_heads: is the number of attention heads - :param d_head: is the size of a attention head - :param d_cond: is the size of the conditional embeddings - :param is_inplace: specifies whether to perform the attention softmax computation inplace to - save memory - """ - super().__init__() - - self.is_inplace = is_inplace - self.n_heads = heads - self.d_head = dim_head - - # Attention scaling factor - self.scale = dim_head**-0.5 - - # The normal self-attention layer - if context_dim is None: - context_dim = query_dim - - # Query, key and value mappings - d_attn = dim_head * heads - self.to_q = nn.Linear(query_dim, d_attn, bias=False) - self.to_k = nn.Linear(context_dim, d_attn, bias=False) - self.to_v = nn.Linear(context_dim, d_attn, bias=False) - - # Final linear layer - self.to_out = nn.Sequential(nn.Linear(d_attn, query_dim), nn.Dropout(dropout)) - - # Setup [flash attention](https://github.com/HazyResearch/flash-attention). 
- # Flash attention is only used if it's installed - # and `CrossAttention.use_flash_attention` is set to `True`. - try: - # You can install flash attention by cloning their Github repo, - # [https://github.com/HazyResearch/flash-attention](https://github.com/HazyResearch/flash-attention) - # and then running `python setup.py install` - from flash_attn.flash_attention import FlashAttention - - self.flash = FlashAttention() - # Set the scale for scaled dot-product attention. - self.flash.softmax_scale = self.scale - # Set to `None` if it's not installed - except ImportError: - self.flash = None - - def forward(self, x, context=None, mask=None): - """ - :param x: are the input embeddings of shape `[batch_size, height * width, d_model]` - :param cond: is the conditional embeddings of shape `[batch_size, n_cond, d_cond]` - """ - - # If `cond` is `None` we perform self attention - has_cond = context is not None - if not has_cond: - context = x - - # Get query, key and value vectors - q = self.to_q(x) - k = self.to_k(context) - v = self.to_v(context) - - # Use flash attention if it's available and the head size is less than or equal to `128` - if ( - CrossAttention.use_flash_attention - and self.flash is not None - and not has_cond - and self.d_head <= 128 - ): - return self.flash_attention(q, k, v) - # Otherwise, fallback to normal attention - else: - return self.normal_attention(q, k, v) - - def flash_attention(self, q: torch.Tensor, k: torch.Tensor, v: torch.Tensor): - """ - #### Flash Attention - :param q: are the query vectors before splitting heads, of shape `[batch_size, seq, d_attn]` - :param k: are the query vectors before splitting heads, of shape `[batch_size, seq, d_attn]` - :param v: are the query vectors before splitting heads, of shape `[batch_size, seq, d_attn]` - """ - - # Get batch size and number of elements along sequence axis (`width * height`) - batch_size, seq_len, _ = q.shape - - # Stack `q`, `k`, `v` vectors for flash attention, to get a single tensor of - # shape `[batch_size, seq_len, 3, n_heads * d_head]` - qkv = torch.stack((q, k, v), dim=2) - # Split the heads - qkv = qkv.view(batch_size, seq_len, 3, self.n_heads, self.d_head) - - # Flash attention works for head sizes `32`, `64` and `128`, so we have to pad the heads to - # fit this size. 
- if self.d_head <= 32: - pad = 32 - self.d_head - elif self.d_head <= 64: - pad = 64 - self.d_head - elif self.d_head <= 128: - pad = 128 - self.d_head - else: - raise ValueError(f"Head size ${self.d_head} too large for Flash Attention") - - # Pad the heads - if pad: - qkv = torch.cat( - (qkv, qkv.new_zeros(batch_size, seq_len, 3, self.n_heads, pad)), dim=-1 - ) - - # Compute attention - # $$\underset{seq}{softmax}\Bigg(\frac{Q K^\top}{\sqrt{d_{key}}}\Bigg)V$$ - # This gives a tensor of shape `[batch_size, seq_len, n_heads, d_padded]` - # TODO here I add the dtype changing - out, _ = self.flash(qkv.type(torch.float16)) - # Truncate the extra head size - out = out[:, :, :, : self.d_head].float() - # Reshape to `[batch_size, seq_len, n_heads * d_head]` - out = out.reshape(batch_size, seq_len, self.n_heads * self.d_head) - - # Map to `[batch_size, height * width, d_model]` with a linear layer - return self.to_out(out) - - def normal_attention(self, q: torch.Tensor, k: torch.Tensor, v: torch.Tensor): - """ - #### Normal Attention - - :param q: are the query vectors before splitting heads, of shape `[batch_size, seq, d_attn]` - :param k: are the query vectors before splitting heads, of shape `[batch_size, seq, d_attn]` - :param v: are the query vectors before splitting heads, of shape `[batch_size, seq, d_attn]` - """ - - # Split them to heads of shape `[batch_size, seq_len, n_heads, d_head]` - q = q.view(*q.shape[:2], self.n_heads, -1) # [bs, 64, 20, 32] - k = k.view(*k.shape[:2], self.n_heads, -1) # [bs, 1, 20, 32] - v = v.view(*v.shape[:2], self.n_heads, -1) - - # Calculate attention $\frac{Q K^\top}{\sqrt{d_{key}}}$ - attn = torch.einsum("bihd,bjhd->bhij", q, k) * self.scale - - # Compute softmax - # $$\underset{seq}{softmax}\Bigg(\frac{Q K^\top}{\sqrt{d_{key}}}\Bigg)$$ - if self.is_inplace: - half = attn.shape[0] // 2 - attn[half:] = attn[half:].softmax(dim=-1) - attn[:half] = attn[:half].softmax(dim=-1) - else: - attn = attn.softmax(dim=-1) - - # Compute attention output - # $$\underset{seq}{softmax}\Bigg(\frac{Q K^\top}{\sqrt{d_{key}}}\Bigg)V$$ - # attn: [bs, 20, 64, 1] - # v: [bs, 1, 20, 32] - out = torch.einsum("bhij,bjhd->bihd", attn, v) - # Reshape to `[batch_size, height * width, n_heads * d_head]` - out = out.reshape(*out.shape[:2], -1) - # Map to `[batch_size, height * width, d_model]` with a linear layer - return self.to_out(out) - - -# class CrossAttention(nn.Module): -# def __init__(self, query_dim, context_dim=None, heads=8, dim_head=64, dropout=0.): -# super().__init__() -# inner_dim = dim_head * heads -# context_dim = default(context_dim, query_dim) - -# self.scale = dim_head ** -0.5 -# self.heads = heads - -# self.to_q = nn.Linear(query_dim, inner_dim, bias=False) -# self.to_k = nn.Linear(context_dim, inner_dim, bias=False) -# self.to_v = nn.Linear(context_dim, inner_dim, bias=False) - -# self.to_out = nn.Sequential( -# nn.Linear(inner_dim, query_dim), -# nn.Dropout(dropout) -# ) - -# def forward(self, x, context=None, mask=None): -# h = self.heads - -# q = self.to_q(x) -# context = default(context, x) -# k = self.to_k(context) -# v = self.to_v(context) - -# q, k, v = map(lambda t: rearrange(t, 'b n (h d) -> (b h) n d', h=h), (q, k, v)) - -# sim = einsum('b i d, b j d -> b i j', q, k) * self.scale - -# if exists(mask): -# mask = rearrange(mask, 'b ... 
-> b (...)') -# max_neg_value = -torch.finfo(sim.dtype).max -# mask = repeat(mask, 'b j -> (b h) () j', h=h) -# sim.masked_fill_(~mask, max_neg_value) - -# # attention, what we cannot get enough of -# attn = sim.softmax(dim=-1) - -# out = einsum('b i j, b j d -> b i d', attn, v) -# out = rearrange(out, '(b h) n d -> b n (h d)', h=h) -# return self.to_out(out) - - -class BasicTransformerBlock(nn.Module): - def __init__( - self, - dim, - n_heads, - d_head, - dropout=0.0, - context_dim=None, - gated_ff=True, - checkpoint=True, - ): - super().__init__() - self.attn1 = CrossAttention( - query_dim=dim, heads=n_heads, dim_head=d_head, dropout=dropout - ) # is a self-attention - self.ff = FeedForward(dim, dropout=dropout, glu=gated_ff) - self.attn2 = CrossAttention( - query_dim=dim, - context_dim=context_dim, - heads=n_heads, - dim_head=d_head, - dropout=dropout, - ) # is self-attn if context is none - self.norm1 = nn.LayerNorm(dim) - self.norm2 = nn.LayerNorm(dim) - self.norm3 = nn.LayerNorm(dim) - self.checkpoint = checkpoint - - def forward(self, x, context=None): - if context is None: - return checkpoint(self._forward, (x,), self.parameters(), self.checkpoint) - else: - return checkpoint( - self._forward, (x, context), self.parameters(), self.checkpoint - ) - - def _forward(self, x, context=None): - x = self.attn1(self.norm1(x)) + x - x = self.attn2(self.norm2(x), context=context) + x - x = self.ff(self.norm3(x)) + x - return x - - -class SpatialTransformer(nn.Module): - """ - Transformer block for image-like data. - First, project the input (aka embedding) - and reshape to b, t, d. - Then apply standard transformer action. - Finally, reshape to image - """ - - def __init__( - self, - in_channels, - n_heads, - d_head, - depth=1, - dropout=0.0, - context_dim=None, - no_context=False, - ): - super().__init__() - - if no_context: - context_dim = None - - self.in_channels = in_channels - inner_dim = n_heads * d_head - self.norm = Normalize(in_channels) - - self.proj_in = nn.Conv2d( - in_channels, inner_dim, kernel_size=1, stride=1, padding=0 - ) - - self.transformer_blocks = nn.ModuleList( - [ - BasicTransformerBlock( - inner_dim, n_heads, d_head, dropout=dropout, context_dim=context_dim - ) - for d in range(depth) - ] - ) - - self.proj_out = zero_module( - nn.Conv2d(inner_dim, in_channels, kernel_size=1, stride=1, padding=0) - ) - - def forward(self, x, context=None): - # note: if no context is given, cross-attention defaults to self-attention - b, c, h, w = x.shape - x_in = x - x = self.norm(x) - x = self.proj_in(x) - x = rearrange(x, "b c h w -> b (h w) c") - for block in self.transformer_blocks: - x = block(x, context=context) - x = rearrange(x, "b (h w) c -> b c h w", h=h, w=w) - x = self.proj_out(x) - return x + x_in diff --git a/spaces/filehost/txt/edit_menu.py b/spaces/filehost/txt/edit_menu.py deleted file mode 100644 index 81a2caea22a6137fd1c68ce8209c3263f3ded25f..0000000000000000000000000000000000000000 --- a/spaces/filehost/txt/edit_menu.py +++ /dev/null @@ -1,87 +0,0 @@ -from tkinter import * -from tkinter.simpledialog import * -from tkinter.filedialog import * -from tkinter.messagebox import * - - -class Edit(): - def popup(self, event): - self.rightClick.post(event.x_root, event.y_root) - - def copy(self, *args): - sel = self.text.selection_get() - self.clipboard = sel - - def cut(self, *args): - sel = self.text.selection_get() - self.clipboard = sel - self.text.delete(SEL_FIRST, SEL_LAST) - - def paste(self, *args): - self.text.insert(INSERT, self.clipboard) - - def selectAll(self, 
*args): - self.text.tag_add(SEL, "1.0", END) - self.text.mark_set(0.0, END) - self.text.see(INSERT) - - def undo(self, *args): - self.text.edit_undo() - - def redo(self, *args): - self.text.edit_redo() - - def find(self, *args): - self.text.tag_remove('found', '1.0', END) - target = askstring('Find', 'Search String:') - - if target: - idx = '1.0' - while 1: - idx = self.text.search(target, idx, nocase=1, stopindex=END) - if not idx: break - lastidx = '%s+%dc' % (idx, len(target)) - self.text.tag_add('found', idx, lastidx) - idx = lastidx - self.text.tag_config('found', foreground='white', background='blue') - - def __init__(self, text, root): - self.clipboard = None - self.text = text - self.rightClick = Menu(root) - - -def main(root, text, menubar): - - objEdit = Edit(text, root) - - editmenu = Menu(menubar) - editmenu.add_command(label="Copy", command=objEdit.copy, accelerator="Ctrl+C") - editmenu.add_command(label="Cut", command=objEdit.cut, accelerator="Ctrl+X") - editmenu.add_command(label="Paste", command=objEdit.paste, accelerator="Ctrl+V") - editmenu.add_command(label="Undo", command=objEdit.undo, accelerator="Ctrl+Z") - editmenu.add_command(label="Redo", command=objEdit.redo, accelerator="Ctrl+Y") - editmenu.add_command(label="Find", command=objEdit.find, accelerator="Ctrl+F") - editmenu.add_separator() - editmenu.add_command(label="Select All", command=objEdit.selectAll, accelerator="Ctrl+A") - menubar.add_cascade(label="Edit", menu=editmenu) - - root.bind_all("", objEdit.undo) - root.bind_all("", objEdit.redo) - root.bind_all("", objEdit.find) - root.bind_all("Control-a", objEdit.selectAll) - - objEdit.rightClick.add_command(label="Copy", command=objEdit.copy) - objEdit.rightClick.add_command(label="Cut", command=objEdit.cut) - objEdit.rightClick.add_command(label="Paste", command=objEdit.paste) - objEdit.rightClick.add_separator() - objEdit.rightClick.add_command(label="Select All", command=objEdit.selectAll) - objEdit.rightClick.bind("", objEdit.selectAll) - - text.bind("", objEdit.popup) - - root.config(menu=menubar) - - -if __name__ == "__main__": - print("Please run 'main.py'") \ No newline at end of file diff --git a/spaces/fkhuggingme/gpt-academic/crazy_functions/test_project/cpp/cppipc/shm.cpp b/spaces/fkhuggingme/gpt-academic/crazy_functions/test_project/cpp/cppipc/shm.cpp deleted file mode 100644 index 593ce3129dc1574dbc8fc8b088cf595df215de93..0000000000000000000000000000000000000000 --- a/spaces/fkhuggingme/gpt-academic/crazy_functions/test_project/cpp/cppipc/shm.cpp +++ /dev/null @@ -1,103 +0,0 @@ - -#include -#include - -#include "libipc/shm.h" - -#include "libipc/utility/pimpl.h" -#include "libipc/memory/resource.h" - -namespace ipc { -namespace shm { - -class handle::handle_ : public pimpl { -public: - shm::id_t id_ = nullptr; - void* m_ = nullptr; - - ipc::string n_; - std::size_t s_ = 0; -}; - -handle::handle() - : p_(p_->make()) { -} - -handle::handle(char const * name, std::size_t size, unsigned mode) - : handle() { - acquire(name, size, mode); -} - -handle::handle(handle&& rhs) - : handle() { - swap(rhs); -} - -handle::~handle() { - release(); - p_->clear(); -} - -void handle::swap(handle& rhs) { - std::swap(p_, rhs.p_); -} - -handle& handle::operator=(handle rhs) { - swap(rhs); - return *this; -} - -bool handle::valid() const noexcept { - return impl(p_)->m_ != nullptr; -} - -std::size_t handle::size() const noexcept { - return impl(p_)->s_; -} - -char const * handle::name() const noexcept { - return impl(p_)->n_.c_str(); -} - -std::int32_t handle::ref() 
const noexcept { - return shm::get_ref(impl(p_)->id_); -} - -void handle::sub_ref() noexcept { - shm::sub_ref(impl(p_)->id_); -} - -bool handle::acquire(char const * name, std::size_t size, unsigned mode) { - release(); - impl(p_)->id_ = shm::acquire((impl(p_)->n_ = name).c_str(), size, mode); - impl(p_)->m_ = shm::get_mem(impl(p_)->id_, &(impl(p_)->s_)); - return valid(); -} - -std::int32_t handle::release() { - if (impl(p_)->id_ == nullptr) return -1; - return shm::release(detach()); -} - -void* handle::get() const { - return impl(p_)->m_; -} - -void handle::attach(id_t id) { - if (id == nullptr) return; - release(); - impl(p_)->id_ = id; - impl(p_)->m_ = shm::get_mem(impl(p_)->id_, &(impl(p_)->s_)); -} - -id_t handle::detach() { - auto old = impl(p_)->id_; - impl(p_)->id_ = nullptr; - impl(p_)->m_ = nullptr; - impl(p_)->s_ = 0; - impl(p_)->n_.clear(); - return old; -} - -} // namespace shm -} // namespace ipc diff --git a/spaces/floriankrempl/mtg_rules_bot/mtg/objects/message.py b/spaces/floriankrempl/mtg_rules_bot/mtg/objects/message.py deleted file mode 100644 index ed0d13d6244460c7e53dac66cf526bba76f49038..0000000000000000000000000000000000000000 --- a/spaces/floriankrempl/mtg_rules_bot/mtg/objects/message.py +++ /dev/null @@ -1,10 +0,0 @@ -from dataclasses import dataclass, field -from .card import Card - - -@dataclass -class Message: - text: str - role: str - processed_text: str - cards: list[Card] = field(default_factory=list) diff --git a/spaces/frncscp/bullerengue/musika/22kHz/data.py b/spaces/frncscp/bullerengue/musika/22kHz/data.py deleted file mode 100644 index 3116fdeaaa5d84b30bae83b68a93a8fd7d3a5c9d..0000000000000000000000000000000000000000 --- a/spaces/frncscp/bullerengue/musika/22kHz/data.py +++ /dev/null @@ -1,54 +0,0 @@ -import tensorflow as tf -from glob import glob - -from utils import Utils_functions - - -class Data_functions: - def __init__(self, args): - - self.args = args - self.U = Utils_functions(args) - - options = tf.data.Options() - options.experimental_deterministic = False - - @tf.function - def read_npy(self, p): - x = tf.reshape( - tf.io.decode_raw(tf.io.read_file(p), tf.float32)[-(self.args.max_lat_len * self.args.latdepth * 2) :], - [self.args.max_lat_len, self.args.latdepth * 2], - ) - randnum = tf.random.uniform((), 0, self.args.max_lat_len - self.args.latlen, dtype=tf.int64) - x = x[randnum : randnum + self.args.latlen, :] - return x - - def create_dataset(self): - - print("Calculating total number of samples in data folder...") - datalen = len(glob(self.args.train_path + "/*.npy")) - print(f"Found {datalen} total samples") - - options = tf.data.Options() - options.experimental_deterministic = False - - if datalen > self.args.totsamples: - ds = tf.data.Dataset.list_files(self.args.train_path + "/*.npy").shuffle(datalen).take(self.args.totsamples) - else: - ds = ( - tf.data.Dataset.list_files(self.args.train_path + "/*.npy") - .repeat((self.args.totsamples // datalen) + 1) - .shuffle(datalen * ((self.args.totsamples // datalen) + 1)) - .take(self.args.totsamples) - ) - - ds = ( - ds.map(self.read_npy, num_parallel_calls=tf.data.experimental.AUTOTUNE) - .batch(self.args.bs, drop_remainder=True) - .prefetch(tf.data.experimental.AUTOTUNE) - .with_options(options) - ) # .apply(tf.data.experimental.ignore_errors()) - - print("Dataset is ready!") - - return ds diff --git a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/visualization/__init__.py b/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/visualization/__init__.py 
deleted file mode 100644 index 835df136bdcf69348281d22914d41aa84cdf92b1..0000000000000000000000000000000000000000 --- a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/visualization/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .color import Color, color_val -from .image import imshow, imshow_bboxes, imshow_det_bboxes -from .optflow import flow2rgb, flowshow, make_color_wheel - -__all__ = [ - 'Color', 'color_val', 'imshow', 'imshow_bboxes', 'imshow_det_bboxes', - 'flowshow', 'flow2rgb', 'make_color_wheel' -] diff --git a/spaces/giswqs/Streamlit/data/html/sfo_buildings.html b/spaces/giswqs/Streamlit/data/html/sfo_buildings.html deleted file mode 100644 index 96e62d787b97977e8a5a311d679f69d10eed6964..0000000000000000000000000000000000000000 --- a/spaces/giswqs/Streamlit/data/html/sfo_buildings.html +++ /dev/null @@ -1,34 +0,0 @@ - - - - - - - - - -
- - - - diff --git "a/spaces/giswqs/Streamlit/pages/10_\360\237\214\215_Earth_Engine_Datasets.py" "b/spaces/giswqs/Streamlit/pages/10_\360\237\214\215_Earth_Engine_Datasets.py" deleted file mode 100644 index 963db06a56f324c397e065e979d911f34b6dd268..0000000000000000000000000000000000000000 --- "a/spaces/giswqs/Streamlit/pages/10_\360\237\214\215_Earth_Engine_Datasets.py" +++ /dev/null @@ -1,157 +0,0 @@ -import ee -import streamlit as st -import geemap.foliumap as geemap - -st.set_page_config(layout="wide") - -st.sidebar.info( - """ - - Web App URL: - - GitHub repository: - """ -) - -st.sidebar.title("Contact") -st.sidebar.info( - """ - Qiusheng Wu at [wetlands.io](https://wetlands.io) | [GitHub](https://github.com/giswqs) | [Twitter](https://twitter.com/giswqs) | [YouTube](https://www.youtube.com/c/QiushengWu) | [LinkedIn](https://www.linkedin.com/in/qiushengwu) - """ -) - - -def nlcd(): - - # st.header("National Land Cover Database (NLCD)") - - row1_col1, row1_col2 = st.columns([3, 1]) - width = 950 - height = 600 - - Map = geemap.Map(center=[40, -100], zoom=4) - - # Select the seven NLCD epoches after 2000. - years = ["2001", "2004", "2006", "2008", "2011", "2013", "2016", "2019"] - - # Get an NLCD image by year. - def getNLCD(year): - # Import the NLCD collection. - dataset = ee.ImageCollection("USGS/NLCD_RELEASES/2019_REL/NLCD") - - # Filter the collection by year. - nlcd = dataset.filter(ee.Filter.eq("system:index", year)).first() - - # Select the land cover band. - landcover = nlcd.select("landcover") - return landcover - - with row1_col2: - selected_year = st.multiselect("Select a year", years) - add_legend = st.checkbox("Show legend") - - if selected_year: - for year in selected_year: - Map.addLayer(getNLCD(year), {}, "NLCD " + year) - - if add_legend: - Map.add_legend( - legend_title="NLCD Land Cover Classification", builtin_legend="NLCD" - ) - with row1_col1: - Map.to_streamlit(width=width, height=height) - - else: - with row1_col1: - Map.to_streamlit(width=width, height=height) - - -def search_data(): - - # st.header("Search Earth Engine Data Catalog") - - Map = geemap.Map() - - if "ee_assets" not in st.session_state: - st.session_state["ee_assets"] = None - if "asset_titles" not in st.session_state: - st.session_state["asset_titles"] = None - - col1, col2 = st.columns([2, 1]) - - dataset = None - with col2: - keyword = st.text_input( - "Enter a keyword to search (e.g., elevation)", "") - if keyword: - ee_assets = geemap.search_ee_data(keyword) - asset_titles = [x["title"] for x in ee_assets] - asset_types = [x["type"] for x in ee_assets] - - translate = { - "image_collection": "ee.ImageCollection('", - "image": "ee.Image('", - "table": "ee.FeatureCollection('", - "table_collection": "ee.FeatureCollection('", - } - - dataset = st.selectbox("Select a dataset", asset_titles) - if len(ee_assets) > 0: - st.session_state["ee_assets"] = ee_assets - st.session_state["asset_titles"] = asset_titles - - if dataset is not None: - with st.expander("Show dataset details", True): - index = asset_titles.index(dataset) - - html = geemap.ee_data_html( - st.session_state["ee_assets"][index]) - html = html.replace("\n", "") - st.markdown(html, True) - - ee_id = ee_assets[index]["id"] - uid = ee_assets[index]["uid"] - st.markdown(f"""**Earth Engine Snippet:** `{ee_id}`""") - ee_asset = f"{translate[asset_types[index]]}{ee_id}')" - vis_params = st.text_input( - "Enter visualization parameters as a dictionary", {} - ) - layer_name = st.text_input("Enter a layer name", uid) - button = st.button("Add 
dataset to map") - if button: - vis = {} - try: - if vis_params.strip() == "": - # st.error("Please enter visualization parameters") - vis_params = "{}" - vis = eval(vis_params) - if not isinstance(vis, dict): - st.error( - "Visualization parameters must be a dictionary") - try: - Map.addLayer(eval(ee_asset), vis, layer_name) - except Exception as e: - st.error(f"Error adding layer: {e}") - except Exception as e: - st.error(f"Invalid visualization parameters: {e}") - - with col1: - Map.to_streamlit() - else: - with col1: - Map.to_streamlit() - - -def app(): - st.title("Earth Engine Data Catalog") - - apps = ["Search Earth Engine Data Catalog", - "National Land Cover Database (NLCD)"] - - selected_app = st.selectbox("Select an app", apps) - - if selected_app == "National Land Cover Database (NLCD)": - nlcd() - elif selected_app == "Search Earth Engine Data Catalog": - search_data() - - -app() diff --git a/spaces/gngpostalsrvc/COHeN_demo/app.py b/spaces/gngpostalsrvc/COHeN_demo/app.py deleted file mode 100644 index d64e2ff04b37be3bd75f4291372566163267a0fa..0000000000000000000000000000000000000000 --- a/spaces/gngpostalsrvc/COHeN_demo/app.py +++ /dev/null @@ -1,37 +0,0 @@ -import re -import gradio as gr -from transformers import ( - AutoModelForSequenceClassification, - AutoTokenizer, - pipeline -) -from transformers_interpret import SequenceClassificationExplainer -from hebrewtools.functions import sbl_normalization - -model_name = 'gngpostalsrvc/COHeN' -tokenizer = AutoTokenizer.from_pretrained(model_name) -model = AutoModelForSequenceClassification.from_pretrained(model_name) -cls_explainer = SequenceClassificationExplainer(model, tokenizer) - -pipe = pipeline("text-classification", model=model_name) - -pattern = re.compile("[^\s\u05d0-\u05ea\u05b0-\u05bc\u05be\u05c1\u05c2\u05c7]") - -def predict(text): - text = " ".join([word for word in text.split() if word not in ['\u05e1', '\u05e4', '']]) - text = re.sub(pattern, "", text) - text = sbl_normalization(text) - word_attributions = cls_explainer(text) - results = pipe(text)[0] - label = f"{results['label']} ({results['score']:.2})" - return label, word_attributions[1:-1] - -iface = gr.Interface( - fn=predict, - inputs=gr.Text(label="Input Text"), - outputs=[gr.Text(label="Label"), gr.HighlightedText(label="Word Importance", show_legend=True).style(color_map={"-": "red", "+": "green"})], - theme=gr.themes.Base(), - examples=[['וְסָפְדָה הָאָרֶץ מִשְׁפָּחוֺת מִשְׁפָּחוֺת לְבָד מִשְׁפַּחַת בֵּית־דָּוִיד לְבָד וּנְשֵׁיהֶם לְבָד מִשְׁפַּחַת בֵּית־נָתָן לְבָד וּנְשֵׁיהֶם לְבָד'], ['וַיֹּאמֶר דָּוִד אֶל־אוּרִיָּה שֵׁב בָּזֶה גַּם־הַיּוֺם וּמָחָר אֲשַׁלְּחֶךָּ וַיֵּשֶׁב אוּרִיָּה בִירוּשָׁלִַם בַּיּוֺם הַהוּא וּמִמָּחֳרָת']] -) - -iface.launch() \ No newline at end of file diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Cubase 8 Crack Xenon Password 10 Why You Should Choose It Over Other Music Software.md b/spaces/gotiQspiryo/whisper-ui/examples/Cubase 8 Crack Xenon Password 10 Why You Should Choose It Over Other Music Software.md deleted file mode 100644 index f2da867194f381af924125f7942225f65c783d17..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/Cubase 8 Crack Xenon Password 10 Why You Should Choose It Over Other Music Software.md +++ /dev/null @@ -1,6 +0,0 @@ -

cubase 8 crack xenon password 10


Download File ✫✫✫ https://urlgoal.com/2uyNzb



- - aaccfb2cb3
-
-
-

diff --git a/spaces/gradio/HuBERT/fairseq/tasks/__init__.py b/spaces/gradio/HuBERT/fairseq/tasks/__init__.py deleted file mode 100644 index 79dde74057f40a368590cbf0ca0d290f1787a264..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/fairseq/tasks/__init__.py +++ /dev/null @@ -1,136 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -"""isort:skip_file""" - -import argparse -import importlib -import os - -from fairseq.dataclass import FairseqDataclass -from fairseq.dataclass.utils import merge_with_parent, populate_dataclass -from hydra.core.config_store import ConfigStore - -from .fairseq_task import FairseqTask, LegacyFairseqTask # noqa - - -# register dataclass -TASK_DATACLASS_REGISTRY = {} -TASK_REGISTRY = {} -TASK_CLASS_NAMES = set() - - -def setup_task(cfg: FairseqDataclass, **kwargs): - task = None - task_name = getattr(cfg, "task", None) - - if isinstance(task_name, str): - # legacy tasks - task = TASK_REGISTRY[task_name] - if task_name in TASK_DATACLASS_REGISTRY: - dc = TASK_DATACLASS_REGISTRY[task_name] - cfg = populate_dataclass(dc(), cfg) - else: - task_name = getattr(cfg, "_name", None) - - if task_name and task_name in TASK_DATACLASS_REGISTRY: - dc = TASK_DATACLASS_REGISTRY[task_name] - cfg = merge_with_parent(dc(), cfg) - task = TASK_REGISTRY[task_name] - - assert ( - task is not None - ), f"Could not infer task type from {cfg}. Available tasks: {TASK_REGISTRY.keys()}" - - return task.setup_task(cfg, **kwargs) - - -def register_task(name, dataclass=None): - """ - New tasks can be added to fairseq with the - :func:`~fairseq.tasks.register_task` function decorator. - - For example:: - - @register_task('classification') - class ClassificationTask(FairseqTask): - (...) - - .. note:: - - All Tasks must implement the :class:`~fairseq.tasks.FairseqTask` - interface. - - Args: - name (str): the name of the task - """ - - def register_task_cls(cls): - if name in TASK_REGISTRY: - raise ValueError("Cannot register duplicate task ({})".format(name)) - if not issubclass(cls, FairseqTask): - raise ValueError( - "Task ({}: {}) must extend FairseqTask".format(name, cls.__name__) - ) - if cls.__name__ in TASK_CLASS_NAMES: - raise ValueError( - "Cannot register task with duplicate class name ({})".format( - cls.__name__ - ) - ) - TASK_REGISTRY[name] = cls - TASK_CLASS_NAMES.add(cls.__name__) - - if dataclass is not None and not issubclass(dataclass, FairseqDataclass): - raise ValueError( - "Dataclass {} must extend FairseqDataclass".format(dataclass) - ) - - cls.__dataclass = dataclass - if dataclass is not None: - TASK_DATACLASS_REGISTRY[name] = dataclass - - cs = ConfigStore.instance() - node = dataclass() - node._name = name - cs.store(name=name, group="task", node=node, provider="fairseq") - - return cls - - return register_task_cls - - -def get_task(name): - return TASK_REGISTRY[name] - - -def import_tasks(tasks_dir, namespace): - for file in os.listdir(tasks_dir): - path = os.path.join(tasks_dir, file) - if ( - not file.startswith("_") - and not file.startswith(".") - and (file.endswith(".py") or os.path.isdir(path)) - ): - task_name = file[: file.find(".py")] if file.endswith(".py") else file - importlib.import_module(namespace + "." 
+ task_name) - - # expose `task_parser` for sphinx - if task_name in TASK_REGISTRY: - parser = argparse.ArgumentParser(add_help=False) - group_task = parser.add_argument_group("Task name") - # fmt: off - group_task.add_argument('--task', metavar=task_name, - help='Enable this task with: ``--task=' + task_name + '``') - # fmt: on - group_args = parser.add_argument_group( - "Additional command-line arguments" - ) - TASK_REGISTRY[task_name].add_args(group_args) - globals()[task_name + "_parser"] = parser - - -# automatically import any Python files in the tasks/ directory -tasks_dir = os.path.dirname(__file__) -import_tasks(tasks_dir, "fairseq.tasks") diff --git a/spaces/gustavoespindola/SmartStay/README.md b/spaces/gustavoespindola/SmartStay/README.md deleted file mode 100644 index 1f92430d8bbfde99d691a579d94eac63c51491a5..0000000000000000000000000000000000000000 --- a/spaces/gustavoespindola/SmartStay/README.md +++ /dev/null @@ -1,53 +0,0 @@ ---- -title: SmartStay -emoji: 🏖️ -colorFrom: purple -colorTo: pink -sdk: streamlit -sdk_version: 1.21.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference - -# SmartStay - -SmartStay is an AI-powered recommendation system designed to provide personalized hotel recommendations for Booking.com users. By utilizing OpenAI's advanced language model and analyzing the latest user reviews, SmartStay offers intelligent suggestions to help users find the perfect hotel for their needs. - -# Instructions -When deploying the app, make sure to add your apify api key to the environment variables. You can get your api key by creating an account at https://apify.com/ - -## Key Features -- AI-Powered Recommendations: SmartStay leverages the power of OpenAI's language model to analyze and understand the sentiment, context, and key features mentioned in Booking.com user reviews. -- Personalized Suggestions: The recommendation system takes into account user preferences, such as location, amenities, and budget, to generate tailored recommendations. -- Real-time Data Analysis: SmartStay continuously updates its recommendation database by crawling and processing the latest reviews from Booking.com, ensuring that users receive up-to-date and accurate suggestions. -- User-Friendly Interface: The intuitive interface makes it easy for users to input their preferences, view recommendations, and explore additional details about each hotel. -- Trustworthy Recommendations: SmartStay prioritizes transparency and provides explanations for its recommendations, enabling users to understand the reasoning behind each suggestion. - -## Usage -- Launch the SmartStay application. -- Enter the booking.com URL for your desired location. -- Complete settings parameters -- Click on the "Generate Recommendations" button. -- SmartStay will process the data and display a list of personalized hotel recommendations. - -## Contributing -Contributions are welcome! To contribute to SmartStay, follow these steps: - -- Fork the repository. -- Create a new branch: git checkout -b feature/your-feature -- Make your changes and commit them: git commit -m 'Add your feature' -- Push the changes to your forked repository: git push origin feature/your-feature -- Submit a pull request. - -## Acknowledgements -- OpenAI for providing the powerful language model used in SmartStay. -- Booking.com for their extensive collection of user reviews and data. -- APIFY for providing the powerful web scraping tool used in SmartStay. 
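
The Instructions and Usage sections above say the Apify API key has to be supplied through the Space's environment variables before recommendations can be generated. As a rough sketch of that setup — the variable names `APIFY_API_TOKEN` and `OPENAI_API_KEY` below are assumptions chosen for illustration, not names taken from the SmartStay code — the keys could be checked once at startup:

```python
# Hypothetical startup check -- the environment-variable names are assumptions,
# not taken from the SmartStay source.
import os

import streamlit as st

apify_token = os.environ.get("APIFY_API_TOKEN")  # assumed name for the Apify key
openai_key = os.environ.get("OPENAI_API_KEY")    # assumed name for the OpenAI key

if not apify_token or not openai_key:
    # Fail early with a clear message instead of erroring inside the scraping
    # or recommendation steps.
    st.error("Set APIFY_API_TOKEN and OPENAI_API_KEY in the Space's environment variables.")
    st.stop()
```
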
- -## Troubleshooting -- If you get error "raiting" not found, or no recommendations, try the button again. - -## Contact -For any inquiries or suggestions, please contact the SmartStay team at espindolage@gmail.com. diff --git a/spaces/h2oai/wave-tour/examples/plot_line_labels_custom.py b/spaces/h2oai/wave-tour/examples/plot_line_labels_custom.py deleted file mode 100644 index cbc5e53318d7847cc3a7f687f93cbc3ba43eccba..0000000000000000000000000000000000000000 --- a/spaces/h2oai/wave-tour/examples/plot_line_labels_custom.py +++ /dev/null @@ -1,29 +0,0 @@ -# Plot / Line / Labels / Custom -# Add labels to a line #plot. -# --- -from h2o_wave import site, data, ui - -page = site['/demo'] - -page.add('example', ui.plot_card( - box='1 1 4 5', - title='Line, labels, custom', - data=data('year price', 9, rows=[ - ('1991', 3), - ('1992', 4), - ('1993', 3.5), - ('1994', 5), - ('1995', 4.9), - ('1996', 6), - ('1997', 7), - ('1998', 9), - ('1999', 13), - ]), - plot=ui.plot([ - ui.mark(type='line', x_scale='time', x='=year', y='=price', y_min=0, - label='=${{intl price minimum_fraction_digits=2 maximum_fraction_digits=2}}', - label_fill_color='rgba(0,0,0,0.65)', label_stroke_color='$red', label_stroke_size=2) - ]) -)) - -page.save() diff --git a/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/modeling/box_coder.py b/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/modeling/box_coder.py deleted file mode 100644 index 46a4acb3247003da2e6e24a4d28deb86de7d7aae..0000000000000000000000000000000000000000 --- a/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/modeling/box_coder.py +++ /dev/null @@ -1,95 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. -import math - -import torch - - -class BoxCoder(object): - """ - This class encodes and decodes a set of bounding boxes into - the representation used for training the regressors. - """ - - def __init__(self, weights, bbox_xform_clip=math.log(1000. / 16)): - """ - Arguments: - weights (4-element tuple) - bbox_xform_clip (float) - """ - self.weights = weights - self.bbox_xform_clip = bbox_xform_clip - - def encode(self, reference_boxes, proposals): - """ - Encode a set of proposals with respect to some - reference boxes - - Arguments: - reference_boxes (Tensor): reference boxes - proposals (Tensor): boxes to be encoded - """ - - TO_REMOVE = 1 # TODO remove - ex_widths = proposals[:, 2] - proposals[:, 0] + TO_REMOVE - ex_heights = proposals[:, 3] - proposals[:, 1] + TO_REMOVE - ex_ctr_x = proposals[:, 0] + 0.5 * ex_widths - ex_ctr_y = proposals[:, 1] + 0.5 * ex_heights - - gt_widths = reference_boxes[:, 2] - reference_boxes[:, 0] + TO_REMOVE - gt_heights = reference_boxes[:, 3] - reference_boxes[:, 1] + TO_REMOVE - gt_ctr_x = reference_boxes[:, 0] + 0.5 * gt_widths - gt_ctr_y = reference_boxes[:, 1] + 0.5 * gt_heights - - wx, wy, ww, wh = self.weights - targets_dx = wx * (gt_ctr_x - ex_ctr_x) / ex_widths - targets_dy = wy * (gt_ctr_y - ex_ctr_y) / ex_heights - targets_dw = ww * torch.log(gt_widths / ex_widths) - targets_dh = wh * torch.log(gt_heights / ex_heights) - - targets = torch.stack((targets_dx, targets_dy, targets_dw, targets_dh), dim=1) - return targets - - def decode(self, rel_codes, boxes): - """ - From a set of original boxes and encoded relative box offsets, - get the decoded boxes. - - Arguments: - rel_codes (Tensor): encoded boxes - boxes (Tensor): reference boxes. 
- """ - - boxes = boxes.to(rel_codes.dtype) - - TO_REMOVE = 1 # TODO remove - widths = boxes[:, 2] - boxes[:, 0] + TO_REMOVE - heights = boxes[:, 3] - boxes[:, 1] + TO_REMOVE - ctr_x = boxes[:, 0] + 0.5 * widths - ctr_y = boxes[:, 1] + 0.5 * heights - - wx, wy, ww, wh = self.weights - dx = rel_codes[:, 0::4] / wx - dy = rel_codes[:, 1::4] / wy - dw = rel_codes[:, 2::4] / ww - dh = rel_codes[:, 3::4] / wh - - # Prevent sending too large values into torch.exp() - dw = torch.clamp(dw, max=self.bbox_xform_clip) - dh = torch.clamp(dh, max=self.bbox_xform_clip) - - pred_ctr_x = dx * widths[:, None] + ctr_x[:, None] - pred_ctr_y = dy * heights[:, None] + ctr_y[:, None] - pred_w = torch.exp(dw) * widths[:, None] - pred_h = torch.exp(dh) * heights[:, None] - - pred_boxes = torch.zeros_like(rel_codes) - # x1 - pred_boxes[:, 0::4] = pred_ctr_x - 0.5 * pred_w - # y1 - pred_boxes[:, 1::4] = pred_ctr_y - 0.5 * pred_h - # x2 (note: "- 1" is correct; don't be fooled by the asymmetry) - pred_boxes[:, 2::4] = pred_ctr_x + 0.5 * pred_w - 1 - # y2 (note: "- 1" is correct; don't be fooled by the asymmetry) - pred_boxes[:, 3::4] = pred_ctr_y + 0.5 * pred_h - 1 - - return pred_boxes diff --git a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/scripts/parsing_fusion.sh b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/scripts/parsing_fusion.sh deleted file mode 100644 index 107bcf6b0532a7f807c76cd706e48aab767a5da3..0000000000000000000000000000000000000000 --- a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/scripts/parsing_fusion.sh +++ /dev/null @@ -1,6 +0,0 @@ -python logits_fusion.py \ ---test_json_path ./data/CIHP/crop.json \ ---global_output_dir ./data/CIHP/global_pic_parsing \ ---msrcnn_output_dir ./data/CIHP/crop_pic_parsing \ ---gt_output_dir ./data/CIHP/crop_pic_parsing \ ---save_dir ./data/CIHP/mhp_fusion_parsing diff --git a/spaces/hasibzunair/fifa-tryon-demo/models/mnist_train.py b/spaces/hasibzunair/fifa-tryon-demo/models/mnist_train.py deleted file mode 100644 index 5d859d07e87e6fbb0f2e3266335fea37042fac00..0000000000000000000000000000000000000000 --- a/spaces/hasibzunair/fifa-tryon-demo/models/mnist_train.py +++ /dev/null @@ -1,113 +0,0 @@ -# encoding: utf-8 - -import os -import torch -import random -import argparse -import mnist_model -import data_loader -import torch.nn as nn -import torch.optim as optim -import torch.nn.functional as F -from torch.autograd import Variable - -# Training settings -parser = argparse.ArgumentParser() -parser.add_argument('--batch-size', type=int, default=64) -parser.add_argument('--test-batch-size', type=int, default=1000) -parser.add_argument('--epochs', type=int, default=10) -parser.add_argument('--lr', type=float, default=0.01) -parser.add_argument('--momentum', type=float, default=0.5) -parser.add_argument('--no-cuda', action='store_true', default=False) -parser.add_argument('--seed', type=int, default=1) -parser.add_argument('--log-interval', type=int, default=10) -parser.add_argument('--save-interval', type=int, default=100) -parser.add_argument('--model', required=True) -parser.add_argument('--angle', type=int, default=60) -parser.add_argument('--span_range', type=int, default=0.9) -parser.add_argument('--grid_size', type=int, default=4) -args = parser.parse_args() -args.cuda = not args.no_cuda and torch.cuda.is_available() - -args.span_range_height = args.span_range_width = args.span_range -args.grid_height = args.grid_width = 
args.grid_size -args.image_height = args.image_width = 28 - -torch.manual_seed(args.seed) -if args.cuda: - torch.cuda.manual_seed(args.seed) - -model = mnist_model.get_model(args) -if args.cuda: - model.cuda() - -optimizer = optim.SGD(model.parameters(), lr=args.lr, momentum=args.momentum) -train_loader = data_loader.get_train_loader(args) -test_loader = data_loader.get_test_loader(args) - - -def train(epoch): - model.train() - for batch_idx, (data, target) in enumerate(train_loader): - if args.cuda: - data, target = data.cuda(), target.cuda() - # print(data.shape) - data, target = Variable(data), Variable(target) - optimizer.zero_grad() - output = model(data) - loss = F.nll_loss(output, target) - loss.backward() - optimizer.step() - if batch_idx % args.log_interval == 0: - print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format( - epoch, batch_idx * len(data), len(train_loader.dataset), - 100. * batch_idx / len(train_loader), loss.data)) - if batch_idx % args.save_interval == 0: - checkpoint_path = checkpoint_dir + \ - 'epoch%03d_iter%03d.pth' % (epoch, batch_idx) - torch.save(model.cpu().state_dict(), checkpoint_path) - if args.cuda: - model.cuda() - - -def test(epoch): - model.eval() - test_loss = 0 - correct = 0 - for data, target in test_loader: - if args.cuda: - data, target = data.cuda(), target.cuda() - data, target = Variable(data, volatile=True), Variable(target) - output = model(data) - test_loss += F.nll_loss(output, target).data - # get the index of the max log-probability - pred = output.data.max(1)[1] - correct += pred.eq(target.data).cpu().sum() - - test_loss = test_loss - # loss function already averages over batch size - test_loss /= len(test_loader) - accuracy = 100. * correct / len(test_loader.dataset) - print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.02f}%)\n'.format( - test_loss, correct, len(test_loader.dataset), accuracy, - )) - log_file.write('{:.02f}\n'.format(accuracy)) - log_file.flush() - os.fsync(log_file) - - -checkpoint_dir = 'checkpoint/%s_angle%d_grid%d/' % ( - args.model, args.angle, args.grid_size, -) -if not os.path.isdir(checkpoint_dir): - os.makedirs(checkpoint_dir) -if not os.path.isdir('accuracy_log'): - os.makedirs('accuracy_log') -log_file_path = 'accuracy_log/%s_angle%d_grid%d.txt' % ( - args.model, args.angle, args.grid_size, -) - -with open(log_file_path, 'w') as log_file: - for epoch in range(1, args.epochs + 1): - train(epoch) - test(epoch) diff --git a/spaces/hilmyblaze/WebUI-Counterfeit-V2.5/Humse-Badhkar-Kaun-1981-EXCLUSIVE-Full-Movie-48.md b/spaces/hilmyblaze/WebUI-Counterfeit-V2.5/Humse-Badhkar-Kaun-1981-EXCLUSIVE-Full-Movie-48.md deleted file mode 100644 index b3726e3fbc2039a7ea606721612b8cb742fe101d..0000000000000000000000000000000000000000 --- a/spaces/hilmyblaze/WebUI-Counterfeit-V2.5/Humse-Badhkar-Kaun-1981-EXCLUSIVE-Full-Movie-48.md +++ /dev/null @@ -1,68 +0,0 @@ -## Humse Badhkar Kaun 1981 Full Movie 48 - - - - - - - - - -**DOWNLOAD » [https://bionallopi.blogspot.com/?file=2txmkn](https://bionallopi.blogspot.com/?file=2txmkn)** - - - - - - - - - - - - ` - -# Humse Badhkar Kaun 1981 Full Movie 48: A Classic Bollywood Adventure Drama - - - -Humse Badhkar Kaun 1981 Full Movie 48 is a Hindi film that tells the story of four brothers who are separated from their parents and each other in their childhood and grow up to become four different men with divergent values and conflicting methods. Fate eventually sets their paths for a head-on collision. 
- - - -The film stars Mithun Chakraborty, Vijayendra Ghatge, Ranjeeta, Amjad Khan, Danny Denzongpa, Kajal Kiran and Ranjeet in the lead roles. The film was directed by Deepak Bahry and produced by Ramesh Behl. The film was a box office hit and received positive reviews from critics and audiences alike. - - - -The film is known for its thrilling action sequences, melodious songs, and memorable performances by the actors. The film also features the popular devotional song "Deva O Deva, Ganpati Deva, Tumse Badhkar Kaun" which is sung by Kishore Kumar and Asha Bhosle. - - - -Humse Badhkar Kaun 1981 Full Movie 48 is a classic Bollywood adventure drama that showcases the bond of brotherhood, the power of love, and the triumph of good over evil. The film is a must-watch for all fans of Hindi cinema and Mithun Chakraborty. - -` ` - -## Humse Badhkar Kaun 1981 Full Movie 48: A Star-Studded Cast and Crew - - - -Humse Badhkar Kaun 1981 Full Movie 48 boasts of a star-studded cast and crew that includes some of the most popular and talented names of the Hindi film industry. The film features four versatile actors in the roles of the four brothers who have different personalities and professions. Amjad Khan plays Chandan/Bholaram, the eldest brother who becomes a milkman. Danny Denzongpa plays Raju/Johny, the second brother who becomes a burglar. Vijayendra Ghatge plays Bablu/DSP Vijay, the third brother who becomes a police officer. Mithun Chakraborty plays Pappu/Tony, the youngest brother who becomes a masked robber. - - - -The film also has two beautiful actresses in the roles of the love interests of the brothers. Ranjeeta Kaur plays Tina, a dancer who falls in love with Bholaram. Kajal Kiran plays Rekha, a rich girl who loves Johny. The film also has Purnima as Radha, the mother of the brothers who loses her sanity after being separated from them. Ranjeet plays Lalchand, the villain who kills the father of the brothers and tries to steal their treasure. - - - -The film was directed by Deepak Bahry, who was known for making action-packed films with Mithun Chakraborty. The film was produced by Pranlal Mehta under his banner Prathima Films. The film had music by Raamlaxman, who composed some catchy songs for the film. The lyrics were written by Ravinder Rawal, who penned some memorable lines for the songs. The film had cinematography by Arvind Laad and editing by Mukhtar Ahmed. - - - -Humse Badhkar Kaun 1981 Full Movie 48 is a film that showcases the talent and charisma of its cast and crew. The film is a treat for all fans of Bollywood action drama and Mithun Chakraborty. 
- -` 1b8d091108 - - - - - diff --git a/spaces/hjzhp/cgpt-online/src/env.d.ts b/spaces/hjzhp/cgpt-online/src/env.d.ts deleted file mode 100644 index 88e0666bc38b7daa219a24a221f5f57e9860692e..0000000000000000000000000000000000000000 --- a/spaces/hjzhp/cgpt-online/src/env.d.ts +++ /dev/null @@ -1,16 +0,0 @@ -/// - -interface ImportMetaEnv { - readonly OPENAI_API_KEY: string - readonly HTTPS_PROXY: string - readonly OPENAI_API_BASE_URL: string - readonly HEAD_SCRIPTS: string - readonly SECRET_KEY: string - readonly SITE_PASSWORD: string - readonly OPENAI_API_MODEL: string - -} - -interface ImportMeta { - readonly env: ImportMetaEnv -} diff --git a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/utilities/recursive_delete_npz.py b/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/utilities/recursive_delete_npz.py deleted file mode 100644 index 60428778a73f336ade51805176b89e5780fc2384..0000000000000000000000000000000000000000 --- a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/utilities/recursive_delete_npz.py +++ /dev/null @@ -1,37 +0,0 @@ -# Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - - -from batchgenerators.utilities.file_and_folder_operations import * -import argparse -import os - - -def recursive_delete_npz(current_directory: str): - npz_files = subfiles(current_directory, join=True, suffix=".npz") - npz_files = [i for i in npz_files if not i.endswith("segFromPrevStage.npz")] # to be extra safe - _ = [os.remove(i) for i in npz_files] - for d in subdirs(current_directory, join=False): - if d != "pred_next_stage": - recursive_delete_npz(join(current_directory, d)) - - -if __name__ == "__main__": - parser = argparse.ArgumentParser(usage="USE THIS RESPONSIBLY! DANGEROUS! 
I (Fabian) use this to remove npz files " - "after I ran figure_out_what_to_submit") - parser.add_argument("-f", help="folder", required=True) - - args = parser.parse_args() - - recursive_delete_npz(args.f) diff --git a/spaces/huggingface-projects/wordalle/frontend/svelte.config.js b/spaces/huggingface-projects/wordalle/frontend/svelte.config.js deleted file mode 100644 index 84ba69cbc92feabd4162d8d1e46796849651055c..0000000000000000000000000000000000000000 --- a/spaces/huggingface-projects/wordalle/frontend/svelte.config.js +++ /dev/null @@ -1,32 +0,0 @@ -import adapter from '@sveltejs/adapter-static'; -import preprocess from 'svelte-preprocess'; - -const dev = process.env.NODE_ENV === 'development'; - -console.log('dev', dev); -/** @type {import('@sveltejs/kit').Config} */ -const config = { - // Consult https://github.com/sveltejs/svelte-preprocess - // for more information about preprocessors - preprocess: preprocess({ - postcss: true - }), - - kit: { - paths: { - base: '/static' - }, - adapter: adapter({ - pages: 'build', - assets: 'build', - fallback: null, - precompress: false - }), - - prerender: { - default: true - } - } -}; - -export default config; diff --git a/spaces/hysts/PnP-diffusion-features/README.md b/spaces/hysts/PnP-diffusion-features/README.md deleted file mode 100644 index fec82b4113778aeff248da00f9f3fc6fe1d3e773..0000000000000000000000000000000000000000 --- a/spaces/hysts/PnP-diffusion-features/README.md +++ /dev/null @@ -1,16 +0,0 @@ ---- -title: PnP Diffusion Features -emoji: 🐢 -colorFrom: blue -colorTo: pink -sdk: gradio -sdk_version: 3.36.1 -python_version: 3.10.11 -app_file: app.py -pinned: false -suggested_hardware: a10g-small ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference - -https://arxiv.org/abs/2211.12572 diff --git a/spaces/hyxue/HiFiFace-inference-demo/AdaptiveWingLoss/core/evaler.py b/spaces/hyxue/HiFiFace-inference-demo/AdaptiveWingLoss/core/evaler.py deleted file mode 100644 index 9a1a3c26560dc6a34067df513f7cc85798fa3b25..0000000000000000000000000000000000000000 --- a/spaces/hyxue/HiFiFace-inference-demo/AdaptiveWingLoss/core/evaler.py +++ /dev/null @@ -1,125 +0,0 @@ -import matplotlib - -matplotlib.use("Agg") -import math -import torch -import copy -import time -from torch.autograd import Variable -import shutil -from skimage import io -import numpy as np -from utils.utils import fan_NME, show_landmarks, get_preds_fromhm -from PIL import Image, ImageDraw -import os -import sys -import cv2 -import matplotlib.pyplot as plt - - -device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") - - -def eval_model( - model, dataloaders, dataset_sizes, writer, use_gpu=True, epoches=5, dataset="val", save_path="./", num_landmarks=68 -): - global_nme = 0 - model.eval() - for epoch in range(epoches): - running_loss = 0 - step = 0 - total_nme = 0 - total_count = 0 - fail_count = 0 - nmes = [] - # running_corrects = 0 - - # Iterate over data. 
- with torch.no_grad(): - for data in dataloaders[dataset]: - total_runtime = 0 - run_count = 0 - step_start = time.time() - step += 1 - # get the inputs - inputs = data["image"].type(torch.FloatTensor) - labels_heatmap = data["heatmap"].type(torch.FloatTensor) - labels_boundary = data["boundary"].type(torch.FloatTensor) - landmarks = data["landmarks"].type(torch.FloatTensor) - loss_weight_map = data["weight_map"].type(torch.FloatTensor) - # wrap them in Variable - if use_gpu: - inputs = inputs.to(device) - labels_heatmap = labels_heatmap.to(device) - labels_boundary = labels_boundary.to(device) - loss_weight_map = loss_weight_map.to(device) - else: - inputs, labels_heatmap = Variable(inputs), Variable(labels_heatmap) - labels_boundary = Variable(labels_boundary) - labels = torch.cat((labels_heatmap, labels_boundary), 1) - single_start = time.time() - outputs, boundary_channels = model(inputs) - single_end = time.time() - total_runtime += time.time() - single_start - run_count += 1 - step_end = time.time() - for i in range(inputs.shape[0]): - img = inputs[i] - img = img.cpu().numpy() - img = img.transpose((1, 2, 0)) * 255.0 - img = img.astype(np.uint8) - img = Image.fromarray(img) - # pred_heatmap = outputs[-1][i].detach().cpu()[:-1, :, :] - pred_heatmap = outputs[-1][:, :-1, :, :][i].detach().cpu() - pred_landmarks, _ = get_preds_fromhm(pred_heatmap.unsqueeze(0)) - pred_landmarks = pred_landmarks.squeeze().numpy() - - gt_landmarks = data["landmarks"][i].numpy() - if num_landmarks == 68: - left_eye = np.average(gt_landmarks[36:42], axis=0) - right_eye = np.average(gt_landmarks[42:48], axis=0) - norm_factor = np.linalg.norm(left_eye - right_eye) - # norm_factor = np.linalg.norm(gt_landmarks[36]- gt_landmarks[45]) - - elif num_landmarks == 98: - norm_factor = np.linalg.norm(gt_landmarks[60] - gt_landmarks[72]) - elif num_landmarks == 19: - left, top = gt_landmarks[-2, :] - right, bottom = gt_landmarks[-1, :] - norm_factor = math.sqrt(abs(right - left) * abs(top - bottom)) - gt_landmarks = gt_landmarks[:-2, :] - elif num_landmarks == 29: - # norm_factor = np.linalg.norm(gt_landmarks[8]- gt_landmarks[9]) - norm_factor = np.linalg.norm(gt_landmarks[16] - gt_landmarks[17]) - single_nme = ( - np.sum(np.linalg.norm(pred_landmarks * 4 - gt_landmarks, axis=1)) / pred_landmarks.shape[0] - ) / norm_factor - - nmes.append(single_nme) - total_count += 1 - if single_nme > 0.1: - fail_count += 1 - if step % 10 == 0: - print( - "Step {} Time: {:.6f} Input Mean: {:.6f} Output Mean: {:.6f}".format( - step, step_end - step_start, torch.mean(labels), torch.mean(outputs[0]) - ) - ) - # gt_landmarks = landmarks.numpy() - # pred_heatmap = outputs[-1].to('cpu').numpy() - gt_landmarks = landmarks - batch_nme = fan_NME(outputs[-1][:, :-1, :, :].detach().cpu(), gt_landmarks, num_landmarks) - # batch_nme = 0 - total_nme += batch_nme - epoch_nme = total_nme / dataset_sizes["val"] - global_nme += epoch_nme - nme_save_path = os.path.join(save_path, "nme_log.npy") - np.save(nme_save_path, np.array(nmes)) - print( - "NME: {:.6f} Failure Rate: {:.6f} Total Count: {:.6f} Fail Count: {:.6f}".format( - epoch_nme, fail_count / total_count, total_count, fail_count - ) - ) - print("Evaluation done! 
Average NME: {:.6f}".format(global_nme / epoches)) - print("Everage runtime for a single batch: {:.6f}".format(total_runtime / run_count)) - return model diff --git a/spaces/hyxue/HiFiFace-inference-demo/Deep3DFaceRecon_pytorch/models/arcface_torch/configs/wf12m_mbf.py b/spaces/hyxue/HiFiFace-inference-demo/Deep3DFaceRecon_pytorch/models/arcface_torch/configs/wf12m_mbf.py deleted file mode 100644 index d1cb93b2f168e3a64e65d1f8d6cf058e41676c6a..0000000000000000000000000000000000000000 --- a/spaces/hyxue/HiFiFace-inference-demo/Deep3DFaceRecon_pytorch/models/arcface_torch/configs/wf12m_mbf.py +++ /dev/null @@ -1,28 +0,0 @@ -from easydict import EasyDict as edict - -# make training faster -# our RAM is 256G -# mount -t tmpfs -o size=140G tmpfs /train_tmp - -config = edict() -config.margin_list = (1.0, 0.0, 0.4) -config.network = "mbf" -config.resume = False -config.output = None -config.embedding_size = 512 -config.sample_rate = 1.0 -config.interclass_filtering_threshold = 0 -config.fp16 = True -config.weight_decay = 1e-4 -config.batch_size = 128 -config.optimizer = "sgd" -config.lr = 0.1 -config.verbose = 2000 -config.dali = False - -config.rec = "/train_tmp/WebFace12M" -config.num_classes = 617970 -config.num_image = 12720066 -config.num_epoch = 20 -config.warmup_epoch = 0 -config.val_targets = [] diff --git a/spaces/ibvhim/Gradio-Apps/README.md b/spaces/ibvhim/Gradio-Apps/README.md deleted file mode 100644 index 09c883d8ac7e9da77a9500384ee39557e22f6187..0000000000000000000000000000000000000000 --- a/spaces/ibvhim/Gradio-Apps/README.md +++ /dev/null @@ -1,36 +0,0 @@ ---- -title: Gradio Apps -emoji: 📚 -language: - - en - - hi -tags: - - Chatbot - - Image Classification - - Text-to-Speech - - Speech-to-Text -colorFrom: indigo -colorTo: gray -sdk: gradio -sdk_version: 3.15.0 -app_file: app.py -pinned: false ---- - -# Gradio Web Applications : __This repository contains various ML/CV/NLP web applications By [ibvhim](https://github.com/ibvhim)__ - ---- -#### 1. ChatBot 🤖 [DialoBot by Microsoft](https://huggingface.co/microsoft/DialoGPT-small): - - __DialoGPT__ is a variant of the GPT-2 language model developed by Microsoft. It is designed to generate human-like text, and is particularly well-suited for generating dialogue. DialoGPT is trained on a large dataset of human conversations, and is able to generate responses that are coherent and appropriate in a given context. The model is also able to maintain conversation coherence over a series of turns, making it well-suited for tasks such as chatbots and dialogue systems. The original GPT-2 model was developed by OpenAI, and DialoGPT is based on this model. - - __Preview:__ ![Preview](https://imgur.com/axpPfY2.gif) - -#### 2. Image Classification 🖼 [EfficientNet-Lite-4 by GoogleAI](https://huggingface.co/onnx/EfficientNet-Lite4/tree/main): - - __EfficientNet Lite 4__ is a smaller version of the EfficientNet architecture, which was developed by Google AI. The "Lite" versions of EfficientNet are designed to be smaller and faster than the original models, while still maintaining good performance on a variety of tasks. EfficientNet Lite 4 is a convolutional neural network (CNN) that is trained to recognize patterns in data. It is often used for image classification, object detection, and other tasks that involve analyzing and understanding visual data. 
- - __Preview__ ![Preview](https://imgur.com/WnRyRrT.gif) - -#### Voice-to-Text Translation & Chat 📩 - - __Voice-to-Text Translated Chatbots__ are chatbots that are able to transcribe spoken language into text and translate it into another language. They are often used in customer service or language learning applications, and are able to handle a conversation with a user in real time. The chatbot is able to understand the user's spoken input, transcribe it into text, translate it into the desired language, and generate a response in the same language. The response is then translated back into the user's language and spoken aloud to the user. This allows for a seamless conversation between users who speak different languages. - - __Preview__ ![Preview](https://imgur.com/QNWY07B.gif) ---- -
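
To make the ChatBot description above concrete, here is a minimal sketch of the multi-turn pattern DialoGPT is normally used with. It follows the standard `transformers` usage for `microsoft/DialoGPT-small` and is only an illustration — it is not the code of the Space itself, which presumably wraps a similar loop behind a Gradio chat interface.

```python
# Minimal multi-turn chat loop with DialoGPT-small (illustrative, not the Space's own code).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-small")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-small")

chat_history_ids = None
for _ in range(3):  # three user turns
    user_text = input(">> User: ")
    new_ids = tokenizer.encode(user_text + tokenizer.eos_token, return_tensors="pt")
    # Append the new user turn to the running history so replies stay coherent across turns.
    bot_input_ids = new_ids if chat_history_ids is None else torch.cat([chat_history_ids, new_ids], dim=-1)
    chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
    reply = tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)
    print("DialoGPT:", reply)
```
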

Please do ❤ the repository if you liked it!

-

Byee!

\ No newline at end of file diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Discografia De La Rondalla Bautista Ebenezer.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Discografia De La Rondalla Bautista Ebenezer.md deleted file mode 100644 index bd049c5fee29a3a2af27b47806c81b80f982477c..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Discografia De La Rondalla Bautista Ebenezer.md +++ /dev/null @@ -1,7 +0,0 @@ - -

Volvera Rondalla La Fe download a new pc game full version for personal computer. Volvera Rondalla La Fe Rondalla Bautista Ebenezer An Amaraj. Simple to install and should able to recognize any stylized character. Finalmente, prepara la carrera de “La Fe” que creciera en el 2012. Leer La Fe como música de lucha.

-

discografia de la rondalla bautista ebenezer


Download ✵✵✵ https://urlin.us/2uEwUB



-

Rondalla Bautista Ebenezer 40 Anos Tributo, Dlds La Fe Como Música de Lucha Barbita. Online Shopping from a great selection at Music Store, Discogs, Home & Living, Entertainment, Guitar, Apparel & Accessories at CSCMusic. Volvera Rondalla La Fe https://www.pandora.com/artist/ rondalla-bautista-ebenezer/eres-fiel-vol-6/dobla fielbol/ronda mujer/viva/viva-66776826. El nombre de la pesca en las cajas de arena, sus códigos completos y ahora el rondalla. Volvera Rondalla La Fe https://www.pandora.com/artist/ rondalla-bautista-ebenezer/eres-fiel-vol-6/dobla fielbol/ronda mujer/viva/viva-66776826. Rondalla Bautista Ebenezer rondalla cristiana la fe cnticos del recuerdo en concierto 01 biografa 02 tributo 03 que haria provided to by ditto. Volvera Rondalla La Fe Discografia De La Rondalla Bautista Ebenezer. 8 2015 rondalla bautista ebenezer rondalla cristiana la fe cnticos del recuerdo en concierto 01 biografa 02 tributo 03 que haria provided to by ditto.

-

discografia de la rondalla bautista ebenezer. the assassin's creed valhalla not opening issue has been resolved. the following is an index of images discografia de la rondalla bautista ebenezer. many different reviews, screenshots, artwork and more. albums singles and eps featuring rondalla bautista ebenezer fans also like about your privacy. rondalla bautista ebenezer. discografia de la rondalla bautista ebenezer. finally, try launching the game file to check whether the assassin's creed valhalla not opening issue has been.

899543212b
-
-
\ No newline at end of file diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/HHD Online Player (Gunday Movie In Hindi Download 720p).md b/spaces/inplisQlawa/anything-midjourney-v4-1/HHD Online Player (Gunday Movie In Hindi Download 720p).md deleted file mode 100644 index 63eb9621b7c47465f9112a939979d541a32e1596..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/HHD Online Player (Gunday Movie In Hindi Download 720p).md +++ /dev/null @@ -1,8 +0,0 @@ - -

now with more than 3,000 movies and shows available to stream, watch and download in hd quality, hdtvnow.com is. download ganday movie full hd 1020p in hindi hd online player in hindi full movie hd free download in hindi. gunday movie download full 720p full movie download in hd 720p 1080p 100mbps (ramayan tv series hd full movie in hindi). i had seen gunday movie before in hindi full movie hd online but there was no download option.

-

HHD Online Player (Gunday movie in hindi download 720p)


DOWNLOADhttps://urlin.us/2uExv8



-

gunday movie full hd 1020p in hindi hd online player in hindi full movie hd free download in hindi - online hindi movie download videos. gunday full movie download hindi 720p. full hd karmatama movie hd 1080p web-dl | hindi hd movie download. i was able to download the movie from the same link earlier but it was a 1.5gb file. gunday movie full hd 1080p download. movie singing subtitle english full hindi movie hd 1080p download full hindi movie hd.

-

hindi full movie hd online download, download full hd hindi movie in hindi hd 1080p (32bit). despite being my favourite gunday movie and the fact that i was more than happy with the quality, i was disappointed with the price of the dvd, so decided to download it for free on 720p full hd download. hd online player (gunday movie in hindi hd download 720p) in hindi full movie hd 1080p we have provide hd free download online hd.

-

hindi full movie hd online download, hd online player (gunday movie in hindi hd download 720p). download gunday (2014) full movie in hindi full hd 1080p. gunday full movie download hindi 720p. download full hd karmatama movie hd 1080p web-dl | hindi hd movie download.

899543212b
-
-
\ No newline at end of file diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Hetman File Repair Keygen Download.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Hetman File Repair Keygen Download.md deleted file mode 100644 index a608babdfa933ca714dc5d8d8cb2081ddd900051..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Hetman File Repair Keygen Download.md +++ /dev/null @@ -1,10 +0,0 @@ -

hetman file repair keygen download


Download Zip 🔗 https://urlin.us/2uExDp



-
-Hetman File Repair Keygen Torrent . hetman file repair, hetman file repair crack, hetman file repair registration key, hetman file repair download, . Hetman File Repair Download Keys.File Repair is a program for recovering corrupted Microsoft Office documents. -The File Repair program is designed to repair damaged Microsoft Office documents. -The program has a fairly clear and. -Recovery of damaged Microsoft Office documents. -As a result of experimenting with different versions of MS Office, users may find that some of its documents are damaged or not working. 8a78ff9644
-
-
-

diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Multilizer 2013 Pdf Translator Full Crack __EXCLUSIVE__.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Multilizer 2013 Pdf Translator Full Crack __EXCLUSIVE__.md deleted file mode 100644 index ccf20ce8120ff002947b1875b47248b76d5e127b..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Multilizer 2013 Pdf Translator Full Crack __EXCLUSIVE__.md +++ /dev/null @@ -1,38 +0,0 @@ -

Multilizer 2013 Pdf Translator Full Crack


Download Zip ►►► https://urlin.us/2uExPR



- -google docs 2013 pdf translator - -nrf adc driver (image) - -nrf adc driver - -b65l router wifi plus bluetooth dongle - -nokia 3810 - -unconnected usb - -the bootlog - -and i have no idea what to do to fix it. i need help. - -A: - -The solution for my problem was sudo apt-get install linux-headers-generic-lts-trusty. - -I'm now running on Ubuntu Trusty Tahr. - -You can check your kernel version with: - -$ uname -r - -3.13.0-26-generic - -I was able to install the AMD driver using the terminal with sudo apt-get install fglrx fglrx-amdcccle, then restart the computer, and it worked like a charm. - -A comparison of male and female pattern baldness treatments. - -Several different modalities for the treatment of male and female pattern baldness are available, ranging from drug treatments such as minoxidil to surgical therapies such as follicular unit transplantation. Previous studies have compared these methods, most frequently for male pattern baldness. This study compares the efficacy and safety of three commercially available male pattern baldness treatments (minoxidil, finasteride, and the FUTRABALD system) and the effectiveness and safety of a range of surgical methods for female pattern baldness (follicular unit transplantation and strip grafting). For each treatment modality, three systematic reviews of the literature were carried out and a meta-analysis was performed for the reviews comparing modalities. Information about the safety and efficacy of treatments was extracted from the clinical trials and published literature. Eight treatments for male pattern baldness were compared with eight treatments for female pattern baldness (including three finasteride-only treatments and two finasteride-minoxidil combination treatments). The FUTRABALD device was significantly better than other surgical modalities, and finasteride 1 mg daily was as effective as minoxidil 2% twice daily. The FUTRABALD device is the most effective non-surgical method for treating male pattern baldness, whereas finasteride 1 mg daily is as effective as minoxidil 2% twice daily.A multipartite extension to the Gilli-Williamson-Walker ( 4fefd39f24
-
-
-

diff --git a/spaces/inreVtussa/clothingai/Examples/Dcs A 10c Warthog Crack.md b/spaces/inreVtussa/clothingai/Examples/Dcs A 10c Warthog Crack.md deleted file mode 100644 index eaea19c51af5b6118d820052da8346221b488381..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Dcs A 10c Warthog Crack.md +++ /dev/null @@ -1,6 +0,0 @@ -

Dcs A 10c Warthog Crack


DOWNLOADhttps://tiurll.com/2uCiRc



- -Mindcrack FTB S02 E44 Do AnderZ Hate Americans? ImAnderZEL ... Digital Combat Simulator (DCS ... 4d29de3e1b
-
-
-

diff --git a/spaces/ismot/1702t1/Post-Porcessing.md b/spaces/ismot/1702t1/Post-Porcessing.md deleted file mode 100644 index f9633df488ffe2dd4050d78786dba69f6b0d3cae..0000000000000000000000000000000000000000 --- a/spaces/ismot/1702t1/Post-Porcessing.md +++ /dev/null @@ -1,35 +0,0 @@ -# Post-Processing -## Step - -1. Simplify polygon by [DP algorithm](https://en.wikipedia.org/wiki/Ramer%E2%80%93Douglas%E2%80%93Peucker_algorithm) - -![img.png](src/fig/post_processing/img_0.png) - -2. Detect occlusion, calculating box fill with 1 - -![img.png](src/fig/post_processing/img_1.png) - -3. Fill in reasonable sampling section - -![img.png](src/fig/post_processing/img_2.png) - -4. Output processed polygon - -![img.png](src/fig/post_processing/img_3.png) - -## performance -It works, and a performance comparison on the MatterportLayout dataset: - -| Method | 2D IoU(%) | 3D IoU(%) | RMSE | $\mathbf{\delta_{1}}$ | -|--|--|--|--|--| -without post-proc | 83.52 | 81.11 | 0.204 | 0.951 | -original post-proc |83.12 | 80.71 | 0.230 | 0.936|\ -optimized post-proc | 83.48 | 81.08| 0.214 | 0.940 | - -original: - -![img.png](src/fig/post_processing/original.png) - -optimized: - -![img.png](src/fig/post_processing/optimized.png) diff --git a/spaces/issenn/so-vits-svc-4.0-spaces-sample/modules/gradio/components.py b/spaces/issenn/so-vits-svc-4.0-spaces-sample/modules/gradio/components.py deleted file mode 100644 index 7a6aa6e737445b37019b184e0a689b59d0c2a6d1..0000000000000000000000000000000000000000 --- a/spaces/issenn/so-vits-svc-4.0-spaces-sample/modules/gradio/components.py +++ /dev/null @@ -1,33 +0,0 @@ -import gradio as gr - - -def add_classes_to_gradio_component(comp): - """ - this adds gradio-* to the component for css styling (ie gradio-button to gr.Button), as well as some others - """ - - comp.elem_classes = [f"gradio-{comp.get_block_name()}", *(comp.elem_classes or [])] - - if getattr(comp, 'multiselect', False): - comp.elem_classes.append('multiselect') - -def IOComponent_init(self, *args, **kwargs): - res = original_IOComponent_init(self, *args, **kwargs) - add_classes_to_gradio_component(self) - return res - -original_IOComponent_init = gr.components.IOComponent.__init__ -gr.components.IOComponent.__init__ = IOComponent_init - - -class Button(gr.Button): - """ - Small button with single emoji as text, fits inside gradio forms - """ - - def __init__(self, *args, **kwargs): - classes = kwargs.pop("elem_classes", []) - super().__init__(*args, elem_classes=["tool", *classes], **kwargs) - - def get_block_name(self): - return "button" diff --git a/spaces/ivn888/Twitter-dashboard/panel-geodashboard-twitter/graphs/bar_plots.py b/spaces/ivn888/Twitter-dashboard/panel-geodashboard-twitter/graphs/bar_plots.py deleted file mode 100644 index c8d21cde0fe7917952518fe1f6afcf864381219d..0000000000000000000000000000000000000000 --- a/spaces/ivn888/Twitter-dashboard/panel-geodashboard-twitter/graphs/bar_plots.py +++ /dev/null @@ -1,55 +0,0 @@ -import hvplot.pandas # noqa -from bokeh.models import HoverTool, WheelZoomTool -from pd_utils.utils import filter_df_by_bbox - -BAR_COLOR = "#03DAC6" - - -def get_top5_langs(in_data, x_range, y_range): - """ - Returns a bar plot showing the top 5 - languages within the current map extent. 
- """ - - def hook(plot, element): - """ - Custom hook for disabling zoom on axis - """ - - # Disable zoom on axis - for tool in plot.state.toolbar.tools: - if isinstance(tool, WheelZoomTool): - tool.zoom_on_axis = False - break - - # Filter the tweet locations by bounding box - out_data = filter_df_by_bbox(in_data, x_range, y_range) - - # Define a custom Hover tool for the bar plot - lang_hover = HoverTool( - tooltips=[("Language", "@tweet_lang"), ("Tweets", "@count")], - point_policy="follow_mouse", - ) - - # Get the top 5 most common languages - lang_df = out_data["tweet_lang"].value_counts().head(5).to_frame() - lang_df = lang_df.reset_index() - - # Create the bar plot - lang_plt = lang_df.hvplot.bar( - title="", - x="tweet_lang", - y="count", - xlabel="Language", - ylabel="Tweets", - yformatter="%.0f", - line_width=0, - color=BAR_COLOR, - tools=[lang_hover], - alpha=0.7, - min_height=300, - min_width=300, - responsive=True, - ).opts(hooks=[hook]) - - return lang_plt diff --git a/spaces/jbilcke-hf/VideoChain-UI/src/components/ui/button.tsx b/spaces/jbilcke-hf/VideoChain-UI/src/components/ui/button.tsx deleted file mode 100644 index d0042a291a9dfc9d3ca1bc323f08a3f276df79b5..0000000000000000000000000000000000000000 --- a/spaces/jbilcke-hf/VideoChain-UI/src/components/ui/button.tsx +++ /dev/null @@ -1,56 +0,0 @@ -import * as React from "react" -import { Slot } from "@radix-ui/react-slot" -import { cva, type VariantProps } from "class-variance-authority" - -import { cn } from "@/lib/utils" - -const buttonVariants = cva( - "inline-flex items-center justify-center rounded-md text-sm font-medium ring-offset-white transition-colors focus-visible:outline-none focus-visible:ring-2 focus-visible:ring-stone-400 focus-visible:ring-offset-2 disabled:pointer-events-none disabled:opacity-50 dark:ring-offset-stone-950 dark:focus-visible:ring-stone-800", - { - variants: { - variant: { - default: "bg-stone-900 text-stone-50 hover:bg-stone-900/90 dark:bg-stone-50 dark:text-stone-900 dark:hover:bg-stone-50/90", - destructive: - "bg-red-500 text-stone-50 hover:bg-red-500/90 dark:bg-red-900 dark:text-red-50 dark:hover:bg-red-900/90", - outline: - "border border-stone-200 bg-white hover:bg-stone-100 hover:text-stone-900 dark:border-stone-800 dark:bg-stone-950 dark:hover:bg-stone-800 dark:hover:text-stone-50", - secondary: - "bg-stone-100 text-stone-900 hover:bg-stone-100/80 dark:bg-stone-800 dark:text-stone-50 dark:hover:bg-stone-800/80", - ghost: "hover:bg-stone-100 hover:text-stone-900 dark:hover:bg-stone-800 dark:hover:text-stone-50", - link: "text-stone-900 underline-offset-4 hover:underline dark:text-stone-50", - }, - size: { - default: "h-10 px-4 py-2", - sm: "h-9 rounded-md px-3", - lg: "h-11 rounded-md px-8", - icon: "h-10 w-10", - }, - }, - defaultVariants: { - variant: "default", - size: "default", - }, - } -) - -export interface ButtonProps - extends React.ButtonHTMLAttributes, - VariantProps { - asChild?: boolean -} - -const Button = React.forwardRef( - ({ className, variant, size, asChild = false, ...props }, ref) => { - const Comp = asChild ? 
Slot : "button" - return ( - - ) - } -) -Button.displayName = "Button" - -export { Button, buttonVariants } diff --git a/spaces/jie1/succ1/DLKcat/DeeplearningApproach/Code/analysis/SuppleFig8b_DNDS.py b/spaces/jie1/succ1/DLKcat/DeeplearningApproach/Code/analysis/SuppleFig8b_DNDS.py deleted file mode 100644 index 8b7e239c13bbbd0bacfddaca52d3073f26e9138f..0000000000000000000000000000000000000000 --- a/spaces/jie1/succ1/DLKcat/DeeplearningApproach/Code/analysis/SuppleFig8b_DNDS.py +++ /dev/null @@ -1,395 +0,0 @@ -#!/usr/bin/python -# coding: utf-8 - -# Author: LE YUAN - -import os -import csv -import math -import xlrd -import pickle -import numpy as np -import pandas as pd -from rdkit import Chem -from Bio import SeqIO -from collections import defaultdict -from scipy import stats -from scipy.stats import ranksums -import seaborn as sns -import matplotlib.pyplot as plt -import matplotlib.pyplot as plt -from matplotlib import rc - - -def getIndex() : - # Downloaded the data orthomcl_SeqIDs_index.txt from the Figshare data repository (10.6084/m9.figshare.5854692; - # https://figshare.com/articles/Tempo_and_mode_of_genome_evolution_in_the_budding_yeast_subphylum/5854692) - # get the ortholog accoding to protein sequence id, that means Alloascoidea_hylecoeti@Seq_1 as the key, 0_0 as the value - with open("../../Data/directory/to/orthomcl_SeqIDs_index.txt", "r") as indexFile : - indexs = indexFile.readlines() - - indexSeqId = dict() - for index in indexs : - index_Seq = index.strip().split(": ") - indexSeqId[index_Seq[0]] = index_Seq[1] - - return indexSeqId - -def getIndex2() : - # Downloaded the data orthomcl_SeqIDs_index.txt from the Figshare data repository (10.6084/m9.figshare.5854692; - # https://figshare.com/articles/Tempo_and_mode_of_genome_evolution_in_the_budding_yeast_subphylum/5854692) - # get the ortholog accoding to protein sequence id, that means Alloascoidea_hylecoeti@Seq_1 as the key, 0_0 as the value - with open("../../Data/directory/to/orthomcl_SeqIDs_index.txt", "r") as indexFile : - indexs = indexFile.readlines() - - indexSeqId = dict() - for index in indexs : - index_Seq = index.strip().split(": ") - indexSeqId[index_Seq[1]] = index_Seq[0] - - return indexSeqId - -def getOrthologIndex() : - # Downloaded the data orthomcl_clusters.txt from the Figshare data repository (10.6084/m9.figshare.5854692; - # https://figshare.com/articles/Tempo_and_mode_of_genome_evolution_in_the_budding_yeast_subphylum/5854692) - with open("../../Data/directory/to/orthomcl_clusters.txt", "r") as orthologFile : - orthologs = orthologFile.readlines() - - orthologIndex = dict() - for ortholog in orthologs : - ortholog_Index = ortholog.strip().split(" ") - # orthologIndex = {'OG1001': {'328_2397', '189_1696', '279_256',.....}} - ortholog = ortholog_Index[0][:-1] - - orthologIndex[ortholog] = ortholog_Index[1:] - - return orthologIndex - -def getOrthologIndex2() : - # Downloaded the data orthomcl_clusters.txt from the Figshare data repository (10.6084/m9.figshare.5854692; - # https://figshare.com/articles/Tempo_and_mode_of_genome_evolution_in_the_budding_yeast_subphylum/5854692) - with open("../../Data/directory/to/orthomcl_clusters.txt", "r") as orthologFile : - orthologs = orthologFile.readlines() - - orthologIndex = dict() - for ortholog in orthologs : - ortholog_Index = ortholog.strip().split(" ") - # orthologIndex = {'OG1001': {'328_2397', '189_1696', '279_256',.....}} - ortholog = ortholog_Index[0][:-1] - - for index in ortholog_Index[1:] : - orthologIndex[index] = ortholog - # print(orthologIndex) # 
{'302_3224': 'OG1000', '317_1502': 'OG1000', '318_1938': 'OG1001', '320_301': 'OG1001', '325_5347': 'OG1001'} - - return orthologIndex - -def get_organisms() : - filenames = os.listdir('../../Data/MLKCATRESULT/') - filenames = [filename.split('ForKcat')[0] for filename in filenames if filename.endswith('.txt')] - print(len(filenames)) # 343 - # print(filenames[:3]) # ['yHMPu5000035645_Yarrowia_divulgata', 'Saccharomyces_uvarum', 'Cyberlindnera_jadinii'] - return filenames - -def getDNDS_all() : - with open('../../Data/gene_dn_ds_03_02.csv', 'r') as infile : - lines = infile.readlines()[1:] - # print(len(lines1)) - - dnds_dict = dict() - for line in lines : - data = line.strip().split(',') - # print(data) - if data[2] : - OG_line = line.strip().split(',')[1].split('.')[0] - dnds_score = line.strip().split(',')[2] - # print(dnds_score) - - dnds_dict[OG_line] = float(dnds_score) - - return dnds_dict - -def median(lst): - sortedLst = sorted(lst) - lstLen = len(lst) - index = (lstLen - 1) // 2 - - if (lstLen % 2): - return sortedLst[index] - else: - return (sortedLst[index] + sortedLst[index + 1])/2.0 - -def species_clade() : - with open("../../../BayesianApproach/Data/343_phenotype_clade.tsv", 'r') as infile : - lines = infile.readlines()[1:] - - species = list() - clade = list() - - for line in lines : - data = line.strip().split('\t') - species.append(data[0]) - clade.append(data[1]) - - print(species[-3:]) - print(clade[-3:]) - - species_clade = dict(zip(species,clade)) - # print(len(species_clade)) - return species_clade - -def main() : - - SeqIdIndex = getIndex2() - IndexOrtholog = getOrthologIndex2() - ortholog_DNDS = getDNDS_all() - organisms = get_organisms() - species_clades = species_clade() - - # organisms = ['Saccharomyces_cerevisiae','Yarrowia_lipolytica','Kluyveromyces_marxianus','Kluyveromyces_lactis','Komagataella_pastoris','Lachancea_kluyveri','Candida_albicans'] - # organisms = ['Saccharomyces_cerevisiae','Yarrowia_lipolytica','Kluyveromyces_marxianus','Kluyveromyces_lactis','Lachancea_kluyveri', - # 'Saccharomyces_uvarum'] - - all_clades = ['Outgroup', 'Lipomycetaceae', 'Trigonopsidaceae', 'Dipodascaceae/Trichomonascaceae', 'Alloascoideaceae', 'Sporopachydermia clade', - 'Pichiaceae', 'CUG-Ala', 'CUG-Ser1', 'CUG-Ser2', 'Phaffomycetaceae', 'Saccharomycodaceae', 'Saccharomycetaceae'] - - all_clades_order = {'Outgroup':1, 'Lipomycetaceae':2, 'Trigonopsidaceae':3, 'Dipodascaceae/Trichomonascaceae':4, 'Alloascoideaceae':5, 'Sporopachydermia clade':6, - 'Pichiaceae':7, 'CUG-Ala':8, 'CUG-Ser1':9, 'CUG-Ser2':10, 'Phaffomycetaceae':11, 'Saccharomycodaceae':12, 'Saccharomycetaceae':13} - - alldata = dict() - alldata['type'] = list() - alldata['clade'] = list() - alldata['Kcat_value'] = list() - counts_cluster_1 = list() - counts_cluster_2 = list() - - for clade in all_clades : - for organism in organisms : - if species_clades[organism.lower()] == clade : - # print('This is', organism) - - with open('../prediction/343species_0115/%s_PredictionResults.txt' % organism, 'r') as infile : - lines = infile.readlines() - - # seqIds_values = dict() - seq_kcat = list() - - for line in lines[1:] : - seqIds_values = dict() - # seq_kcat = list() - data = line.strip('\n').split('\t') - smiles = data[4].split(';') - seqIds = data[5].split(';') - values = data[-1].split(';') - - if values : - for i, seqId in enumerate(seqIds) : - for value in values : - if value : - try : - Kcats = value.split(',') - if Kcats[i] != '#' : - seqIds_values[seqId].append(float(Kcats[i])) - else : - pass - except : - Kcat 
= list() - if Kcats[i] != '#' : - Kcat.append(float(Kcats[i])) - seqIds_values[seqId] = Kcat - else : - pass - # print(seqIds_values) - - for seqId, value in seqIds_values.items() : - max_value = max(value) - seq_kcat.append((seqId, max_value)) - - # print(len(seq_kcat)) # 5876 - # print(seq_kcat[:3]) - seq_kcat_no_copy = list(set(seq_kcat)) - # print(len(seq_kcat_no_copy)) # 2992 - # print(seq_kcat_no_copy[:3]) - # print(len(seqIds_values)) - - for item in seq_kcat_no_copy : - seqId = item[0] - max_value = item[1] - index = SeqIdIndex[seqId] - ortholog = IndexOrtholog[index] - try : - dnds = float(ortholog_DNDS[ortholog]) - # print(dnds) - kcatValue = math.log10(max_value) - - if dnds>0 and dnds<=0.15 : - # alldata['type'].append('Conserved') - alldata['type'].append('dN/dS <= 0.15') - # alldata['clade'].append(clade) - alldata['clade'].append(all_clades_order[clade]) - alldata['Kcat_value'].append(kcatValue) - else : - # alldata['type'].append('Non-conserved') - alldata['type'].append('dN/dS > 0.15') - # alldata['clade'].append(clade) - alldata['clade'].append(all_clades_order[clade]) - alldata['Kcat_value'].append(kcatValue) - - except : - continue - - # All clades: - # ['Saccharomycodaceae', 'CUG-Ser1', 'CUG-Ser2', 'Dipodascaceae/Trichomonascaceae', 'Pichiaceae', 'Lipomycetaceae', 'Alloascoideaceae', - # 'Sporopachydermia clade', 'Saccharomycetaceae', 'Trigonopsidaceae', 'Phaffomycetaceae', 'CUG-Ala', 'Outgroup'] - - # print(alldata['type'][:3]) - # print(alldata['clade'][:3]) - # print(alldata['Kcat_value'][:3]) - - # print(len(alldata['type'])) - # print(len(alldata['clade'])) - # print(len(alldata['Kcat_value'])) - - allData = pd.DataFrame(alldata) - # print(type(allData)) - - # for clade in all_clades : - # print('This is the clade:', clade) - # cluster_1 = list() - # cluster_2 = list() - # # types = allData.iloc[:,1] - # # print(len(types)) - # # print(types[:3]) - # # for clade_type in types : - # # if clade_type == clade : - # for row_index, row in allData.iterrows() : - # if row['clade'] == clade and row['type'] == 'dN/dS <= 0.15' : - # # print(row['Kcat_value']) - # cluster_1.append(row['Kcat_value']) - # if row['clade'] == clade and row['type'] == 'dN/dS > 0.15' : - # # print(row['Kcat_value']) - # cluster_2.append(row['Kcat_value']) - - # stat, p_value = ranksums(cluster_1,cluster_2) - # print('The P_value between the two dN/dS clusters is:', p_value) - - # Results : - # This is the clade: Outgroup - # The P_value between the two dN/dS clusters is: 1.6243302130328922e-61 - # This is the clade: Lipomycetaceae - # The P_value between the two dN/dS clusters is: 7.879651158117646e-67 - # This is the clade: CUG-Ser1 - # The P_value between the two dN/dS clusters is: 0.0 - # This is the clade: Phaffomycetaceae - # The P_value between the two dN/dS clusters is: 3.6142539325434596e-75 - # This is the clade: Dipodascaceae/Trichomonascaceae - # The P_value between the two dN/dS clusters is: 8.512690063502762e-117 - # This is the clade: Trigonopsidaceae - # The P_value between the two dN/dS clusters is: 4.157606980523744e-25 - # This is the clade: Saccharomycodaceae - # The P_value between the two dN/dS clusters is: 7.633228849794443e-58 - # This is the clade: Sporopachydermia clade - # The P_value between the two dN/dS clusters is: 1.2098972408782565e-07 - # This is the clade: Pichiaceae - # The P_value between the two dN/dS clusters is: 1.8480664486765028e-291 - # This is the clade: CUG-Ser2 - # The P_value between the two dN/dS clusters is: 2.801972561349211e-19 - # This is the 
clade: CUG-Ala - # The P_value between the two dN/dS clusters is: 3.1431166089138013e-38 - # This is the clade: Saccharomycetaceae - # The P_value between the two dN/dS clusters is: 3.553336840154913e-298 - # This is the clade: Alloascoideaceae - # The P_value between the two dN/dS clusters is: 0.0002450681317830253 - - plt.figure(figsize=(2.5, 2.0)) - # To solve the 'Helvetica' font cannot be used in PDF file - # https://stackoverflow.com/questions/59845568/the-pdf-backend-does-not-currently-support-the-selected-font - rc('font',**{'family':'serif','serif':['Helvetica']}) - plt.rcParams['pdf.fonttype'] = 42 - - plt.axes([0.12,0.12,0.83,0.83]) - - plt.tick_params(direction='in') - plt.tick_params(which='major',length=1.5) - plt.tick_params(which='major',width=0.4) - plt.tick_params(which='major',width=0.4) - - palette = {"dN/dS <= 0.15": '#b2182b', "dN/dS > 0.15": '#2166ac'} - - ax = sns.boxplot(data=alldata, x="clade", y="Kcat_value", hue="type", - palette=palette, showfliers=False, linewidth=0.5) - - # https://stackoverflow.com/questions/58476654/how-to-remove-or-hide-x-axis-label-from-seaborn-boxplot - # plt.xlabel(None) will remove the Label, but not the ticks. - ax.set(xlabel=None) - # ax.set(xticks=None) - - for patch in ax.artists: - r, g, b, a = patch.get_facecolor() - patch.set_facecolor((r, g, b, 0.3)) - - # print(ax.artists) - # print(ax.lines) - # print(len(ax.lines)) - # https://cduvallet.github.io/posts/2018/03/boxplots-in-python - for i, artist in enumerate(ax.artists): - # print(i) - - if i % 2 == 0: - col = '#2166ac' - else: - col = '#b2182b' - - # if i % 2 == 0: - # col = '#b2182b' - # else: - # col = '#2166ac' - - # This sets the color for the main box - artist.set_edgecolor(col) - - # Each box has 5 associated Line2D objects (to make the whiskers, fliers, etc.) 
- # Loop over them here, and use the same colour as above - for j in range(i*5,i*5+5): - # print(j) - line = ax.lines[j] - line.set_color(col) - line.set_mfc(col) - line.set_mec(col) - handles = [ax.artists[0], ax.artists[1]] - - # for tick in ax.get_xticklabels() : - # tick.set_rotation(30) - - plt.rcParams['font.family'] = 'Helvetica' - - for i in range(len(all_clades)) : - plt.text(i-0.3, 2.6, '***', fontweight ="normal", fontsize=6) - - plt.ylabel("$k$$_\mathregular{cat}$ value", fontname='Helvetica', fontsize=7) - - plt.xticks(rotation=30,ha='right') - plt.ylim(-2,5) - plt.yticks([-2,-1,0,1,2,3,4,5]) - plt.xticks(fontsize=7) - plt.yticks(fontsize=6) - - ax.spines['bottom'].set_linewidth(0.5) - ax.spines['left'].set_linewidth(0.5) - ax.spines['top'].set_linewidth(0.5) - ax.spines['right'].set_linewidth(0.5) - - ax = plt.gca() - # handles,labels = ax.get_legend_handles_labels() - labels = ax.get_legend_handles_labels()[1] - # print(handles) - # print(labels) - # specify just one legend - lgd = plt.legend(handles[0:2], labels[0:2], loc=1, frameon=False, prop={'size': 6}) - - # https://blog.csdn.net/weixin_38314865/article/details/88633880 - plt.savefig("../../Results/figures/SuppleFig8b.pdf", dpi=400, bbox_inches = 'tight') - - -if __name__ == '__main__' : - species_clade() - # main() diff --git a/spaces/jiejiejie0420/bingo/src/components/ui/input.tsx b/spaces/jiejiejie0420/bingo/src/components/ui/input.tsx deleted file mode 100644 index 684a857f3d769b78818fb13de1abaebfb09ca79c..0000000000000000000000000000000000000000 --- a/spaces/jiejiejie0420/bingo/src/components/ui/input.tsx +++ /dev/null @@ -1,25 +0,0 @@ -import * as React from 'react' - -import { cn } from '@/lib/utils' - -export interface InputProps - extends React.InputHTMLAttributes {} - -const Input = React.forwardRef( - ({ className, type, ...props }, ref) => { - return ( - - ) - } -) -Input.displayName = 'Input' - -export { Input } diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/Cipher/ChaCha20.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/Cipher/ChaCha20.py deleted file mode 100644 index 8f3ec24df038773aa56d34cb6c96e1f0568762a1..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/Cipher/ChaCha20.py +++ /dev/null @@ -1,287 +0,0 @@ -# =================================================================== -# -# Copyright (c) 2014, Legrandin -# All rights reserved. -# -# Redistribution and use in source and binary forms, with or without -# modification, are permitted provided that the following conditions -# are met: -# -# 1. Redistributions of source code must retain the above copyright -# notice, this list of conditions and the following disclaimer. -# 2. Redistributions in binary form must reproduce the above copyright -# notice, this list of conditions and the following disclaimer in -# the documentation and/or other materials provided with the -# distribution. -# -# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS -# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT -# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS -# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE -# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, -# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, -# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; -# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER -# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT -# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN -# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE -# POSSIBILITY OF SUCH DAMAGE. -# =================================================================== - -from Crypto.Random import get_random_bytes - -from Crypto.Util.py3compat import _copy_bytes -from Crypto.Util._raw_api import (load_pycryptodome_raw_lib, - create_string_buffer, - get_raw_buffer, VoidPointer, - SmartPointer, c_size_t, - c_uint8_ptr, c_ulong, - is_writeable_buffer) - -_raw_chacha20_lib = load_pycryptodome_raw_lib("Crypto.Cipher._chacha20", - """ - int chacha20_init(void **pState, - const uint8_t *key, - size_t keySize, - const uint8_t *nonce, - size_t nonceSize); - - int chacha20_destroy(void *state); - - int chacha20_encrypt(void *state, - const uint8_t in[], - uint8_t out[], - size_t len); - - int chacha20_seek(void *state, - unsigned long block_high, - unsigned long block_low, - unsigned offset); - int hchacha20( const uint8_t key[32], - const uint8_t nonce16[16], - uint8_t subkey[32]); - """) - - -def _HChaCha20(key, nonce): - - assert(len(key) == 32) - assert(len(nonce) == 16) - - subkey = bytearray(32) - result = _raw_chacha20_lib.hchacha20( - c_uint8_ptr(key), - c_uint8_ptr(nonce), - c_uint8_ptr(subkey)) - if result: - raise ValueError("Error %d when deriving subkey with HChaCha20" % result) - - return subkey - - -class ChaCha20Cipher(object): - """ChaCha20 (or XChaCha20) cipher object. - Do not create it directly. Use :py:func:`new` instead. - - :var nonce: The nonce with length 8, 12 or 24 bytes - :vartype nonce: bytes - """ - - block_size = 1 - - def __init__(self, key, nonce): - """Initialize a ChaCha20/XChaCha20 cipher object - - See also `new()` at the module level.""" - - self.nonce = _copy_bytes(None, None, nonce) - - # XChaCha20 requires a key derivation with HChaCha20 - # See 2.3 in https://tools.ietf.org/html/draft-arciszewski-xchacha-03 - if len(nonce) == 24: - key = _HChaCha20(key, nonce[:16]) - nonce = b'\x00' * 4 + nonce[16:] - self._name = "XChaCha20" - else: - self._name = "ChaCha20" - nonce = self.nonce - - self._next = ("encrypt", "decrypt") - - self._state = VoidPointer() - result = _raw_chacha20_lib.chacha20_init( - self._state.address_of(), - c_uint8_ptr(key), - c_size_t(len(key)), - nonce, - c_size_t(len(nonce))) - if result: - raise ValueError("Error %d instantiating a %s cipher" % (result, - self._name)) - self._state = SmartPointer(self._state.get(), - _raw_chacha20_lib.chacha20_destroy) - - def encrypt(self, plaintext, output=None): - """Encrypt a piece of data. - - Args: - plaintext(bytes/bytearray/memoryview): The data to encrypt, of any size. - Keyword Args: - output(bytes/bytearray/memoryview): The location where the ciphertext - is written to. If ``None``, the ciphertext is returned. - Returns: - If ``output`` is ``None``, the ciphertext is returned as ``bytes``. - Otherwise, ``None``. 
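-
-        Minimal usage sketch (illustrative only; assumes ``key`` is a 32-byte
-        secret and ``nonce`` an 8-byte value, both passed to the module-level
-        :py:func:`new`)::
-
-            cipher = new(key=key, nonce=nonce)
-            ciphertext = cipher.encrypt(b'attack at dawn')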
- """ - - if "encrypt" not in self._next: - raise TypeError("Cipher object can only be used for decryption") - self._next = ("encrypt",) - return self._encrypt(plaintext, output) - - def _encrypt(self, plaintext, output): - """Encrypt without FSM checks""" - - if output is None: - ciphertext = create_string_buffer(len(plaintext)) - else: - ciphertext = output - - if not is_writeable_buffer(output): - raise TypeError("output must be a bytearray or a writeable memoryview") - - if len(plaintext) != len(output): - raise ValueError("output must have the same length as the input" - " (%d bytes)" % len(plaintext)) - - result = _raw_chacha20_lib.chacha20_encrypt( - self._state.get(), - c_uint8_ptr(plaintext), - c_uint8_ptr(ciphertext), - c_size_t(len(plaintext))) - if result: - raise ValueError("Error %d while encrypting with %s" % (result, self._name)) - - if output is None: - return get_raw_buffer(ciphertext) - else: - return None - - def decrypt(self, ciphertext, output=None): - """Decrypt a piece of data. - - Args: - ciphertext(bytes/bytearray/memoryview): The data to decrypt, of any size. - Keyword Args: - output(bytes/bytearray/memoryview): The location where the plaintext - is written to. If ``None``, the plaintext is returned. - Returns: - If ``output`` is ``None``, the plaintext is returned as ``bytes``. - Otherwise, ``None``. - """ - - if "decrypt" not in self._next: - raise TypeError("Cipher object can only be used for encryption") - self._next = ("decrypt",) - - try: - return self._encrypt(ciphertext, output) - except ValueError as e: - raise ValueError(str(e).replace("enc", "dec")) - - def seek(self, position): - """Seek to a certain position in the key stream. - - Args: - position (integer): - The absolute position within the key stream, in bytes. - """ - - position, offset = divmod(position, 64) - block_low = position & 0xFFFFFFFF - block_high = position >> 32 - - result = _raw_chacha20_lib.chacha20_seek( - self._state.get(), - c_ulong(block_high), - c_ulong(block_low), - offset - ) - if result: - raise ValueError("Error %d while seeking with %s" % (result, self._name)) - - -def _derive_Poly1305_key_pair(key, nonce): - """Derive a tuple (r, s, nonce) for a Poly1305 MAC. - - If nonce is ``None``, a new 12-byte nonce is generated. - """ - - if len(key) != 32: - raise ValueError("Poly1305 with ChaCha20 requires a 32-byte key") - - if nonce is None: - padded_nonce = nonce = get_random_bytes(12) - elif len(nonce) == 8: - # See RFC7538, 2.6: [...] ChaCha20 as specified here requires a 96-bit - # nonce. So if the provided nonce is only 64-bit, then the first 32 - # bits of the nonce will be set to a constant number. - # This will usually be zero, but for protocols with multiple senders it may be - # different for each sender, but should be the same for all - # invocations of the function with the same key by a particular - # sender. - padded_nonce = b'\x00\x00\x00\x00' + nonce - elif len(nonce) == 12: - padded_nonce = nonce - else: - raise ValueError("Poly1305 with ChaCha20 requires an 8- or 12-byte nonce") - - rs = new(key=key, nonce=padded_nonce).encrypt(b'\x00' * 32) - return rs[:16], rs[16:], nonce - - -def new(**kwargs): - """Create a new ChaCha20 or XChaCha20 cipher - - Keyword Args: - key (bytes/bytearray/memoryview): The secret key to use. - It must be 32 bytes long. - nonce (bytes/bytearray/memoryview): A mandatory value that - must never be reused for any other encryption - done with this key. - - For ChaCha20, it must be 8 or 12 bytes long. 
- - For XChaCha20, it must be 24 bytes long. - - If not provided, 8 bytes will be randomly generated - (you can find them back in the ``nonce`` attribute). - - :Return: a :class:`Crypto.Cipher.ChaCha20.ChaCha20Cipher` object - """ - - try: - key = kwargs.pop("key") - except KeyError as e: - raise TypeError("Missing parameter %s" % e) - - nonce = kwargs.pop("nonce", None) - if nonce is None: - nonce = get_random_bytes(8) - - if len(key) != 32: - raise ValueError("ChaCha20/XChaCha20 key must be 32 bytes long") - - if len(nonce) not in (8, 12, 24): - raise ValueError("Nonce must be 8/12 bytes(ChaCha20) or 24 bytes (XChaCha20)") - - if kwargs: - raise TypeError("Unknown parameters: " + str(kwargs)) - - return ChaCha20Cipher(key, nonce) - -# Size of a data block (in bytes) -block_size = 1 - -# Size of a key (in bytes) -key_size = 32 diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/rdtypes/CH/A.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/rdtypes/CH/A.py deleted file mode 100644 index e457f38a08caaefb443397bbd70db40d4efdd488..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/rdtypes/CH/A.py +++ /dev/null @@ -1,60 +0,0 @@ -# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license - -# Copyright (C) 2003-2007, 2009-2011 Nominum, Inc. -# -# Permission to use, copy, modify, and distribute this software and its -# documentation for any purpose with or without fee is hereby granted, -# provided that the above copyright notice and this permission notice -# appear in all copies. -# -# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES -# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF -# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR -# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES -# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN -# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT -# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 
- -import struct - -import dns.immutable -import dns.rdtypes.mxbase - - -@dns.immutable.immutable -class A(dns.rdata.Rdata): - - """A record for Chaosnet""" - - # domain: the domain of the address - # address: the 16-bit address - - __slots__ = ["domain", "address"] - - def __init__(self, rdclass, rdtype, domain, address): - super().__init__(rdclass, rdtype) - self.domain = self._as_name(domain) - self.address = self._as_uint16(address) - - def to_text(self, origin=None, relativize=True, **kw): - domain = self.domain.choose_relativity(origin, relativize) - return "%s %o" % (domain, self.address) - - @classmethod - def from_text( - cls, rdclass, rdtype, tok, origin=None, relativize=True, relativize_to=None - ): - domain = tok.get_name(origin, relativize, relativize_to) - address = tok.get_uint16(base=8) - return cls(rdclass, rdtype, domain, address) - - def _to_wire(self, file, compress=None, origin=None, canonicalize=False): - self.domain.to_wire(file, compress, origin, canonicalize) - pref = struct.pack("!H", self.address) - file.write(pref) - - @classmethod - def from_wire_parser(cls, rdclass, rdtype, parser, origin=None): - domain = parser.get_name(origin) - address = parser.get_uint16() - return cls(rdclass, rdtype, domain, address) diff --git a/spaces/jone/Music_Source_Separation/bytesep/callbacks/base_callbacks.py b/spaces/jone/Music_Source_Separation/bytesep/callbacks/base_callbacks.py deleted file mode 100644 index ef62dd591f1516aa41e2ba347cc3aaa558854f8d..0000000000000000000000000000000000000000 --- a/spaces/jone/Music_Source_Separation/bytesep/callbacks/base_callbacks.py +++ /dev/null @@ -1,44 +0,0 @@ -import logging -import os -from typing import NoReturn - -import pytorch_lightning as pl -import torch -import torch.nn as nn -from pytorch_lightning.utilities import rank_zero_only - - -class SaveCheckpointsCallback(pl.Callback): - def __init__( - self, - model: nn.Module, - checkpoints_dir: str, - save_step_frequency: int, - ): - r"""Callback to save checkpoints every #save_step_frequency steps. 
- - Args: - model: nn.Module - checkpoints_dir: str, directory to save checkpoints - save_step_frequency: int - """ - self.model = model - self.checkpoints_dir = checkpoints_dir - self.save_step_frequency = save_step_frequency - os.makedirs(self.checkpoints_dir, exist_ok=True) - - @rank_zero_only - def on_batch_end(self, trainer: pl.Trainer, _) -> NoReturn: - r"""Save checkpoint.""" - global_step = trainer.global_step - - if global_step % self.save_step_frequency == 0: - - checkpoint_path = os.path.join( - self.checkpoints_dir, "step={}.pth".format(global_step) - ) - - checkpoint = {'step': global_step, 'model': self.model.state_dict()} - - torch.save(checkpoint, checkpoint_path) - logging.info("Save checkpoint to {}".format(checkpoint_path)) diff --git a/spaces/jonigata/PoseMaker2/external/coco.py b/spaces/jonigata/PoseMaker2/external/coco.py deleted file mode 100644 index 865a95bc02fedd318f32d2e7aa8397147d78fdb5..0000000000000000000000000000000000000000 --- a/spaces/jonigata/PoseMaker2/external/coco.py +++ /dev/null @@ -1,181 +0,0 @@ -dataset_info = dict( - dataset_name='coco', - paper_info=dict( - author='Lin, Tsung-Yi and Maire, Michael and ' - 'Belongie, Serge and Hays, James and ' - 'Perona, Pietro and Ramanan, Deva and ' - r'Doll{\'a}r, Piotr and Zitnick, C Lawrence', - title='Microsoft coco: Common objects in context', - container='European conference on computer vision', - year='2014', - homepage='http://cocodataset.org/', - ), - keypoint_info={ - 0: - dict(name='nose', id=0, color=[51, 153, 255], type='upper', swap=''), - 1: - dict( - name='left_eye', - id=1, - color=[51, 153, 255], - type='upper', - swap='right_eye'), - 2: - dict( - name='right_eye', - id=2, - color=[51, 153, 255], - type='upper', - swap='left_eye'), - 3: - dict( - name='left_ear', - id=3, - color=[51, 153, 255], - type='upper', - swap='right_ear'), - 4: - dict( - name='right_ear', - id=4, - color=[51, 153, 255], - type='upper', - swap='left_ear'), - 5: - dict( - name='left_shoulder', - id=5, - color=[0, 255, 0], - type='upper', - swap='right_shoulder'), - 6: - dict( - name='right_shoulder', - id=6, - color=[255, 128, 0], - type='upper', - swap='left_shoulder'), - 7: - dict( - name='left_elbow', - id=7, - color=[0, 255, 0], - type='upper', - swap='right_elbow'), - 8: - dict( - name='right_elbow', - id=8, - color=[255, 128, 0], - type='upper', - swap='left_elbow'), - 9: - dict( - name='left_wrist', - id=9, - color=[0, 255, 0], - type='upper', - swap='right_wrist'), - 10: - dict( - name='right_wrist', - id=10, - color=[255, 128, 0], - type='upper', - swap='left_wrist'), - 11: - dict( - name='left_hip', - id=11, - color=[0, 255, 0], - type='lower', - swap='right_hip'), - 12: - dict( - name='right_hip', - id=12, - color=[255, 128, 0], - type='lower', - swap='left_hip'), - 13: - dict( - name='left_knee', - id=13, - color=[0, 255, 0], - type='lower', - swap='right_knee'), - 14: - dict( - name='right_knee', - id=14, - color=[255, 128, 0], - type='lower', - swap='left_knee'), - 15: - dict( - name='left_ankle', - id=15, - color=[0, 255, 0], - type='lower', - swap='right_ankle'), - 16: - dict( - name='right_ankle', - id=16, - color=[255, 128, 0], - type='lower', - swap='left_ankle') - }, - skeleton_info={ - 0: - dict(link=('left_ankle', 'left_knee'), id=0, color=[0, 255, 0]), - 1: - dict(link=('left_knee', 'left_hip'), id=1, color=[0, 255, 0]), - 2: - dict(link=('right_ankle', 'right_knee'), id=2, color=[255, 128, 0]), - 3: - dict(link=('right_knee', 'right_hip'), id=3, color=[255, 128, 0]), - 4: - dict(link=('left_hip', 
'right_hip'), id=4, color=[51, 153, 255]), - 5: - dict(link=('left_shoulder', 'left_hip'), id=5, color=[51, 153, 255]), - 6: - dict(link=('right_shoulder', 'right_hip'), id=6, color=[51, 153, 255]), - 7: - dict( - link=('left_shoulder', 'right_shoulder'), - id=7, - color=[51, 153, 255]), - 8: - dict(link=('left_shoulder', 'left_elbow'), id=8, color=[0, 255, 0]), - 9: - dict( - link=('right_shoulder', 'right_elbow'), id=9, color=[255, 128, 0]), - 10: - dict(link=('left_elbow', 'left_wrist'), id=10, color=[0, 255, 0]), - 11: - dict(link=('right_elbow', 'right_wrist'), id=11, color=[255, 128, 0]), - 12: - dict(link=('left_eye', 'right_eye'), id=12, color=[51, 153, 255]), - 13: - dict(link=('nose', 'left_eye'), id=13, color=[51, 153, 255]), - 14: - dict(link=('nose', 'right_eye'), id=14, color=[51, 153, 255]), - 15: - dict(link=('left_eye', 'left_ear'), id=15, color=[51, 153, 255]), - 16: - dict(link=('right_eye', 'right_ear'), id=16, color=[51, 153, 255]), - 17: - dict(link=('left_ear', 'left_shoulder'), id=17, color=[51, 153, 255]), - 18: - dict( - link=('right_ear', 'right_shoulder'), id=18, color=[51, 153, 255]) - }, - joint_weights=[ - 1., 1., 1., 1., 1., 1., 1., 1.2, 1.2, 1.5, 1.5, 1., 1., 1.2, 1.2, 1.5, - 1.5 - ], - sigmas=[ - 0.026, 0.025, 0.025, 0.035, 0.035, 0.079, 0.079, 0.072, 0.072, 0.062, - 0.062, 0.107, 0.107, 0.087, 0.087, 0.089, 0.089 - ]) diff --git a/spaces/jorge-henao/ask2democracy/examples.py b/spaces/jorge-henao/ask2democracy/examples.py deleted file mode 100644 index 5d9adb8d56a0013753a3f8f6ad0f8445501c39e0..0000000000000000000000000000000000000000 --- a/spaces/jorge-henao/ask2democracy/examples.py +++ /dev/null @@ -1,52 +0,0 @@ -examples = [ -['¿Que va a hacer con la deuda del ICETEX?'], -['¿que va a hacer con el ESMAD?'], -['¿eliminará el servicio militar obligatorio?'], -['¿habrá diversidad de género en los altos cargos del gobierno?'], -['¿Como impulsará la creación de startus o emprendimientos creados por jóvenes?'], -['¿Cómo de para garantizar la protección de lideres sociales y ambientales?'], -['¿Cómo promoverá el mercado laboral de los jóvenes?'], -['¿Cuales medidas tomará para mejorar la remuneración de los jóvenes ?'], -['aciones para aumentar la capacidad del sistema de salud en materia de prevención'], -['¿dialogará con el ELN?'], -['¿regulará las plataformas móviles de transporte?'], -['¿está de acuerdo con la legalización de la marhihuana?'], -['¿apoyaría las Pymes para entrar el mercado de la marihuana legal?'], -['¿implementaría el uso del Canabis para tratar enfermedades no crónicas ni terminales?'], -['¿Por qué implementará el uso del Canabis?'], -['¿consideraría el cultivo de la hoja de coca para uso farmacéutico y otros usos lícitos?'], -['¿Que política implementará para aprovechar plantas como la hoja de coca en usos alternativos como abonos?'], -['¿garantizará el derecho del ejercicio de la prostitución?'], -['¿eliminará la figura del porte especial de armas?'], -['¿Qué mecanismo implementará para garantizar la paridad de género en la política nacional y territorial?'], -['¿cuotas de participación igualitaria en cargos públicos para personas de todas las identidades de género?'], -['¿Que va hacer para eliminar las barreras de acceso, tenencia y formalización de la tierra para las mujeres?'], -['¿Cómo incentivará la denuncia por parte de hombres víctimas de violencia de genero?'], -['¿tendrá en cuenta la identificación de genero no binario para efectos de registros públios?'], -['¿aceptaría el matrimonio o union marital de hecho entre familias 
poliamorosas?'], -['¿eliminará el 4 por mil?'], -['¿aprueva acuerdo de ESCASÚ?'], -['¿qué va a pasar con las EPS?'], -['¿Que propone respecto a la medicina preventiva?'], -['¿consumo mínimo vital de agua?'], -['¿Replanteará las relaciones con Estados Unidos?'], -['¿renegociará los TLC?'], -['¿Seguirá importando alimentos o fomentará la producción nacional?'], -['¿Qué políticas sociales implementará para ayudar a las familias vulnerables?'], -['¿Cómo financiaría las pensiones?'], -['¿De dónde va a sacar la plata para financiar las pensiones?'], -['¿Que propone para la transición energética?'], -['¿Cómo fortalecerá las capacidades para producir localmente medicamentos e insumos esenciales para la salud de los colombianos?'], -['¿Cómo hará para preservar la vida de los y las líderes sociales afrodescendientes?'], -['¿Cómo hará una transición energética justa?'], -['¿Cómo revitalizará el proceso de paz?'], -['¿Cómo transformará la actual tragedia educativa en Colombia en una oportunidad para tener el sistema educativo que el país necesita?'], -['¿Cómo va a garantizar el enfoque de género en sus políticas públicas?'], -['¿Ejecutará una reforma agraria?'], -['¿Habrá reforma tributaria en el gobierno?'], -['¿Qué acciones concretas tomará para reducir el hacinamiento en las cárceles?'], -['¿Qué recursos nacionales destinará a la movilidad en Bogotá?'], -['¿Qué va a hacer contra la corrupción?'], -['¿recomponerá las relaciones diplomáticas con el Gobierno venezolano?'], -['¿Qué propone sobre las pensiones?'] -] \ No newline at end of file diff --git a/spaces/jsu27/decomp-diffusion/README.md b/spaces/jsu27/decomp-diffusion/README.md deleted file mode 100644 index e4e6c1dbf228409d1b218a2b7c7a813d6cd0839e..0000000000000000000000000000000000000000 --- a/spaces/jsu27/decomp-diffusion/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Decomp Diffusion -emoji: 📈 -colorFrom: pink -colorTo: indigo -sdk: gradio -sdk_version: 3.34.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/juancopi81/youtube-music-transcribe/t5x/partitioning.py b/spaces/juancopi81/youtube-music-transcribe/t5x/partitioning.py deleted file mode 100644 index a0e9c3d46c9c1ef4142b554eb577d3821fa89e1d..0000000000000000000000000000000000000000 --- a/spaces/juancopi81/youtube-music-transcribe/t5x/partitioning.py +++ /dev/null @@ -1,902 +0,0 @@ -# Copyright 2022 The T5X Authors. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -"""Utilities for partitioning.""" - -import abc -import collections -import dataclasses -import typing -from typing import Any, Callable, Optional, Sequence, Tuple, Union - -from absl import logging -import cached_property -from flax import traverse_util -from flax.linen import partitioning as flax_partitioning -import jax -from jax import numpy as jnp -from jax import random -from jax.experimental import PartitionSpec -from jax.experimental.maps import Mesh -from jax.experimental.pjit import pjit as jax_pjit -import numpy as np -from t5x import train_state as train_state_lib - -JaxDevice = jax.lib.xla_client.Device -TpuMesh = Tuple[int, int, int, int] # (x, y, z, num_cores). -OtherMesh = Tuple[int, int] -HardwareMesh = Union[TpuMesh, OtherMesh] -PyTreeDef = type(jax.tree_structure(None)) -TrainState = train_state_lib.TrainState -LogicalAxisRules = Sequence[Tuple[str, Optional[str]]] - -if typing.TYPE_CHECKING: # See b/163639353 - cached_property = property # pylint: disable=invalid-name -else: - cached_property = cached_property.cached_property - - -class AxisNames(tuple): - """Tuple of strings specifying name for each axis. - - We create a separate class for this so JAX's pytree utilities can distinguish - it from a tuple that should be treated as a pytree, instead treating it as a - leaf. - """ - - def __new__(cls, *names): - return tuple.__new__(AxisNames, names) - - def __repr__(self): - return 'AxisNames%s' % tuple.__repr__(self) - - -# pjit wrappers for cpu fallback. -# ----------------------------------------------------------------------------- -# TODO(levskaya): upstream this fallback behavior to jax pjit. -def pjit( - fun: Callable, # pylint: disable=g-bare-generic - in_axis_resources, - out_axis_resources, - static_argnums: Union[int, Sequence[int]] = (), - donate_argnums: Union[int, Sequence[int]] = (), - backend: Optional[str] = None): - """Wrapper for pjit that calls normal jit on cpu.""" - if jax.devices(backend)[0].platform == 'cpu': - return jax.jit( - fun, static_argnums=static_argnums, donate_argnums=donate_argnums) - else: - return jax_pjit( - fun, - in_axis_resources, - out_axis_resources, - static_argnums=static_argnums, - donate_argnums=donate_argnums) - - -def with_sharding_constraint(x, axis_resources): - """Wrapper for pjit with_sharding_constraint, no-op on cpu or outside pjit.""" - if jax.devices()[0].platform == 'cpu' or not global_mesh_defined(): - return x - else: - return jax.experimental.pjit.with_sharding_constraint(x, axis_resources) - - -# pjit Mesh creation functions. -# ----------------------------------------------------------------------------- -def bounds_from_last_device( - last_device: jax.lib.xla_client.Device) -> HardwareMesh: - """Get the bound from the given last device.""" - # Must be passed the device at the highest-coordinate corner of the - # relevant mesh, which is a requirement we know is satisfied by the last - # device in jax.devices(). - if hasattr(last_device, 'coords'): - x, y, z = last_device.coords - return x + 1, y + 1, z + 1, last_device.core_on_chip + 1 - else: - # On non-TPU platforms, the "mesh" is hosts x devices per host in order - # to take advantage of faster within-host interconnect. 
- return jax.host_count(), jax.local_device_count() - - -def get_coords(device: jax.lib.xla_client.Device) -> HardwareMesh: - """Returns the coordinates of the given device.""" - if hasattr(device, 'coords'): - return (*device.coords, device.core_on_chip) - return (device.process_index, device.id % jax.local_device_count()) - - -def global_mesh_defined(): - """Checks if global xmap/pjit mesh resource environment is defined.""" - maps_env = jax.experimental.maps.thread_resources.env - return maps_env.physical_mesh.devices.shape != () # pylint: disable=g-explicit-bool-comparison - - -def get_mesh(model_parallel_submesh: HardwareMesh, - input_devices: Sequence[JaxDevice] = (), - input_local_devices: Sequence[JaxDevice] = (), - tile_by_host_if_needed: bool = True, - backend: Optional[str] = None) -> Mesh: - """Construct an xmap/pjit Mesh for the given model-parallel submesh. - - The resulting mesh has two resource axes: 'model', with the provided submesh - shape, and 'data', which covers the rest of the mesh. - - Args: - model_parallel_submesh: a HardwareMesh spec, namely (x,y,z,core) on TPU for - a single model-parallel replica's "tile" in the physical device mesh. The - first three elements (`x`, `y`, and `z`) should be factors of the pod - slice; e.g., if you are using df_4x8, then `x` should be a factor of 4 - (one of 1, 2, 4), `y` should be a factor of 8 (one of 1, 2, 4, 8), and `z` - must be 1, because TPU v3 slices are only 2D. `z` can be >1 for TPU v4 - (and maybe later TPUs) that allow 3D slices. `core` is the number of cores - to use from each TPU node. As communication is usually fastest inside the - same node, if you need a tile of more than 1 core, then - you should first increase `core`: e.g., for TPU v3, (1,1,1,2) is better - than (2,1,1,1). To pick a good spec, try a few possible values until you - get high TPU utilization. - input_devices: the devices to use, will use jax.devices() if this is not - set. - input_local_devices: the local devices to use, will use jax.local_devices() - if this is not set. - tile_by_host_if_needed: JAX currently requires that the parts of any sharded - array that are located on one host's local devices form a single - contiguous slice. A best effort will be made to achieve this without - "tiling" the device assignment over hosts (which can reduce XLA collective - performance). If this flag is True, then the device assignment will be - tiled over hosts if necessary to satisfy this constraint and create a - buildable mesh; if false, mesh construction will fail instead. - backend: get devices from the pinned backend, if specified. This is - useful for explicitly specifying the devices other than relying on - jax_platform_name. - - Returns: - A xmap / pjit Mesh containing the virtual device mesh with data, model axes. 
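-    For example (illustrative): ``get_mesh((2, 2, 1, 2))`` requests an 8-core
-    model-parallel tile per replica, so on a 32-core slice the resulting mesh has
-    8 devices along the 'model' axis and 4 replicas along the 'data' axis.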
- """ - input_devices = input_devices or jax.devices(backend) - input_local_devices = input_local_devices or jax.local_devices(0, backend) - last_device = input_devices[-1] - global_hardware_mesh = bounds_from_last_device(last_device) - mesh_ndim = len(global_hardware_mesh) - local_hardware_mesh = bounds_from_last_device(input_local_devices[-1]) - mesh_err = ( - f'each dimension of the model parallel submesh {model_parallel_submesh} ' - 'must be a factor of the corresponding dimension of the global device ' - f'mesh {global_hardware_mesh}') - assert not any( - g % m - for g, m in zip(global_hardware_mesh, model_parallel_submesh)), mesh_err - assert not any( - g % l for g, l in zip(global_hardware_mesh, local_hardware_mesh)) - devices = np.empty(global_hardware_mesh, dtype=np.object) - for device in input_devices: - device_coords = get_coords(device) - devices[device_coords] = device - tile_by_host = tile_by_host_if_needed - if len(global_hardware_mesh) == 4: - # enable contiguous local chunks without host tiling by making Z major - global_hardware_mesh = typing.cast(Tuple[int, int, int, int], - global_hardware_mesh) - model_parallel_submesh = typing.cast(Tuple[int, int, int, int], - model_parallel_submesh) - gx, gy, gz, gc = global_hardware_mesh - mx, my, mz, mc = model_parallel_submesh - if (mx == gx > 1 and my == mz == 1) or (mx == 1 and my == gy > 1 and - mz == gz > 1): - logging.info('ensuring YZ plane has a Z-major device order') - # YZ should be ZY - assert mc == gc, (mc, gc) - global_hardware_mesh = gx, gz, gy, gc - model_parallel_submesh = mx, mz, my, mc - devices = devices.swapaxes(1, 2) - tile_by_host = False - if (my == gy > 1 and mx == mz == 1) or (my == 1 and mx == gx > 1 and - mz == gz > 1): - logging.info('ensuring XZ plane has a Z-major device order') - # XZ should be ZX - assert mc == gc, (mc, gc) - global_hardware_mesh = gz, gy, gx, gc - model_parallel_submesh = mz, my, mx, mc - devices = devices.swapaxes(0, 2) - tile_by_host = False - if tile_by_host: - logging.warning( - 'Tiling device assignment mesh by hosts, which may lead to ' - 'reduced XLA collective performance. To avoid this, modify ' - 'the model parallel submesh or run with more tasks per host.') - tile_err = ( - 'to tile the mesh by hosts, each dimension of the model parallel ' - 'submesh must be either a factor or a multiple of the corresponding ' - 'dimension of the per-host submesh') - - def dh_dd_mh_md(g: int, m: int, l: int) -> Tuple[int, int, int, int]: - """Split a global mesh dimension into four tiling components. - - Args: - g: global mesh bounds dimension size - m: model-parallel submesh bounds dimension size - l: local submesh bounds dimension size - - Returns: - The resulting tuple divides the dimension into the hosts component of - the data-parallel submesh, the devices component of the data-parallel - submesh, the hosts component of the model-parallel submesh, and the - devices component of the model-parallel submesh. - """ - d = g // m - if m >= l: - assert not m % l, tile_err - return (d, 1, m // l, l) - else: - assert not l % m, tile_err - return (d // (l // m), l // m, 1, m) - - # e.g. [(x_data_hosts, x_data_devs, x_model_hosts, x_model_devs), ...] - dh_dd_mh_md_tups = map(dh_dd_mh_md, global_hardware_mesh, - model_parallel_submesh, local_hardware_mesh) - # reshape to e.g. (x_dh, x_dd, x_mh, x_md, y_dh, ...) 
- devices = devices.reshape(*(s for t in dh_dd_mh_md_tups for s in t)) # pylint: disable=g-complex-comprehension - # TODO(jekbradbury): reorder local subgroups for ring locality - # Transpose to [data_host], [data_device], [model_host], [model_device] - # block ordering e.g. (x_dh, y_dh, ..., x_dd, y_dd, ...) - devices = devices.transpose(*(4 * i for i in range(mesh_ndim)), - *(4 * i + 1 for i in range(mesh_ndim)), - *(4 * i + 2 for i in range(mesh_ndim)), - *(4 * i + 3 for i in range(mesh_ndim))) - else: - # e.g. [(x_data, x_model), (y_data, y_model), ...] - model_data_tups = [ - (g // m, m) - for g, m in zip(global_hardware_mesh, model_parallel_submesh) - ] - # reshape to e.g. (x_data, x_model, y_data, y_model...) - devices = devices.reshape(*(s for t in model_data_tups for s in t)) # pylint: disable=g-complex-comprehension - # TODO(jekbradbury): reorder small subgroups for ring locality - # transpose to e.g. (x_data, y_data, ..., x_model, ...) - devices = devices.transpose(*(2 * i for i in range(mesh_ndim)), - *(2 * i + 1 for i in range(mesh_ndim))) - # reshape to (data, model) - devices = devices.reshape(-1, np.prod(model_parallel_submesh)) - global_mesh = Mesh(devices, ['data', 'model']) - logging.info('global_mesh axes_names: %s', global_mesh.axis_names) - logging.info('global_mesh devices: %s', global_mesh.devices) - return global_mesh - - -def get_cpu_mesh() -> Mesh: - """Trivial mesh for CPU Testing.""" - devices = np.empty((jax.host_count(), jax.local_device_count()), - dtype=np.object) - for device in jax.devices(): - devices[device.process_index, device.id % jax.local_device_count()] = device - return Mesh(devices, ['data', 'model']) - - -def get_gpu_mesh() -> Mesh: - """Simple mesh for GPUs.""" - devices = np.empty((jax.host_count(), jax.local_device_count()), - dtype=np.object) - for device in jax.devices(): - devices[device.process_index, device.id % jax.local_device_count()] = device - return Mesh(devices, ['data', 'model']) - - -def default_mesh(num_partitions: int, - model_parallel_submesh: Optional[HardwareMesh] = None, - backend: Optional[str] = None) -> Mesh: - """Attempt to return a default mesh for simple cases. - - Args: - num_partitions: number of partitions to use, will be ignored if - model_parallel_submesh is provided. - model_parallel_submesh: 4-tuple that specifies the x,y,z,c submesh to use as - the model-parallel device tile. - backend: get devices from the pinned backend, if specified. This is useful - for explicitly specifying the devices other than relying on - jax_platform_name. - - Returns: - xmap/pjit 2D Mesh with 'data', 'model' mesh axes. 
- """ - last_device = jax.devices(backend)[-1] - platform = last_device.platform - device_kind = last_device.device_kind - bounds = bounds_from_last_device(last_device) - - if model_parallel_submesh: - return get_mesh(model_parallel_submesh, backend=backend) - - if platform == 'cpu': - return get_cpu_mesh() - elif platform == 'gpu': - return get_gpu_mesh() - - mps = None - if device_kind in ('TPU v2', 'TPU v3'): - if num_partitions == 1: - mps = (1, 1, 1, 1) - elif num_partitions == 2: - mps = (1, 1, 1, 2) - elif num_partitions == 4: - mps = (2, 1, 1, 2) - elif num_partitions == 8: - mps = (2, 2, 1, 2) - elif num_partitions == 16: - mps = (4, 2, 1, 2) - # assume the use of megacore on TPU v4 - elif device_kind == 'TPU v4' and bounds[3] == 1: - if num_partitions == 1: - mps = (1, 1, 1, 1) - elif num_partitions == 2: - mps = (1, 2, 1, 1) - elif num_partitions == 4: - if bounds[0] >= 4: - mps = (4, 1, 1, 1) - else: - mps = (2, 2, 1, 1) - elif num_partitions == 8: - if bounds[2] >= 8: - mps = (1, 1, 8, 1) - else: - mps = (4, 2, 1, 1) - elif num_partitions == 16: - if bounds[2] >= 16: - mps = (1, 1, 16, 1) - elif bounds[0] >= 8: - mps = (8, 2, 1, 1) - else: - mps = (4, 4, 1, 1) - - if mps is None: - raise ValueError('No default mesh for this configuration: specify ' - 'config.model_parallel_submesh explicitly.') - return get_mesh(mps, backend=backend) - - -# Data chunking helper. -# ----------------------------------------------------------------------------- -@dataclasses.dataclass -class LocalChunkInfo: - # The logical slice of an array located on this host's local devices. - slice: Tuple[slice, ...] - # A unique index for this host/local chunk among chunks with the same slice. - replica_id: int - - -class LocalChunker: - """Utility class to aid chunking of sharded arrays in multihost settings.""" - - def __init__(self, global_mesh: Mesh): - self.global_mesh = global_mesh - local_mesh = global_mesh.local_mesh - first_local_device = local_mesh.devices.reshape(-1)[0] - host_location = collections.OrderedDict( - zip( - global_mesh.shape.keys(), - list(zip(*np.nonzero( - global_mesh.devices == first_local_device)))[0])) - self.num_chunks = collections.OrderedDict() - self.chunk_ids = collections.OrderedDict() - self.mesh_axes = list(global_mesh.shape.keys()) - for mesh_axis in self.mesh_axes: - num_devices_per_chunk = local_mesh.shape[mesh_axis] - self.num_chunks[mesh_axis] = ( - global_mesh.shape[mesh_axis] // num_devices_per_chunk) - self.chunk_ids[mesh_axis] = ( - host_location[mesh_axis] // num_devices_per_chunk) - - def get_local_chunk_info( - self, global_shape: Tuple[int, ...], - mesh_axes: Sequence[Optional[str]]) -> LocalChunkInfo: - """Get the local chunk info for a given array shape and sharded axes. - - Args: - global_shape: the global, unsharded shape of the array to chunk. - mesh_axes: a sequence of names (or None) of equal rank to `global_shape` - that specifies which mesh dimensions the array is sharded along. - - Returns: - LocalChunkInfo containing the logical slices of the array found on this - host's local devices, as well as the replica index for this chunk among - chunks with the same slice. The latter is used to determine which - host should write this chunk during checkpointing. 
- """ - local_slice = [slice(None) for dim in global_shape] - sharded_mesh_axes = set() - for i, (mesh_axis, size) in enumerate(zip(mesh_axes, global_shape)): - if not mesh_axis: - continue - sharded_mesh_axes.add(mesh_axis) - if not isinstance(mesh_axis, str): - raise NotImplementedError('TODO(jekbradbury)') - chunk_id = self.chunk_ids[mesh_axis] - chunk_size = size // self.num_chunks[mesh_axis] - local_slice[i] = slice(chunk_id * chunk_size, (chunk_id + 1) * chunk_size) - - replicated_mesh_axes = [ - mesh_axis for mesh_axis in self.mesh_axes - if mesh_axis not in sharded_mesh_axes - ] - replica_id = 0 - for mesh_axis in replicated_mesh_axes: - chunk_id = self.chunk_ids[mesh_axis] - replica_id = replica_id * self.num_chunks[mesh_axis] + chunk_id - - return LocalChunkInfo(tuple(local_slice), replica_id) - - -def standard_logical_axis_rules( - activation_partitioning_dims: int = 1, - parameter_partitioning_dims: int = 1, - additional_rules: Optional[LogicalAxisRules] = None) -> LogicalAxisRules: - """Default sharding rules for T5X model in terms of logical axis names. - - Args: - activation_partitioning_dims: enables 2-D activation sharding when set to 2. - parameter_partitioning_dims: enables 2-D parameter sharding when set to 2. - additional_rules: additional rules (a sequence of tuples) that will be - appended to the standard rules. - - Returns: - Sequence of logical axis rules - """ - logging.info( - '`activation_partitioning_dims` = %d, `parameter_partitioning_dims` = %d', - activation_partitioning_dims, parameter_partitioning_dims) - - if activation_partitioning_dims == 1 and parameter_partitioning_dims == 1: - rules = [ - ('batch', 'data'), - ('vocab', 'model'), - ('embed', None), - ('mlp', 'model'), - ('heads', 'model'), - ('kv', None), - ('joined_kv', 'model'), # joined heads+kv dim in 2D attn param layouts - ] - elif activation_partitioning_dims == 2 and parameter_partitioning_dims == 1: - rules = [ - ('batch', 'data'), - ('vocab', 'model'), - ('mlp', 'model'), - ('heads', 'model'), - ('kv', None), - ('joined_kv', 'model'), - ('embed', 'model'), - ] - elif activation_partitioning_dims == 1 and parameter_partitioning_dims == 2: - rules = [ - ('batch', 'data'), - ('vocab', 'model'), - ('mlp', 'model'), - ('heads', 'model'), - ('kv', None), - ('joined_kv', 'model'), - ('embed', 'data'), - ] - elif activation_partitioning_dims == 2 and parameter_partitioning_dims == 2: - rules = [ - ('batch', 'data'), - ('vocab', 'model'), - ('mlp', 'model'), - ('heads', 'model'), - ('kv', None), - ('joined_kv', 'model'), - ('embed', 'model'), - ('embed', 'data'), - ] - else: - raise ValueError( - f'`activation_partitioning_dims` = {activation_partitioning_dims} ' - f'`parameter_partitioning_dims` = {parameter_partitioning_dims} ' - 'is not supported.') - - # Add the common rules for the replicated logical axes names. - replicated_rules = [ - ('relpos_buckets', None), - ('abspos_buckets', None), - ('length', None), - ('layers', None), - ('stack', None), - ('mlp_activations', None), - ] - rules.extend(replicated_rules) - - if additional_rules: - rules.extend(additional_rules) - - return rules - - -# NB: This needs to be top-level for the jax compilation cache. -def _id_fn(x, ix): - """Identity function for copying parameters to the devices, sharded.""" - # A pure identity such as `lambda x, *: x` can get optimized away, so we - # include a random.split as a cheap function that cannot be optimized away. 
- return x, random.split(jnp.array([ix, ix], dtype=jnp.uint32)) - - -@dataclasses.dataclass -class DataLayout: - """Represents data layout for the partitioned model.""" - batch_size: int - shard_id: int - num_shards: int - is_first_host_in_replica_set: bool - - -PartitionedCallable = Callable[..., Any] -CompiledPartitionedCallable = Callable[..., Any] - - -class BasePartitioner(metaclass=abc.ABCMeta): - """Interface for partitioning computations across hardware devices.""" - - def __init__(self, - num_partitions: Optional[int] = None, - model_parallel_submesh: Optional[HardwareMesh] = None, - params_on_devices: bool = True, - backend: Optional[str] = None): - """Configures the partitioner. - - Args: - num_partitions: the number of partitions to use. Ignored if - `model_parallel_submesh` is provided. - model_parallel_submesh: 4-tuple that specifies the x,y,z,c submesh to use - as the model-parallel device tile. This submesh is used for the larger - of the two parameter dimensions, and, if 2-D activation sharding is - enabled, for the model dimension of activations. The rest of the mesh is - used for data parallelism and, if 2-D parameter sharding is enabled, the - other parameter dimension. - params_on_devices: whether to keep the params on devices, if False - - params stay in the host memory. Note that some partitioners might ignore - this setting, for example if they don't support storing all params on - device memory. - backend: get devices from the pinned backend, if specified. This is useful - for explicitly specifying the devices other than relying on - jax_platform_name. - """ - - if not num_partitions and not model_parallel_submesh: - raise ValueError('At least one of `num_partitions` or ' - '`model_parallel_submesh` must be set.') - - if model_parallel_submesh is not None and len(model_parallel_submesh) != 4: - logging.error( - '`model_parallel_submesh` must be either None or a 4-tuple. Got ' - 'Got `num_partitions=%s`. A ValueError will be raised beginning ' - 'March 1, 2022.', model_parallel_submesh) - - if bool(num_partitions) and bool(model_parallel_submesh): - logging.error( - 'At most one of `num_partitions` or `model_parallel_submesh` can be ' - 'set. Got `num_partitions=%s` and `model_parallel_submesh`=%s. A ' - 'ValueError will be raised beginning March 21, 2022.', num_partitions, - model_parallel_submesh) - - self._num_partitions = num_partitions - self._model_parallel_submesh = model_parallel_submesh - self._params_on_devices = params_on_devices - self._data_axis = 'data' - self._backend = backend - - @property - def mesh(self) -> Mesh: - raise NotImplementedError - - @property - def data_partition_spec(self) -> PartitionSpec: - return PartitionSpec(self._data_axis) - - def get_data_layout(self, - batch_size: Optional[int] = None, - host_index: Optional[int] = None) -> DataLayout: - """Returns filled `DataLayout` based on the partitioned model layout. - - Args: - batch_size: if set, indicates the requested batch size. The exception will - be raised if this batch size is not compatible with the layout. If not - set, the batch size is inferred from the layout. - host_index: indicates the host index to use for the calculations, if not - set - use JAX-provided one. Should be in [0, num_hosts) interval and the - order should match the order of corresponding CPU devices in - `jax.devices()`. - - Returns: - Filled `DataLayout` structure. 
- """ - if host_index is not None: - raise NotImplementedError('Explicit host_index is not yet implemented.') - if self._data_axis is None: - return DataLayout( - batch_size=batch_size, - shard_id=0, - num_shards=1, - is_first_host_in_replica_set=(jax.process_index() == 0)) - mesh_size = self._local_chunker.global_mesh.shape[self._data_axis] - batch_size = batch_size or mesh_size - if batch_size % mesh_size: - raise ValueError( - f'Batch size ({batch_size}) must be divisible by corresponding ' - f'mesh size ({mesh_size}).') - num_shards = self._local_chunker.num_chunks[self._data_axis] - if batch_size % num_shards: - raise ValueError( - f'Batch size ({batch_size}) must be divisible by number of ' - f'replicas ({num_shards}).') - replica_id = self._local_chunker.get_local_chunk_info( - (batch_size,), [self._data_axis]).replica_id - return DataLayout( - batch_size=batch_size, - shard_id=self._local_chunker.chunk_ids[self._data_axis], - num_shards=num_shards, - is_first_host_in_replica_set=(replica_id == 0)) - - def get_local_chunk_info( - self, global_shape: Tuple[int, ...], - mesh_axes: Sequence[Optional[str]]) -> LocalChunkInfo: - """Returns the local chunk info for a given array shape and sharded axes.""" - return self._local_chunker.get_local_chunk_info(global_shape, mesh_axes) - - @property - def params_on_devices(self): - return self._params_on_devices - - def move_params_to_devices(self, train_state: TrainState, - train_state_axes: TrainState) -> TrainState: - """Moves the optimizer parameters to devices.""" - p_id_fn = self.partition( - _id_fn, - in_axis_resources=(train_state_axes, None), - out_axis_resources=(train_state_axes, None), - donate_argnums=(0,)) - train_state, _ = p_id_fn(train_state, jnp.ones((), dtype=jnp.uint32)) - return train_state - - @property - @abc.abstractmethod - def _local_chunker(self): - """Returns the chunker that matches the parameters of this partitioner.""" - raise NotImplementedError - - def get_logical_axes(self, train_state: TrainState) -> TrainState: - """Returns a copy of TrainState with Optional[AxisNames] as leaves.""" - # By default, return None for the logical axes. - return train_state.restore_state( - jax.tree_map(lambda x: None, train_state.state_dict())) - - def get_mesh_axes(self, train_state: TrainState) -> TrainState: - """Returns a copy of TrainState with Optional[PartitionSpecs] as leaves.""" - raise NotImplementedError - - @abc.abstractmethod - def partition( - self, - fn: Callable, # pylint: disable=g-bare-generic - in_axis_resources, - out_axis_resources, - static_argnums: Union[int, Sequence[int]] = (), - donate_argnums: Union[int, Sequence[int]] = () - ) -> PartitionedCallable: - """Partitions the computation using partitioner-specific implementation. - - Args: - fn: the function to partition. - in_axis_resources: Pytree of structure matching that of arguments to `fn`, - with all actual arguments replaced by resource assignment - specifications. It is also valid to specify a pytree prefix (e.g. one - value in place of a whole subtree), in which case the leaves get - broadcast to all values in that subtree. - The valid resource assignment specifications are: - `None`: in which case the value will be replicated on all devices - `PartitionSpec`: a tuple of length at most equal to the rank of the - partitioned value. Each element can be a `None`, a mesh axis or a - tuple of mesh axes, and specifies the set of resources assigned to - partition the value's dimension matching its position in the spec. 
- out_axis_resources: Like `in_axis_resources`, but specifies resource - assignment for function outputs. - static_argnums: an optional int or collection of ints that specify which - positional arguments to treat as static (compile-time constant) in the - partitioned function. - donate_argnums: an optional int or collection of ints that specify which - argument buffers are "donated" to the computation. It is safe to donate - argument buffers if you no longer need them once the computation has - finished. - - Returns: - A partitioned version of the input function. - """ - raise NotImplementedError - - @abc.abstractmethod - def compile(self, partitioned_fn: PartitionedCallable, - *args) -> CompiledPartitionedCallable: - """Compiles and returns the partitioned function, or the original. - - Args: - partitioned_fn: The partitioned function. - *args: Sample arguments to the partitioned function matching the input - shapes that will be passed to the compiled function. - - Returns: - The compiled function, or the original if this partitioner does not - support compilation. - """ - raise NotImplementedError - - -class PjittedFnWithContext(PartitionedCallable): - """Wraps pjitted function to apply the appropriate contexts.""" - - def __init__(self, - pjitted_fn, - partition_mesh: Mesh, - logical_axis_rules: flax_partitioning.LogicalRules = ()): - self._pjitted_fn = pjitted_fn - self._mesh = partition_mesh - self._logical_axis_rules = logical_axis_rules - - def __call__(self, *args): - with Mesh(self._mesh.devices, - self._mesh.axis_names), flax_partitioning.axis_rules( - self._logical_axis_rules): - return self._pjitted_fn(*args) - - def lower(self, *args): - with Mesh(self._mesh.devices, - self._mesh.axis_names), flax_partitioning.axis_rules( - self._logical_axis_rules): - return self._pjitted_fn.lower(*args) - - -class BasePjitPartitioner(BasePartitioner): - """Partitioner that uses T5X version of jax.pjit.""" - - @cached_property - def _local_chunker(self) -> LocalChunker: - return LocalChunker(self.mesh) - - @cached_property - def mesh(self) -> Mesh: - return default_mesh(self._num_partitions, self._model_parallel_submesh, - self._backend) - - def partition( - self, - fn: Callable, # pylint: disable=g-bare-generic - in_axis_resources, - out_axis_resources, - static_argnums: Union[int, Sequence[int]] = (), - donate_argnums: Union[int, Sequence[int]] = () - ) -> PjittedFnWithContext: - pjitted = pjit( - fn, - in_axis_resources=in_axis_resources, - out_axis_resources=out_axis_resources, - static_argnums=static_argnums, - donate_argnums=donate_argnums, - backend=self._backend) - - return PjittedFnWithContext(pjitted, self.mesh) - - def compile(self, partitioned_fn: PjittedFnWithContext, - *args) -> CompiledPartitionedCallable: - return partitioned_fn.lower(*args).compile() - - -class PjitPartitioner(BasePjitPartitioner): - """Partitioner that uses named axes and jax.pjit.""" - - def __init__(self, - num_partitions: Optional[int] = None, - model_parallel_submesh: Optional[HardwareMesh] = None, - params_on_devices: bool = True, - backend: Optional[str] = None, - logical_axis_rules: Optional[LogicalAxisRules] = None): - """PjitPartitioner constructor. - - See https://github.com/google-research/text-to-text-transfer-transformer/blob/main/README.mdx/usage/partitioning for details. - - Args: - num_partitions: an integer that specifies the size of the model parallel - submesh to be automatically selected for the current topology. See - `model_parallel_submesh` for details on how this submesh is used. 
- Mutually exlusive with `model_parallel_submesh`. - model_parallel_submesh: is a 4-tuple that specifies the `(x, y, z, c)` - submesh model-parallel device tile, an axis of accelerator parallelism - orthogonal to data parallelism. Array axes in a model's parameters or - activations can be sharded over this submesh using axis rules (see - `logical_axis_rules`) that map them to 'model'. The effective number of - model sub-partitions is equal to `np.prod(model_parallel_submesh)` and - must evenly divide the total number of devices (i.e., - `jax.device_count() % np.prod(model_parallel_submesh) == 0`). The rest - of the TPU mesh is the data parallel submesh, providing - `jax.device_count() // np.prod(model_parallel_submesh)` partitions. It - is used for data (batch) parallelism and to shard other array axes that - are mapped to 'data'. This argument is mutually exclusive with - `num_partitions`. - params_on_devices: whether to keep the params on devices, if False - - params stay in the host memory. Note that some partitioners might ignore - this setting, for example if they don't support storing all params on - device memory. - backend: get devices from the pinned backend, if specified. This is - useful for explicitly specifying the devices other than relying on - jax_platform_name. - logical_axis_rules: a priority-ordered sequence of KV tuples that maps - logical axis names to either `None` (not sharded), 'model' (to shard - across the model-parallel submesh), or 'data' (to shard across the - data-parallel submesh). - """ - super().__init__( - num_partitions=num_partitions, - model_parallel_submesh=model_parallel_submesh, - params_on_devices=params_on_devices, - backend=backend) - if logical_axis_rules is None: - logical_axis_rules = standard_logical_axis_rules() - self._logical_axis_rules = tuple(logical_axis_rules) - self._data_axis, = flax_partitioning.logical_to_mesh_axes( - ['batch'], logical_axis_rules) - - def partition( - self, - fn: Callable, # pylint: disable=g-bare-generic - in_axis_resources, - out_axis_resources, - static_argnums: Union[int, Sequence[int]] = (), - donate_argnums: Union[int, Sequence[int]] = () - ) -> PjittedFnWithContext: - """Partitions the function using jax.pjit.""" - pjitted = pjit( - fn, - in_axis_resources=in_axis_resources, - out_axis_resources=out_axis_resources, - static_argnums=static_argnums, - donate_argnums=donate_argnums, - backend=self._backend) - - return PjittedFnWithContext(pjitted, self.mesh, self._logical_axis_rules) - - @property - def logical_axis_rules(self): - """Returns the logical axis rules.""" - return self._logical_axis_rules - - def get_logical_axes(self, train_state: TrainState) -> TrainState: - """Returns a copy of TrainState with Optional[AxisNames] as leaves.""" - return train_state.as_logical_axes() - - def get_mesh_axes(self, train_state: TrainState) -> TrainState: - """Returns a copy of TrainState with Optional[PartitionSpecs] as leaves.""" - logical_axes = self.get_logical_axes(train_state) - - def _logical_to_mesh_axes(param_name, logical_axes): - if logical_axes is None: - return None - elif logical_axes is traverse_util.empty_node: - return traverse_util.empty_node - try: - return flax_partitioning.logical_to_mesh_axes(logical_axes, - self._logical_axis_rules) - except ValueError as e: - raise ValueError(f'Failed to map logical axes for {param_name}') from e - - flat_logical_axes = traverse_util.flatten_dict( - logical_axes.state_dict(), keep_empty_nodes=True, sep='/') - flat_mesh_axes = { - k: _logical_to_mesh_axes(k, v) 
for k, v in flat_logical_axes.items() - } - - return logical_axes.restore_state( - traverse_util.unflatten_dict(flat_mesh_axes, sep='/')) diff --git a/spaces/julien-c/nbconvert/README.md b/spaces/julien-c/nbconvert/README.md deleted file mode 100644 index 0c4f9fde658d3f00ff7fe64bc79df6c36bedf317..0000000000000000000000000000000000000000 --- a/spaces/julien-c/nbconvert/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: nbconvert -emoji: 💪 -colorFrom: purple -colorTo: purple -sdk: docker -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/jx-yang/deep-thinking/app.py b/spaces/jx-yang/deep-thinking/app.py deleted file mode 100644 index 1d10789e097d7fa1908d26a492c1981788c99f7e..0000000000000000000000000000000000000000 --- a/spaces/jx-yang/deep-thinking/app.py +++ /dev/null @@ -1,224 +0,0 @@ -import json -from pathlib import Path - -import gradio as gr - -import torch -from torch.nn import functional as F -from torch.utils.data import DataLoader - -from common import setup_cpu -from models import build_tokenizer, build_model -from models.meta_optimizer import AttnOptimWrapper -from tasks import load_task -from tasks.loader import TokenizedForMCRightPad - -DISPLAY_MAPPING = { - "sst2": {"positive": "Pos", "negative": "Neg"}, -} - - -@torch.no_grad() -def do_infer_probs(model, exemplar_attn_kv, exemplar_attn_mask, batched_choices_input): - batched_choices_logprobs = [] - for batched_one_choice_input in batched_choices_input: - ( - batch_input_ids, - batch_attention_mask, - batch_choice_start, - batch_choice_end, - ) = batched_one_choice_input - bs = len(batch_input_ids) - - merged_attn_mask = torch.cat((exemplar_attn_mask.expand(bs, -1), batch_attention_mask), dim=1) - # [B, #Heads, Length, Hidden] - expand_exemplar_attn_kv = [[layer_k.expand((bs, -1, -1, -1)), layer_v.expand((bs, -1, -1, -1))] for layer_k, layer_v in exemplar_attn_kv] - - batched_logits = model( - input_ids=batch_input_ids, # [B, L'] - attention_mask=merged_attn_mask, # [B, L + L'] - past_key_values=expand_exemplar_attn_kv, # num_layers * 2 * [B, num_heads, L, H] - ).logits - batched_output = F.log_softmax(batched_logits, dim=-1) # [B, L', Vocab] - - batched_one_choice_logprobs = [] - for input_ids, choice_start, choice_end, lm_logprobs in zip(batch_input_ids, batch_choice_start, batch_choice_end, batched_output): - choice_tokens = input_ids[choice_start:choice_end].unsqueeze(1) # [L, 1] - choice_logprobs = lm_logprobs[choice_start - 1 : choice_end - 1] # [L, Vocab] - - extracted = torch.gather(choice_logprobs, -1, choice_tokens).squeeze(-1) - - choice_length = choice_end - choice_start - lm_log_p = torch.sum(extracted).item() - norm_lm_log_p = (lm_log_p / choice_length).item() - - choice_lm_info = {"lm_log_p": lm_log_p, "norm_lm_log_p": norm_lm_log_p} - batched_one_choice_logprobs.append(choice_lm_info) - batched_choices_logprobs.append(batched_one_choice_logprobs) - return batched_choices_logprobs - - -@torch.no_grad() -def process_once(dataset_name, exemplar_str, forward_steps, raw_data): - setup_cpu(seed=seed) - TaskHandler = load_task(dataset_name) - task_agent = TaskHandler(prompt_version) - - processed_data = task_agent.dataset_preprocess(raw_data) - dataset = TokenizedForMCRightPad(processed_data, tokenizer, task_agent.multiple_choice_promptify) - - exemplar_input_ids, exemplar_attn_mask = dataset.tokenize_demonstration(exemplar_str) - loader = DataLoader(dataset, shuffle=False, drop_last=False, batch_size=1) - 
meta_optim = AttnOptimWrapper(model, model_name, step_size=step_size, momentum=momentum) - meta_optim.init() - - for _ in range(forward_steps): - exemplar_kv = meta_optim.step(exemplar_input_ids) - - generated_info = [] # question * [choice0_prob, choice1_prob] - for batch_input in loader: - batch_output = do_infer_probs(model, exemplar_kv, exemplar_attn_mask.unsqueeze(0), batch_input) # [batch_of_choice0, batch_of_choice1, ...] - zipped_logprobs = list(zip(*batch_output)) # batch * (choice0, choice1, ...) - generated_info.extend(zipped_logprobs) - - all_predicted = [] - num_correct = 0 - for idx, (data, choice_info) in enumerate(zip(processed_data, generated_info)): - merged_choice_info = task_agent.merge_choice_info(choice_info) - merged_predictions_idx = task_agent.choice_info_to_predictions(merged_choice_info)["lm_log_p"] - predicted = task_agent.CHOICES[merged_predictions_idx] - ground_truth = task_agent.CHOICES[data["answer_idx"]] - - res = f"{DISPLAY_MAPPING[dataset_name][predicted]}" - if predicted == ground_truth: - res += " ✅" - num_correct += 1 - else: - res += " ❌" - all_predicted.append(res) - all_predicted.append(f"{100*num_correct / len(all_predicted):.2f}%") - return all_predicted - - -def transpose(l): - return list(map(list, zip(*l))) - - -def button_pressed(prev_state): - dataset_name = prev_state["dataset_name"] - exemplar_str = prev_state["exemplar_str"] - forward_steps = prev_state["step"] + 2 - raw_data = prev_state["raw_data"] - prev_table_data = prev_state["table_data"] - - current_output = process_once(dataset_name, exemplar_str, forward_steps, raw_data) - - t_prev = transpose(prev_table_data) - if forward_steps == 1: - t_prev.append(["**ICL**"] + current_output) - else: - t_prev.append([f"**Step={forward_steps}**"] + current_output) - updated_table_data = transpose(t_prev) - - ret = [ - { - "dataset_name": dataset_name, - "exemplar_str": exemplar_str, - "raw_data": raw_data, - "step": forward_steps, - "table_data": updated_table_data, - }, - f"Click here to train LLM ! Now Step: {forward_steps}", - updated_table_data, - ] - return ret - - -if __name__ == "__main__": - dataset_name = "sst2" - seed = 0 - prompt_version = "default" - kv_iter = 10 - - model_name, model_size = "opt", "125m" - step_size, momentum = 0.01, 0.9 - setup_cpu(seed=seed) - tokenizer = build_tokenizer(model_name, model_size, padding_side="right") - model = build_model(model_name, model_size, False) - torch.autograd.set_grad_enabled(False) - - print(f"Dataset: {dataset_name}") - task_root = Path("example_sets").joinpath(dataset_name) - - with task_root.joinpath("demos.txt").open("r") as f: - demos = f.read() - with task_root.joinpath("sample.pkl").open("r") as f: - raw_data = json.load(f) - - icl_result = process_once(dataset_name, demos, 1, raw_data) - - text = """We utilize a Large Language Model (LLM) to perform in-context learning (ICL) for sentiment classification of movie reviews. - -Taking the following two labeled examples as demonstrations, we predict the sentiment of the subsequent test input. - -Directly employing ICL results in lower prediction accuracy. 
However, in our proposed approach, **Deep-Thinking**, we repeatedly apply **Forward Tuning**, leading to improved accuracy of the model.""" - - css = """ -#the-table { overflow: auto; } -#the-table > div:nth-child(2) { margin: auto; width: fit-content; } -#the-table > div > div > div > table { width: auto; margin: 0; white-space: normal; } -#the-table > div > div > div > table > thead {display: none} -#the-table > div > div > div > table > tbody > tr:last-child {background-color: beige} -#the-table > div > div > div > table > tbody > tr:first-child {background-color: lightgray} -#the-table > div > div > div > table > tbody > tr > td {padding: 0 2px;} -#the-table > div > div > div > table > tbody > tr > td:first-child {min-width: 300px;} -#the-table > div > div > div > table > tbody > tr > td:not(:first-child) {white-space: nowrap; } -#the-text { font-size: large; } -#main-button { max-width: 500px; margin: 0 auto; } - """ - - title = "🤔 Iterative Forward Tuning Boosts In-context Learning in Language Models" - demo = gr.Blocks(css=css, title="🤔Deep-Thinking") - with demo: - gr.Markdown(f"

{title}

") - gr.Markdown( - """ -

-[Paper] -[Code] -

""" - ) - - gr.Markdown(text, elem_id="the-text") - with gr.Tab("SST-2"): - mapping = ["negative", "positive"] - - init_columns = [[e["sentence"]] for e in raw_data] - - init_table_result = [["**Test Input**"], *init_columns, ["**Accuracy**"]] - init_table_result = transpose(init_table_result) - init_table_result.append(["**ICL**"] + icl_result) - init_table_result = transpose(init_table_result) - - state = gr.State( - { - "dataset_name": "sst2", - "exemplar_str": demos, - "raw_data": raw_data, - "step": 1, - "table_data": init_table_result, - } - ) - - prompt = gr.Textbox(label="Demonstrations (Prompt template formatted)", value=demos) - gr.Markdown("

👇 Run forward tuning once !

") - step_button = gr.Button("Click here to train LLM ! Now Step: 1", variant="primary", elem_id="main-button") - big_table = gr.DataFrame( - value=init_table_result, - elem_id="the-table", - datatype=["markdown"] * 50, - headers=None, - ) - step_button.click(button_pressed, inputs=[state], outputs=[state, step_button, big_table]) - - demo.launch(server_name="0.0.0.0") diff --git a/spaces/jyseo/3DFuse/ldm/modules/encoders/__init__.py b/spaces/jyseo/3DFuse/ldm/modules/encoders/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/kangvcar/RealChar/client/web/src/index.css b/spaces/kangvcar/RealChar/client/web/src/index.css deleted file mode 100644 index 5cb92035c2c1d5c531b66941e98f67d026e79ba0..0000000000000000000000000000000000000000 --- a/spaces/kangvcar/RealChar/client/web/src/index.css +++ /dev/null @@ -1,14 +0,0 @@ -body { - margin: 0; - font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', 'Roboto', 'Oxygen', - 'Ubuntu', 'Cantarell', 'Fira Sans', 'Droid Sans', 'Helvetica Neue', - sans-serif; - -webkit-font-smoothing: antialiased; - -moz-osx-font-smoothing: grayscale; - background-color: #02081d; -} - -code { - font-family: source-code-pro, Menlo, Monaco, Consolas, 'Courier New', - monospace; -} diff --git a/spaces/keneonyeachonam/DockerImageRecognitionToText021723/README.md b/spaces/keneonyeachonam/DockerImageRecognitionToText021723/README.md deleted file mode 100644 index 8cbd8f789c3047480d52cfa3e3278ca3b709512d..0000000000000000000000000000000000000000 --- a/spaces/keneonyeachonam/DockerImageRecognitionToText021723/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: DockerImageRecognitionToText021723 -emoji: 👁 -colorFrom: purple -colorTo: gray -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/kevinwang676/ChatGLM2-SadTalker-VC/src/facerender/modules/mapping.py b/spaces/kevinwang676/ChatGLM2-SadTalker-VC/src/facerender/modules/mapping.py deleted file mode 100644 index 0e3a1c2d1770996080c08e9daafb346f05d7bcdd..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/ChatGLM2-SadTalker-VC/src/facerender/modules/mapping.py +++ /dev/null @@ -1,47 +0,0 @@ -import numpy as np - -import torch -import torch.nn as nn -import torch.nn.functional as F - - -class MappingNet(nn.Module): - def __init__(self, coeff_nc, descriptor_nc, layer, num_kp, num_bins): - super( MappingNet, self).__init__() - - self.layer = layer - nonlinearity = nn.LeakyReLU(0.1) - - self.first = nn.Sequential( - torch.nn.Conv1d(coeff_nc, descriptor_nc, kernel_size=7, padding=0, bias=True)) - - for i in range(layer): - net = nn.Sequential(nonlinearity, - torch.nn.Conv1d(descriptor_nc, descriptor_nc, kernel_size=3, padding=0, dilation=3)) - setattr(self, 'encoder' + str(i), net) - - self.pooling = nn.AdaptiveAvgPool1d(1) - self.output_nc = descriptor_nc - - self.fc_roll = nn.Linear(descriptor_nc, num_bins) - self.fc_pitch = nn.Linear(descriptor_nc, num_bins) - self.fc_yaw = nn.Linear(descriptor_nc, num_bins) - self.fc_t = nn.Linear(descriptor_nc, 3) - self.fc_exp = nn.Linear(descriptor_nc, 3*num_kp) - - def forward(self, input_3dmm): - out = self.first(input_3dmm) - for i in range(self.layer): - model = getattr(self, 'encoder' + str(i)) - out = model(out) + out[:,:,3:-3] - out = self.pooling(out) - out = out.view(out.shape[0], -1) - #print('out:', out.shape) - - yaw = self.fc_yaw(out) - pitch = self.fc_pitch(out) - 
roll = self.fc_roll(out) - t = self.fc_t(out) - exp = self.fc_exp(out) - - return {'yaw': yaw, 'pitch': pitch, 'roll': roll, 't': t, 'exp': exp} \ No newline at end of file diff --git a/spaces/kevinwang676/ChatGLM2-SadTalker/src/face3d/models/arcface_torch/backbones/mobilefacenet.py b/spaces/kevinwang676/ChatGLM2-SadTalker/src/face3d/models/arcface_torch/backbones/mobilefacenet.py deleted file mode 100644 index 87731491d76f9ff61cc70e57bb3f18c54fae308c..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/ChatGLM2-SadTalker/src/face3d/models/arcface_torch/backbones/mobilefacenet.py +++ /dev/null @@ -1,130 +0,0 @@ -''' -Adapted from https://github.com/cavalleria/cavaface.pytorch/blob/master/backbone/mobilefacenet.py -Original author cavalleria -''' - -import torch.nn as nn -from torch.nn import Linear, Conv2d, BatchNorm1d, BatchNorm2d, PReLU, Sequential, Module -import torch - - -class Flatten(Module): - def forward(self, x): - return x.view(x.size(0), -1) - - -class ConvBlock(Module): - def __init__(self, in_c, out_c, kernel=(1, 1), stride=(1, 1), padding=(0, 0), groups=1): - super(ConvBlock, self).__init__() - self.layers = nn.Sequential( - Conv2d(in_c, out_c, kernel, groups=groups, stride=stride, padding=padding, bias=False), - BatchNorm2d(num_features=out_c), - PReLU(num_parameters=out_c) - ) - - def forward(self, x): - return self.layers(x) - - -class LinearBlock(Module): - def __init__(self, in_c, out_c, kernel=(1, 1), stride=(1, 1), padding=(0, 0), groups=1): - super(LinearBlock, self).__init__() - self.layers = nn.Sequential( - Conv2d(in_c, out_c, kernel, stride, padding, groups=groups, bias=False), - BatchNorm2d(num_features=out_c) - ) - - def forward(self, x): - return self.layers(x) - - -class DepthWise(Module): - def __init__(self, in_c, out_c, residual=False, kernel=(3, 3), stride=(2, 2), padding=(1, 1), groups=1): - super(DepthWise, self).__init__() - self.residual = residual - self.layers = nn.Sequential( - ConvBlock(in_c, out_c=groups, kernel=(1, 1), padding=(0, 0), stride=(1, 1)), - ConvBlock(groups, groups, groups=groups, kernel=kernel, padding=padding, stride=stride), - LinearBlock(groups, out_c, kernel=(1, 1), padding=(0, 0), stride=(1, 1)) - ) - - def forward(self, x): - short_cut = None - if self.residual: - short_cut = x - x = self.layers(x) - if self.residual: - output = short_cut + x - else: - output = x - return output - - -class Residual(Module): - def __init__(self, c, num_block, groups, kernel=(3, 3), stride=(1, 1), padding=(1, 1)): - super(Residual, self).__init__() - modules = [] - for _ in range(num_block): - modules.append(DepthWise(c, c, True, kernel, stride, padding, groups)) - self.layers = Sequential(*modules) - - def forward(self, x): - return self.layers(x) - - -class GDC(Module): - def __init__(self, embedding_size): - super(GDC, self).__init__() - self.layers = nn.Sequential( - LinearBlock(512, 512, groups=512, kernel=(7, 7), stride=(1, 1), padding=(0, 0)), - Flatten(), - Linear(512, embedding_size, bias=False), - BatchNorm1d(embedding_size)) - - def forward(self, x): - return self.layers(x) - - -class MobileFaceNet(Module): - def __init__(self, fp16=False, num_features=512): - super(MobileFaceNet, self).__init__() - scale = 2 - self.fp16 = fp16 - self.layers = nn.Sequential( - ConvBlock(3, 64 * scale, kernel=(3, 3), stride=(2, 2), padding=(1, 1)), - ConvBlock(64 * scale, 64 * scale, kernel=(3, 3), stride=(1, 1), padding=(1, 1), groups=64), - DepthWise(64 * scale, 64 * scale, kernel=(3, 3), stride=(2, 2), padding=(1, 1), groups=128), - 
Residual(64 * scale, num_block=4, groups=128, kernel=(3, 3), stride=(1, 1), padding=(1, 1)), - DepthWise(64 * scale, 128 * scale, kernel=(3, 3), stride=(2, 2), padding=(1, 1), groups=256), - Residual(128 * scale, num_block=6, groups=256, kernel=(3, 3), stride=(1, 1), padding=(1, 1)), - DepthWise(128 * scale, 128 * scale, kernel=(3, 3), stride=(2, 2), padding=(1, 1), groups=512), - Residual(128 * scale, num_block=2, groups=256, kernel=(3, 3), stride=(1, 1), padding=(1, 1)), - ) - self.conv_sep = ConvBlock(128 * scale, 512, kernel=(1, 1), stride=(1, 1), padding=(0, 0)) - self.features = GDC(num_features) - self._initialize_weights() - - def _initialize_weights(self): - for m in self.modules(): - if isinstance(m, nn.Conv2d): - nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu') - if m.bias is not None: - m.bias.data.zero_() - elif isinstance(m, nn.BatchNorm2d): - m.weight.data.fill_(1) - m.bias.data.zero_() - elif isinstance(m, nn.Linear): - nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu') - if m.bias is not None: - m.bias.data.zero_() - - def forward(self, x): - with torch.cuda.amp.autocast(self.fp16): - x = self.layers(x) - x = self.conv_sep(x.float() if self.fp16 else x) - x = self.features(x) - return x - - -def get_mbf(fp16, num_features): - return MobileFaceNet(fp16, num_features) \ No newline at end of file diff --git a/spaces/kevinwang676/ChatGLM2-VC-SadTalker/src/face3d/data/base_dataset.py b/spaces/kevinwang676/ChatGLM2-VC-SadTalker/src/face3d/data/base_dataset.py deleted file mode 100644 index 1bd57d082d519f512d7114b4f867b6695fb7de06..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/ChatGLM2-VC-SadTalker/src/face3d/data/base_dataset.py +++ /dev/null @@ -1,125 +0,0 @@ -"""This module implements an abstract base class (ABC) 'BaseDataset' for datasets. - -It also includes common transformation functions (e.g., get_transform, __scale_width), which can be later used in subclasses. -""" -import random -import numpy as np -import torch.utils.data as data -from PIL import Image -import torchvision.transforms as transforms -from abc import ABC, abstractmethod - - -class BaseDataset(data.Dataset, ABC): - """This class is an abstract base class (ABC) for datasets. - - To create a subclass, you need to implement the following four functions: - -- <__init__>: initialize the class, first call BaseDataset.__init__(self, opt). - -- <__len__>: return the size of dataset. - -- <__getitem__>: get a data point. - -- : (optionally) add dataset-specific options and set default options. - """ - - def __init__(self, opt): - """Initialize the class; save the options in the class - - Parameters: - opt (Option class)-- stores all the experiment flags; needs to be a subclass of BaseOptions - """ - self.opt = opt - # self.root = opt.dataroot - self.current_epoch = 0 - - @staticmethod - def modify_commandline_options(parser, is_train): - """Add new dataset-specific options, and rewrite default values for existing options. - - Parameters: - parser -- original option parser - is_train (bool) -- whether training phase or test phase. You can use this flag to add training-specific or test-specific options. - - Returns: - the modified parser. - """ - return parser - - @abstractmethod - def __len__(self): - """Return the total number of images in the dataset.""" - return 0 - - @abstractmethod - def __getitem__(self, index): - """Return a data point and its metadata information. 
- - Parameters: - index - - a random integer for data indexing - - Returns: - a dictionary of data with their names. It ususally contains the data itself and its metadata information. - """ - pass - - -def get_transform(grayscale=False): - transform_list = [] - if grayscale: - transform_list.append(transforms.Grayscale(1)) - transform_list += [transforms.ToTensor()] - return transforms.Compose(transform_list) - -def get_affine_mat(opt, size): - shift_x, shift_y, scale, rot_angle, flip = 0., 0., 1., 0., False - w, h = size - - if 'shift' in opt.preprocess: - shift_pixs = int(opt.shift_pixs) - shift_x = random.randint(-shift_pixs, shift_pixs) - shift_y = random.randint(-shift_pixs, shift_pixs) - if 'scale' in opt.preprocess: - scale = 1 + opt.scale_delta * (2 * random.random() - 1) - if 'rot' in opt.preprocess: - rot_angle = opt.rot_angle * (2 * random.random() - 1) - rot_rad = -rot_angle * np.pi/180 - if 'flip' in opt.preprocess: - flip = random.random() > 0.5 - - shift_to_origin = np.array([1, 0, -w//2, 0, 1, -h//2, 0, 0, 1]).reshape([3, 3]) - flip_mat = np.array([-1 if flip else 1, 0, 0, 0, 1, 0, 0, 0, 1]).reshape([3, 3]) - shift_mat = np.array([1, 0, shift_x, 0, 1, shift_y, 0, 0, 1]).reshape([3, 3]) - rot_mat = np.array([np.cos(rot_rad), np.sin(rot_rad), 0, -np.sin(rot_rad), np.cos(rot_rad), 0, 0, 0, 1]).reshape([3, 3]) - scale_mat = np.array([scale, 0, 0, 0, scale, 0, 0, 0, 1]).reshape([3, 3]) - shift_to_center = np.array([1, 0, w//2, 0, 1, h//2, 0, 0, 1]).reshape([3, 3]) - - affine = shift_to_center @ scale_mat @ rot_mat @ shift_mat @ flip_mat @ shift_to_origin - affine_inv = np.linalg.inv(affine) - return affine, affine_inv, flip - -def apply_img_affine(img, affine_inv, method=Image.BICUBIC): - return img.transform(img.size, Image.AFFINE, data=affine_inv.flatten()[:6], resample=Image.BICUBIC) - -def apply_lm_affine(landmark, affine, flip, size): - _, h = size - lm = landmark.copy() - lm[:, 1] = h - 1 - lm[:, 1] - lm = np.concatenate((lm, np.ones([lm.shape[0], 1])), -1) - lm = lm @ np.transpose(affine) - lm[:, :2] = lm[:, :2] / lm[:, 2:] - lm = lm[:, :2] - lm[:, 1] = h - 1 - lm[:, 1] - if flip: - lm_ = lm.copy() - lm_[:17] = lm[16::-1] - lm_[17:22] = lm[26:21:-1] - lm_[22:27] = lm[21:16:-1] - lm_[31:36] = lm[35:30:-1] - lm_[36:40] = lm[45:41:-1] - lm_[40:42] = lm[47:45:-1] - lm_[42:46] = lm[39:35:-1] - lm_[46:48] = lm[41:39:-1] - lm_[48:55] = lm[54:47:-1] - lm_[55:60] = lm[59:54:-1] - lm_[60:65] = lm[64:59:-1] - lm_[65:68] = lm[67:64:-1] - lm = lm_ - return lm diff --git a/spaces/kevinwang676/ChatGLM2-VC-SadTalker/src/face3d/models/arcface_torch/run.sh b/spaces/kevinwang676/ChatGLM2-VC-SadTalker/src/face3d/models/arcface_torch/run.sh deleted file mode 100644 index 61af4b4950eb11334e55362e3e3c5e2796979a01..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/ChatGLM2-VC-SadTalker/src/face3d/models/arcface_torch/run.sh +++ /dev/null @@ -1,2 +0,0 @@ -CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python -m torch.distributed.launch --nproc_per_node=8 --nnodes=1 --node_rank=0 --master_addr="127.0.0.1" --master_port=1234 train.py configs/ms1mv3_r50 -ps -ef | grep "train" | grep -v grep | awk '{print "kill -9 "$2}' | sh diff --git a/spaces/kevinwang676/SadTalker/src/utils/hparams.py b/spaces/kevinwang676/SadTalker/src/utils/hparams.py deleted file mode 100644 index 743c5c7d5a5a9e686f1ccd6fb3c2fb5cb382d62b..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/SadTalker/src/utils/hparams.py +++ /dev/null @@ -1,160 +0,0 @@ -from glob import glob -import os - -class HParams: - 
def __init__(self, **kwargs): - self.data = {} - - for key, value in kwargs.items(): - self.data[key] = value - - def __getattr__(self, key): - if key not in self.data: - raise AttributeError("'HParams' object has no attribute %s" % key) - return self.data[key] - - def set_hparam(self, key, value): - self.data[key] = value - - -# Default hyperparameters -hparams = HParams( - num_mels=80, # Number of mel-spectrogram channels and local conditioning dimensionality - # network - rescale=True, # Whether to rescale audio prior to preprocessing - rescaling_max=0.9, # Rescaling value - - # Use LWS (https://github.com/Jonathan-LeRoux/lws) for STFT and phase reconstruction - # It"s preferred to set True to use with https://github.com/r9y9/wavenet_vocoder - # Does not work if n_ffit is not multiple of hop_size!! - use_lws=False, - - n_fft=800, # Extra window size is filled with 0 paddings to match this parameter - hop_size=200, # For 16000Hz, 200 = 12.5 ms (0.0125 * sample_rate) - win_size=800, # For 16000Hz, 800 = 50 ms (If None, win_size = n_fft) (0.05 * sample_rate) - sample_rate=16000, # 16000Hz (corresponding to librispeech) (sox --i ) - - frame_shift_ms=None, # Can replace hop_size parameter. (Recommended: 12.5) - - # Mel and Linear spectrograms normalization/scaling and clipping - signal_normalization=True, - # Whether to normalize mel spectrograms to some predefined range (following below parameters) - allow_clipping_in_normalization=True, # Only relevant if mel_normalization = True - symmetric_mels=True, - # Whether to scale the data to be symmetric around 0. (Also multiplies the output range by 2, - # faster and cleaner convergence) - max_abs_value=4., - # max absolute value of data. If symmetric, data will be [-max, max] else [0, max] (Must not - # be too big to avoid gradient explosion, - # not too small for fast convergence) - # Contribution by @begeekmyfriend - # Spectrogram Pre-Emphasis (Lfilter: Reduce spectrogram noise and helps model certitude - # levels. Also allows for better G&L phase reconstruction) - preemphasize=True, # whether to apply filter - preemphasis=0.97, # filter coefficient. - - # Limits - min_level_db=-100, - ref_level_db=20, - fmin=55, - # Set this to 55 if your speaker is male! if female, 95 should help taking off noise. (To - # test depending on dataset. Pitch info: male~[65, 260], female~[100, 525]) - fmax=7600, # To be increased/reduced depending on data. - - ###################### Our training parameters ################################# - img_size=96, - fps=25, - - batch_size=16, - initial_learning_rate=1e-4, - nepochs=300000, ### ctrl + c, stop whenever eval loss is consistently greater than train loss for ~10 epochs - num_workers=20, - checkpoint_interval=3000, - eval_interval=3000, - writer_interval=300, - save_optimizer_state=True, - - syncnet_wt=0.0, # is initially zero, will be set automatically to 0.03 later. Leads to faster convergence. 
- syncnet_batch_size=64, - syncnet_lr=1e-4, - syncnet_eval_interval=1000, - syncnet_checkpoint_interval=10000, - - disc_wt=0.07, - disc_initial_learning_rate=1e-4, -) - - - -# Default hyperparameters -hparamsdebug = HParams( - num_mels=80, # Number of mel-spectrogram channels and local conditioning dimensionality - # network - rescale=True, # Whether to rescale audio prior to preprocessing - rescaling_max=0.9, # Rescaling value - - # Use LWS (https://github.com/Jonathan-LeRoux/lws) for STFT and phase reconstruction - # It"s preferred to set True to use with https://github.com/r9y9/wavenet_vocoder - # Does not work if n_ffit is not multiple of hop_size!! - use_lws=False, - - n_fft=800, # Extra window size is filled with 0 paddings to match this parameter - hop_size=200, # For 16000Hz, 200 = 12.5 ms (0.0125 * sample_rate) - win_size=800, # For 16000Hz, 800 = 50 ms (If None, win_size = n_fft) (0.05 * sample_rate) - sample_rate=16000, # 16000Hz (corresponding to librispeech) (sox --i ) - - frame_shift_ms=None, # Can replace hop_size parameter. (Recommended: 12.5) - - # Mel and Linear spectrograms normalization/scaling and clipping - signal_normalization=True, - # Whether to normalize mel spectrograms to some predefined range (following below parameters) - allow_clipping_in_normalization=True, # Only relevant if mel_normalization = True - symmetric_mels=True, - # Whether to scale the data to be symmetric around 0. (Also multiplies the output range by 2, - # faster and cleaner convergence) - max_abs_value=4., - # max absolute value of data. If symmetric, data will be [-max, max] else [0, max] (Must not - # be too big to avoid gradient explosion, - # not too small for fast convergence) - # Contribution by @begeekmyfriend - # Spectrogram Pre-Emphasis (Lfilter: Reduce spectrogram noise and helps model certitude - # levels. Also allows for better G&L phase reconstruction) - preemphasize=True, # whether to apply filter - preemphasis=0.97, # filter coefficient. - - # Limits - min_level_db=-100, - ref_level_db=20, - fmin=55, - # Set this to 55 if your speaker is male! if female, 95 should help taking off noise. (To - # test depending on dataset. Pitch info: male~[65, 260], female~[100, 525]) - fmax=7600, # To be increased/reduced depending on data. - - ###################### Our training parameters ################################# - img_size=96, - fps=25, - - batch_size=2, - initial_learning_rate=1e-3, - nepochs=100000, ### ctrl + c, stop whenever eval loss is consistently greater than train loss for ~10 epochs - num_workers=0, - checkpoint_interval=10000, - eval_interval=10, - writer_interval=5, - save_optimizer_state=True, - - syncnet_wt=0.0, # is initially zero, will be set automatically to 0.03 later. Leads to faster convergence. 
- syncnet_batch_size=64, - syncnet_lr=1e-4, - syncnet_eval_interval=10000, - syncnet_checkpoint_interval=10000, - - disc_wt=0.07, - disc_initial_learning_rate=1e-4, -) - - -def hparams_debug_string(): - values = hparams.values() - hp = [" %s: %s" % (name, values[name]) for name in sorted(values) if name != "sentences"] - return "Hyperparameters:\n" + "\n".join(hp) diff --git a/spaces/kevinwang676/VoiceChangers/src/facerender/animate.py b/spaces/kevinwang676/VoiceChangers/src/facerender/animate.py deleted file mode 100644 index 781f5a3318a086049cc6b74393073ddda7001d5e..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/VoiceChangers/src/facerender/animate.py +++ /dev/null @@ -1,257 +0,0 @@ -import os -import cv2 -import yaml -import numpy as np -import warnings -from skimage import img_as_ubyte -import safetensors -import safetensors.torch -warnings.filterwarnings('ignore') - - -import imageio -import torch -import torchvision - - -from src.facerender.modules.keypoint_detector import HEEstimator, KPDetector -from src.facerender.modules.mapping import MappingNet -from src.facerender.modules.generator import OcclusionAwareGenerator, OcclusionAwareSPADEGenerator -from src.facerender.modules.make_animation import make_animation - -from pydub import AudioSegment -from src.utils.face_enhancer import enhancer_generator_with_len, enhancer_list -from src.utils.paste_pic import paste_pic -from src.utils.videoio import save_video_with_watermark - -try: - import webui # in webui - in_webui = True -except: - in_webui = False - -class AnimateFromCoeff(): - - def __init__(self, sadtalker_path, device): - - with open(sadtalker_path['facerender_yaml']) as f: - config = yaml.safe_load(f) - - generator = OcclusionAwareSPADEGenerator(**config['model_params']['generator_params'], - **config['model_params']['common_params']) - kp_extractor = KPDetector(**config['model_params']['kp_detector_params'], - **config['model_params']['common_params']) - he_estimator = HEEstimator(**config['model_params']['he_estimator_params'], - **config['model_params']['common_params']) - mapping = MappingNet(**config['model_params']['mapping_params']) - - generator.to(device) - kp_extractor.to(device) - he_estimator.to(device) - mapping.to(device) - for param in generator.parameters(): - param.requires_grad = False - for param in kp_extractor.parameters(): - param.requires_grad = False - for param in he_estimator.parameters(): - param.requires_grad = False - for param in mapping.parameters(): - param.requires_grad = False - - if sadtalker_path is not None: - if 'checkpoint' in sadtalker_path: # use safe tensor - self.load_cpk_facevid2vid_safetensor(sadtalker_path['checkpoint'], kp_detector=kp_extractor, generator=generator, he_estimator=None) - else: - self.load_cpk_facevid2vid(sadtalker_path['free_view_checkpoint'], kp_detector=kp_extractor, generator=generator, he_estimator=he_estimator) - else: - raise AttributeError("Checkpoint should be specified for video head pose estimator.") - - if sadtalker_path['mappingnet_checkpoint'] is not None: - self.load_cpk_mapping(sadtalker_path['mappingnet_checkpoint'], mapping=mapping) - else: - raise AttributeError("Checkpoint should be specified for video head pose estimator.") - - self.kp_extractor = kp_extractor - self.generator = generator - self.he_estimator = he_estimator - self.mapping = mapping - - self.kp_extractor.eval() - self.generator.eval() - self.he_estimator.eval() - self.mapping.eval() - - self.device = device - - def load_cpk_facevid2vid_safetensor(self, 
checkpoint_path, generator=None, - kp_detector=None, he_estimator=None, - device="cpu"): - - checkpoint = safetensors.torch.load_file(checkpoint_path) - - if generator is not None: - x_generator = {} - for k,v in checkpoint.items(): - if 'generator' in k: - x_generator[k.replace('generator.', '')] = v - generator.load_state_dict(x_generator) - if kp_detector is not None: - x_generator = {} - for k,v in checkpoint.items(): - if 'kp_extractor' in k: - x_generator[k.replace('kp_extractor.', '')] = v - kp_detector.load_state_dict(x_generator) - if he_estimator is not None: - x_generator = {} - for k,v in checkpoint.items(): - if 'he_estimator' in k: - x_generator[k.replace('he_estimator.', '')] = v - he_estimator.load_state_dict(x_generator) - - return None - - def load_cpk_facevid2vid(self, checkpoint_path, generator=None, discriminator=None, - kp_detector=None, he_estimator=None, optimizer_generator=None, - optimizer_discriminator=None, optimizer_kp_detector=None, - optimizer_he_estimator=None, device="cpu"): - checkpoint = torch.load(checkpoint_path, map_location=torch.device(device)) - if generator is not None: - generator.load_state_dict(checkpoint['generator']) - if kp_detector is not None: - kp_detector.load_state_dict(checkpoint['kp_detector']) - if he_estimator is not None: - he_estimator.load_state_dict(checkpoint['he_estimator']) - if discriminator is not None: - try: - discriminator.load_state_dict(checkpoint['discriminator']) - except: - print ('No discriminator in the state-dict. Dicriminator will be randomly initialized') - if optimizer_generator is not None: - optimizer_generator.load_state_dict(checkpoint['optimizer_generator']) - if optimizer_discriminator is not None: - try: - optimizer_discriminator.load_state_dict(checkpoint['optimizer_discriminator']) - except RuntimeError as e: - print ('No discriminator optimizer in the state-dict. 
Optimizer will be not initialized') - if optimizer_kp_detector is not None: - optimizer_kp_detector.load_state_dict(checkpoint['optimizer_kp_detector']) - if optimizer_he_estimator is not None: - optimizer_he_estimator.load_state_dict(checkpoint['optimizer_he_estimator']) - - return checkpoint['epoch'] - - def load_cpk_mapping(self, checkpoint_path, mapping=None, discriminator=None, - optimizer_mapping=None, optimizer_discriminator=None, device='cpu'): - checkpoint = torch.load(checkpoint_path, map_location=torch.device(device)) - if mapping is not None: - mapping.load_state_dict(checkpoint['mapping']) - if discriminator is not None: - discriminator.load_state_dict(checkpoint['discriminator']) - if optimizer_mapping is not None: - optimizer_mapping.load_state_dict(checkpoint['optimizer_mapping']) - if optimizer_discriminator is not None: - optimizer_discriminator.load_state_dict(checkpoint['optimizer_discriminator']) - - return checkpoint['epoch'] - - def generate(self, x, video_save_dir, pic_path, crop_info, enhancer=None, background_enhancer=None, preprocess='crop', img_size=256): - - source_image=x['source_image'].type(torch.FloatTensor) - source_semantics=x['source_semantics'].type(torch.FloatTensor) - target_semantics=x['target_semantics_list'].type(torch.FloatTensor) - source_image=source_image.to(self.device) - source_semantics=source_semantics.to(self.device) - target_semantics=target_semantics.to(self.device) - if 'yaw_c_seq' in x: - yaw_c_seq = x['yaw_c_seq'].type(torch.FloatTensor) - yaw_c_seq = x['yaw_c_seq'].to(self.device) - else: - yaw_c_seq = None - if 'pitch_c_seq' in x: - pitch_c_seq = x['pitch_c_seq'].type(torch.FloatTensor) - pitch_c_seq = x['pitch_c_seq'].to(self.device) - else: - pitch_c_seq = None - if 'roll_c_seq' in x: - roll_c_seq = x['roll_c_seq'].type(torch.FloatTensor) - roll_c_seq = x['roll_c_seq'].to(self.device) - else: - roll_c_seq = None - - frame_num = x['frame_num'] - - predictions_video = make_animation(source_image, source_semantics, target_semantics, - self.generator, self.kp_extractor, self.he_estimator, self.mapping, - yaw_c_seq, pitch_c_seq, roll_c_seq, use_exp = True) - - predictions_video = predictions_video.reshape((-1,)+predictions_video.shape[2:]) - predictions_video = predictions_video[:frame_num] - - video = [] - for idx in range(predictions_video.shape[0]): - image = predictions_video[idx] - image = np.transpose(image.data.cpu().numpy(), [1, 2, 0]).astype(np.float32) - video.append(image) - result = img_as_ubyte(video) - - ### the generated video is 256x256, so we keep the aspect ratio, - original_size = crop_info[0] - if original_size: - result = [ cv2.resize(result_i,(img_size, int(img_size * original_size[1]/original_size[0]) )) for result_i in result ] - - video_name = x['video_name'] + '.mp4' - path = os.path.join(video_save_dir, 'temp_'+video_name) - - imageio.mimsave(path, result, fps=float(25)) - - av_path = os.path.join(video_save_dir, video_name) - return_path = av_path - - audio_path = x['audio_path'] - audio_name = os.path.splitext(os.path.split(audio_path)[-1])[0] - new_audio_path = os.path.join(video_save_dir, audio_name+'.wav') - start_time = 0 - # cog will not keep the .mp3 filename - sound = AudioSegment.from_file(audio_path) - frames = frame_num - end_time = start_time + frames*1/25*1000 - word1=sound.set_frame_rate(16000) - word = word1[start_time:end_time] - word.export(new_audio_path, format="wav") - - save_video_with_watermark(path, new_audio_path, av_path, watermark= False) - print(f'The generated video is named 
{video_save_dir}/{video_name}') - - if 'full' in preprocess.lower(): - # only add watermark to the full image. - video_name_full = x['video_name'] + '_full.mp4' - full_video_path = os.path.join(video_save_dir, video_name_full) - return_path = full_video_path - paste_pic(path, pic_path, crop_info, new_audio_path, full_video_path, extended_crop= True if 'ext' in preprocess.lower() else False) - print(f'The generated video is named {video_save_dir}/{video_name_full}') - else: - full_video_path = av_path - - #### paste back then enhancers - if enhancer: - video_name_enhancer = x['video_name'] + '_enhanced.mp4' - enhanced_path = os.path.join(video_save_dir, 'temp_'+video_name_enhancer) - av_path_enhancer = os.path.join(video_save_dir, video_name_enhancer) - return_path = av_path_enhancer - - try: - enhanced_images_gen_with_len = enhancer_generator_with_len(full_video_path, method=enhancer, bg_upsampler=background_enhancer) - imageio.mimsave(enhanced_path, enhanced_images_gen_with_len, fps=float(25)) - except: - enhanced_images_gen_with_len = enhancer_list(full_video_path, method=enhancer, bg_upsampler=background_enhancer) - imageio.mimsave(enhanced_path, enhanced_images_gen_with_len, fps=float(25)) - - save_video_with_watermark(enhanced_path, new_audio_path, av_path_enhancer, watermark= False) - print(f'The generated video is named {video_save_dir}/{video_name_enhancer}') - os.remove(enhanced_path) - - os.remove(path) - os.remove(new_audio_path) - - return return_path - diff --git a/spaces/kidcoconut/spcstm_omdenasaudi_liverhccxai/uix/pages/__init__.py b/spaces/kidcoconut/spcstm_omdenasaudi_liverhccxai/uix/pages/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/king007/GPT-Prompt-Generate-2/README.md b/spaces/king007/GPT-Prompt-Generate-2/README.md deleted file mode 100644 index 2d3b3ace3fa6b3cd245b01c2128f5ee421c04255..0000000000000000000000000000000000000000 --- a/spaces/king007/GPT-Prompt-Generate-2/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: GPT Prompt Generate 2 -emoji: 👨🏻‍🎤 -colorFrom: purple -colorTo: pink -sdk: gradio -sdk_version: 3.16.2 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmseg/models/builder.py b/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmseg/models/builder.py deleted file mode 100644 index 1f5b971252bfc971c3ffbaa27746d69b1d3ea9fd..0000000000000000000000000000000000000000 --- a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmseg/models/builder.py +++ /dev/null @@ -1,46 +0,0 @@ -import warnings - -from annotator.uniformer.mmcv.cnn import MODELS as MMCV_MODELS -from annotator.uniformer.mmcv.utils import Registry - -MODELS = Registry('models', parent=MMCV_MODELS) - -BACKBONES = MODELS -NECKS = MODELS -HEADS = MODELS -LOSSES = MODELS -SEGMENTORS = MODELS - - -def build_backbone(cfg): - """Build backbone.""" - return BACKBONES.build(cfg) - - -def build_neck(cfg): - """Build neck.""" - return NECKS.build(cfg) - - -def build_head(cfg): - """Build head.""" - return HEADS.build(cfg) - - -def build_loss(cfg): - """Build loss.""" - return LOSSES.build(cfg) - - -def build_segmentor(cfg, train_cfg=None, test_cfg=None): - """Build segmentor.""" - if train_cfg is not None or test_cfg is not None: - warnings.warn( - 'train_cfg and test_cfg is deprecated, ' - 'please specify them in model', UserWarning) 
- assert cfg.get('train_cfg') is None or train_cfg is None, \ - 'train_cfg specified in both outer field and model field ' - assert cfg.get('test_cfg') is None or test_cfg is None, \ - 'test_cfg specified in both outer field and model field ' - return SEGMENTORS.build( - cfg, default_args=dict(train_cfg=train_cfg, test_cfg=test_cfg)) diff --git a/spaces/kkawamu1/huggingface_multi_inference_rank_eval/app/main_page.py b/spaces/kkawamu1/huggingface_multi_inference_rank_eval/app/main_page.py deleted file mode 100644 index bd5a5735e8bd404cfb5a6d678b198630cce4f4c4..0000000000000000000000000000000000000000 --- a/spaces/kkawamu1/huggingface_multi_inference_rank_eval/app/main_page.py +++ /dev/null @@ -1,74 +0,0 @@ -# Copyright 2022 Ken Kawamura -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import streamlit as st -import streamlit.components.v1 as stc - -st.set_page_config( - page_title="Example", layout="wide" -) - -#style taken from https://fossheim.io/writing/posts/css-text-gradient/ -stc.html(""" - - - - - - -

Multiple Choice
QA

- - - - - - - """, height=1000) - -st.sidebar.markdown("# Home Page 🤗") diff --git a/spaces/koajoel/PolyFormer/fairseq/examples/fast_noisy_channel/README.md b/spaces/koajoel/PolyFormer/fairseq/examples/fast_noisy_channel/README.md deleted file mode 100644 index f2631a8c34d11bdf7d351c6807b6fe415f5715e1..0000000000000000000000000000000000000000 --- a/spaces/koajoel/PolyFormer/fairseq/examples/fast_noisy_channel/README.md +++ /dev/null @@ -1,345 +0,0 @@ -# Language Models not just for Pre-training: Fast Online Neural Noisy Channel Modeling - -## Introduction -- [Yee et al. (2019)](https://www.aclweb.org/anthology/D19-1571.pdf) introduce a simple and effective noisy channel modeling approach for neural machine translation. However, the noisy channel online decoding approach introduced in this paper is too slow to be practical. -- To address this, [Bhosale et al. (2020)](http://www.statmt.org/wmt20/pdf/2020.wmt-1.68.pdf) introduces 3 simple approximations to make this approach very fast and practical without much loss in accuracy. -- This README provides intructions on how to run online decoding or generation with the noisy channel modeling approach, including ways to make it very fast without much loss in accuracy. - -## Noisy Channel Modeling - -[Yee et al. (2019)](https://www.aclweb.org/anthology/D19-1571.pdf) applies the Bayes Rule to predict `P(y|x)`, the probability of the target `y` given the source `x`. -```P(y|x) = P(x|y) * P(y) / P(x)``` -- `P(x|y)` predicts the source `x` given the target `y` and is referred to as the **channel model** -- `P(y)` is a **language model** over the target `y` -- `P(x)` is generally not modeled since it is constant for all `y`. - -We use Transformer models to parameterize the direct model `P(y|x)`, the channel model `P(x|y)` and the language model `P(y)`. - -During online decoding with beam search, we generate the top `K2` candidates per beam and score them with the following linear combination of the channel model, the language model as well as the direct model scores. - -```(1 / t) * log(P(y|x) + (1 / s) * ( λ1 * log(P(x|y)) + λ2 * log(P(y) ) )``` -- `t` - Target Prefix Length -- `s` - Source Length -- `λ1` - Channel Model Weight -- `λ2` - Language Model Weight - -The top `beam_size` candidates based on the above combined scores are chosen to continue the beams in beam search. In beam search with a direct model alone, the scores from the direct model `P(y|x)` are used to choose the top candidates in beam search. - -This framework provides a great way to utlize strong target language models trained on large amounts of unlabeled data. Language models can prefer targets unrelated to the source, so we also need a channel model whose role is to ensure that the target preferred by the language model also translates back to the source. - -### Training Translation Models and Language Models - -For training Transformer models in fairseq for machine translation, refer to instructions [here](https://github.com/pytorch/fairseq/tree/main/examples/translation) - -For training Transformer models in fairseq for language modeling, refer to instructions [here](https://github.com/pytorch/fairseq/tree/main/examples/language_model) - -### Generation with Language Model for German-English translation with fairseq - -Here are instructions to generate using a direct model and a target-side language model. 
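As a quick orientation before the exact commands, the linear combination described earlier can be written out as a small helper. The sketch below is illustrative only and is not fairseq's actual implementation; the function and argument names are invented for clarity, with `lambda1` and `lambda2` playing the roles of the channel-model and language-model weights (`--ch-wt` and `--lm-wt` in the commands that follow).

```python
# Illustrative sketch of the combined noisy-channel score described above.
# NOT fairseq's implementation; names and tensor plumbing are simplified.

def noisy_channel_score(log_p_direct: float,   # log P(y|x) from the direct model
                        log_p_channel: float,  # log P(x|y) from the channel model
                        log_p_lm: float,       # log P(y) from the language model
                        t: int,                # target prefix length
                        s: int,                # source length
                        lambda1: float,        # channel model weight
                        lambda2: float) -> float:  # language model weight
    """Score used to rank the top-k2 candidates per beam."""
    return (1.0 / t) * log_p_direct + (1.0 / s) * (
        lambda1 * log_p_channel + lambda2 * log_p_lm
    )
```

During beam search, the `k2` expansions of each beam are ranked by this quantity and the top `beam_size` of them continue, as described in the previous section.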
- -Note: -- Download and install fairseq as per instructions [here](https://github.com/pytorch/fairseq) -- Preprocess and binarize the dataset as per instructions in section [Test Data Preprocessing](#test-data-preprocessing) - -```sh -binarized_data=data_dir/binarized -direct_model=de_en_seed4.pt -lm_model=en_lm.pt -lm_data=lm_data -wget https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/direct_models/seed4.pt -O ${direct_model} -wget https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/lm_model/transformer_lm.pt -O ${lm_model} -mkdir -p ${lm_data} -wget https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/lm_model/lm_dict/dict.txt -O ${lm_data}/dict.txt - -k2=10 -lenpen=0.16 -lm_wt=0.14 -fairseq-generate ${binarized_data} \ - --user-dir examples/fast_noisy_channel \ - --beam 5 \ - --path ${direct_model} \ - --lm-model ${lm_model} \ - --lm-data ${lm_data} \ - --k2 ${k2} \ - --combine-method lm_only \ - --task noisy_channel_translation \ - --lenpen ${lenpen} \ - --lm-wt ${lm_wt} \ - --gen-subset valid \ - --remove-bpe \ - --fp16 \ - --batch-size 10 -``` -### Noisy Channel Generation for German-English translation with fairseq - -Here are instructions for noisy channel generation with a direct model, channel model and language model as explained in section [Noisy Channel Modeling](#noisy-channel-modeling). - -Note: -- Download and install fairseq as per instructions [here](https://github.com/pytorch/fairseq) -- Preprocess and binarize the dataset as per instructions in section [Test Data Preprocessing](#test-data-preprocessing) - -```sh -binarized_data=data_dir/binarized -direct_model=de_en_seed4.pt -lm_model=en_lm.pt -lm_data=lm_data -ch_model=en_de.big.seed4.pt -wget https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/direct_models/seed4.pt -O ${direct_model} -wget https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/lm_model/transformer_lm.pt -O ${lm_model} -mkdir -p ${lm_data} -wget https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/lm_model/lm_dict/dict.txt -O ${lm_data}/dict.txt -wget https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/big.seed4.pt -O ${ch_model} - -k2=10 -lenpen=0.21 -lm_wt=0.50 -bw_wt=0.30 -fairseq-generate ${binarized_data} \ - --user-dir examples/fast_noisy_channel \ - --beam 5 \ - --path ${direct_model} \ - --lm-model ${lm_model} \ - --lm-data ${lm_data} \ - --channel-model ${ch_model} \ - --k2 ${k2} \ - --combine-method noisy_channel \ - --task noisy_channel_translation \ - --lenpen ${lenpen} \ - --lm-wt ${lm_wt} \ - --ch-wt ${bw_wt} \ - --gen-subset test \ - --remove-bpe \ - --fp16 \ - --batch-size 1 -``` -## Fast Noisy Channel Modeling - -[Bhosale et al. (2020)](http://www.statmt.org/wmt20/pdf/2020.wmt-1.68.pdf) introduces 3 approximations that speed up online noisy channel decoding - -- Smaller channel models (`Tranformer Base` with 1 encoder and decoder layer each vs. `Transformer Big`) - - This involves training a channel model that is possibly smaller and less accurate in terms of BLEU than a channel model of the same size as the direct model. - - Since the role of the channel model is mainly to assign low scores to generations from the language model if they don't translate back to the source, we may not need the most accurate channel model for this purpose. -- Smaller output vocabulary size for the channel model (~30,000 -> ~1000) - - The channel model doesn't need to score the full output vocabulary, it just needs to score the source tokens, which are completely known. 
- - This is specified using the arguments `--channel-scoring-type src_vocab --top-k-vocab 500` - - This means that the output vocabulary for the channel model will be the source tokens for all examples in the batch and the top-K most frequent tokens in the vocabulary - - This reduces the memory consumption needed to store channel model scores significantly -- Smaller number of candidates (`k2`) scored per beam - - This is specified by reducing the argument `--k2` - - -### Fast Noisy Channel Generation for German-English translation with fairseq - -Here are instructions for **fast** noisy channel generation with a direct model, channel model and language model as explained in section [Fast Noisy Channel Modeling](#fast-noisy-channel-modeling). The main differences are that we use a smaller channel model, reduce `--k2`, set `--channel-scoring-type src_vocab --top-k-vocab 500` and increase the `--batch-size`. - -Note: -- Download and install fairseq as per instructions [here](https://github.com/pytorch/fairseq) -- Preprocess and binarize the dataset as per instructions in section [Test Data Preprocessing](#test-data-preprocessing) - -```sh -binarized_data=data_dir/binarized -direct_model=de_en_seed4.pt -lm_model=en_lm.pt -lm_data=lm_data -small_ch_model=en_de.base_1_1.seed4.pt -wget https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/direct_models/seed4.pt -O ${direct_model} -wget https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/lm_model/transformer_lm.pt -O ${lm_model} -mkdir -p ${lm_data} -wget https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/lm_model/lm_dict/dict.txt -O ${lm_data}/dict.txt -wget https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/base_1_1.seed4.pt -O ${small_ch_model} - -k2=3 -lenpen=0.23 -lm_wt=0.58 -bw_wt=0.26 -fairseq-generate ${binarized_data} \ - --user-dir examples/fast_noisy_channel \ - --beam 5 \ - --path ${direct_model} \ - --lm-model ${lm_model} \ - --lm-data ${lm_data} \ - --channel-model ${small_ch_model} \ - --k2 ${k2} \ - --combine-method noisy_channel \ - --task noisy_channel_translation \ - --lenpen ${lenpen} \ - --lm-wt ${lm_wt} \ - --ch-wt ${bw_wt} \ - --gen-subset test \ - --remove-bpe \ - --fp16 \ - --batch-size 50 \ - --channel-scoring-type src_vocab --top-k-vocab 500 -``` - -## Test Data Preprocessing - -For preprocessing and binarizing the test sets for Romanian-English and German-English translation, we use the following script - - -```sh -FAIRSEQ=/path/to/fairseq -cd $FAIRSEQ -SCRIPTS=$FAIRSEQ/mosesdecoder/scripts -if [ ! -d "${SCRIPTS}" ]; then - echo 'Cloning Moses github repository (for tokenization scripts)...' 
- git clone https://github.com/moses-smt/mosesdecoder.git -fi -TOKENIZER=$SCRIPTS/tokenizer/tokenizer.perl -NORMALIZE=$SCRIPTS/tokenizer/normalize-punctuation.perl - -s=de -t=en -test=wmt18 - -mkdir -p data_dir - -# Tokenization -if [ $s == "ro" ] ; then - # Note: Get normalise-romanian.py and remove-diacritics.py from - # https://github.com/rsennrich/wmt16-scripts/tree/master/preprocess - sacrebleu -t $test -l $s-$t --echo src | \ - $NORMALIZE -l $s | \ - python normalise-romanian.py | \ - python remove-diacritics.py | \ - $TOKENIZER -l $s -a -q > data_dir/$test.$s-$t.$s -else - sacrebleu -t $test -l $s-$t --echo src | perl $NORMALIZE -l $s | perl $TOKENIZER -threads 8 -a -l $s > data_dir/$test.$s-$t.$s -fi - -sacrebleu -t $test -l $s-$t --echo ref | perl $NORMALIZE -l $t | perl $TOKENIZER -threads 8 -a -l $t > data_dir/$test.$s-$t.$t - - -# Applying BPE -src_bpe_code=/path/to/source/language/bpe/code -tgt_bpe_code=/path/to/target/language/bpe/code -src_dict=/path/to/source/language/dict -tgt_dict=/path/to/target/language/dict - -FASTBPE=$FAIRSEQ/fastBPE -if [ ! -d "${FASTBPE}" ] ; then - git clone https://github.com/glample/fastBPE.git - # Follow compilation instructions at https://github.com/glample/fastBPE - g++ -std=c++11 -pthread -O3 fastBPE/main.cc -IfastBPE -o fast -fi - -${FASTBPE}/fast applybpe data_dir/bpe.$test.$s-$t.$s data_dir/$test.$s-$t.$s ${src_bpe_code} -${FASTBPE}/fast applybpe data_dir/bpe.$test.$s-$t.$s data_dir/$test.$s-$t.$s ${tgt_bpe_code} - -fairseq-preprocess -s $s -t $t \ - --testpref data_dir/bpe.$test.$s-$t \ - --destdir data_dir/binarized \ - --srcdict ${src_dict} \ - --tgtdict ${tgt_dict} -``` - -## Calculating BLEU - -```sh -DETOKENIZER=$SCRIPTS/tokenizer/detokenizer.perl -cat ${generation_output} | grep -P "^H" | sort -V | cut -f 3- | $DETOKENIZER -l $t -q -a | sacrebleu -t $test -l $s-$t -``` - - -## Romanian-English Translation - -The direct and channel models are trained using bitext data (WMT16) combined with backtranslated data (The monolingual data used for backtranslation comes from http://data.statmt.org/rsennrich/wmt16_backtranslations/ (Sennrich et al., 2016c)) - -The backtranslated data is generated using an ensemble of 3 English-Romanian models trained on bitext training data (WMT16) with unrestricted sampling. - -### BPE Codes and Dictionary - -We learn a joint BPE vocabulary of 18K types on the bitext training data which is used for both the source and target. -||Path| -|----------|------| -| BPE Code | [joint_bpe_18k](https://dl.fbaipublicfiles.com/fast_noisy_channel/ro_en/bpe_18k) | -| Dictionary | [dict](https://dl.fbaipublicfiles.com/fast_noisy_channel/ro_en/dict) | - -### Direct Models -For Ro-En with backtranslation, the direct and channel models use a Transformer-Big architecture. - -| Seed | Model | -|----|----| -| 2 | [ro_en_seed2.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/ro_en/direct_models/seed2.pt) -| 4 | [ro_en_seed4.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/ro_en/direct_models/seed4.pt) -| 6 | [ro_en_seed6.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/ro_en/direct_models/seed6.pt) - -### Channel Models -For channel models, we follow the same steps as for the direct models. But backtranslated data is generated in the opposite direction using [this Romanian monolingual data](http://data.statmt.org/rsennrich/wmt16_backtranslations/). -The best lenpen, LM weight and CH weight are obtained by sweeping over the validation set (wmt16/dev) using beam 5. 
-| Model Size | Lenpen | LM Weight | CH Weight | Seed 2 | Seed 4 | Seed 6 | -|----|----|----|----|----|----|----| -| `big` | 0.84 | 0.64 | 0.56 | [big.seed2.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/ro_en/channel_models/big.seed2.pt) | [big.seed2.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/ro_en/channel_models/big.seed2.pt) | [big.seed2.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/ro_en/channel_models/big.seed2.pt) | -| `base_1_1` | 0.63 | 0.40 | 0.37 | [base_1_1.seed2.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/ro_en/channel_models/base_1_1.seed2.pt) | [base_1_1.seed4.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/ro_en/channel_models/base_1_1.seed4.pt) | [base_1_1.seed6.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/ro_en/channel_models/base_1_1.seed6.pt) | - -### Language Model -The model is trained on de-duplicated English Newscrawl data from 2007-2018 comprising 186 million sentences or 4.5B words after normalization and tokenization. -| | Path | -|----|----| -| `--lm-model` | [transformer_en_lm](https://dl.fbaipublicfiles.com/fast_noisy_channel/ro_en/lm_model/transformer_lm.pt) | -| `--lm-data` | [lm_data](https://dl.fbaipublicfiles.com/fast_noisy_channel/ro_en/lm_model/lm_dict) - -## German-English Translation - -### BPE Codes and Dictionaries - -| | Path| -|----------|------| -| Source BPE Code | [de_bpe_code_24K](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/de_bpe_code_24K) | -| Target BPE Code | [en_bpe_code_24K](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/en_bpe_code_24K) -| Source Dictionary | [de_dict](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/de_dict) | -| Target Dictionary | [en_dict](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/en_dict) | - -### Direct Models -We train on WMT’19 training data. Following [Ng et al., 2019](http://statmt.org/wmt19/pdf/53/WMT33.pdf), we apply language identification filtering and remove sentences longer than 250 tokens as well as sentence pairs with a source/target length ratio exceeding 1.5. This results in 26.8M sentence pairs. -We use the Transformer-Big architecture for the direct model. - -| Seed | Model | -|:----:|----| -| 4 | [de_en_seed4.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/direct_models/seed4.pt) -| 5 | [de_en_seed5.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/direct_models/seed5.pt) -| 6 | [de_en_seed6.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/direct_models/seed6.pt) - -### Channel Models - -We train on WMT’19 training data. Following [Ng et al., 2019](http://statmt.org/wmt19/pdf/53/WMT33.pdf), we apply language identification filtering and remove sentences longer than 250 tokens as well as sentence pairs with a source/target length ratio exceeding 1.5. This results in 26.8M sentence pairs. 
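The length and ratio filtering mentioned above can be sketched as a simple predicate. This is only an illustrative helper under the stated thresholds (250 tokens, source/target length ratio 1.5), not the exact preprocessing pipeline used to build these models, and it omits the language-identification step.

```python
# Illustrative filter for the training-data cleanup described above
# (250-token length cap, 1.5 source/target length-ratio cap).
# Not the exact pipeline used for these models; language-ID filtering is omitted.

def keep_pair(src_tokens, tgt_tokens, max_len=250, max_ratio=1.5):
    """Return True if a tokenized source/target pair passes the length filters."""
    if not src_tokens or not tgt_tokens:
        return False
    if len(src_tokens) > max_len or len(tgt_tokens) > max_len:
        return False
    longer = max(len(src_tokens), len(tgt_tokens))
    shorter = min(len(src_tokens), len(tgt_tokens))
    return longer / shorter <= max_ratio
```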
- -| Model Size | Seed 4 | Seed 5 | Seed 6 | -|----|----|----|----| -| `big` | [big.seed4.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/big.seed4.pt) | [big.seed5.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/big.seed5.pt) | [big.seed6.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/big.seed6.pt) | -| `big_1_1` | [big_1_1.seed4.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/big_1_1.seed4.pt) | [big_1_1.seed5.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/big_1_1.seed5.pt) | [big_1_1.seed6.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/big_1_1.seed6.pt) | -| `base` | [base.seed4.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/base.seed4.pt) | [base.seed5.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/base.seed5.pt) | [base.seed6.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/base.seed6.pt) | -| `base_1_1` | [base_1_1.seed4.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/base_1_1.seed4.pt) | [base_1_1.seed5.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/base_1_1.seed5.pt) | [base_1_1.seed6.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/base_1_1.seed6.pt) | -| `half` | [half.seed4.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/half.seed4.pt) | [half.seed5.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/half.seed5.pt) | [half.seed6.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/half.seed6.pt) | -| `half_1_1` | [half_1_1.seed4.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/half_1_1.seed4.pt) | [half_1_1.seed5.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/half_1_1.seed5.pt) | [half_1_1.seed6.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/half_1_1.seed6.pt) | -| `quarter` | [quarter.seed4.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/quarter.seed4.pt) | [quarter.seed5.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/quarter.seed5.pt) | [quarter.seed6.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/quarter.seed6.pt) | -| `quarter_1_1` | [quarter_1_1.seed4.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/quarter_1_1.seed4.pt) | [quarter_1_1.seed5.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/quarter_1_1.seed5.pt) | [quarter_1_1.seed6.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/quarter_1_1.seed6.pt) | -| `8th` | [8th.seed4.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/8th.seed4.pt) | [8th.seed5.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/8th.seed5.pt) | [8th.seed6.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/8th.seed6.pt) | -| `8th_1_1` | [8th_1_1.seed4.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/8th_1_1.seed4.pt) | [8th_1_1.seed5.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/8th_1_1.seed5.pt) | [8th_1_1.seed6.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/8th_1_1.seed6.pt) | -| `16th` | 
[16th.seed4.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/16th.seed4.pt) | [16th.seed5.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/16th.seed5.pt) | [16th.seed6.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/16th.seed6.pt) | -| `16th_1_1` | [16th_1_1.seed4.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/16th_1_1.seed4.pt) | [16th_1_1.seed5.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/16th_1_1.seed5.pt) | [16th_1_1.seed6.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/16th_1_1.seed6.pt) | - -### Language Model -The model is trained on de-duplicated English Newscrawl data from 2007-2018 comprising 186 million sentences or 4.5B words after normalization and tokenization. -| | Path | -|----|----| -| `--lm-model` | [transformer_en_lm](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/lm_model/transformer_lm.pt) | -| `--lm-data` | [lm_data](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/lm_model/lm_dict/) - - -## Citation - -```bibtex -@inproceedings{bhosale2020language, - title={Language Models not just for Pre-training: Fast Online Neural Noisy Channel Modeling}, - author={Shruti Bhosale and Kyra Yee and Sergey Edunov and Michael Auli}, - booktitle={Proceedings of the Fifth Conference on Machine Translation (WMT)}, - year={2020}, -} - -@inproceedings{yee2019simple, - title={Simple and Effective Noisy Channel Modeling for Neural Machine Translation}, - author={Yee, Kyra and Dauphin, Yann and Auli, Michael}, - booktitle={Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)}, - pages={5700--5705}, - year={2019} -} -``` diff --git a/spaces/kolibril13/tldraw-solara-test/README.md b/spaces/kolibril13/tldraw-solara-test/README.md deleted file mode 100644 index d3aad916bd961ee48e695c3e220d3a3f852ec4a5..0000000000000000000000000000000000000000 --- a/spaces/kolibril13/tldraw-solara-test/README.md +++ /dev/null @@ -1,25 +0,0 @@ ---- -title: Tldraw -emoji: 🚀 -colorFrom: green -colorTo: indigo -sdk: docker -pinned: false -license: mit -app_port: 8765 -duplicated_from: giswqs/solara-geospatial ---- - -## Introduction - -A collection of [Solara](https://github.com/widgetti/solara) web apps for geospatial applications - -Just a proof-of-concept for now. Not all features are working yet. More features will be added in the future. - -- Web App: -- GitHub: -- Hugging Face: - -## Demos - -![](https://i.imgur.com/4uIEnAJ.gif) \ No newline at end of file diff --git a/spaces/kukuhtw/AutoGPT/autogpt/memory/no_memory.py b/spaces/kukuhtw/AutoGPT/autogpt/memory/no_memory.py deleted file mode 100644 index 0371e96ae89f5eb88dae019a66351a229596ed7a..0000000000000000000000000000000000000000 --- a/spaces/kukuhtw/AutoGPT/autogpt/memory/no_memory.py +++ /dev/null @@ -1,73 +0,0 @@ -"""A class that does not store any data. This is the default memory provider.""" -from __future__ import annotations - -from typing import Any - -from autogpt.memory.base import MemoryProviderSingleton - - -class NoMemory(MemoryProviderSingleton): - """ - A class that does not store any data. This is the default memory provider. - """ - - def __init__(self, cfg): - """ - Initializes the NoMemory provider. - - Args: - cfg: The config object. - - Returns: None - """ - pass - - def add(self, data: str) -> str: - """ - Adds a data point to the memory. 
No action is taken in NoMemory. - - Args: - data: The data to add. - - Returns: An empty string. - """ - return "" - - def get(self, data: str) -> list[Any] | None: - """ - Gets the data from the memory that is most relevant to the given data. - NoMemory always returns None. - - Args: - data: The data to compare to. - - Returns: None - """ - return None - - def clear(self) -> str: - """ - Clears the memory. No action is taken in NoMemory. - - Returns: An empty string. - """ - return "" - - def get_relevant(self, data: str, num_relevant: int = 5) -> list[Any] | None: - """ - Returns all the data in the memory that is relevant to the given data. - NoMemory always returns None. - - Args: - data: The data to compare to. - num_relevant: The number of relevant data to return. - - Returns: None - """ - return None - - def get_stats(self): - """ - Returns: An empty dictionary as there are no stats in NoMemory. - """ - return {} diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/anyio/streams/stapled.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/anyio/streams/stapled.py deleted file mode 100644 index 1b2862e3eac2ae6f18212d312e7cc7c2acdf0c5c..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/anyio/streams/stapled.py +++ /dev/null @@ -1,140 +0,0 @@ -from __future__ import annotations - -from dataclasses import dataclass -from typing import Any, Callable, Generic, Mapping, Sequence, TypeVar - -from ..abc import ( - ByteReceiveStream, - ByteSendStream, - ByteStream, - Listener, - ObjectReceiveStream, - ObjectSendStream, - ObjectStream, - TaskGroup, -) - -T_Item = TypeVar("T_Item") -T_Stream = TypeVar("T_Stream") - - -@dataclass(eq=False) -class StapledByteStream(ByteStream): - """ - Combines two byte streams into a single, bidirectional byte stream. - - Extra attributes will be provided from both streams, with the receive stream providing the - values in case of a conflict. - - :param ByteSendStream send_stream: the sending byte stream - :param ByteReceiveStream receive_stream: the receiving byte stream - """ - - send_stream: ByteSendStream - receive_stream: ByteReceiveStream - - async def receive(self, max_bytes: int = 65536) -> bytes: - return await self.receive_stream.receive(max_bytes) - - async def send(self, item: bytes) -> None: - await self.send_stream.send(item) - - async def send_eof(self) -> None: - await self.send_stream.aclose() - - async def aclose(self) -> None: - await self.send_stream.aclose() - await self.receive_stream.aclose() - - @property - def extra_attributes(self) -> Mapping[Any, Callable[[], Any]]: - return { - **self.send_stream.extra_attributes, - **self.receive_stream.extra_attributes, - } - - -@dataclass(eq=False) -class StapledObjectStream(Generic[T_Item], ObjectStream[T_Item]): - """ - Combines two object streams into a single, bidirectional object stream. - - Extra attributes will be provided from both streams, with the receive stream providing the - values in case of a conflict. 
- - :param ObjectSendStream send_stream: the sending object stream - :param ObjectReceiveStream receive_stream: the receiving object stream - """ - - send_stream: ObjectSendStream[T_Item] - receive_stream: ObjectReceiveStream[T_Item] - - async def receive(self) -> T_Item: - return await self.receive_stream.receive() - - async def send(self, item: T_Item) -> None: - await self.send_stream.send(item) - - async def send_eof(self) -> None: - await self.send_stream.aclose() - - async def aclose(self) -> None: - await self.send_stream.aclose() - await self.receive_stream.aclose() - - @property - def extra_attributes(self) -> Mapping[Any, Callable[[], Any]]: - return { - **self.send_stream.extra_attributes, - **self.receive_stream.extra_attributes, - } - - -@dataclass(eq=False) -class MultiListener(Generic[T_Stream], Listener[T_Stream]): - """ - Combines multiple listeners into one, serving connections from all of them at once. - - Any MultiListeners in the given collection of listeners will have their listeners moved into - this one. - - Extra attributes are provided from each listener, with each successive listener overriding any - conflicting attributes from the previous one. - - :param listeners: listeners to serve - :type listeners: Sequence[Listener[T_Stream]] - """ - - listeners: Sequence[Listener[T_Stream]] - - def __post_init__(self) -> None: - listeners: list[Listener[T_Stream]] = [] - for listener in self.listeners: - if isinstance(listener, MultiListener): - listeners.extend(listener.listeners) - del listener.listeners[:] # type: ignore[attr-defined] - else: - listeners.append(listener) - - self.listeners = listeners - - async def serve( - self, handler: Callable[[T_Stream], Any], task_group: TaskGroup | None = None - ) -> None: - from .. import create_task_group - - async with create_task_group() as tg: - for listener in self.listeners: - tg.start_soon(listener.serve, handler, task_group) - - async def aclose(self) -> None: - for listener in self.listeners: - await listener.aclose() - - @property - def extra_attributes(self) -> Mapping[Any, Callable[[], Any]]: - attributes: dict = {} - for listener in self.listeners: - attributes.update(listener.extra_attributes) - - return attributes diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/dateutil/zoneinfo/rebuild.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/dateutil/zoneinfo/rebuild.py deleted file mode 100644 index 684c6586f091350c347f2b6150935f5214ffec27..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/dateutil/zoneinfo/rebuild.py +++ /dev/null @@ -1,75 +0,0 @@ -import logging -import os -import tempfile -import shutil -import json -from subprocess import check_call, check_output -from tarfile import TarFile - -from dateutil.zoneinfo import METADATA_FN, ZONEFILENAME - - -def rebuild(filename, tag=None, format="gz", zonegroups=[], metadata=None): - """Rebuild the internal timezone info in dateutil/zoneinfo/zoneinfo*tar* - - filename is the timezone tarball from ``ftp.iana.org/tz``. 
- - """ - tmpdir = tempfile.mkdtemp() - zonedir = os.path.join(tmpdir, "zoneinfo") - moduledir = os.path.dirname(__file__) - try: - with TarFile.open(filename) as tf: - for name in zonegroups: - tf.extract(name, tmpdir) - filepaths = [os.path.join(tmpdir, n) for n in zonegroups] - - _run_zic(zonedir, filepaths) - - # write metadata file - with open(os.path.join(zonedir, METADATA_FN), 'w') as f: - json.dump(metadata, f, indent=4, sort_keys=True) - target = os.path.join(moduledir, ZONEFILENAME) - with TarFile.open(target, "w:%s" % format) as tf: - for entry in os.listdir(zonedir): - entrypath = os.path.join(zonedir, entry) - tf.add(entrypath, entry) - finally: - shutil.rmtree(tmpdir) - - -def _run_zic(zonedir, filepaths): - """Calls the ``zic`` compiler in a compatible way to get a "fat" binary. - - Recent versions of ``zic`` default to ``-b slim``, while older versions - don't even have the ``-b`` option (but default to "fat" binaries). The - current version of dateutil does not support Version 2+ TZif files, which - causes problems when used in conjunction with "slim" binaries, so this - function is used to ensure that we always get a "fat" binary. - """ - - try: - help_text = check_output(["zic", "--help"]) - except OSError as e: - _print_on_nosuchfile(e) - raise - - if b"-b " in help_text: - bloat_args = ["-b", "fat"] - else: - bloat_args = [] - - check_call(["zic"] + bloat_args + ["-d", zonedir] + filepaths) - - -def _print_on_nosuchfile(e): - """Print helpful troubleshooting message - - e is an exception raised by subprocess.check_call() - - """ - if e.errno == 2: - logging.error( - "Could not find zic. Perhaps you need to install " - "libc-bin or some other package that provides it, " - "or it's not in your PATH?") diff --git a/spaces/leafShen/CodeFormer/CodeFormer/facelib/parsing/__init__.py b/spaces/leafShen/CodeFormer/CodeFormer/facelib/parsing/__init__.py deleted file mode 100644 index 72656e4b5f61df8cd0838588b0c6488fcc886e16..0000000000000000000000000000000000000000 --- a/spaces/leafShen/CodeFormer/CodeFormer/facelib/parsing/__init__.py +++ /dev/null @@ -1,23 +0,0 @@ -import torch - -from facelib.utils import load_file_from_url -from .bisenet import BiSeNet -from .parsenet import ParseNet - - -def init_parsing_model(model_name='bisenet', half=False, device='cuda'): - if model_name == 'bisenet': - model = BiSeNet(num_class=19) - model_url = 'https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/parsing_bisenet.pth' - elif model_name == 'parsenet': - model = ParseNet(in_size=512, out_size=512, parsing_ch=19) - model_url = 'https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/parsing_parsenet.pth' - else: - raise NotImplementedError(f'{model_name} is not implemented.') - - model_path = load_file_from_url(url=model_url, model_dir='weights/facelib', progress=True, file_name=None) - load_net = torch.load(model_path, map_location=lambda storage, loc: storage) - model.load_state_dict(load_net, strict=True) - model.eval() - model = model.to(device) - return model diff --git "a/spaces/leogabraneth/text-generation-webui-main/docs/02 \342\200\220 Default and Notebook Tabs.md" "b/spaces/leogabraneth/text-generation-webui-main/docs/02 \342\200\220 Default and Notebook Tabs.md" deleted file mode 100644 index c450635ec5a674bb2bfc578ad179b3e6fc45bef5..0000000000000000000000000000000000000000 --- "a/spaces/leogabraneth/text-generation-webui-main/docs/02 \342\200\220 Default and Notebook Tabs.md" +++ /dev/null @@ -1,35 +0,0 @@ -Used to generate raw completions starting from your 
prompt. - -## Default tab - -This tab contains two main text boxes: Input, where you enter your prompt, and Output, where the model output will appear. - -### Input - -The number on the lower right of the Input box counts the number of tokens in the input. It gets updated whenever you update the input text as long as a model is loaded (otherwise there is no tokenizer to count the tokens). - -Below the Input box, the following buttons can be found: - -* **Generate**: starts a new generation. -* **Stop**: stops an ongoing generation as soon as the next token is generated (which can take a while for a slow model). -* **Continue**: starts a new generation taking as input the text in the "Output" box. - -In the **Prompt** menu, you can select from some predefined prompts defined under `text-generation-webui/prompts`. The 💾 button saves your current input as a new prompt, the 🗑️ button deletes the selected prompt, and the 🔄 button refreshes the list. If you come up with an interesting prompt for a certain task, you are welcome to submit it to the repository. - -### Output - -Four tabs can be found: - -* **Raw**: where the raw text generated by the model appears. -* **Markdown**: it contains a "Render" button. You can click on it at any time to render the current output as markdown. This is particularly useful for models that generate LaTeX equations like GALACTICA. -* **HTML**: displays the output in an HTML style that is meant to be easier to read. Its style is defined under `text-generation-webui/css/html_readable_style.css`. -* **Logits**: when you click on "Get next token probabilities", this tab displays the 50 most likely next tokens and their probabilities based on your current input. If "Use samplers" is checked, the probabilities will be the ones after the sampling parameters in the "Parameters" > "Generation" tab are applied. Otherwise, they will be the raw probabilities generated by the model. -* **Tokens**: allows you to tokenize your prompt and see the ID numbers for the individuals tokens. - -## Notebook tab - -Precisely the same thing as the Default tab, with the difference that the output appears in the same text box as the input. - -It contains the following additional button: - -* **Regenerate**: uses your previous input for generation while discarding the last output. diff --git a/spaces/leogabraneth/text-generation-webui-main/extensions/perplexity_colors/script.py b/spaces/leogabraneth/text-generation-webui-main/extensions/perplexity_colors/script.py deleted file mode 100644 index 2a986ac40b4194a5751015241d82046ce95cbca2..0000000000000000000000000000000000000000 --- a/spaces/leogabraneth/text-generation-webui-main/extensions/perplexity_colors/script.py +++ /dev/null @@ -1,309 +0,0 @@ -import time - -import gradio -import numpy as np -import torch -from transformers import LogitsProcessor - -from modules import html_generator, shared - -params = { - 'active': True, - 'color_by_perplexity': False, - 'color_by_probability': False, - 'ppl_scale': 15.0, # No slider for this right now, because I don't think it really needs to be changed. Very large perplexity scores don't show up often. 
- 'probability_dropdown': False, - 'verbose': False # For debugging mostly -} - - -class PerplexityLogits(LogitsProcessor): - def __init__(self, verbose=False): - self.generated_token_ids = [] - self.selected_probs = [] - self.top_token_ids_list = [] - self.top_probs_list = [] - self.perplexities_list = [] - self.last_probs = None - self.verbose = verbose - - def __call__(self, input_ids, scores): - # t0 = time.time() - probs = torch.softmax(scores, dim=-1, dtype=torch.float) - log_probs = torch.nan_to_num(torch.log(probs)) # Note: This is to convert log(0) nan to 0, but probs*log_probs makes this 0 not affect the perplexity. - entropy = -torch.sum(probs * log_probs) - entropy = entropy.cpu().numpy() - perplexity = round(float(np.exp(entropy)), 4) - self.perplexities_list.append(perplexity) - last_token_id = int(input_ids[0][-1].cpu().numpy().item()) - # Store the generated tokens (not sure why this isn't accessible in the output endpoint!) - self.generated_token_ids.append(last_token_id) - # Get last probability, and add to the list if it wasn't there - if len(self.selected_probs) > 0: - # Is the selected token in the top tokens? - if self.verbose: - print('Probs: Token after', shared.tokenizer.decode(last_token_id)) - print('Probs:', [shared.tokenizer.decode(token_id) for token_id in self.top_token_ids_list[-1][0]]) - print('Probs:', [round(float(prob), 4) for prob in self.top_probs_list[-1][0]]) - if last_token_id in self.top_token_ids_list[-1][0]: - idx = self.top_token_ids_list[-1][0].index(last_token_id) - self.selected_probs.append(self.top_probs_list[-1][0][idx]) - else: - self.top_token_ids_list[-1][0].append(last_token_id) - last_prob = round(float(self.last_probs[last_token_id]), 4) - self.top_probs_list[-1][0].append(last_prob) - self.selected_probs.append(last_prob) - else: - self.selected_probs.append(1.0) # Placeholder for the last token of the prompt - - if self.verbose: - pplbar = "-" - if not np.isnan(perplexity): - pplbar = "*" * round(perplexity) - print(f"PPL: Token after {shared.tokenizer.decode(last_token_id)}\t{perplexity:.2f}\t{pplbar}") - - # Get top 5 probabilities - top_tokens_and_probs = torch.topk(probs, 5) - top_probs = top_tokens_and_probs.values.cpu().numpy().astype(float).tolist() - top_token_ids = top_tokens_and_probs.indices.cpu().numpy().astype(int).tolist() - - self.top_token_ids_list.append(top_token_ids) - self.top_probs_list.append(top_probs) - - probs = probs.cpu().numpy().flatten() - self.last_probs = probs # Need to keep this as a reference for top probs - - # t1 = time.time() - # print(f"PPL Processor: {(t1-t0):.3f} s") - # About 1 ms, though occasionally up to around 100 ms, not sure why... - # Doesn't actually modify the logits! 
- return scores - - -# Stores the perplexity and top probabilities -ppl_logits_processor = None - - -def logits_processor_modifier(logits_processor_list, input_ids): - global ppl_logits_processor - if params['active']: - ppl_logits_processor = PerplexityLogits(verbose=params['verbose']) - logits_processor_list.append(ppl_logits_processor) - - -def output_modifier(text): - global ppl_logits_processor - # t0 = time.time() - - if not params['active']: - return text - - # TODO: It's probably more efficient to do this above rather than modifying all these lists - # Remove last element of perplexities_list, top_token_ids_list, top_tokens_list, top_probs_list since everything is off by one because this extension runs before generation - perplexities = ppl_logits_processor.perplexities_list[:-1] - top_token_ids_list = ppl_logits_processor.top_token_ids_list[:-1] - top_tokens_list = [[shared.tokenizer.decode(token_id) for token_id in top_token_ids[0]] for top_token_ids in top_token_ids_list] - top_probs_list = ppl_logits_processor.top_probs_list[:-1] - # Remove first element of generated_token_ids, generated_tokens, selected_probs because they are for the last token of the prompt - gen_token_ids = ppl_logits_processor.generated_token_ids[1:] - gen_tokens = [shared.tokenizer.decode(token_id) for token_id in gen_token_ids] - sel_probs = ppl_logits_processor.selected_probs[1:] - - end_part = '' if params['probability_dropdown'] else '' # Helps with finding the index after replacing part of the text. - - i = 0 - for token, prob, ppl, top_tokens, top_probs in zip(gen_tokens, sel_probs, perplexities, top_tokens_list, top_probs_list): - color = 'ffffff' - if params['color_by_probability'] and params['color_by_perplexity']: - color = probability_perplexity_color_scale(prob, ppl) - elif params['color_by_perplexity']: - color = perplexity_color_scale(ppl) - elif params['color_by_probability']: - color = probability_color_scale(prob) - if token in text[i:]: - if params['probability_dropdown']: - text = text[:i] + text[i:].replace(token, add_dropdown_html(token, color, top_tokens, top_probs[0], ppl), 1) - else: - text = text[:i] + text[i:].replace(token, add_color_html(token, color), 1) - i += text[i:].find(end_part) + len(end_part) - - # Use full perplexity list for calculating the average here. 
- print('Average perplexity:', round(np.mean(ppl_logits_processor.perplexities_list[:-1]), 4)) - # t1 = time.time() - # print(f"Modifier: {(t1-t0):.3f} s") - # About 50 ms - return text - - -def probability_color_scale(prob): - ''' - Green-yellow-red color scale - ''' - - rv = 0 - gv = 0 - if prob <= 0.5: - rv = 'ff' - gv = hex(int(255 * prob * 2))[2:] - if len(gv) < 2: - gv = '0' * (2 - len(gv)) + gv - else: - rv = hex(int(255 - 255 * (prob - 0.5) * 2))[2:] - gv = 'ff' - if len(rv) < 2: - rv = '0' * (2 - len(rv)) + rv - - return rv + gv + '00' - - -def perplexity_color_scale(ppl): - ''' - Red component only, white for 0 perplexity (sorry if you're not in dark mode) - ''' - value = hex(max(int(255.0 - params['ppl_scale'] * (float(ppl) - 1.0)), 0))[2:] - if len(value) < 2: - value = '0' * (2 - len(value)) + value - - return 'ff' + value + value - - -def probability_perplexity_color_scale(prob, ppl): - ''' - Green-yellow-red for probability and blue component for perplexity - ''' - - rv = 0 - gv = 0 - bv = hex(min(max(int(params['ppl_scale'] * (float(ppl) - 1.0)), 0), 255))[2:] - if len(bv) < 2: - bv = '0' * (2 - len(bv)) + bv - - if prob <= 0.5: - rv = 'ff' - gv = hex(int(255 * prob * 2))[2:] - if len(gv) < 2: - gv = '0' * (2 - len(gv)) + gv - else: - rv = hex(int(255 - 255 * (prob - 0.5) * 2))[2:] - gv = 'ff' - if len(rv) < 2: - rv = '0' * (2 - len(rv)) + rv - - return rv + gv + bv - - -def add_color_html(token, color): - return f'{token}' - - -# TODO: Major issue: Applying this to too many tokens will cause a permanent slowdown in generation speed until the messages are removed from the history. -# I think the issue is from HTML elements taking up space in the visible history, and things like history deepcopy add latency proportional to the size of the history. -# Potential solution is maybe to modify the main generation code to send just the internal text and not the visible history, to avoid moving too much around. -# I wonder if we can also avoid using deepcopy here. -def add_dropdown_html(token, color, top_tokens, top_probs, perplexity=0): - html = f'
{token}
' - return html # About 750 characters per token... - - -def custom_css(): - return """ - .dropdown { - display: none; - position: absolute; - z-index: 50; - background-color: var(--block-background-fill); - box-shadow: 0px 8px 16px 0px rgba(0,0,0,0.2); - width: max-content; - overflow: visible; - padding: 5px; - border-radius: 10px; - border: 1px solid var(--border-color-primary); - } - - .dropdown-content { - border: none; - z-index: 50; - } - - .dropdown-content tr.selected { - background-color: var(--block-label-background-fill); - } - - .dropdown-content td { - color: var(--body-text-color); - } - - .hoverable { - color: var(--body-text-color); - position: relative; - display: inline-block; - overflow: visible; - font-size: 15px; - line-height: 1.75; - margin: 0; - padding: 0; - } - - .hoverable:hover .dropdown { - display: block; - } - - pre { - white-space: pre-wrap; - } - - # TODO: This makes the hover menus extend outside the bounds of the chat area, which is good. - # However, it also makes the scrollbar disappear, which is bad. - # The scroll bar needs to still be present. So for now, we can't see dropdowns that extend past the edge of the chat area. - #.chat { - # overflow-y: auto; - #} - """ - - -# Monkeypatch applied to html_generator.py -# We simply don't render markdown into HTML. We wrap everything in
<pre> tags to preserve whitespace
-# formatting. If you're coloring tokens by perplexity or probability, or especially if you're using
-# the probability dropdown, you probably care more about seeing the tokens the model actually outputted
-# rather than rendering ```code blocks``` or *italics*.
-def convert_to_markdown(string):
-    return '<pre>' + string + '</pre>
' - - -html_generator.convert_to_markdown = convert_to_markdown - - -def ui(): - def update_active_check(x): - params.update({'active': x}) - - def update_color_by_ppl_check(x): - params.update({'color_by_perplexity': x}) - - def update_color_by_prob_check(x): - params.update({'color_by_probability': x}) - - def update_prob_dropdown_check(x): - params.update({'probability_dropdown': x}) - - active_check = gradio.Checkbox(value=True, label="Compute probabilities and perplexity scores", info="Activate this extension. Note that this extension currently does not work with exllama or llama.cpp.") - color_by_ppl_check = gradio.Checkbox(value=False, label="Color by perplexity", info="Higher perplexity is more red. If also showing probability, higher perplexity has more blue component.") - color_by_prob_check = gradio.Checkbox(value=False, label="Color by probability", info="Green-yellow-red linear scale, with 100% green, 50% yellow, 0% red.") - prob_dropdown_check = gradio.Checkbox(value=False, label="Probability dropdown", info="Hover over a token to show a dropdown of top token probabilities. Currently slightly buggy with whitespace between tokens.") - - active_check.change(update_active_check, active_check, None) - color_by_ppl_check.change(update_color_by_ppl_check, color_by_ppl_check, None) - color_by_prob_check.change(update_color_by_prob_check, color_by_prob_check, None) - prob_dropdown_check.change(update_prob_dropdown_check, prob_dropdown_check, None) diff --git a/spaces/lewiswu1209/MockingBird/ppg_extractor/encoder/layer_norm.py b/spaces/lewiswu1209/MockingBird/ppg_extractor/encoder/layer_norm.py deleted file mode 100644 index db8be30ff70554edb179109037665e51c04510ec..0000000000000000000000000000000000000000 --- a/spaces/lewiswu1209/MockingBird/ppg_extractor/encoder/layer_norm.py +++ /dev/null @@ -1,33 +0,0 @@ -#!/usr/bin/env python3 -# -*- coding: utf-8 -*- - -# Copyright 2019 Shigeki Karita -# Apache 2.0 (http://www.apache.org/licenses/LICENSE-2.0) - -"""Layer normalization module.""" - -import torch - - -class LayerNorm(torch.nn.LayerNorm): - """Layer normalization module. - - :param int nout: output dim size - :param int dim: dimension to be normalized - """ - - def __init__(self, nout, dim=-1): - """Construct an LayerNorm object.""" - super(LayerNorm, self).__init__(nout, eps=1e-12) - self.dim = dim - - def forward(self, x): - """Apply layer normalization. - - :param torch.Tensor x: input tensor - :return: layer normalized tensor - :rtype torch.Tensor - """ - if self.dim == -1: - return super(LayerNorm, self).forward(x) - return super(LayerNorm, self).forward(x.transpose(1, -1)).transpose(1, -1) diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Camtasia 6.03 Serial Utorrent.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Camtasia 6.03 Serial Utorrent.md deleted file mode 100644 index 7a94ed1dd703591ea7a139688964967e2f69ec56..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Camtasia 6.03 Serial Utorrent.md +++ /dev/null @@ -1,169 +0,0 @@ - -

Camtasia 6.03 Serial Utorrent: How to Download and Activate Camtasia Studio 6.0.3 for Free

- -

Camtasia Studio 6.0.3 is a powerful screen recording software that allows you to create professional-looking videos by capturing your screen activity, editing it with various tools and effects, and sharing it with your audience. However, this software is not free and you need to purchase a license key to activate it and enjoy its full features.

- -

But what if you don't have the budget to buy it? Is there a way to get Camtasia 6.03 Serial Utorrent for free? The answer is yes, but you need to be careful. There are many websites that claim to offer Camtasia 6.03 Serial Utorrent for free, but some of them may contain viruses, malware, or spyware that can harm your computer or steal your personal information.

-

Camtasia 6.03 Serial Utorrent


DOWNLOAD === https://bytlly.com/2uGwb6



- -

In this article, we will show you how to find a reliable and safe source of Camtasia 6.03 Serial Utorrent that can help you download and activate Camtasia Studio 6.0.3 without any risk. We will also give you some tips and warnings that you should keep in mind before you download and activate Camtasia Studio 6.0.3 with Camtasia 6.03 Serial Utorrent.

- -

How to Find a Reliable and Safe Source of Camtasia 6.03 Serial Utorrent

- -

One of the easiest ways to find a reliable and safe source of Camtasia 6.03 Serial Utorrent is to use a torrent client, such as uTorrent or BitTorrent. A torrent client is a software that allows you to download files from other users who are sharing them on a peer-to-peer network.

- -

To use a torrent client, you need to follow these steps:

- -
    -
  1. Download and install a torrent client on your computer.
  2. -
  3. Go to a reputable torrent site, such as The Pirate Bay or Kickass Torrents, and search for "Camtasia 6.03 Serial Utorrent".
  4. -
  5. Select the torrent file that has the most seeders and leechers, which indicate the number of users who are sharing and downloading the file.
  6. -
  7. Download the torrent file and open it with your torrent client.
  8. -
  9. Wait for the download to complete.
  10. -
- -

Once you have downloaded Camtasia 6.03 Serial Utorrent using a torrent client, you will have a zip file that contains the setup file and the serial key for Camtasia Studio 6.0.3.

- -

How to Download and Activate Camtasia Studio 6.0.3 with Camtasia 6.03 Serial Utorrent

- -

To download and activate Camtasia Studio 6.0.3 with Camtasia 6.03 Serial Utorrent, you need to follow these steps:

- -
    -
  1. Extract the zip file that contains Camtasia 6.03 Serial Utorrent using a software like WinRAR or 7-Zip.
  2. -
  3. Run the setup file and follow the installation instructions.
  4. -
  5. When prompted, enter the serial key that is provided in the zip file.
  6. -
  7. Complete the installation and launch Camtasia Studio 6.0.3.
  8. -
  9. Enjoy your activated Camtasia Studio 6.0.3 for free!
  10. -
- -

Congratulations! You have successfully downloaded and activated Camtasia Studio 6.0.3 with Camtasia 6.03 Serial Utorrent.

-

- -

Tips and Warnings

- -

Before you download and activate Camtasia Studio 6.0.3 with Camtasia 6.03 Serial Utorrent, here are some tips and warnings that you should keep in mind:

- -
    -
  • Make sure that you have a good antivirus software installed on your computer and scan the zip file before opening it.
  • -
  • Be careful when visiting torrent sites and avoid clicking on suspicious links or ads that may redirect you to malicious websites or download unwanted programs.
  • -
  • Do not share your serial key with anyone else or use it on multiple devices, as this may cause your license to be revoked or blocked by TechSmith, the developer of Camtasia Studio.
  • -
  • Do not update your Camtasia Studio 6.0.3 after activating it with Camtasia 6.03 Serial Utorrent, as this may cause your activation to be invalidated or detected by TechSmith.
  • -
  • Use Camtasia Studio 6.0.3 for personal and educational purposes only and do not use it for commercial or illegal activities.
  • -
- -

We hope that this article has helped you download and activate Camtasia Studio 6.0.3 with Camtasia 6.03 Serial Utorrent for free.

- -

If you have any questions or feedback, please leave a comment below.

-

What are the Benefits of Camtasia Studio 6.0.3?

- -

Camtasia Studio 6.0.3 is one of the best screen recording software that you can use for various purposes, such as:

- -
    -
  • Creating video tutorials, presentations, demos, or courses.
  • -
  • Recording webinars, online meetings, or live streams.
  • -
  • Capturing gameplay, software reviews, or product demos.
  • -
  • Editing and enhancing your videos with transitions, annotations, animations, or audio effects.
  • -
  • Sharing your videos with your audience on YouTube, Vimeo, or other platforms.
  • -
- -

Camtasia Studio 6.0.3 has many features that make it easy and convenient to use, such as:

- -
    -
  • A user-friendly interface that lets you record and edit your videos in one place.
  • -
  • A smart capture tool that automatically detects the optimal settings for your screen resolution and audio quality.
  • -
  • A library of royalty-free music, sound effects, images, and icons that you can use in your videos.
  • -
  • A timeline that lets you arrange and edit your clips, add transitions, zoom effects, callouts, or captions.
  • -
  • A preview window that lets you see how your video looks before exporting it.
  • -
  • A batch production tool that lets you export multiple videos at once with different formats and settings.
  • -
- -

With Camtasia Studio 6.0.3, you can create stunning videos that will impress your audience and achieve your goals.

- -

What are the Risks of Camtasia 6.03 Serial Utorrent?

- -

While Camtasia 6.03 Serial Utorrent may seem like a tempting option to get Camtasia Studio 6.0.3 for free, it also comes with many risks that you should be aware of, such as:

- -
    -
  • Virus infection: The zip file that contains Camtasia 6.03 Serial Utorrent may be infected with viruses, malware, or spyware that can damage your computer or steal your personal information.
  • -
  • Licensing issues: The serial key that is provided in the zip file may be invalid, expired, or already used by someone else. This may cause your license to be revoked or blocked by TechSmith, the developer of Camtasia Studio.
  • -
  • Update issues: If you update your Camtasia Studio 6.0.3 after activating it with Camtasia 6.03 Serial Utorrent, you may lose your activation or get detected by TechSmith. This may result in losing access to your software or facing legal consequences.
  • -
  • Ethical issues: Using Camtasia 6.03 Serial Utorrent is illegal and unethical, as it violates the terms and conditions of TechSmith and infringes their intellectual property rights. You may also be depriving them of their rightful income and support.
  • -
- -

Therefore, we do not recommend using Camtasia 6.03 Serial Utorrent to download and activate Camtasia Studio 6.0.3 for free. Instead, we suggest that you purchase a legitimate license key from TechSmith or look for other alternatives that are legal and safe.

-

What are the Alternatives to Camtasia 6.03 Serial Utorrent?

- -

If you are looking for other ways to get Camtasia Studio 6.0.3 for free or for a lower price, you may want to consider these alternatives:

- -
    -
  • Trial version: You can download and use Camtasia Studio 6.0.3 for free for 30 days from the official website of TechSmith. This will give you access to all the features and functions of the software, but you will not be able to save or export your videos after the trial period expires.
  • -
  • Educational discount: If you are a student, teacher, or staff member of an educational institution, you can get a 40% discount on Camtasia Studio 6.0.3 from TechSmith. You will need to provide a valid proof of eligibility, such as an email address or an ID card, to get this offer.
  • -
  • Other screen recording software: There are many other screen recording software that you can use instead of Camtasia Studio 6.0.3, such as OBS Studio, ScreenFlow, Snagit, or Bandicam. Some of them are free, while others are cheaper than Camtasia Studio 6.0.3. However, they may not have all the features and functions that Camtasia Studio 6.0.3 offers.
  • -
- -

These alternatives may help you get Camtasia Studio 6.0.3 for free or for a lower price legally and safely.

- -

Conclusion

- -

Camtasia Studio 6.0.3 is a great screen recording software that can help you create professional-looking videos for various purposes. However, it is not a free software and you need to purchase a license key to activate it and enjoy its full features.

- -

Camtasia 6.03 Serial Utorrent is a way to get Camtasia Studio 6.0.3 for free by using a torrent client and a serial key that are provided by some websites. However, this method is risky and illegal, as it may expose your computer to viruses, malware, or spyware, cause your license to be revoked or blocked by TechSmith, or result in legal consequences.

- -

Therefore, we do not recommend using Camtasia 6.03 Serial Utorrent to download and activate Camtasia Studio 6.0.3 for free. Instead, we suggest that you purchase a legitimate license key from TechSmith or look for other alternatives that are legal and safe.

- -

We hope that this article has helped you understand what Camtasia 6.03 Serial Utorrent is and what are its benefits and risks.

- -

If you have any questions or feedback, please leave a comment below.

-

How to Use Camtasia Studio 6.0.3 to Create Professional-Looking Videos

- -

Once you have downloaded and activated Camtasia Studio 6.0.3 with Camtasia 6.03 Serial Utorrent, you can start using it to create professional-looking videos for various purposes.

- -

To use Camtasia Studio 6.0.3 to create videos, you need to follow these steps:

- -
    -
  1. Launch Camtasia Studio 6.0.3 and select "Record the screen" or "Import media" to capture your screen activity or import existing video files.
  2. -
  3. Edit your video clips on the timeline by trimming, splitting, cropping, or rotating them.
  4. -
  5. Add transitions, annotations, animations, or audio effects to enhance your video.
  6. -
  7. Preview your video on the preview window and make any adjustments as needed.
  8. -
  9. Export your video by selecting "Produce and share" and choosing the format and settings that suit your needs.
  10. -
- -

With Camtasia Studio 6.0.3, you can create stunning videos that will impress your audience and achieve your goals.

- -

How to Get Support and Updates for Camtasia Studio 6.0.3

- -

If you encounter any problems or issues with Camtasia Studio 6.0.3, you can get support and updates from TechSmith or from other sources.

- -

To get support and updates for Camtasia Studio 6.0.3, you can try these options:

- -
    -
  • TechSmith website: You can visit the official website of TechSmith and access their support center, where you can find FAQs, tutorials, manuals, forums, or contact their customer service.
  • -
  • TechSmith blog: You can visit the official blog of TechSmith and read their articles, tips, tricks, or news about Camtasia Studio and other products.
  • -
  • TechSmith YouTube channel: You can visit the official YouTube channel of TechSmith and watch their videos, webinars, demos, or reviews about Camtasia Studio and other products.
  • -
  • Other websites: You can visit other websites that offer support and updates for Camtasia Studio 6.0.3, such as Dmilpennyaschaf or Sostante, where you can find license keys, keygens, cracks, or patches for Camtasia Studio 6.0.3.
  • -
- -

However, as we mentioned before, we do not recommend using Camtasia 6.03 Serial Utorrent or other sources that offer illegal or unethical ways to get Camtasia Studio 6.0.3 for free. Instead, we suggest that you purchase a legitimate license key from TechSmith or look for other alternatives that are legal and safe.

- -

Conclusion

- -

Camtasia Studio 6.0.3 is a great screen recording software that can help you create professional-looking videos for various purposes. However, it is not a free software and you need to purchase a license key to activate it and enjoy its full features.

- -

Camtasia 6.03 Serial Utorrent is a way to get Camtasia Studio 6.0.3 for free by using a torrent client and a serial key that are provided by some websites. However, this method is risky and illegal, as it may expose your computer to viruses, malware, or spyware, cause your license to be revoked or blocked by TechSmith, or result in legal consequences.

- -

Therefore, we do not recommend using Camtasia 6.03 Serial Utorrent to download and activate Camtasia Studio 6.0.3 for free. Instead, we suggest that you purchase a legitimate license key from TechSmith or look for other alternatives that are legal and safe.

- -

We hope that this article has helped you understand what Camtasia 6.03 Serial Utorrent is and what are its benefits and risks.

- -

If you have any questions or feedback, please leave a comment below.

-

In conclusion, Camtasia Studio 6.0.3 is a powerful screen recording software that can help you create professional-looking videos for various purposes. However, it is not a free software and you need to purchase a license key to activate it and enjoy its full features.

- -

Camtasia 6.03 Serial Utorrent is a way to get Camtasia Studio 6.0.3 for free by using a torrent client and a serial key that are provided by some websites. However, this method is risky and illegal, as it may expose your computer to viruses, malware, or spyware, cause your license to be revoked or blocked by TechSmith, or result in legal consequences.

- -

Therefore, we do not recommend using Camtasia 6.03 Serial Utorrent to download and activate Camtasia Studio 6.0.3 for free. Instead, we suggest that you purchase a legitimate license key from TechSmith or look for other alternatives that are legal and safe.

- -

We hope that this article has helped you understand what Camtasia 6.03 Serial Utorrent is and what are its benefits and risks.

- -

If you have any questions or feedback, please leave a comment below.

3cee63e6c2
-
-
\ No newline at end of file diff --git a/spaces/lindeberg/whisper-webui/app-network.py b/spaces/lindeberg/whisper-webui/app-network.py deleted file mode 100644 index 7605c4b126dfc7dac188dce38551ca8ae84d67db..0000000000000000000000000000000000000000 --- a/spaces/lindeberg/whisper-webui/app-network.py +++ /dev/null @@ -1,3 +0,0 @@ -# Run the app with no audio file restrictions, and make it available on the network -from app import create_ui -create_ui(-1, server_name="0.0.0.0") \ No newline at end of file diff --git a/spaces/lindeberg/whisper-webui/docs/colab.md b/spaces/lindeberg/whisper-webui/docs/colab.md deleted file mode 100644 index 3fcdb835327238764fb643b9bbd2e27b6e14f58c..0000000000000000000000000000000000000000 --- a/spaces/lindeberg/whisper-webui/docs/colab.md +++ /dev/null @@ -1,20 +0,0 @@ -# Running Whisper on Google Colab - -If you don't have a decent GPU or any experience in running command-line applications, you might want to try this Google Colab instead: - -* [Google Colab - Whisper WebUI GPU](https://colab.research.google.com/drive/1qeTSvi7Bt_5RMm88ipW4fkcsMOKlDDss?usp=sharing) -* [Screenshots](https://imgur.com/a/ZfY6uBO) - -The runtime (Runtime -> Change runtime type -> Hardware accelerator) should already be set top GPU. But if not, change it to GPU. - -Then, sign in to Google if you haven't already. Next, click on "Connect" at the top right. - -Under "Checking out WebUI from Git", click on the [play icon](https://imgur.com/a/81gOLyD) that appears in "[ ]" at the left. If you get a warning, click "Run anyway". - -After this step has completed, it should be get a green check mark. Then move on to the next section under "Installing dependencies", and click in "[ ]" again. This might take approximately 30 seconds. - -Once this has completed, scroll down to the "Run WebUI" section, and click on "[ ]". This will launch the WebUI in a shared link (expires in 72 hours). To open the UI, click on the link next to "Running on public URL", which will be something like https://12xxx.gradio.app/ - -The audio length in this version is not restricted, and it will run much faster as it is backed by a GPU. You can also run it using the "Large" model. Also note that it might take some time to start the model the first time, as it may need to download a 2.8 GB file on Google's servers. - -Once you're done, you can close the WebUI session by clicking the animated close button under "Run WebUI". You can also do this if you encounter any errors and need to restart the UI. You should also go to "Manage Sessions" and terminate the session, otherwise you may end up using all your free compute credits. \ No newline at end of file diff --git a/spaces/lint/anime_controlnet/README.md b/spaces/lint/anime_controlnet/README.md deleted file mode 100644 index ac449be86f9f8c68806b9f2eb36121cc9668bc6f..0000000000000000000000000000000000000000 --- a/spaces/lint/anime_controlnet/README.md +++ /dev/null @@ -1,76 +0,0 @@ ---- -title: Style ControlNet -emoji: ❅ -colorFrom: gray -colorTo: green -sdk: gradio -sdk_version: 3.30.0 -app_file: app.py -pinned: True -license: openrail ---- - -# ControlStyle -Proof of concept for controlling Stable Diffusion image style using a ControlNet. 
- -| ![](./examples/blue_eyes.gif) | ![](./examples/blue_eyes.png) | -| ------------- | ------------- | - -`prompt`: "beautiful woman with blue eyes", `controlnet_prompt`: "1girl, blue eyes" - -| ![](./examples/mountains.gif) | ![](./examples/mountains.png) | -| ------------- | ------------- | - -`prompt` and `controlnet_prompt`: "best quality, masterpiece, Dark hair, dark eyes, upper body, sun flare, outdoors, mountain, valley, sky. clouds, smiling" - -`controlnet_conditioning_scale` increments by 0.1 from 0 to 1, left to right. - - -## Try Style Controlnet with A1111 WebUI - -![](./examples/zerohint_grid.png) -![](./examples/hint_grid.png) -### Quick start: download the anime controlnets [here](https://huggingface.co/lint/anime_control/tree/main), - -Root folder has controlnets in Diffusers format, A1111_weights has controlnets for use with [A1111 Webui Controlnet Extension](https://github.com/Mikubill/sd-webui-controlnet). More details at the [HF repo page](https://huggingface.co/lint/anime_control). - -## Quick Start Training - -For a basic training example with HF Accelerate, run the following -``` -pip install -r requirements.txt -python quickstart_train.py -``` -By default, the script will download pipeline weights and an image dataset from HF Hub. -The base stable diffusion checkpoint and controlnet weights can either be in HF diffusers format or the original stable diffusion pytorch-lightning format (inferred based on whether destination is file or not) - -Use the `convert_state_dict.sh` to convert the trained controlnet state dict from `diffusers` format to one compatible with the [A1111 controlnet extension](https://github.com/Mikubill/sd-webui-controlnet) - -## Style Controlnet Web UI - -Launch the Web UI locally with -``` -python app.py -``` - -(My Hf Spaces below are currently out of date, I will fix them soon once I have time) - -Try the WebUI hosted on HF Spaces at https://huggingface.co/spaces/lint/anime_controlnet -![](./examples/controlstyle_ui.png) - - -WebUI also supports basic training -![](./examples/training_ui.png) - - -## ControlNet for Style - -Lvmin introduced the [Controlnet](https://github.com/lllyasviel/ControlNet) to use a cloned Stable Diffusion UNet to introduce external conditioning, such as body poses/sketch lines, to guide Stable Diffusion generation with fantastic results. - -I thought his approach might also work for introducing different styles (i.e. add anime style), in guiding the image generation process. Unlike the original controlnets, I initialized the controlnet weights from a distinct UNet (`andite/anything-v4.5`), and predominantly trained without any controlnet conditioning image on a synthetic anime dataset (`lint/anybooru`) distinct from the base model. Then the main controlnet weights were frozen, the input hint block weights added back in and trained on the same dataset using canny image processing to generate the controlnet conditioning image. - -I originally trained the anime style controlnets without any controlnet conditioning image, so that the controlnet would focus on adding anime style rather than structure to the image. I have these weights saved at https://huggingface.co/lint/anime_styler/tree/main/A1111_webui_weights, however they need to be used with my [fork](https://github.com/1lint/sd-webui-controlnet) of the controlnet extension, which has very minor changes allow the user to load the controlnet without the input hint block weights, and pass None as a valid controlnet "conditioning". 
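For the Diffusers-format weights linked above, loading follows the standard `diffusers` ControlNet pattern. The sketch below is illustrative only: the repo id, file layout, conditioning image and prompt are assumptions rather than a tested recipe.

```python
# Illustrative only: load a Diffusers-format style controlnet and run it like a
# regular ControlNet. Repo id, layout and the canny hint image are assumptions.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "lint/anime_control",  # adjust to the actual repo/subfolder layout
    torch_dtype=torch.float16,
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # any SD 1.x base checkpoint
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

canny_hint = load_image("canny_hint.png")  # precomputed canny edge map (placeholder)
image = pipe(
    "best quality, masterpiece, 1girl, blue eyes",
    image=canny_hint,
    controlnet_conditioning_scale=0.5,
    num_inference_steps=30,
).images[0]
image.save("anime_control_sample.png")
```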
- -Recently I added back in the input hint processing module, and trained only the controlnet input hint blocks on canny image generation. So the models in this repository are now just like regular controlnets, except for having a different initialization and training process. They can be used just like a regular controlnet, but the vast majority of the weights were trained on adding anime style, with just the input hint blocks trained on using the controlnet conditioning image. Though it seems to work alright from my limited testing so far, expect the canny image guidance to be weak so combine with original canny image controlnet as needed. - -Since the main controlnet weights were trained without any canny image conditioning, they can (and were intended to be) used without any controlnet conditioning image. However the existing A1111 Controlnet Extension expects the user to always pass a controlnet conditioning image, otherwise it will trigger an error. However you can pass a black square as the "conditioning image", which will add some unexpected random noise to the image due to the input hint block `bias` weights, however the noise is small enough that the controlnet still appears to "work". diff --git a/spaces/ljjggr/bingo/src/components/chat-notification.tsx b/spaces/ljjggr/bingo/src/components/chat-notification.tsx deleted file mode 100644 index 4be24d0f1755c8058698cfa66c736d8d4792475a..0000000000000000000000000000000000000000 --- a/spaces/ljjggr/bingo/src/components/chat-notification.tsx +++ /dev/null @@ -1,77 +0,0 @@ -import { useEffect } from 'react' -import Image from 'next/image' - -import IconWarning from '@/assets/images/warning.svg' -import { ChatError, ErrorCode, ChatMessageModel } from '@/lib/bots/bing/types' -import { ExternalLink } from './external-link' -import { useBing } from '@/lib/hooks/use-bing' - -export interface ChatNotificationProps extends Pick, 'bot'> { - message?: ChatMessageModel -} - -function getAction(error: ChatError, reset: () => void) { - if (error.code === ErrorCode.THROTTLE_LIMIT) { - reset() - return ( -
- 你已达到每日最大发送消息次数,请更换账号或隔一天后重试 -
- ) - } - if (error.code === ErrorCode.BING_FORBIDDEN) { - return ( - - 你的账号已在黑名单,请尝试更换账号及申请解封 - - ) - } - if (error.code === ErrorCode.CONVERSATION_LIMIT) { - return ( -
- 当前话题已中止,请点 - 重新开始 - 开启新的对话 -
- ) - } - if (error.code === ErrorCode.BING_CAPTCHA) { - return ( - - 点击通过人机验证 - - ) - } - if (error.code === ErrorCode.BING_UNAUTHORIZED) { - reset() - return ( - 没有获取到身份信息或身份信息失效,点此重新设置 - ) - } - return error.message -} - -export function ChatNotification({ message, bot }: ChatNotificationProps) { - useEffect(() => { - window.scrollBy(0, 2000) - }, [message]) - - if (!message?.error) return - - return ( -
-
-
-
-
- error - {getAction(message.error, () => bot.resetConversation())} -
-
-
-
-
- ) -} diff --git a/spaces/lnyan/stablediffusion-infinity/PyPatchMatch/csrc/nnf.h b/spaces/lnyan/stablediffusion-infinity/PyPatchMatch/csrc/nnf.h deleted file mode 100644 index b5c144a4a58649906c9c87a40044b5118a00aa04..0000000000000000000000000000000000000000 --- a/spaces/lnyan/stablediffusion-infinity/PyPatchMatch/csrc/nnf.h +++ /dev/null @@ -1,133 +0,0 @@ -#pragma once - -#include -#include "masked_image.h" - -class PatchDistanceMetric { -public: - PatchDistanceMetric(int patch_size) : m_patch_size(patch_size) {} - virtual ~PatchDistanceMetric() = default; - - inline int patch_size() const { return m_patch_size; } - virtual int operator()(const MaskedImage &source, int source_y, int source_x, const MaskedImage &target, int target_y, int target_x) const = 0; - static const int kDistanceScale; - -protected: - int m_patch_size; -}; - -class NearestNeighborField { -public: - NearestNeighborField() : m_source(), m_target(), m_field(), m_distance_metric(nullptr) { - // pass - } - NearestNeighborField(const MaskedImage &source, const MaskedImage &target, const PatchDistanceMetric *metric, int max_retry = 20) - : m_source(source), m_target(target), m_distance_metric(metric) { - m_field = cv::Mat(m_source.size(), CV_32SC3); - _randomize_field(max_retry); - } - NearestNeighborField(const MaskedImage &source, const MaskedImage &target, const PatchDistanceMetric *metric, const NearestNeighborField &other, int max_retry = 20) - : m_source(source), m_target(target), m_distance_metric(metric) { - m_field = cv::Mat(m_source.size(), CV_32SC3); - _initialize_field_from(other, max_retry); - } - - const MaskedImage &source() const { - return m_source; - } - const MaskedImage &target() const { - return m_target; - } - inline cv::Size source_size() const { - return m_source.size(); - } - inline cv::Size target_size() const { - return m_target.size(); - } - inline void set_source(const MaskedImage &source) { - m_source = source; - } - inline void set_target(const MaskedImage &target) { - m_target = target; - } - - inline int *mutable_ptr(int y, int x) { - return m_field.ptr(y, x); - } - inline const int *ptr(int y, int x) const { - return m_field.ptr(y, x); - } - - inline int at(int y, int x, int c) const { - return m_field.ptr(y, x)[c]; - } - inline int &at(int y, int x, int c) { - return m_field.ptr(y, x)[c]; - } - inline void set_identity(int y, int x) { - auto ptr = mutable_ptr(y, x); - ptr[0] = y, ptr[1] = x, ptr[2] = 0; - } - - void minimize(int nr_pass); - -private: - inline int _distance(int source_y, int source_x, int target_y, int target_x) { - return (*m_distance_metric)(m_source, source_y, source_x, m_target, target_y, target_x); - } - - void _randomize_field(int max_retry = 20, bool reset = true); - void _initialize_field_from(const NearestNeighborField &other, int max_retry); - void _minimize_link(int y, int x, int direction); - - MaskedImage m_source; - MaskedImage m_target; - cv::Mat m_field; // { y_target, x_target, distance_scaled } - const PatchDistanceMetric *m_distance_metric; -}; - - -class PatchSSDDistanceMetric : public PatchDistanceMetric { -public: - using PatchDistanceMetric::PatchDistanceMetric; - virtual int operator ()(const MaskedImage &source, int source_y, int source_x, const MaskedImage &target, int target_y, int target_x) const; - static const int kSSDScale; -}; - -class DebugPatchSSDDistanceMetric : public PatchDistanceMetric { -public: - DebugPatchSSDDistanceMetric(int patch_size, int width, int height) : PatchDistanceMetric(patch_size), m_width(width), m_height(height) {} 
- virtual int operator ()(const MaskedImage &source, int source_y, int source_x, const MaskedImage &target, int target_y, int target_x) const; -protected: - int m_width, m_height; -}; - -class RegularityGuidedPatchDistanceMetricV1 : public PatchDistanceMetric { -public: - RegularityGuidedPatchDistanceMetricV1(int patch_size, double dx1, double dy1, double dx2, double dy2, double weight) - : PatchDistanceMetric(patch_size), m_dx1(dx1), m_dy1(dy1), m_dx2(dx2), m_dy2(dy2), m_weight(weight) { - - assert(m_dy1 == 0); - assert(m_dx2 == 0); - m_scale = sqrt(m_dx1 * m_dx1 + m_dy2 * m_dy2) / 4; - } - virtual int operator ()(const MaskedImage &source, int source_y, int source_x, const MaskedImage &target, int target_y, int target_x) const; - -protected: - double m_dx1, m_dy1, m_dx2, m_dy2; - double m_scale, m_weight; -}; - -class RegularityGuidedPatchDistanceMetricV2 : public PatchDistanceMetric { -public: - RegularityGuidedPatchDistanceMetricV2(int patch_size, cv::Mat ijmap, double weight) - : PatchDistanceMetric(patch_size), m_ijmap(ijmap), m_weight(weight) { - - } - virtual int operator ()(const MaskedImage &source, int source_y, int source_x, const MaskedImage &target, int target_y, int target_x) const; - -protected: - cv::Mat m_ijmap; - double m_width, m_height, m_weight; -}; - diff --git a/spaces/ltg/chat-nort5/configuration_nort5.py b/spaces/ltg/chat-nort5/configuration_nort5.py deleted file mode 100644 index 60ef5248830d411ce56c84735afe234de2d70d49..0000000000000000000000000000000000000000 --- a/spaces/ltg/chat-nort5/configuration_nort5.py +++ /dev/null @@ -1,44 +0,0 @@ -from transformers.configuration_utils import PretrainedConfig - - -class NorT5Config(PretrainedConfig): - """Configuration class to store the configuration of a `NorT5`. - """ - def __init__( - self, - vocab_size=50000, - attention_probs_dropout_prob=0.1, - hidden_dropout_prob=0.1, - hidden_size=768, - intermediate_size=2048, - max_position_embeddings=512, - position_bucket_size=32, - num_attention_heads=12, - num_hidden_layers=12, - layer_norm_eps=1.0e-7, - output_all_encoded_layers=True, - pad_token_id=3, - cls_token_id=1, - sep_token_id=2, - bos_token_id=5, - eos_token_id=6, - **kwargs, - ): - super().__init__(**kwargs) - - self.vocab_size = vocab_size - self.hidden_size = hidden_size - self.num_hidden_layers = num_hidden_layers - self.num_attention_heads = num_attention_heads - self.intermediate_size = intermediate_size - self.hidden_dropout_prob = hidden_dropout_prob - self.attention_probs_dropout_prob = attention_probs_dropout_prob - self.max_position_embeddings = max_position_embeddings - self.output_all_encoded_layers = output_all_encoded_layers - self.position_bucket_size = position_bucket_size - self.layer_norm_eps = layer_norm_eps - self.pad_token_id = pad_token_id - self.cls_token_id = cls_token_id - self.sep_token_id = sep_token_id - self.bos_token_id = bos_token_id - self.eos_token_id = eos_token_id diff --git a/spaces/luckwill/chiakicc/text/__init__.py b/spaces/luckwill/chiakicc/text/__init__.py deleted file mode 100644 index 4e69c354dd24e3243980236eca962cd5945a92fc..0000000000000000000000000000000000000000 --- a/spaces/luckwill/chiakicc/text/__init__.py +++ /dev/null @@ -1,32 +0,0 @@ -""" from https://github.com/keithito/tacotron """ -from text import cleaners - - -def text_to_sequence(text, symbols, cleaner_names): - '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text. 
- Args: - text: string to convert to a sequence - cleaner_names: names of the cleaner functions to run the text through - Returns: - List of integers corresponding to the symbols in the text - ''' - _symbol_to_id = {s: i for i, s in enumerate(symbols)} - - sequence = [] - - clean_text = _clean_text(text, cleaner_names) - for symbol in clean_text: - if symbol not in _symbol_to_id.keys(): - continue - symbol_id = _symbol_to_id[symbol] - sequence += [symbol_id] - return sequence - - -def _clean_text(text, cleaner_names): - for name in cleaner_names: - cleaner = getattr(cleaners, name) - if not cleaner: - raise Exception('Unknown cleaner: %s' % name) - text = cleaner(text) - return text diff --git a/spaces/lusea/Voice-Cloning-for-Bilibili/README.md b/spaces/lusea/Voice-Cloning-for-Bilibili/README.md deleted file mode 100644 index 88a96d0c588bd4ebe1fcf8227fb782b1e0058a1c..0000000000000000000000000000000000000000 --- a/spaces/lusea/Voice-Cloning-for-Bilibili/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Voice Cloning -emoji: 😻 -colorFrom: blue -colorTo: yellow -sdk: gradio -sdk_version: 3.27.0 -app_file: app.py -pinned: false -license: mit -duplicated_from: kevinwang676/Voice-Cloning-for-Bilibili ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ma-xu/LIVE/pybind11/tests/test_factory_constructors.cpp b/spaces/ma-xu/LIVE/pybind11/tests/test_factory_constructors.cpp deleted file mode 100644 index 61cf33d16ed404563a3da803a4c2ecea4453a3b4..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/pybind11/tests/test_factory_constructors.cpp +++ /dev/null @@ -1,342 +0,0 @@ -/* - tests/test_factory_constructors.cpp -- tests construction from a factory function - via py::init_factory() - - Copyright (c) 2017 Jason Rhinelander - - All rights reserved. Use of this source code is governed by a - BSD-style license that can be found in the LICENSE file. 
-*/ - -#include "pybind11_tests.h" -#include "constructor_stats.h" -#include - -// Classes for testing python construction via C++ factory function: -// Not publicly constructible, copyable, or movable: -class TestFactory1 { - friend class TestFactoryHelper; - TestFactory1() : value("(empty)") { print_default_created(this); } - TestFactory1(int v) : value(std::to_string(v)) { print_created(this, value); } - TestFactory1(std::string v) : value(std::move(v)) { print_created(this, value); } - TestFactory1(TestFactory1 &&) = delete; - TestFactory1(const TestFactory1 &) = delete; - TestFactory1 &operator=(TestFactory1 &&) = delete; - TestFactory1 &operator=(const TestFactory1 &) = delete; -public: - std::string value; - ~TestFactory1() { print_destroyed(this); } -}; -// Non-public construction, but moveable: -class TestFactory2 { - friend class TestFactoryHelper; - TestFactory2() : value("(empty2)") { print_default_created(this); } - TestFactory2(int v) : value(std::to_string(v)) { print_created(this, value); } - TestFactory2(std::string v) : value(std::move(v)) { print_created(this, value); } -public: - TestFactory2(TestFactory2 &&m) { value = std::move(m.value); print_move_created(this); } - TestFactory2 &operator=(TestFactory2 &&m) { value = std::move(m.value); print_move_assigned(this); return *this; } - std::string value; - ~TestFactory2() { print_destroyed(this); } -}; -// Mixed direct/factory construction: -class TestFactory3 { -protected: - friend class TestFactoryHelper; - TestFactory3() : value("(empty3)") { print_default_created(this); } - TestFactory3(int v) : value(std::to_string(v)) { print_created(this, value); } -public: - TestFactory3(std::string v) : value(std::move(v)) { print_created(this, value); } - TestFactory3(TestFactory3 &&m) { value = std::move(m.value); print_move_created(this); } - TestFactory3 &operator=(TestFactory3 &&m) { value = std::move(m.value); print_move_assigned(this); return *this; } - std::string value; - virtual ~TestFactory3() { print_destroyed(this); } -}; -// Inheritance test -class TestFactory4 : public TestFactory3 { -public: - TestFactory4() : TestFactory3() { print_default_created(this); } - TestFactory4(int v) : TestFactory3(v) { print_created(this, v); } - virtual ~TestFactory4() { print_destroyed(this); } -}; -// Another class for an invalid downcast test -class TestFactory5 : public TestFactory3 { -public: - TestFactory5(int i) : TestFactory3(i) { print_created(this, i); } - virtual ~TestFactory5() { print_destroyed(this); } -}; - -class TestFactory6 { -protected: - int value; - bool alias = false; -public: - TestFactory6(int i) : value{i} { print_created(this, i); } - TestFactory6(TestFactory6 &&f) { print_move_created(this); value = f.value; alias = f.alias; } - TestFactory6(const TestFactory6 &f) { print_copy_created(this); value = f.value; alias = f.alias; } - virtual ~TestFactory6() { print_destroyed(this); } - virtual int get() { return value; } - bool has_alias() { return alias; } -}; -class PyTF6 : public TestFactory6 { -public: - // Special constructor that allows the factory to construct a PyTF6 from a TestFactory6 only - // when an alias is needed: - PyTF6(TestFactory6 &&base) : TestFactory6(std::move(base)) { alias = true; print_created(this, "move", value); } - PyTF6(int i) : TestFactory6(i) { alias = true; print_created(this, i); } - PyTF6(PyTF6 &&f) : TestFactory6(std::move(f)) { print_move_created(this); } - PyTF6(const PyTF6 &f) : TestFactory6(f) { print_copy_created(this); } - PyTF6(std::string s) : TestFactory6((int) 
s.size()) { alias = true; print_created(this, s); } - virtual ~PyTF6() { print_destroyed(this); } - int get() override { PYBIND11_OVERLOAD(int, TestFactory6, get, /*no args*/); } -}; - -class TestFactory7 { -protected: - int value; - bool alias = false; -public: - TestFactory7(int i) : value{i} { print_created(this, i); } - TestFactory7(TestFactory7 &&f) { print_move_created(this); value = f.value; alias = f.alias; } - TestFactory7(const TestFactory7 &f) { print_copy_created(this); value = f.value; alias = f.alias; } - virtual ~TestFactory7() { print_destroyed(this); } - virtual int get() { return value; } - bool has_alias() { return alias; } -}; -class PyTF7 : public TestFactory7 { -public: - PyTF7(int i) : TestFactory7(i) { alias = true; print_created(this, i); } - PyTF7(PyTF7 &&f) : TestFactory7(std::move(f)) { print_move_created(this); } - PyTF7(const PyTF7 &f) : TestFactory7(f) { print_copy_created(this); } - virtual ~PyTF7() { print_destroyed(this); } - int get() override { PYBIND11_OVERLOAD(int, TestFactory7, get, /*no args*/); } -}; - - -class TestFactoryHelper { -public: - // Non-movable, non-copyable type: - // Return via pointer: - static TestFactory1 *construct1() { return new TestFactory1(); } - // Holder: - static std::unique_ptr construct1(int a) { return std::unique_ptr(new TestFactory1(a)); } - // pointer again - static TestFactory1 *construct1_string(std::string a) { return new TestFactory1(a); } - - // Moveable type: - // pointer: - static TestFactory2 *construct2() { return new TestFactory2(); } - // holder: - static std::unique_ptr construct2(int a) { return std::unique_ptr(new TestFactory2(a)); } - // by value moving: - static TestFactory2 construct2(std::string a) { return TestFactory2(a); } - - // shared_ptr holder type: - // pointer: - static TestFactory3 *construct3() { return new TestFactory3(); } - // holder: - static std::shared_ptr construct3(int a) { return std::shared_ptr(new TestFactory3(a)); } -}; - -TEST_SUBMODULE(factory_constructors, m) { - - // Define various trivial types to allow simpler overload resolution: - py::module m_tag = m.def_submodule("tag"); -#define MAKE_TAG_TYPE(Name) \ - struct Name##_tag {}; \ - py::class_(m_tag, #Name "_tag").def(py::init<>()); \ - m_tag.attr(#Name) = py::cast(Name##_tag{}) - MAKE_TAG_TYPE(pointer); - MAKE_TAG_TYPE(unique_ptr); - MAKE_TAG_TYPE(move); - MAKE_TAG_TYPE(shared_ptr); - MAKE_TAG_TYPE(derived); - MAKE_TAG_TYPE(TF4); - MAKE_TAG_TYPE(TF5); - MAKE_TAG_TYPE(null_ptr); - MAKE_TAG_TYPE(null_unique_ptr); - MAKE_TAG_TYPE(null_shared_ptr); - MAKE_TAG_TYPE(base); - MAKE_TAG_TYPE(invalid_base); - MAKE_TAG_TYPE(alias); - MAKE_TAG_TYPE(unaliasable); - MAKE_TAG_TYPE(mixed); - - // test_init_factory_basic, test_bad_type - py::class_(m, "TestFactory1") - .def(py::init([](unique_ptr_tag, int v) { return TestFactoryHelper::construct1(v); })) - .def(py::init(&TestFactoryHelper::construct1_string)) // raw function pointer - .def(py::init([](pointer_tag) { return TestFactoryHelper::construct1(); })) - .def(py::init([](py::handle, int v, py::handle) { return TestFactoryHelper::construct1(v); })) - .def_readwrite("value", &TestFactory1::value) - ; - py::class_(m, "TestFactory2") - .def(py::init([](pointer_tag, int v) { return TestFactoryHelper::construct2(v); })) - .def(py::init([](unique_ptr_tag, std::string v) { return TestFactoryHelper::construct2(v); })) - .def(py::init([](move_tag) { return TestFactoryHelper::construct2(); })) - .def_readwrite("value", &TestFactory2::value) - ; - - // Stateful & reused: - int c = 1; - auto 
c4a = [c](pointer_tag, TF4_tag, int a) { (void) c; return new TestFactory4(a);}; - - // test_init_factory_basic, test_init_factory_casting - py::class_>(m, "TestFactory3") - .def(py::init([](pointer_tag, int v) { return TestFactoryHelper::construct3(v); })) - .def(py::init([](shared_ptr_tag) { return TestFactoryHelper::construct3(); })) - .def("__init__", [](TestFactory3 &self, std::string v) { new (&self) TestFactory3(v); }) // placement-new ctor - - // factories returning a derived type: - .def(py::init(c4a)) // derived ptr - .def(py::init([](pointer_tag, TF5_tag, int a) { return new TestFactory5(a); })) - // derived shared ptr: - .def(py::init([](shared_ptr_tag, TF4_tag, int a) { return std::make_shared(a); })) - .def(py::init([](shared_ptr_tag, TF5_tag, int a) { return std::make_shared(a); })) - - // Returns nullptr: - .def(py::init([](null_ptr_tag) { return (TestFactory3 *) nullptr; })) - .def(py::init([](null_unique_ptr_tag) { return std::unique_ptr(); })) - .def(py::init([](null_shared_ptr_tag) { return std::shared_ptr(); })) - - .def_readwrite("value", &TestFactory3::value) - ; - - // test_init_factory_casting - py::class_>(m, "TestFactory4") - .def(py::init(c4a)) // pointer - ; - - // Doesn't need to be registered, but registering makes getting ConstructorStats easier: - py::class_>(m, "TestFactory5"); - - // test_init_factory_alias - // Alias testing - py::class_(m, "TestFactory6") - .def(py::init([](base_tag, int i) { return TestFactory6(i); })) - .def(py::init([](alias_tag, int i) { return PyTF6(i); })) - .def(py::init([](alias_tag, std::string s) { return PyTF6(s); })) - .def(py::init([](alias_tag, pointer_tag, int i) { return new PyTF6(i); })) - .def(py::init([](base_tag, pointer_tag, int i) { return new TestFactory6(i); })) - .def(py::init([](base_tag, alias_tag, pointer_tag, int i) { return (TestFactory6 *) new PyTF6(i); })) - - .def("get", &TestFactory6::get) - .def("has_alias", &TestFactory6::has_alias) - - .def_static("get_cstats", &ConstructorStats::get, py::return_value_policy::reference) - .def_static("get_alias_cstats", &ConstructorStats::get, py::return_value_policy::reference) - ; - - // test_init_factory_dual - // Separate alias constructor testing - py::class_>(m, "TestFactory7") - .def(py::init( - [](int i) { return TestFactory7(i); }, - [](int i) { return PyTF7(i); })) - .def(py::init( - [](pointer_tag, int i) { return new TestFactory7(i); }, - [](pointer_tag, int i) { return new PyTF7(i); })) - .def(py::init( - [](mixed_tag, int i) { return new TestFactory7(i); }, - [](mixed_tag, int i) { return PyTF7(i); })) - .def(py::init( - [](mixed_tag, std::string s) { return TestFactory7((int) s.size()); }, - [](mixed_tag, std::string s) { return new PyTF7((int) s.size()); })) - .def(py::init( - [](base_tag, pointer_tag, int i) { return new TestFactory7(i); }, - [](base_tag, pointer_tag, int i) { return (TestFactory7 *) new PyTF7(i); })) - .def(py::init( - [](alias_tag, pointer_tag, int i) { return new PyTF7(i); }, - [](alias_tag, pointer_tag, int i) { return new PyTF7(10*i); })) - .def(py::init( - [](shared_ptr_tag, base_tag, int i) { return std::make_shared(i); }, - [](shared_ptr_tag, base_tag, int i) { auto *p = new PyTF7(i); return std::shared_ptr(p); })) - .def(py::init( - [](shared_ptr_tag, invalid_base_tag, int i) { return std::make_shared(i); }, - [](shared_ptr_tag, invalid_base_tag, int i) { return std::make_shared(i); })) // <-- invalid alias factory - - .def("get", &TestFactory7::get) - .def("has_alias", &TestFactory7::has_alias) - - .def_static("get_cstats", 
&ConstructorStats::get, py::return_value_policy::reference) - .def_static("get_alias_cstats", &ConstructorStats::get, py::return_value_policy::reference) - ; - - // test_placement_new_alternative - // Class with a custom new operator but *without* a placement new operator (issue #948) - class NoPlacementNew { - public: - NoPlacementNew(int i) : i(i) { } - static void *operator new(std::size_t s) { - auto *p = ::operator new(s); - py::print("operator new called, returning", reinterpret_cast(p)); - return p; - } - static void operator delete(void *p) { - py::print("operator delete called on", reinterpret_cast(p)); - ::operator delete(p); - } - int i; - }; - // As of 2.2, `py::init` no longer requires placement new - py::class_(m, "NoPlacementNew") - .def(py::init()) - .def(py::init([]() { return new NoPlacementNew(100); })) - .def_readwrite("i", &NoPlacementNew::i) - ; - - - // test_reallocations - // Class that has verbose operator_new/operator_delete calls - struct NoisyAlloc { - NoisyAlloc(const NoisyAlloc &) = default; - NoisyAlloc(int i) { py::print(py::str("NoisyAlloc(int {})").format(i)); } - NoisyAlloc(double d) { py::print(py::str("NoisyAlloc(double {})").format(d)); } - ~NoisyAlloc() { py::print("~NoisyAlloc()"); } - - static void *operator new(size_t s) { py::print("noisy new"); return ::operator new(s); } - static void *operator new(size_t, void *p) { py::print("noisy placement new"); return p; } - static void operator delete(void *p, size_t) { py::print("noisy delete"); ::operator delete(p); } - static void operator delete(void *, void *) { py::print("noisy placement delete"); } -#if defined(_MSC_VER) && _MSC_VER < 1910 - // MSVC 2015 bug: the above "noisy delete" isn't invoked (fixed in MSVC 2017) - static void operator delete(void *p) { py::print("noisy delete"); ::operator delete(p); } -#endif - }; - py::class_(m, "NoisyAlloc") - // Since these overloads have the same number of arguments, the dispatcher will try each of - // them until the arguments convert. Thus we can get a pre-allocation here when passing a - // single non-integer: - .def("__init__", [](NoisyAlloc *a, int i) { new (a) NoisyAlloc(i); }) // Regular constructor, runs first, requires preallocation - .def(py::init([](double d) { return new NoisyAlloc(d); })) - - // The two-argument version: first the factory pointer overload. 
- .def(py::init([](int i, int) { return new NoisyAlloc(i); })) - // Return-by-value: - .def(py::init([](double d, int) { return NoisyAlloc(d); })) - // Old-style placement new init; requires preallocation - .def("__init__", [](NoisyAlloc &a, double d, double) { new (&a) NoisyAlloc(d); }) - // Requires deallocation of previous overload preallocated value: - .def(py::init([](int i, double) { return new NoisyAlloc(i); })) - // Regular again: requires yet another preallocation - .def("__init__", [](NoisyAlloc &a, int i, std::string) { new (&a) NoisyAlloc(i); }) - ; - - - - - // static_assert testing (the following def's should all fail with appropriate compilation errors): -#if 0 - struct BadF1Base {}; - struct BadF1 : BadF1Base {}; - struct PyBadF1 : BadF1 {}; - py::class_> bf1(m, "BadF1"); - // wrapped factory function must return a compatible pointer, holder, or value - bf1.def(py::init([]() { return 3; })); - // incompatible factory function pointer return type - bf1.def(py::init([]() { static int three = 3; return &three; })); - // incompatible factory function std::shared_ptr return type: cannot convert shared_ptr to holder - // (non-polymorphic base) - bf1.def(py::init([]() { return std::shared_ptr(new BadF1()); })); -#endif -} diff --git a/spaces/matthoffner/AudioCraft_Plus/audiocraft/losses/stftloss.py b/spaces/matthoffner/AudioCraft_Plus/audiocraft/losses/stftloss.py deleted file mode 100644 index 5ad4b7d3324ee5b0e6064b6f71cf8caf0fdc3be7..0000000000000000000000000000000000000000 --- a/spaces/matthoffner/AudioCraft_Plus/audiocraft/losses/stftloss.py +++ /dev/null @@ -1,207 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. -# Adapted from MIT code under the original license -# Copyright 2019 Tomoki Hayashi -# MIT License (https://opensource.org/licenses/MIT) -import typing as tp - -import torch -from torch import nn -from torch.nn import functional as F - - -# TODO: Replace with torchaudio.STFT? -def _stft(x: torch.Tensor, fft_size: int, hop_length: int, win_length: int, - window: tp.Optional[torch.Tensor], normalized: bool) -> torch.Tensor: - """Perform STFT and convert to magnitude spectrogram. - - Args: - x: Input signal tensor (B, C, T). - fft_size (int): FFT size. - hop_length (int): Hop size. - win_length (int): Window length. - window (torch.Tensor or None): Window function type. - normalized (bool): Whether to normalize the STFT or not. - - Returns: - torch.Tensor: Magnitude spectrogram (B, C, #frames, fft_size // 2 + 1). - """ - B, C, T = x.shape - x_stft = torch.stft( - x.view(-1, T), fft_size, hop_length, win_length, window, - normalized=normalized, return_complex=True, - ) - x_stft = x_stft.view(B, C, *x_stft.shape[1:]) - real = x_stft.real - imag = x_stft.imag - - # NOTE(kan-bayashi): clamp is needed to avoid nan or inf - return torch.sqrt(torch.clamp(real ** 2 + imag ** 2, min=1e-7)).transpose(2, 1) - - -class SpectralConvergenceLoss(nn.Module): - """Spectral convergence loss. - """ - def __init__(self, epsilon: float = torch.finfo(torch.float32).eps): - super().__init__() - self.epsilon = epsilon - - def forward(self, x_mag: torch.Tensor, y_mag: torch.Tensor): - """Calculate forward propagation. - - Args: - x_mag: Magnitude spectrogram of predicted signal (B, #frames, #freq_bins). - y_mag: Magnitude spectrogram of groundtruth signal (B, #frames, #freq_bins). 
- Returns: - torch.Tensor: Spectral convergence loss value. - """ - return torch.norm(y_mag - x_mag, p="fro") / (torch.norm(y_mag, p="fro") + self.epsilon) - - -class LogSTFTMagnitudeLoss(nn.Module): - """Log STFT magnitude loss. - - Args: - epsilon (float): Epsilon value for numerical stability. - """ - def __init__(self, epsilon: float = torch.finfo(torch.float32).eps): - super().__init__() - self.epsilon = epsilon - - def forward(self, x_mag: torch.Tensor, y_mag: torch.Tensor): - """Calculate forward propagation. - - Args: - x_mag (torch.Tensor): Magnitude spectrogram of predicted signal (B, #frames, #freq_bins). - y_mag (torch.Tensor): Magnitude spectrogram of groundtruth signal (B, #frames, #freq_bins). - Returns: - torch.Tensor: Log STFT magnitude loss value. - """ - return F.l1_loss(torch.log(self.epsilon + y_mag), torch.log(self.epsilon + x_mag)) - - -class STFTLosses(nn.Module): - """STFT losses. - - Args: - n_fft (int): Size of FFT. - hop_length (int): Hop length. - win_length (int): Window length. - window (str): Window function type. - normalized (bool): Whether to use normalized STFT or not. - epsilon (float): Epsilon for numerical stability. - """ - def __init__(self, n_fft: int = 1024, hop_length: int = 120, win_length: int = 600, - window: str = "hann_window", normalized: bool = False, - epsilon: float = torch.finfo(torch.float32).eps): - super().__init__() - self.n_fft = n_fft - self.hop_length = hop_length - self.win_length = win_length - self.normalized = normalized - self.register_buffer("window", getattr(torch, window)(win_length)) - self.spectral_convergenge_loss = SpectralConvergenceLoss(epsilon) - self.log_stft_magnitude_loss = LogSTFTMagnitudeLoss(epsilon) - - def forward(self, x: torch.Tensor, y: torch.Tensor) -> tp.Tuple[torch.Tensor, torch.Tensor]: - """Calculate forward propagation. - - Args: - x (torch.Tensor): Predicted signal (B, T). - y (torch.Tensor): Groundtruth signal (B, T). - Returns: - torch.Tensor: Spectral convergence loss value. - torch.Tensor: Log STFT magnitude loss value. - """ - x_mag = _stft(x, self.n_fft, self.hop_length, - self.win_length, self.window, self.normalized) # type: ignore - y_mag = _stft(y, self.n_fft, self.hop_length, - self.win_length, self.window, self.normalized) # type: ignore - sc_loss = self.spectral_convergenge_loss(x_mag, y_mag) - mag_loss = self.log_stft_magnitude_loss(x_mag, y_mag) - - return sc_loss, mag_loss - - -class STFTLoss(nn.Module): - """Single Resolution STFT loss. - - Args: - n_fft (int): Nb of FFT. - hop_length (int): Hop length. - win_length (int): Window length. - window (str): Window function type. - normalized (bool): Whether to use normalized STFT or not. - epsilon (float): Epsilon for numerical stability. - factor_sc (float): Coefficient for the spectral loss. - factor_mag (float): Coefficient for the magnitude loss. - """ - def __init__(self, n_fft: int = 1024, hop_length: int = 120, win_length: int = 600, - window: str = "hann_window", normalized: bool = False, - factor_sc: float = 0.1, factor_mag: float = 0.1, - epsilon: float = torch.finfo(torch.float32).eps): - super().__init__() - self.loss = STFTLosses(n_fft, hop_length, win_length, window, normalized, epsilon) - self.factor_sc = factor_sc - self.factor_mag = factor_mag - - def forward(self, x: torch.Tensor, y: torch.Tensor) -> tp.Tuple[torch.Tensor, torch.Tensor]: - """Calculate forward propagation. - - Args: - x (torch.Tensor): Predicted signal (B, T). - y (torch.Tensor): Groundtruth signal (B, T). 
- Returns: - torch.Tensor: Single resolution STFT loss. - """ - sc_loss, mag_loss = self.loss(x, y) - return self.factor_sc * sc_loss + self.factor_mag * mag_loss - - -class MRSTFTLoss(nn.Module): - """Multi resolution STFT loss. - - Args: - n_ffts (Sequence[int]): Sequence of FFT sizes. - hop_lengths (Sequence[int]): Sequence of hop sizes. - win_lengths (Sequence[int]): Sequence of window lengths. - window (str): Window function type. - factor_sc (float): Coefficient for the spectral loss. - factor_mag (float): Coefficient for the magnitude loss. - normalized (bool): Whether to use normalized STFT or not. - epsilon (float): Epsilon for numerical stability. - """ - def __init__(self, n_ffts: tp.Sequence[int] = [1024, 2048, 512], hop_lengths: tp.Sequence[int] = [120, 240, 50], - win_lengths: tp.Sequence[int] = [600, 1200, 240], window: str = "hann_window", - factor_sc: float = 0.1, factor_mag: float = 0.1, - normalized: bool = False, epsilon: float = torch.finfo(torch.float32).eps): - super().__init__() - assert len(n_ffts) == len(hop_lengths) == len(win_lengths) - self.stft_losses = torch.nn.ModuleList() - for fs, ss, wl in zip(n_ffts, hop_lengths, win_lengths): - self.stft_losses += [STFTLosses(fs, ss, wl, window, normalized, epsilon)] - self.factor_sc = factor_sc - self.factor_mag = factor_mag - - def forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor: - """Calculate forward propagation. - - Args: - x (torch.Tensor): Predicted signal (B, T). - y (torch.Tensor): Groundtruth signal (B, T). - Returns: - torch.Tensor: Multi resolution STFT loss. - """ - sc_loss = torch.Tensor([0.0]) - mag_loss = torch.Tensor([0.0]) - for f in self.stft_losses: - sc_l, mag_l = f(x, y) - sc_loss += sc_l - mag_loss += mag_l - sc_loss /= len(self.stft_losses) - mag_loss /= len(self.stft_losses) - - return self.factor_sc * sc_loss + self.factor_mag * mag_loss diff --git a/spaces/matthoffner/AudioCraft_Plus/audiocraft/solvers/diffusion.py b/spaces/matthoffner/AudioCraft_Plus/audiocraft/solvers/diffusion.py deleted file mode 100644 index 93dea2520836f458ab1b8514dca952b51d113ec2..0000000000000000000000000000000000000000 --- a/spaces/matthoffner/AudioCraft_Plus/audiocraft/solvers/diffusion.py +++ /dev/null @@ -1,279 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import typing as tp - -import flashy -import julius -import omegaconf -import torch -import torch.nn.functional as F - -from . import builders -from . import base -from .. import models -from ..modules.diffusion_schedule import NoiseSchedule -from ..metrics import RelativeVolumeMel -from ..models.builders import get_processor -from ..utils.samples.manager import SampleManager -from ..solvers.compression import CompressionSolver - - -class PerStageMetrics: - """Handle prompting the metrics per stage. - It outputs the metrics per range of diffusion states. - e.g. 
avg loss when t in [250, 500] - """ - def __init__(self, num_steps: int, num_stages: int = 4): - self.num_steps = num_steps - self.num_stages = num_stages - - def __call__(self, losses: dict, step: tp.Union[int, torch.Tensor]): - if type(step) is int: - stage = int((step / self.num_steps) * self.num_stages) - return {f"{name}_{stage}": loss for name, loss in losses.items()} - elif type(step) is torch.Tensor: - stage_tensor = ((step / self.num_steps) * self.num_stages).long() - out: tp.Dict[str, float] = {} - for stage_idx in range(self.num_stages): - mask = (stage_tensor == stage_idx) - N = mask.sum() - stage_out = {} - if N > 0: # pass if no elements in the stage - for name, loss in losses.items(): - stage_loss = (mask * loss).sum() / N - stage_out[f"{name}_{stage_idx}"] = stage_loss - out = {**out, **stage_out} - return out - - -class DataProcess: - """Apply filtering or resampling. - - Args: - initial_sr (int): Initial sample rate. - target_sr (int): Target sample rate. - use_resampling: Whether to use resampling or not. - use_filter (bool): - n_bands (int): Number of bands to consider. - idx_band (int): - device (torch.device or str): - cutoffs (): - boost (bool): - """ - def __init__(self, initial_sr: int = 24000, target_sr: int = 16000, use_resampling: bool = False, - use_filter: bool = False, n_bands: int = 4, - idx_band: int = 0, device: torch.device = torch.device('cpu'), cutoffs=None, boost=False): - """Apply filtering or resampling - Args: - initial_sr (int): sample rate of the dataset - target_sr (int): sample rate after resampling - use_resampling (bool): whether or not performs resampling - use_filter (bool): when True filter the data to keep only one frequency band - n_bands (int): Number of bands used - cuts (none or list): The cutoff frequencies of the band filtering - if None then we use mel scale bands. - idx_band (int): index of the frequency band. 0 are lows ... (n_bands - 1) highs - boost (bool): make the data scale match our music dataset. - """ - assert idx_band < n_bands - self.idx_band = idx_band - if use_filter: - if cutoffs is not None: - self.filter = julius.SplitBands(sample_rate=initial_sr, cutoffs=cutoffs).to(device) - else: - self.filter = julius.SplitBands(sample_rate=initial_sr, n_bands=n_bands).to(device) - self.use_filter = use_filter - self.use_resampling = use_resampling - self.target_sr = target_sr - self.initial_sr = initial_sr - self.boost = boost - - def process_data(self, x, metric=False): - if x is None: - return None - if self.boost: - x /= torch.clamp(x.std(dim=(1, 2), keepdim=True), min=1e-4) - x * 0.22 - if self.use_filter and not metric: - x = self.filter(x)[self.idx_band] - if self.use_resampling: - x = julius.resample_frac(x, old_sr=self.initial_sr, new_sr=self.target_sr) - return x - - def inverse_process(self, x): - """Upsampling only.""" - if self.use_resampling: - x = julius.resample_frac(x, old_sr=self.target_sr, new_sr=self.target_sr) - return x - - -class DiffusionSolver(base.StandardSolver): - """Solver for compression task. - - The diffusion task allows for MultiBand diffusion model training. - - Args: - cfg (DictConfig): Configuration. 
- """ - def __init__(self, cfg: omegaconf.DictConfig): - super().__init__(cfg) - self.cfg = cfg - self.device = cfg.device - self.sample_rate: int = self.cfg.sample_rate - self.codec_model = CompressionSolver.model_from_checkpoint( - cfg.compression_model_checkpoint, device=self.device) - - self.codec_model.set_num_codebooks(cfg.n_q) - assert self.codec_model.sample_rate == self.cfg.sample_rate, ( - f"Codec model sample rate is {self.codec_model.sample_rate} but " - f"Solver sample rate is {self.cfg.sample_rate}." - ) - assert self.codec_model.sample_rate == self.sample_rate, \ - f"Sample rate of solver {self.sample_rate} and codec {self.codec_model.sample_rate} " \ - "don't match." - - self.sample_processor = get_processor(cfg.processor, sample_rate=self.sample_rate) - self.register_stateful('sample_processor') - self.sample_processor.to(self.device) - - self.schedule = NoiseSchedule( - **cfg.schedule, device=self.device, sample_processor=self.sample_processor) - - self.eval_metric: tp.Optional[torch.nn.Module] = None - - self.rvm = RelativeVolumeMel() - self.data_processor = DataProcess(initial_sr=self.sample_rate, target_sr=cfg.resampling.target_sr, - use_resampling=cfg.resampling.use, cutoffs=cfg.filter.cutoffs, - use_filter=cfg.filter.use, n_bands=cfg.filter.n_bands, - idx_band=cfg.filter.idx_band, device=self.device) - - @property - def best_metric_name(self) -> tp.Optional[str]: - if self._current_stage == "evaluate": - return 'rvm' - else: - return 'loss' - - @torch.no_grad() - def get_condition(self, wav: torch.Tensor) -> torch.Tensor: - codes, scale = self.codec_model.encode(wav) - assert scale is None, "Scaled compression models not supported." - emb = self.codec_model.decode_latent(codes) - return emb - - def build_model(self): - """Build model and optimizer as well as optional Exponential Moving Average of the model. 
- """ - # Model and optimizer - self.model = models.builders.get_diffusion_model(self.cfg).to(self.device) - self.optimizer = builders.get_optimizer(self.model.parameters(), self.cfg.optim) - self.register_stateful('model', 'optimizer') - self.register_best_state('model') - self.register_ema('model') - - def build_dataloaders(self): - """Build audio dataloaders for each stage.""" - self.dataloaders = builders.get_audio_datasets(self.cfg) - - def show(self): - # TODO - raise NotImplementedError() - - def run_step(self, idx: int, batch: torch.Tensor, metrics: dict): - """Perform one training or valid step on a given batch.""" - x = batch.to(self.device) - loss_fun = F.mse_loss if self.cfg.loss.kind == 'mse' else F.l1_loss - - condition = self.get_condition(x) # [bs, 128, T/hop, n_emb] - sample = self.data_processor.process_data(x) - - input_, target, step = self.schedule.get_training_item(sample, - tensor_step=self.cfg.schedule.variable_step_batch) - out = self.model(input_, step, condition=condition).sample - - base_loss = loss_fun(out, target, reduction='none').mean(dim=(1, 2)) - reference_loss = loss_fun(input_, target, reduction='none').mean(dim=(1, 2)) - loss = base_loss / reference_loss ** self.cfg.loss.norm_power - - if self.is_training: - loss.mean().backward() - flashy.distrib.sync_model(self.model) - self.optimizer.step() - self.optimizer.zero_grad() - metrics = { - 'loss': loss.mean(), 'normed_loss': (base_loss / reference_loss).mean(), - } - metrics.update(self.per_stage({'loss': loss, 'normed_loss': base_loss / reference_loss}, step)) - metrics.update({ - 'std_in': input_.std(), 'std_out': out.std()}) - return metrics - - def run_epoch(self): - # reset random seed at the beginning of the epoch - self.rng = torch.Generator() - self.rng.manual_seed(1234 + self.epoch) - self.per_stage = PerStageMetrics(self.schedule.num_steps, self.cfg.metrics.num_stage) - # run epoch - super().run_epoch() - - def evaluate(self): - """Evaluate stage. - Runs audio reconstruction evaluation. - """ - self.model.eval() - evaluate_stage_name = f'{self.current_stage}' - loader = self.dataloaders['evaluate'] - updates = len(loader) - lp = self.log_progress(f'{evaluate_stage_name} estimate', loader, total=updates, updates=self.log_updates) - - metrics = {} - n = 1 - for idx, batch in enumerate(lp): - x = batch.to(self.device) - with torch.no_grad(): - y_pred = self.regenerate(x) - - y_pred = y_pred.cpu() - y = batch.cpu() # should already be on CPU but just in case - rvm = self.rvm(y_pred, y) - lp.update(**rvm) - if len(metrics) == 0: - metrics = rvm - else: - for key in rvm.keys(): - metrics[key] = (metrics[key] * n + rvm[key]) / (n + 1) - metrics = flashy.distrib.average_metrics(metrics) - return metrics - - @torch.no_grad() - def regenerate(self, wav: torch.Tensor, step_list: tp.Optional[list] = None): - """Regenerate the given waveform.""" - condition = self.get_condition(wav) - initial = self.schedule.get_initial_noise(self.data_processor.process_data(wav)) # sampling rate changes. 
- result = self.schedule.generate_subsampled(self.model, initial=initial, condition=condition, - step_list=step_list) - result = self.data_processor.inverse_process(result) - return result - - def generate(self): - """Generate stage.""" - sample_manager = SampleManager(self.xp) - self.model.eval() - generate_stage_name = f'{self.current_stage}' - - loader = self.dataloaders['generate'] - updates = len(loader) - lp = self.log_progress(generate_stage_name, loader, total=updates, updates=self.log_updates) - - for batch in lp: - reference, _ = batch - reference = reference.to(self.device) - estimate = self.regenerate(reference) - reference = reference.cpu() - estimate = estimate.cpu() - sample_manager.add_samples(estimate, self.epoch, ground_truth_wavs=reference) - flashy.distrib.barrier() diff --git a/spaces/matthoffner/baby-gorilla-agi/babyllamalit.py b/spaces/matthoffner/baby-gorilla-agi/babyllamalit.py deleted file mode 100644 index 6cffdde76ee143baf02f4f35138fa6cc0a9da0d4..0000000000000000000000000000000000000000 --- a/spaces/matthoffner/baby-gorilla-agi/babyllamalit.py +++ /dev/null @@ -1,296 +0,0 @@ -from collections import deque -from typing import Dict, List, Optional -from langchain import LLMChain, PromptTemplate -from langchain.embeddings import LlamaCppEmbeddings -from langchain.llms import BaseLLM -from langchain.llms import BaseLLM, LlamaCpp -from langchain.vectorstores import FAISS -from langchain.vectorstores.base import VectorStore -from pydantic import BaseModel, Field -import streamlit as st -import argparse - -parser = argparse.ArgumentParser() -parser.add_argument('--model', type=str, required=False) -args = parser.parse_args() -if args.model is None: - model_path = 'Gorilla-7B.ggmlv3.q4_0.bin' -else: - model_path = args.model - -# define the local llama model -llm = LlamaCpp(model_path=model_path, - use_mlock=True, - use_mmap=True, - n_ctx=2048, - temperature=0.8, - n_threads=10, - f16_kv=True) - - -class TaskCreationChain(LLMChain): - @classmethod - def from_llm(cls, llm: BaseLLM, objective: str, verbose: bool = True) -> LLMChain: - """Get the response parser.""" - task_creation_template = ( - "You are an task creation AI that uses the result of an execution agent" - " to create new tasks with the following objective: {objective}," - " The last completed task has the result: {result}." - " This result was based on this task description: {task_description}." - " These are incomplete tasks: {incomplete_tasks}." - " Based on the result, create new tasks to be completed" - " by the AI system that do not overlap with incomplete tasks." - " Return the tasks as an array." 
- ) - prompt = PromptTemplate( - template=task_creation_template, - partial_variables={"objective": objective}, - input_variables=["result", "task_description", "incomplete_tasks"], - ) - return cls(prompt=prompt, llm=llm, verbose=verbose) - - def get_next_task(self, result: Dict, task_description: str, task_list: List[str]) -> List[Dict]: - """Get the next task.""" - incomplete_tasks = ", ".join(task_list) - response = self.run(result=result, task_description=task_description, incomplete_tasks=incomplete_tasks) - new_tasks = response.split('\n') - return [{"task_name": task_name} for task_name in new_tasks if task_name.strip()] - - -class TaskPrioritizationChain(LLMChain): - """Chain to prioritize tasks.""" - - @classmethod - def from_llm(cls, llm: BaseLLM, objective: str, verbose: bool = True) -> LLMChain: - """Get the response parser.""" - task_prioritization_template = ( - "You are an task prioritization AI tasked with cleaning the formatting of and reprioritizing" - " the following tasks: {task_names}." - " Consider the ultimate objective of your team: {objective}." - " Do not remove any tasks. Return the result as a numbered list, like:" - " #. First task" - " #. Second task" - " Start the task list with number {next_task_id}." - ) - prompt = PromptTemplate( - template=task_prioritization_template, - partial_variables={"objective": objective}, - input_variables=["task_names", "next_task_id"], - ) - return cls(prompt=prompt, llm=llm, verbose=verbose) - - def prioritize_tasks(self, this_task_id: int, task_list: List[Dict]) -> List[Dict]: - """Prioritize tasks.""" - task_names = [t["task_name"] for t in task_list] - next_task_id = int(this_task_id) + 1 - response = self.run(task_names=task_names, next_task_id=next_task_id) - new_tasks = response.split('\n') - prioritized_task_list = [] - for task_string in new_tasks: - if not task_string.strip(): - continue - task_parts = task_string.strip().split(".", 1) - if len(task_parts) == 2: - task_id = task_parts[0].strip() - task_name = task_parts[1].strip() - prioritized_task_list.append({"task_id": task_id, "task_name": task_name}) - return prioritized_task_list - - -class ExecutionChain(LLMChain): - """Chain to execute tasks.""" - - vectorstore: VectorStore = Field(init=False) - - @classmethod - def from_llm(cls, llm: BaseLLM, vectorstore: VectorStore, verbose: bool = True) -> LLMChain: - """Get the response parser.""" - execution_template = ( - "You are an AI who performs one task based on the following objective: {objective}." - " Take into account these previously completed tasks: {context}." - " Your task: {task}." 
- " Response:" - ) - prompt = PromptTemplate( - template=execution_template, - input_variables=["objective", "context", "task"], - ) - return cls(prompt=prompt, llm=llm, verbose=verbose, vectorstore=vectorstore) - - def _get_top_tasks(self, query: str, k: int) -> List[str]: - """Get the top k tasks based on the query.""" - results = self.vectorstore.similarity_search_with_score(query, k=k) - if not results: - return [] - sorted_results, _ = zip(*sorted(results, key=lambda x: x[1], reverse=True)) - return [str(item.metadata['task']) for item in sorted_results] - - def execute_task(self, objective: str, task: str, k: int = 5) -> str: - """Execute a task.""" - context = self._get_top_tasks(query=objective, k=k) - return self.run(objective=objective, context=context, task=task) - - -class Message: - exp: st.expander - ai_icon = "./robot.png" - - def __init__(self, label: str): - message_area, icon_area = st.columns([10, 1]) - icon_area.image(self.ai_icon, caption="🦍") - - # Expander - self.exp = message_area.expander(label=label, expanded=True) - - def __enter__(self): - return self - - def __exit__(self, ex_type, ex_value, trace): - pass - - def write(self, content): - self.exp.markdown(content) - - -class BabyAGI(BaseModel): - """Controller model for the BabyAGI agent.""" - - objective: str = Field(alias="objective") - task_list: deque = Field(default_factory=deque) - task_creation_chain: TaskCreationChain = Field(...) - task_prioritization_chain: TaskPrioritizationChain = Field(...) - execution_chain: ExecutionChain = Field(...) - task_id_counter: int = Field(1) - - def add_task(self, task: Dict): - self.task_list.append(task) - - def print_task_list(self): - with Message(label="Task List") as m: - m.write("### Task List") - for t in self.task_list: - m.write("- " + str(t["task_id"]) + ": " + t["task_name"]) - m.write("") - - def print_next_task(self, task: Dict): - with Message(label="Next Task") as m: - m.write("### Next Task") - m.write("- " + str(task["task_id"]) + ": " + task["task_name"]) - m.write("") - - def print_task_result(self, result: str): - with Message(label="Task Result") as m: - m.write("### Task Result") - m.write(result) - m.write("") - - def print_task_ending(self): - with Message(label="Task Ending") as m: - m.write("### Task Ending") - m.write("") - - - def run(self, max_iterations: Optional[int] = None): - """Run the agent.""" - num_iters = 0 - while True: - if self.task_list: - self.print_task_list() - - # Step 1: Pull the first task - task = self.task_list.popleft() - self.print_next_task(task) - - # Step 2: Execute the task - result = self.execution_chain.execute_task( - self.objective, task["task_name"] - ) - this_task_id = int(task["task_id"]) - self.print_task_result(result) - - # Step 3: Store the result in Pinecone - result_id = f"result_{task['task_id']}" - self.execution_chain.vectorstore.add_texts( - texts=[result], - metadatas=[{"task": task["task_name"]}], - ids=[result_id], - ) - - # Step 4: Create new tasks and reprioritize task list - new_tasks = self.task_creation_chain.get_next_task( - result, task["task_name"], [t["task_name"] for t in self.task_list] - ) - for new_task in new_tasks: - self.task_id_counter += 1 - new_task.update({"task_id": self.task_id_counter}) - self.add_task(new_task) - self.task_list = deque( - self.task_prioritization_chain.prioritize_tasks( - this_task_id, list(self.task_list) - ) - ) - num_iters += 1 - if max_iterations is not None and num_iters == max_iterations: - self.print_task_ending() - break - - @classmethod - def 
from_llm_and_objectives( - cls, - llm: BaseLLM, - vectorstore: VectorStore, - objective: str, - first_task: str, - verbose: bool = False, - ) -> "BabyAGI": - """Initialize the BabyAGI Controller.""" - task_creation_chain = TaskCreationChain.from_llm( - llm, objective, verbose=verbose - ) - task_prioritization_chain = TaskPrioritizationChain.from_llm( - llm, objective, verbose=verbose - ) - execution_chain = ExecutionChain.from_llm(llm, vectorstore, verbose=verbose) - controller = cls( - objective=objective, - task_creation_chain=task_creation_chain, - task_prioritization_chain=task_prioritization_chain, - execution_chain=execution_chain, - ) - controller.add_task({"task_id": 1, "task_name": first_task}) - return controller - - -def main(): - st.set_page_config( - initial_sidebar_state="expanded", - page_title="Baby Gorilla AGI", - layout="centered", - ) - - st.markdown("
Baby Gorilla AGI 💻🦍
", unsafe_allow_html=True) - objective = st.text_input("Objective:", "Create a Python script that converts text to speech") - first_task = st.text_input("First Task:", "Develop a task list") - max_iterations = st.number_input("Max iterations", value=1, min_value=1, step=1) - button = st.button("✅") - - - embeddings_model = LlamaCppEmbeddings(model_path=model_path) - vectorstore = FAISS.from_texts(["_"], embeddings_model, metadatas=[{"task":first_task}]) - - if button: - try: - baby_agi = BabyAGI.from_llm_and_objectives( - llm=llm, - vectorstore=vectorstore, - objective=objective, - first_task=first_task, - verbose=True - ) - baby_agi.run(max_iterations=max_iterations) - except Exception as e: - st.error(e) - - -if __name__ == "__main__": - main() \ No newline at end of file diff --git a/spaces/matthoffner/web-llm-embed/src/styles.css b/spaces/matthoffner/web-llm-embed/src/styles.css deleted file mode 100644 index f2619b8ddbb0ced50fc2e0f9fdf06cc6a8f72a4d..0000000000000000000000000000000000000000 --- a/spaces/matthoffner/web-llm-embed/src/styles.css +++ /dev/null @@ -1,9 +0,0 @@ -body { - font-family: system-ui, -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, Oxygen, Ubuntu, Cantarell, 'Open Sans', 'Helvetica Neue', sans-serif; - background-color: black; - color: white; -} - -textarea { - color: white; -} diff --git a/spaces/maxmax20160403/sovits5.0/vits/commons.py b/spaces/maxmax20160403/sovits5.0/vits/commons.py deleted file mode 100644 index 045a538d5a3ef8033eca70639a894346b11d5f61..0000000000000000000000000000000000000000 --- a/spaces/maxmax20160403/sovits5.0/vits/commons.py +++ /dev/null @@ -1,187 +0,0 @@ -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - - -def slice_pitch_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, idx_str:idx_end] - return ret - - -def rand_slice_segments_with_pitch(x, pitch, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - ret_pitch = slice_pitch_segments(pitch, ids_str, segment_size) - return ret, ret_pitch, ids_str - - -def rand_spec_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size * dilation - dilation) / 2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += ( - 0.5 * (torch.exp(2.0 * logs_p) + ((m_p - m_q) ** 2)) * torch.exp(-2.0 * logs_q) - ) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def 
rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d(length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = math.log(float(max_timescale) / float(min_timescale)) / ( - num_timescales - 1 - ) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment - ) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2, 3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): - parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - 
param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1.0 / norm_type) - return total_norm diff --git a/spaces/merve/anonymization/public/measuring-fairness/annotations.js b/spaces/merve/anonymization/public/measuring-fairness/annotations.js deleted file mode 100644 index 7ab68f297f98c655427a84de22388906182b240c..0000000000000000000000000000000000000000 --- a/spaces/merve/anonymization/public/measuring-fairness/annotations.js +++ /dev/null @@ -1,52 +0,0 @@ -/* Copyright 2020 Google LLC. All Rights Reserved. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -==============================================================================*/ - - - -var annotations = -[ -] - - -function addSwoop(c){ - var swoopy = d3.swoopyDrag() - .x(d => c.x(d.x)) - .y(d => c.y(d.y)) - .draggable(0) - .annotations(annotations) - - var swoopySel = c.svg.append('g.annotations').call(swoopy) - - c.svg.append('marker#arrow') - .attr('viewBox', '-10 -10 20 20') - .attr('markerWidth', 20) - .attr('markerHeight', 20) - .attr('orient', 'auto') - .append('path').at({d: 'M-6.75,-6.75 L 0,0 L -6.75,6.75'}) - - - swoopySel.selectAll('path').attr('marker-end', 'url(#arrow)') - window.annotationSel = swoopySel.selectAll('g') - .st({fontSize: 12, opacity: d => d.slide == 0 ? 1 : 0}) - - swoopySel.selectAll('text') - .each(function(d){ - d3.select(this) - .text('') //clear existing text - .tspans(d3.wordwrap(d.text, d.width || 20), 12) //wrap after 20 char - }) -} - - diff --git a/spaces/merve/dataset-worldviews/public/dataset-worldviews/script.js b/spaces/merve/dataset-worldviews/public/dataset-worldviews/script.js deleted file mode 100644 index 3ebba088d65f389af1b446a9ea90fcde674d5fdf..0000000000000000000000000000000000000000 --- a/spaces/merve/dataset-worldviews/public/dataset-worldviews/script.js +++ /dev/null @@ -1,588 +0,0 @@ - -console.clear(); - -var ttSel = d3.select("body").selectAppend("div.tooltip.tooltip-hidden"); -// For result tables -const columns = ["object", "n", "n correct", "accuracy"]; -const rowHeight = 50; -const rowWidth = 100; -const buffer = 2; - -const classifierBlobWidth = 50; -const classifierBlobHeight = 460; - -function drawShapesWithData(classifier) { - var divHeight = classifier.class == "show-shapes" ? 250 : 490; - - var c = d3.conventions({ - sel: d3.select("." 
+ classifier.class).html(""), - width: 1300, - height: divHeight, - layers: "ds", - }); - - function runClassifier() { - classifier.isClassified = true; - var duration = 3000; - classifierSel.classed("is-classified", true); - graphResultsGroup.classed("is-classified", true); - - drawResults(); - buttonSel.text("Reset"); - - var minX = d3.min(shapeParams, (d) => d.endX - 50); - var timer = d3.timer((ms) => { - if (!classifier.isClassified) { - timer.stop(); - shapeSel.classed("is-classified", false); - return; - } - - var t = d3.easeCubicInOut(ms / duration); - t = d3.clamp(0, t, 1); - - shapeParams.forEach((d, i) => { - d.x = d.startX + (d.endX - d.startX) * t; - d.y = d.startY + (d.endY - d.startY) * t; - d.isClassified = d.x > minX; - }); - - shapeSel - .translate((d) => [d.x, d.y]) - .classed("is-classified", (d) => d.isClassified); - - if (t == 1) { - timer.stop(); - } - }); - } - - function resetClassifier() { - shapeSel.translate((d) => [d.startX, d.startY]); - shapeSel.classed("is-classified", false); - classifier.isClassified = false; - shapeSel - .transition("position") - .duration(0) - .translate((d) => [d.startX, d.startY]); - classifierSel.classed("is-classified", false); - graphResultsGroup.classed("is-classified", false); - if (classifier.class != "show-shapes") { - classifierBlobSel.attr("opacity", 100); - } - - drawResults(); - buttonSel.text("Run Classifier"); - } - - // Add run/reset button - var buttonSel = d3 - .select("." + classifier.class + "-button") - .html("") - .append("button#run") - .at({ - type: "button", - class: "classifier-button", - }) - .text("Run Classifier") - .on("click", () => { - // if already classified, reset - if (classifier.isClassified) { - // Resetting - resetClassifier(); - } else { - runClassifier(); - } - }); - - // Backgrounds for different classifications - var classifierSel = c.svg - .append("g") - .at({ - class: "classifier", - }) - .translate([465, 20]); - - classifierSel - .append("path.classifier-bg-shaded") - .at({ - d: classifierBgPathTop, - // fill: "#ccc", - // stroke: "#000", - }) - .translate([-50, 0]); - - classifierSel - .append("text.classifier-bg-text") - .at({ - fill: "#000", - textAnchor: "middle", - dominantBaseline: "central", - class: "monospace", - }) - .text("shaded") - .translate([160, 15]); - - classifierSel - .append("path.classifier-bg-unshaded") - .at({ - d: classifierBgPathBottom, - }) - .translate([-50, 160]); - - classifierSel - .append("text.classifier-bg-text") - .at({ - fill: "#000", - textAnchor: "middle", - dominantBaseline: "central", - class: "monospace", - }) - .text("unshaded") - .translate([160, 175]); - - // Add the shapes themselves - var shapeSel = c.svg - .appendMany("path.shape", shapeParams) - .at({ - d: (d) => d.path, - class: (d) => "gt-" + d.gt + " " + d.correctness, - }) - .translate(function (d) { - if (classifier.class == "show-shapes") { - return [d.initialX + 35, d.initialY-20]; - } else { - return [d.startX, d.startY]; - } - }) - .call(d3.attachTooltip) - .on("mouseover", (d) => { - ttSel.html(""); - if (classifier.usingLabel != "none") { - ttSel - .append("div") - .html( - `labeled: ${toPropertyString( - d[classifier.usingLabel], - classifier.isRounding - ).slice(0, -1)}` - ); - } - var gtSel = ttSel - .append("div") - .html( - `ground truth: ${d.gt}` - ); - if (classifier.isClassified) { - ttSel - .append("div.labeled-row") - .html( - `classified as: ${d.label}` - ); - - ttSel - .append("div.correct-row") - .classed("is-correct-tooltip", d.correctness == "correct") - .html(`
${d.correctness}ly classified `); - } - ttSel.classed("tt-text", true); - }); - - // If we're just showing shapes, ignore everything else - if (classifier.class == "show-shapes") return; - - // Add "classifier" line - var classifierBlobSel = c.svg - .append("g") - .at({ - class: "classifier-blob", - strokeWidth: 0, - }) - .translate([378, 20]); - - classifierBlobSel - .append("line.classifier-blob") - .at({ - class: "line", - x1: 27, - x2: 27, - y1: 0, - y2: 464, - stroke: "#000", - strokeWidth: 1, - }) - .style("stroke-dasharray", "5, 5"); - - classifierBlobSel - .append("text.classifier-blob-text") - .at({ - class: "classifier-blob-text monospace", - textAnchor: "middle", - dominantBaseline: "central", - }) - .text("is_shaded classifier") - .attr("transform", "translate(30,480) rotate(0)"); - - if (classifier.class == "show-shapes") { - classifierBlobSel.classed("is-classified", true); - } - - // Draw the results table with accuracies - // This will be hidden before classifier is run. - var graphResultsGroup = c.svg - .append("g") - .attr("class", "results") - .translate([-20, 19]); - - function drawResults() { - // Write text summary - summarySel = d3 - .select("." + classifier.class + "-summary") - .html(summaries[classifier.class]) - .translate([0, 20]); - summarySel.classed("summary-text", true); - summarySel.classed("is-classified", classifier.isClassified); - - if (!classifier.isClassified) { - c.layers[0].html(""); - classifier.wasClassified = false; - return; - } - - // Access results, which are calculated in shapes.js. - // If there are none, draw nothing. - results = allResults[classifier.class]; - if (!results) return; - - // Figure out which shapes should be highlighted on mouseover - // This depends on whether we're "rounding" edge case examples. - function isMatch(rowName, labelName, isRounding) { - // Not filtering at all - if (rowName == "shape") { - return true; - } - if (isRounding == true) { - // No "other" category - return labelName.includes(toOriginalString(rowName)) - ? true - : false; - } else { - // There is an "other" category, prefixed by "rt_" - if (labelName == toOriginalString(rowName)) { - return true; - } else if ( - labelName.includes("rt_") && - rowName == "other shapes" - ) { - return true; - } - return false; - } - } - - // Color the last row of each table - function getColor(d, i) { - if (i != 3) { - // not last index - return "#e6e6e6"; - } else { - var scaleRowValue = d3 - .scaleLinear() - .domain([0.3, 1.0]) - .range([0, 1]); - return d3.interpolateRdYlGn(scaleRowValue(d)); - } - } - - // Adjust text color for visibility - function getTextColor(d, i) { - if (i != 3) { - // not last index - return "#000000"; - } else { - var bgColor = getColor(d, i); - if (d < 0.3) { - // Alternative: use a brighter color? - // return d3.rgb(bgColor).brighter(-2); - return "#FFCCD8"; - } else { - // Alternative: use a darker color? 
- // return d3.rgb(bgColor).darker(2); - return "#000000"; - } - } - } - - // Draw results table - var tableSel = c.layers[0] - .html("") - .raise() - .st({ width: 400 }) - .append("div") - .translate([0, 10]) - .append("table.results-table.monospace") - .st({ width: 400 }); - - var header = tableSel - .append("thead") - .append("tr") - .appendMany("th", columns) - .text((d) => d); - - var rowSel = tableSel - .appendMany("tr", results) - .at({ - class: "row monospace", - }) - .on("mouseover", (row) => { - if (classifier.class == "default-classifier") { - return; - } - rowSel.classed("active", (d) => d == row); - shapeSel.classed("shape-row-unhighlighted", function (d) { - return !isMatch( - row.object, - d[classifier.usingLabel], - (isRounding = classifier.isRounding) - ); - }); - }) - .on("mouseout", (row) => { - rowSel.classed("active", function (d) { - if (d == row) { - return false; - } - }); - if (classifier.isClassified) { - shapeSel.classed("shape-row-unhighlighted", 0); - } - }); - - rowSel - .appendMany("td", (result) => - columns.map((column) => result[column]) - ) - .text((d) => d) - .st({ - backgroundColor: getColor, - color: getTextColor, - }); - - header.style("opacity", 0); - rowSel.style("opacity", 0); - - // If the classifier has already been run before, draw results right away. - // Otherwise, wait for other animation to run before drawing results. - var initialDelay = classifier.wasClassified ? 0 : 2000; - classifier.wasClassified = true; - - header - .transition() - .delay(initialDelay) - .duration(1000) - .style("opacity", 1); - rowSel - .transition() - .delay(function (d, i) { - return initialDelay + i * 200; - }) - .duration(1000) - .style("opacity", 1); - } - - // Draw the dropdowns for selecting different labels - function drawDropdown() { - if (!classifier.options) return; - - ["rounding", "category"].forEach(function (classifierType) { - if (!classifier.options[classifierType]) return; - var sel = d3 - .select("#" + classifier.class + "-select-" + classifierType) - .html(""); - sel.classed("dropdown", true); - sel.appendMany("option", classifier.options[classifierType]) - .at({ - value: function (d) { - return d.value; - }, - }) - .text((d) => d.label); - sel.on("change", function () { - if (classifierType == "rounding") { - classifier.isRounding = toBool(this.value); - } else { - classifier.usingLabel = this.value; - } - updateResults(); - drawResults(); - }); - }); - } - drawDropdown(); - updateResults(); - drawResults(); - - // For continuity, auto-run the second two classifiers - if ( - classifier.class == "second-classifier" || - classifier.class == "final-classifier" - ) { - runClassifier(); - } -} - -// Draw the "Labels Tell Stories" section -function drawConclusion() { - function drawNewspapers() { - d3.select(".conclusion-newspapers").html(function () { - var imgPath = - "img/newspapers_" + - document.getElementById("conclusion-select-category").value; - return ( - 'Newspapers with headlines about bias and fairness in shape data.' - ); - }); - } - - function drawInterface() { - d3.select(".conclusion-interface").html(function () { - var imgPath = - "img/confusing_" + - document.getElementById("conclusion-select-category").value; - return ( - '
A shape that is difficult to classify with several checkboxes, none of which describe the shape. Next to the interface is a text box with a single question mark in it.
' - ); - }); - } - - function drawConclusionSummary() { - classifierSel = d3 - .select(".conclusion-summary") - .html(summaries["conclusion"]); - classifierSel.classed("summary-text is-classified", true); - } - - function drawDropdown() { - var sel = d3.select("#conclusion-select-category").html(""); - sel.classed("dropdown", true); - sel.appendMany("option", conclusionOptions.category) - .at({ - value: function (d) { - return d.value; - }, - }) - .text((d) => d.label); - // sel.attr('select', 'circles, triangles, and rectangles'); - sel.on("change", function (d) { - makeConclusionUpdates(); - }); - } - - function makeConclusionUpdates() { - updateResults(); - drawNewspapers(); - drawInterface(); - drawConclusionSummary(); - } - drawDropdown(); - makeConclusionUpdates(); -} - -// Handle the parameters everywhere classifiers are drawn -var classifiers = [ - { - // Just the initial display of shapes, not interactive - class: "show-shapes", - colorBy: (d) => d.correctness, - isClassified: false, - isRounding: false, - usingLabel: "none", - }, - { - class: "default-classifier", - colorBy: (d) => d.correctness, - isClassified: false, - isRounding: false, - usingLabel: "none", - }, - { - class: "second-classifier", - colorBy: (d) => d.correctness, - isClassified: false, - isRounding: true, - usingLabel: "shape_name", - options: { - rounding: [ - { label: "with their best guess", value: true }, - { label: 'as "other"', value: false }, - ], - }, - }, - { - class: "final-classifier", - colorBy: (d) => d.correctness, - isClassified: false, - isRounding: true, - usingLabel: "shape_name", - options: { - rounding: [ - { label: "with our best guess", value: true }, - { label: 'as "other"', value: false }, - ], - category: [ - { - label: "circles, triangles, or rectangles", - value: "shape_name", - }, - { label: "pointy shapes or round shapes", value: "pointiness" }, - { label: "small shapes or big shapes", value: "size" }, - { label: "just shapes", value: "none" }, - ], - }, - }, -]; - -// "Labels Tell Stories" dropdown options -var conclusionOptions = { - category: [ - { label: "circles, triangles, and rectangles", value: "shape_name" }, - { label: "pointy shapes and round shapes", value: "pointiness" }, - { label: "small shapes and big shapes", value: "size" }, - ], -}; - -classifiers.forEach(drawShapesWithData); -drawConclusion(); - -// These images are loaded invisibly so they appear seamlessly on dropdown change -const preloadImages = [ - "img/confusing_pointiness.png", - "img/confusing_pointiness.svg", - "img/confusing_shape_name.png", - "img/confusing_shape_name.svg", - "img/confusing_size.png", - "img/confusing_size.svg", - "img/interface_default.png", - "img/interface_default.svg", - "img/interface_shape_name_false.png", - "img/interface_shape_name_false.svg", - "img/interface_shape_name_true.png", - "img/interface_shape_name_true.svg", - "img/newspapers_pointiness.png", - "img/newspapers_pointiness.svg", - "img/newspapers_shape_name.png", - "img/newspapers_shape_name.svg", - "img/newspapers_size.png", - "img/newspapers_size.svg", -]; - -d3.select(".preload-dropdown-img") - .html("") - .appendMany("img", preloadImages) - .at({ src: (d) => d }); diff --git a/spaces/merve/uncertainty-calibration/server-side/fill-in-the-blank/py/main.py b/spaces/merve/uncertainty-calibration/server-side/fill-in-the-blank/py/main.py deleted file mode 100644 index 2ac15bda96de733df52cd7730895ae18baf20529..0000000000000000000000000000000000000000 --- 
a/spaces/merve/uncertainty-calibration/server-side/fill-in-the-blank/py/main.py +++ /dev/null @@ -1,59 +0,0 @@ -import os -import json -import shutil - -from flask import Flask, request -from flask_cors import CORS - -import model_bert_large -import model_bert_zari_cda - -app = Flask(__name__) -CORS(app) - - -@app.route('/') -def hello_world(): - name = os.environ.get('NAME', 'Test') - print('[Hello]') - return 'Hello {}!'.format(name) - - -@app.route('/embed_test') -def embed_test(): - sentence = 'The dog went to the [MASK].' - print('[TEST] ', sentence) - return json.dumps(model_bert_large.get_embeddings(sentence)) - - -@app.route('/embed', methods=['POST']) -def embed(): - data = json.loads(request.data) - sentence = data['sentence'] - print('[BASE] ' + sentence) - return json.dumps(model_bert_large.get_embeddings(sentence)) - -@app.route('/embed_zari_cda', methods=['POST']) -def embed_zari_cda(): - data = json.loads(request.data) - sentence = data['sentence'] - print('[ZARI] ' + sentence) - return json.dumps(model_bert_zari_cda.get_embeddings(sentence)) - - -@app.route('/embed_group_top', methods=['POST']) -def embed_group_top(): - data = json.loads(request.data) - tokens = data['tokens'] - return json.dumps(model_bert_large.get_embedding_group_top(tokens)) - -@app.route('/get_embedding_group_top_low_mem', methods=['POST']) -def embed_group(): - data = json.loads(request.data) - tokens = data['tokens'] - return json.dumps(model_bert_large.get_embedding_group(tokens)) - -if __name__ == '__main__': - app.run(debug=True, host='0.0.0.0', port=int(os.environ.get('PORT', 5004))) - - diff --git a/spaces/mfrashad/CharacterGAN/models/stylegan/stylegan_tf/dnnlib/tflib/autosummary.py b/spaces/mfrashad/CharacterGAN/models/stylegan/stylegan_tf/dnnlib/tflib/autosummary.py deleted file mode 100644 index 43154f792e5ebe15ee6045a5acdfb279cebefcaa..0000000000000000000000000000000000000000 --- a/spaces/mfrashad/CharacterGAN/models/stylegan/stylegan_tf/dnnlib/tflib/autosummary.py +++ /dev/null @@ -1,184 +0,0 @@ -# Copyright (c) 2019, NVIDIA CORPORATION. All rights reserved. -# -# This work is licensed under the Creative Commons Attribution-NonCommercial -# 4.0 International License. To view a copy of this license, visit -# http://creativecommons.org/licenses/by-nc/4.0/ or send a letter to -# Creative Commons, PO Box 1866, Mountain View, CA 94042, USA. - -"""Helper for adding automatically tracked values to Tensorboard. - -Autosummary creates an identity op that internally keeps track of the input -values and automatically shows up in TensorBoard. The reported value -represents an average over input components. The average is accumulated -constantly over time and flushed when save_summaries() is called. - -Notes: -- The output tensor must be used as an input for something else in the - graph. Otherwise, the autosummary op will not get executed, and the average - value will not get accumulated. -- It is perfectly fine to include autosummaries with the same name in - several places throughout the graph, even if they are executed concurrently. -- It is ok to also pass in a python scalar or numpy array. In this case, it - is added to the average immediately. -""" - -from collections import OrderedDict -import numpy as np -import tensorflow as tf -from tensorboard import summary as summary_lib -from tensorboard.plugins.custom_scalar import layout_pb2 - -from . import tfutil -from .tfutil import TfExpression -from .tfutil import TfExpressionEx - -_dtype = tf.float64 -_vars = OrderedDict() # name => [var, ...] 
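
For context, a minimal sketch of how the autosummary helper above is typically driven, assuming TF1.x and the StyleGAN-era dnnlib.tflib package; the scalar name, log directory, and loop below are illustrative placeholders, not part of the deleted file:

    import tensorflow as tf
    from dnnlib import tflib

    tflib.init_tf()                                            # creates a default TF1 session
    x = tf.placeholder(tf.float32, [])
    tracked = tflib.autosummary.autosummary('Metrics/x', x)    # identity op that also accumulates a running average
    writer = tf.summary.FileWriter('/tmp/autosummary-demo')

    sess = tf.get_default_session()
    for step in range(3):
        sess.run(tracked, {x: float(step)})                    # executing the op feeds the accumulator
    tflib.autosummary.save_summaries(writer, global_step=3)    # finalize/merge once, then flush the averaged scalars
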
-_immediate = OrderedDict() # name => update_op, update_value -_finalized = False -_merge_op = None - - -def _create_var(name: str, value_expr: TfExpression) -> TfExpression: - """Internal helper for creating autosummary accumulators.""" - assert not _finalized - name_id = name.replace("/", "_") - v = tf.cast(value_expr, _dtype) - - if v.shape.is_fully_defined(): - size = np.prod(tfutil.shape_to_list(v.shape)) - size_expr = tf.constant(size, dtype=_dtype) - else: - size = None - size_expr = tf.reduce_prod(tf.cast(tf.shape(v), _dtype)) - - if size == 1: - if v.shape.ndims != 0: - v = tf.reshape(v, []) - v = [size_expr, v, tf.square(v)] - else: - v = [size_expr, tf.reduce_sum(v), tf.reduce_sum(tf.square(v))] - v = tf.cond(tf.is_finite(v[1]), lambda: tf.stack(v), lambda: tf.zeros(3, dtype=_dtype)) - - with tfutil.absolute_name_scope("Autosummary/" + name_id), tf.control_dependencies(None): - var = tf.Variable(tf.zeros(3, dtype=_dtype), trainable=False) # [sum(1), sum(x), sum(x**2)] - update_op = tf.cond(tf.is_variable_initialized(var), lambda: tf.assign_add(var, v), lambda: tf.assign(var, v)) - - if name in _vars: - _vars[name].append(var) - else: - _vars[name] = [var] - return update_op - - -def autosummary(name: str, value: TfExpressionEx, passthru: TfExpressionEx = None) -> TfExpressionEx: - """Create a new autosummary. - - Args: - name: Name to use in TensorBoard - value: TensorFlow expression or python value to track - passthru: Optionally return this TF node without modifications but tack an autosummary update side-effect to this node. - - Example use of the passthru mechanism: - - n = autosummary('l2loss', loss, passthru=n) - - This is a shorthand for the following code: - - with tf.control_dependencies([autosummary('l2loss', loss)]): - n = tf.identity(n) - """ - tfutil.assert_tf_initialized() - name_id = name.replace("/", "_") - - if tfutil.is_tf_expression(value): - with tf.name_scope("summary_" + name_id), tf.device(value.device): - update_op = _create_var(name, value) - with tf.control_dependencies([update_op]): - return tf.identity(value if passthru is None else passthru) - - else: # python scalar or numpy array - if name not in _immediate: - with tfutil.absolute_name_scope("Autosummary/" + name_id), tf.device(None), tf.control_dependencies(None): - update_value = tf.placeholder(_dtype) - update_op = _create_var(name, update_value) - _immediate[name] = update_op, update_value - - update_op, update_value = _immediate[name] - tfutil.run(update_op, {update_value: value}) - return value if passthru is None else passthru - - -def finalize_autosummaries() -> None: - """Create the necessary ops to include autosummaries in TensorBoard report. - Note: This should be done only once per graph. - """ - global _finalized - tfutil.assert_tf_initialized() - - if _finalized: - return None - - _finalized = True - tfutil.init_uninitialized_vars([var for vars_list in _vars.values() for var in vars_list]) - - # Create summary ops. 
- with tf.device(None), tf.control_dependencies(None): - for name, vars_list in _vars.items(): - name_id = name.replace("/", "_") - with tfutil.absolute_name_scope("Autosummary/" + name_id): - moments = tf.add_n(vars_list) - moments /= moments[0] - with tf.control_dependencies([moments]): # read before resetting - reset_ops = [tf.assign(var, tf.zeros(3, dtype=_dtype)) for var in vars_list] - with tf.name_scope(None), tf.control_dependencies(reset_ops): # reset before reporting - mean = moments[1] - std = tf.sqrt(moments[2] - tf.square(moments[1])) - tf.summary.scalar(name, mean) - tf.summary.scalar("xCustomScalars/" + name + "/margin_lo", mean - std) - tf.summary.scalar("xCustomScalars/" + name + "/margin_hi", mean + std) - - # Group by category and chart name. - cat_dict = OrderedDict() - for series_name in sorted(_vars.keys()): - p = series_name.split("/") - cat = p[0] if len(p) >= 2 else "" - chart = "/".join(p[1:-1]) if len(p) >= 3 else p[-1] - if cat not in cat_dict: - cat_dict[cat] = OrderedDict() - if chart not in cat_dict[cat]: - cat_dict[cat][chart] = [] - cat_dict[cat][chart].append(series_name) - - # Setup custom_scalar layout. - categories = [] - for cat_name, chart_dict in cat_dict.items(): - charts = [] - for chart_name, series_names in chart_dict.items(): - series = [] - for series_name in series_names: - series.append(layout_pb2.MarginChartContent.Series( - value=series_name, - lower="xCustomScalars/" + series_name + "/margin_lo", - upper="xCustomScalars/" + series_name + "/margin_hi")) - margin = layout_pb2.MarginChartContent(series=series) - charts.append(layout_pb2.Chart(title=chart_name, margin=margin)) - categories.append(layout_pb2.Category(title=cat_name, chart=charts)) - layout = summary_lib.custom_scalar_pb(layout_pb2.Layout(category=categories)) - return layout - -def save_summaries(file_writer, global_step=None): - """Call FileWriter.add_summary() with all summaries in the default graph, - automatically finalizing and merging them on the first call. 
- """ - global _merge_op - tfutil.assert_tf_initialized() - - if _merge_op is None: - layout = finalize_autosummaries() - if layout is not None: - file_writer.add_summary(layout) - with tf.device(None), tf.control_dependencies(None): - _merge_op = tf.summary.merge_all() - - file_writer.add_summary(_merge_op.eval(), global_step) diff --git a/spaces/micole66/Zero-Shot-Classification-Pretrained/README.md b/spaces/micole66/Zero-Shot-Classification-Pretrained/README.md deleted file mode 100644 index 616a1ecf75c15942f4e5759b077c587871ec4268..0000000000000000000000000000000000000000 --- a/spaces/micole66/Zero-Shot-Classification-Pretrained/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Zero Shot Classification Pretrained -emoji: 🦀 -colorFrom: indigo -colorTo: green -sdk: gradio -sdk_version: 3.16.1 -app_file: app.py -pinned: false -license: apache-2.0 -duplicated_from: ajcdp/Zero-Shot-Classification-Pretrained ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/micole66/bloomz/README.md b/spaces/micole66/bloomz/README.md deleted file mode 100644 index 17d6cb4dc14e593e8768e30893563b78c67b39a5..0000000000000000000000000000000000000000 --- a/spaces/micole66/bloomz/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Bloomz -emoji: 👁 -colorFrom: green -colorTo: green -sdk: gradio -sdk_version: 3.5 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/mikeee/radiobee-aligner/tests/test_paras2sents.py b/spaces/mikeee/radiobee-aligner/tests/test_paras2sents.py deleted file mode 100644 index 8e50dcb0634050a0a1d830899976b5d832fcf9e8..0000000000000000000000000000000000000000 --- a/spaces/mikeee/radiobee-aligner/tests/test_paras2sents.py +++ /dev/null @@ -1,40 +0,0 @@ -"""Test paras2sents.""" -# pylint: disable=invalid-name - -import numpy as np -import pandas as pd -from radiobee.paras2sents import paras2sents -from radiobee.shuffle_sents import shuffle_sents - -file_loc = r"data/test-dual-zh-en.xlsx" -paras = pd.read_excel(file_loc, header=0) -paras = paras[["text1", "text2", "likelihood"]].fillna("") - - -def test_paras2sents_dual(): - """Test paras2sents_dual.""" - sents = paras2sents(paras) - - assert np.array(sents).shape.__len__() > 1 - - assert len(sents) > 202 # 208 - # assert not sents - - -def test_paras2sents_dual_model_s(): - """Test paras2sents_dual_model_s.""" - sents1 = paras2sents(paras, shuffle_sents) - - # assert np.array(sents1).shape.__len__() > 1 - assert pd.DataFrame(sents1).shape.__len__() > 1 - - assert len(sents1) > 201 # 207 - # assert not sents - - -_ = """ -df = pd.DataFrame( - [list(sent) + [""] if len(sent) == 2 else list(sent) for sent in sents] -).fillna("") - -""" diff --git a/spaces/mikeee/radiobee-dev/start-radiobee.bat b/spaces/mikeee/radiobee-dev/start-radiobee.bat deleted file mode 100644 index e1f2d7cd1ca0b113afb8529e96f480e195c1e457..0000000000000000000000000000000000000000 --- a/spaces/mikeee/radiobee-dev/start-radiobee.bat +++ /dev/null @@ -1 +0,0 @@ -start "radiobee" run-radiobee \ No newline at end of file diff --git a/spaces/ml-energy/leaderboard/deployment/controller.Dockerfile b/spaces/ml-energy/leaderboard/deployment/controller.Dockerfile deleted file mode 100644 index 7ee82668cb6b8b82a3bd00fa72d2093a52874abb..0000000000000000000000000000000000000000 --- a/spaces/ml-energy/leaderboard/deployment/controller.Dockerfile +++ /dev/null @@ -1,30 +0,0 @@ -FROM ubuntu:22.04 - -# Basic 
installs -ARG DEBIAN_FRONTEND=noninteractive -ENV TZ='America/Detroit' -RUN apt-get update -qq \ - && apt-get -y --no-install-recommends install \ - tzdata software-properties-common wget git \ - && apt-get clean all \ - && rm -r /var/lib/apt/lists/* \ - && ln -fs /usr/share/zoneinfo/America/Detroit /etc/localtime \ - && dpkg-reconfigure -f noninteractive tzdata - -# Install Miniconda3 23.3.1 -ENV PATH="/root/.local/miniconda3/bin:$PATH" -RUN mkdir -p /root/.local \ - && wget https://repo.anaconda.com/miniconda/Miniconda3-py39_23.3.1-0-Linux-x86_64.sh \ - && mkdir /root/.conda \ - && bash Miniconda3-py39_23.3.1-0-Linux-x86_64.sh -b -p /root/.local/miniconda3 \ - && rm -f Miniconda3-py39_23.3.1-0-Linux-x86_64.sh \ - && ln -sf /root/.local/miniconda3/etc/profile.d/conda.sh /etc/profile.d/conda.sh - -# Install spitfight -ADD . /workspace/leaderboard -RUN cd /workspace/leaderboard \ - && pip install -e .[colosseum-controller] - -WORKDIR /workspace/leaderboard - -CMD ["python", "spitfight/colosseum/controller/router.py"] diff --git a/spaces/ml-energy/leaderboard/spitfight/colosseum/controller/router.py b/spaces/ml-energy/leaderboard/spitfight/colosseum/controller/router.py deleted file mode 100644 index 2d2493e35c9b3ac2b81a7222da09682dfffbf24f..0000000000000000000000000000000000000000 --- a/spaces/ml-energy/leaderboard/spitfight/colosseum/controller/router.py +++ /dev/null @@ -1,136 +0,0 @@ -import os -import json - -import uvicorn -from pydantic import BaseSettings -from fastapi import FastAPI, Depends -from fastapi.responses import StreamingResponse -from fastapi.exceptions import HTTPException -from text_generation.errors import OverloadedError, UnknownError, ValidationError - -from spitfight.log import get_logger, init_queued_root_logger, shutdown_queued_root_loggers -from spitfight.colosseum.common import ( - COLOSSEUM_MODELS_ROUTE, - COLOSSEUM_PROMPT_ROUTE, - COLOSSEUM_RESP_VOTE_ROUTE, - COLOSSEUM_ENERGY_VOTE_ROUTE, - COLOSSEUM_HEALTH_ROUTE, - ModelsResponse, - PromptRequest, - ResponseVoteRequest, - ResponseVoteResponse, - EnergyVoteRequest, - EnergyVoteResponse, -) -from spitfight.colosseum.controller.controller import ( - Controller, - init_global_controller, - get_global_controller, -) -from spitfight.utils import prepend_generator - - -class ControllerConfig(BaseSettings): - """Controller settings automatically loaded from environment variables.""" - # Controller - background_task_interval: int = 300 - max_num_req_states: int = 10000 - req_state_expiration_time: int = 600 - compose_files: list[str] = ["deployment/docker-compose-0.yaml", "deployment/docker-compose-1.yaml"] - - # Logging - log_dir: str = "/logs" - controller_log_file: str = "controller.log" - request_log_file: str = "requests.log" - uvicorn_log_file: str = "uvicorn.log" - - # Generation - max_new_tokens: int = 512 - do_sample: bool = True - temperature: float = 1.0 - repetition_penalty: float = 1.0 - top_k: int = 50 - top_p: float = 0.95 - - -app = FastAPI() -settings = ControllerConfig() -logger = get_logger("spitfight.colosseum.controller.router") - -@app.on_event("startup") -async def startup_event(): - init_queued_root_logger("uvicorn", os.path.join(settings.log_dir, settings.uvicorn_log_file)) - init_queued_root_logger("spitfight.colosseum.controller", os.path.join(settings.log_dir, settings.controller_log_file)) - init_queued_root_logger("colosseum_requests", os.path.join(settings.log_dir, settings.request_log_file)) - init_global_controller(settings) - -@app.on_event("shutdown") -async def shutdown_event(): - 
get_global_controller().shutdown() - shutdown_queued_root_loggers() - -@app.get(COLOSSEUM_MODELS_ROUTE, response_model=ModelsResponse) -async def models(controller: Controller = Depends(get_global_controller)): - return ModelsResponse(available_models=controller.get_available_models()) - -@app.post(COLOSSEUM_PROMPT_ROUTE) -async def prompt( - request: PromptRequest, - controller: Controller = Depends(get_global_controller), -): - generator = controller.prompt( - request.request_id, - request.prompt, - request.model_index, - request.model_preference, - ) - - # First try to get the first token in order to catch TGI errors. - try: - first_token = await generator.__anext__() - except OverloadedError: - name = controller.request_states[request.request_id].model_names[request.model_index] - logger.warning("Model %s is overloaded. Failed request: %s", name, repr(request)) - raise HTTPException(status_code=429, detail="Model overloaded. Pleaes try again later.") - except ValidationError as e: - logger.info("TGI returned validation error: %s. Failed request: %s", str(e), repr(request)) - raise HTTPException(status_code=422, detail=str(e)) - except StopAsyncIteration: - logger.info("TGI returned empty response. Failed request: %s", repr(request)) - return StreamingResponse( - iter([json.dumps("*The model generated an empty response.*").encode() + b"\0"]), - ) - except UnknownError as e: - logger.error("TGI returned unknown error: %s. Failed request: %s", str(e), repr(request)) - raise HTTPException(status_code=500, detail=str(e)) - - return StreamingResponse(prepend_generator(first_token, generator)) - -@app.post(COLOSSEUM_RESP_VOTE_ROUTE, response_model=ResponseVoteResponse) -async def response_vote( - request: ResponseVoteRequest, - controller: Controller = Depends(get_global_controller), -): - if (state := controller.response_vote(request.request_id, request.victory_index)) is None: - raise HTTPException(status_code=410, detail="Colosseum battle session timeout expired.") - return ResponseVoteResponse( - energy_consumptions=state.energy_consumptions, - model_names=state.model_names, - ) - -@app.post(COLOSSEUM_ENERGY_VOTE_ROUTE, response_model=EnergyVoteResponse) -async def energy_vote( - request: EnergyVoteRequest, - controller: Controller = Depends(get_global_controller), -): - if (state := controller.energy_vote(request.request_id, request.is_worth)) is None: - raise HTTPException(status_code=410, detail="Colosseum battle session timeout expired.") - return EnergyVoteResponse(model_names=state.model_names) - -@app.get(COLOSSEUM_HEALTH_ROUTE) -async def health(): - return "OK" - - -if __name__ == "__main__": - uvicorn.run(app, host="0.0.0.0", log_config=None) diff --git a/spaces/mofu-team/ggl-chk/README.md b/spaces/mofu-team/ggl-chk/README.md deleted file mode 100644 index 6ebe04a6e6c90c829340e54d73cde67708728227..0000000000000000000000000000000000000000 --- a/spaces/mofu-team/ggl-chk/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Ggl Chk -emoji: 🏃 -colorFrom: purple -colorTo: gray -sdk: gradio -sdk_version: 3.42.0 -app_file: app.py -pinned: false -license: wtfpl ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/monra/freegpt-webui/g4f/Provider/Providers/Yqcloud.py b/spaces/monra/freegpt-webui/g4f/Provider/Providers/Yqcloud.py deleted file mode 100644 index ad5c3a4326c68ceb7ee012fbf5bc072da72a7e40..0000000000000000000000000000000000000000 --- a/spaces/monra/freegpt-webui/g4f/Provider/Providers/Yqcloud.py +++ 
/dev/null @@ -1,39 +0,0 @@ -import os -import time -import requests - -from ...typing import sha256, Dict, get_type_hints -url = 'https://chat9.yqcloud.top/' -model = [ - 'gpt-3.5-turbo', -] -supports_stream = True -needs_auth = False - - -def _create_completion(model: str, messages: list, stream: bool, chatId: str, **kwargs): - - headers = { - 'authority': 'api.aichatos.cloud', - 'origin': 'https://chat9.yqcloud.top', - 'referer': 'https://chat9.yqcloud.top/', - 'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', - } - - json_data = { - 'prompt': str(messages), - 'userId': f'#/chat/{chatId}', - 'network': True, - 'apikey': '', - 'system': '', - 'withoutContext': False, - } - response = requests.post('https://api.aichatos.cloud/api/generateStream', - headers=headers, json=json_data, stream=True) - for token in response.iter_content(chunk_size=2046): - yield (token.decode('utf-8')) - - -params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \ - '(%s)' % ', '.join( - [f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]]) \ No newline at end of file diff --git a/spaces/mrosinski/risk-predictor/README.md b/spaces/mrosinski/risk-predictor/README.md deleted file mode 100644 index 3855023512b1d27a93d297d8345a1744173e4bce..0000000000000000000000000000000000000000 --- a/spaces/mrosinski/risk-predictor/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Risk Predictor -emoji: 🦀 -colorFrom: red -colorTo: yellow -sdk: gradio -sdk_version: 3.0.21 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ms180/espnet_onnx_demo/template/index.html b/spaces/ms180/espnet_onnx_demo/template/index.html deleted file mode 100644 index 8fee4e85ccf9672aa2c4ee75e2a5c75d416312f1..0000000000000000000000000000000000000000 --- a/spaces/ms180/espnet_onnx_demo/template/index.html +++ /dev/null @@ -1 +0,0 @@ -espnet_onnx_demo
\ No newline at end of file diff --git a/spaces/mshukor/UnIVAL/fairseq/fairseq/criterions/__init__.py b/spaces/mshukor/UnIVAL/fairseq/fairseq/criterions/__init__.py deleted file mode 100644 index 4dbf46a1cb31ce65c4224ae79cbc2d7cf9e4d111..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/fairseq/criterions/__init__.py +++ /dev/null @@ -1,36 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -"""isort:skip_file""" - -import importlib -import os - -from fairseq import registry -from fairseq.criterions.fairseq_criterion import ( # noqa - FairseqCriterion, - LegacyFairseqCriterion, -) -from omegaconf import DictConfig - - -( - build_criterion_, - register_criterion, - CRITERION_REGISTRY, - CRITERION_DATACLASS_REGISTRY, -) = registry.setup_registry( - "--criterion", base_class=FairseqCriterion, default="cross_entropy" -) - - -def build_criterion(cfg: DictConfig, task): - return build_criterion_(cfg, task) - - -# automatically import any Python files in the criterions/ directory -for file in sorted(os.listdir(os.path.dirname(__file__))): - if file.endswith(".py") and not file.startswith("_"): - file_name = file[: file.find(".py")] - importlib.import_module("fairseq.criterions." + file_name) diff --git a/spaces/mshukor/UnIVAL/fairseq/fairseq/data/audio/feature_transforms/global_cmvn.py b/spaces/mshukor/UnIVAL/fairseq/fairseq/data/audio/feature_transforms/global_cmvn.py deleted file mode 100644 index e457ff176fee3b996da11f47e7dc61b81c445ba3..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/fairseq/data/audio/feature_transforms/global_cmvn.py +++ /dev/null @@ -1,29 +0,0 @@ -import numpy as np -from fairseq.data.audio.feature_transforms import ( - AudioFeatureTransform, - register_audio_feature_transform, -) - - -@register_audio_feature_transform("global_cmvn") -class GlobalCMVN(AudioFeatureTransform): - """Global CMVN (cepstral mean and variance normalization). 
The global mean - and variance need to be pre-computed and stored in NumPy format (.npz).""" - - @classmethod - def from_config_dict(cls, config=None): - _config = {} if config is None else config - return GlobalCMVN(_config.get("stats_npz_path")) - - def __init__(self, stats_npz_path): - self.stats_npz_path = stats_npz_path - stats = np.load(stats_npz_path) - self.mean, self.std = stats["mean"], stats["std"] - - def __repr__(self): - return self.__class__.__name__ + f'(stats_npz_path="{self.stats_npz_path}")' - - def __call__(self, x): - x = np.subtract(x, self.mean) - x = np.divide(x, self.std) - return x diff --git a/spaces/msmilauer/AutoGPT-duplicated2/autogpt/json_utils/json_fix_general.py b/spaces/msmilauer/AutoGPT-duplicated2/autogpt/json_utils/json_fix_general.py deleted file mode 100644 index 7010fa3b9c1909de0e5a7f6ec13ca8aa418fe6c7..0000000000000000000000000000000000000000 --- a/spaces/msmilauer/AutoGPT-duplicated2/autogpt/json_utils/json_fix_general.py +++ /dev/null @@ -1,124 +0,0 @@ -"""This module contains functions to fix JSON strings using general programmatic approaches, suitable for addressing -common JSON formatting issues.""" -from __future__ import annotations - -import contextlib -import json -import re -from typing import Optional - -from autogpt.config import Config -from autogpt.json_utils.utilities import extract_char_position - -CFG = Config() - - -def fix_invalid_escape(json_to_load: str, error_message: str) -> str: - """Fix invalid escape sequences in JSON strings. - - Args: - json_to_load (str): The JSON string. - error_message (str): The error message from the JSONDecodeError - exception. - - Returns: - str: The JSON string with invalid escape sequences fixed. - """ - while error_message.startswith("Invalid \\escape"): - bad_escape_location = extract_char_position(error_message) - json_to_load = ( - json_to_load[:bad_escape_location] + json_to_load[bad_escape_location + 1 :] - ) - try: - json.loads(json_to_load) - return json_to_load - except json.JSONDecodeError as e: - if CFG.debug_mode: - print("json loads error - fix invalid escape", e) - error_message = str(e) - return json_to_load - - -def balance_braces(json_string: str) -> Optional[str]: - """ - Balance the braces in a JSON string. - - Args: - json_string (str): The JSON string. - - Returns: - str: The JSON string with braces balanced. - """ - - open_braces_count = json_string.count("{") - close_braces_count = json_string.count("}") - - while open_braces_count > close_braces_count: - json_string += "}" - close_braces_count += 1 - - while close_braces_count > open_braces_count: - json_string = json_string.rstrip("}") - close_braces_count -= 1 - - with contextlib.suppress(json.JSONDecodeError): - json.loads(json_string) - return json_string - - -def add_quotes_to_property_names(json_string: str) -> str: - """ - Add quotes to property names in a JSON string. - - Args: - json_string (str): The JSON string. - - Returns: - str: The JSON string with quotes added to property names. - """ - - def replace_func(match: re.Match) -> str: - return f'"{match[1]}":' - - property_name_pattern = re.compile(r"(\w+):") - corrected_json_string = property_name_pattern.sub(replace_func, json_string) - - try: - json.loads(corrected_json_string) - return corrected_json_string - except json.JSONDecodeError as e: - raise e - - -def correct_json(json_to_load: str) -> str: - """ - Correct common JSON errors. - Args: - json_to_load (str): The JSON string. 
- """ - - try: - if CFG.debug_mode: - print("json", json_to_load) - json.loads(json_to_load) - return json_to_load - except json.JSONDecodeError as e: - if CFG.debug_mode: - print("json loads error", e) - error_message = str(e) - if error_message.startswith("Invalid \\escape"): - json_to_load = fix_invalid_escape(json_to_load, error_message) - if error_message.startswith( - "Expecting property name enclosed in double quotes" - ): - json_to_load = add_quotes_to_property_names(json_to_load) - try: - json.loads(json_to_load) - return json_to_load - except json.JSONDecodeError as e: - if CFG.debug_mode: - print("json loads error - add quotes", e) - error_message = str(e) - if balanced_str := balance_braces(json_to_load): - return balanced_str - return json_to_load diff --git a/spaces/mukish45/Coconut_Grade_Classification/app.py b/spaces/mukish45/Coconut_Grade_Classification/app.py deleted file mode 100644 index fb1b895b26957a64075a1e4ec44e5c6fcef3b6e2..0000000000000000000000000000000000000000 --- a/spaces/mukish45/Coconut_Grade_Classification/app.py +++ /dev/null @@ -1,15 +0,0 @@ -import tensorflow -from tensorflow import keras -import gradio as gr - -model = keras.models.load_model('mymodel.h5') -potato_classes = ['Grade_A','Grade_B','Grade_C'] - -def predict_input_image(img): - img_3d=img.reshape(-1,256,256,3) - prediction=model.predict(img_3d)[0] - return {potato_classes[i]: float(prediction[i]) for i in range(3)} - -image = gr.inputs.Image(shape=(256,256)) -label = gr.outputs.Label(num_top_classes=3) -gr.Interface(fn=predict_input_image, inputs=image, outputs=label,interpretation='default').launch() \ No newline at end of file diff --git a/spaces/nateraw/lavila/lavila/models/gpt2_gated.py b/spaces/nateraw/lavila/lavila/models/gpt2_gated.py deleted file mode 100644 index d9c06d1e6f9b8b08e586c30c8a62850827fb5f64..0000000000000000000000000000000000000000 --- a/spaces/nateraw/lavila/lavila/models/gpt2_gated.py +++ /dev/null @@ -1,1615 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -# Part of the code is from https://github.com/huggingface/transformers/blob/main/src/transformers/models/gpt2/modeling_gpt2.py -# Modified by Yue Zhao -# -# -# coding=utf-8 -# Copyright 2018 The OpenAI Team Authors and HuggingFace Inc. team. -# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-"""PyTorch OpenAI GPT-2 model.""" - -import copy -import math -import os -from dataclasses import dataclass -from typing import Optional, Tuple, Union - -import torch -import torch.utils.checkpoint -from packaging import version -from torch import nn -from torch.nn import BCEWithLogitsLoss, CrossEntropyLoss, MSELoss - - -if version.parse(torch.__version__) >= version.parse("1.6"): - is_amp_available = True - from torch.cuda.amp import autocast -else: - is_amp_available = False - -from transformers.activations import ACT2FN -from transformers.modeling_outputs import ( - BaseModelOutputWithPastAndCrossAttentions, - CausalLMOutputWithCrossAttentions, - SequenceClassifierOutputWithPast, - TokenClassifierOutput, -) -from transformers.modeling_utils import PreTrainedModel, SequenceSummary -from transformers.pytorch_utils import Conv1D, find_pruneable_heads_and_indices, prune_conv1d_layer -from transformers.utils import ( - ModelOutput, - add_code_sample_docstrings, - add_start_docstrings, - add_start_docstrings_to_model_forward, - logging, - replace_return_docstrings, -) -from transformers.utils.model_parallel_utils import assert_device_map, get_device_map -from transformers.models.gpt2.configuration_gpt2 import GPT2Config - - -logger = logging.get_logger(__name__) - -_CHECKPOINT_FOR_DOC = "gpt2" -_CONFIG_FOR_DOC = "GPT2Config" -_TOKENIZER_FOR_DOC = "GPT2Tokenizer" - -GPT2_PRETRAINED_MODEL_ARCHIVE_LIST = [ - "gpt2", - "gpt2-medium", - "gpt2-large", - "gpt2-xl", - "distilgpt2", - # See all GPT-2 models at https://huggingface.co/models?filter=gpt2 -] - - -def augment_gpt2_config(config, cross_attn_freq=1, gated_xattn=True): - new_config = copy.deepcopy(config) - new_config.add_cross_attention = True - new_config.add_cross_attention_freq = cross_attn_freq - new_config.is_tanh_gating = gated_xattn - return new_config - - -def load_tf_weights_in_gpt2(model, config, gpt2_checkpoint_path): - """Load tf checkpoints in a pytorch model""" - try: - import re - - import tensorflow as tf - except ImportError: - logger.error( - "Loading a TensorFlow model in PyTorch, requires TensorFlow to be installed. Please see " - "https://www.tensorflow.org/install/ for installation instructions." 
- ) - raise - tf_path = os.path.abspath(gpt2_checkpoint_path) - logger.info(f"Converting TensorFlow checkpoint from {tf_path}") - # Load weights from TF model - init_vars = tf.train.list_variables(tf_path) - names = [] - arrays = [] - for name, shape in init_vars: - logger.info(f"Loading TF weight {name} with shape {shape}") - array = tf.train.load_variable(tf_path, name) - names.append(name) - arrays.append(array.squeeze()) - - for name, array in zip(names, arrays): - name = name[6:] # skip "model/" - name = name.split("/") - pointer = model - for m_name in name: - if re.fullmatch(r"[A-Za-z]+\d+", m_name): - scope_names = re.split(r"(\d+)", m_name) - else: - scope_names = [m_name] - if scope_names[0] == "w" or scope_names[0] == "g": - pointer = getattr(pointer, "weight") - elif scope_names[0] == "b": - pointer = getattr(pointer, "bias") - elif scope_names[0] == "wpe" or scope_names[0] == "wte": - pointer = getattr(pointer, scope_names[0]) - pointer = getattr(pointer, "weight") - else: - pointer = getattr(pointer, scope_names[0]) - if len(scope_names) >= 2: - num = int(scope_names[1]) - pointer = pointer[num] - try: - assert ( - pointer.shape == array.shape - ), f"Pointer shape {pointer.shape} and array shape {array.shape} mismatched" - except AssertionError as e: - e.args += (pointer.shape, array.shape) - raise - logger.info(f"Initialize PyTorch weight {name}") - pointer.data = torch.from_numpy(array) - return model - - -class GPT2Attention(nn.Module): - def __init__(self, config, is_cross_attention=False, layer_idx=None): - super().__init__() - - max_positions = config.max_position_embeddings - self.register_buffer( - "bias", - torch.tril(torch.ones((max_positions, max_positions), dtype=torch.uint8)).view( - 1, 1, max_positions, max_positions - ), - ) - self.register_buffer("masked_bias", torch.tensor(-1e4)) - - self.embed_dim = config.hidden_size - self.num_heads = config.num_attention_heads - self.head_dim = self.embed_dim // self.num_heads - self.split_size = self.embed_dim - if self.head_dim * self.num_heads != self.embed_dim: - raise ValueError( - f"`embed_dim` must be divisible by num_heads (got `embed_dim`: {self.embed_dim} and `num_heads`: {self.num_heads})." 
- ) - - self.scale_attn_weights = config.scale_attn_weights - self.is_cross_attention = is_cross_attention - - # Layer-wise attention scaling, reordering, and upcasting - self.scale_attn_by_inverse_layer_idx = config.scale_attn_by_inverse_layer_idx - self.layer_idx = layer_idx - self.reorder_and_upcast_attn = config.reorder_and_upcast_attn - - if self.is_cross_attention: - self.c_attn = Conv1D(2 * self.embed_dim, self.embed_dim) - self.q_attn = Conv1D(self.embed_dim, self.embed_dim) - else: - self.c_attn = Conv1D(3 * self.embed_dim, self.embed_dim) - self.c_proj = Conv1D(self.embed_dim, self.embed_dim) - - self.attn_dropout = nn.Dropout(config.attn_pdrop) - self.resid_dropout = nn.Dropout(config.resid_pdrop) - - self.pruned_heads = set() - - def prune_heads(self, heads): - if len(heads) == 0: - return - heads, index = find_pruneable_heads_and_indices(heads, self.num_heads, self.head_dim, self.pruned_heads) - index_attn = torch.cat([index, index + self.split_size, index + (2 * self.split_size)]) - - # Prune conv1d layers - self.c_attn = prune_conv1d_layer(self.c_attn, index_attn, dim=1) - self.c_proj = prune_conv1d_layer(self.c_proj, index, dim=0) - - # Update hyper params - self.split_size = (self.split_size // self.num_heads) * (self.num_heads - len(heads)) - self.num_heads = self.num_heads - len(heads) - self.pruned_heads = self.pruned_heads.union(heads) - - def _attn(self, query, key, value, attention_mask=None, head_mask=None): - attn_weights = torch.matmul(query, key.transpose(-1, -2)) - - if self.scale_attn_weights: - attn_weights = attn_weights / (value.size(-1) ** 0.5) - - # Layer-wise attention scaling - if self.scale_attn_by_inverse_layer_idx: - attn_weights = attn_weights / float(self.layer_idx + 1) - - if not self.is_cross_attention: - # if only "normal" attention layer implements causal mask - query_length, key_length = query.size(-2), key.size(-2) - causal_mask = self.bias[:, :, key_length - query_length: key_length, :key_length].bool() - attn_weights = torch.where(causal_mask, attn_weights, self.masked_bias.to(attn_weights.dtype)) - - if attention_mask is not None: - # Apply the attention mask - attn_weights = attn_weights + attention_mask - - attn_weights = nn.functional.softmax(attn_weights, dim=-1) - - # Downcast (if necessary) back to V's dtype (if in mixed-precision) -- No-Op otherwise - attn_weights = attn_weights.type(value.dtype) - attn_weights = self.attn_dropout(attn_weights) - - # Mask heads if we want to - if head_mask is not None: - attn_weights = attn_weights * head_mask - - attn_output = torch.matmul(attn_weights, value) - - return attn_output, attn_weights - - def _upcast_and_reordered_attn(self, query, key, value, attention_mask=None, head_mask=None): - # Use `torch.baddbmm` (a bit more efficient w/ alpha param for scaling -- from Megatron-LM) - bsz, num_heads, q_seq_len, dk = query.size() - _, _, k_seq_len, _ = key.size() - - # Preallocate attn_weights for `baddbmm` - attn_weights = torch.empty(bsz * num_heads, q_seq_len, k_seq_len, dtype=torch.float32, device=query.device) - - # Compute Scale Factor - scale_factor = 1.0 - if self.scale_attn_weights: - scale_factor /= float(value.size(-1)) ** 0.5 - - if self.scale_attn_by_inverse_layer_idx: - scale_factor /= float(self.layer_idx + 1) - - # Upcast (turn off autocast) and reorder (Scale K by 1 / root(dk)) - if is_amp_available: - with autocast(enabled=False): - q, k = query.reshape(-1, q_seq_len, dk), key.transpose(-1, -2).reshape(-1, dk, k_seq_len) - attn_weights = torch.baddbmm(attn_weights, q.float(), 
k.float(), beta=0, alpha=scale_factor) - attn_weights = attn_weights.reshape(bsz, num_heads, q_seq_len, k_seq_len) - else: - q, k = query.reshape(-1, q_seq_len, dk), key.transpose(-1, -2).reshape(-1, dk, k_seq_len) - attn_weights = torch.baddbmm(attn_weights, q.float(), k.float(), beta=0, alpha=scale_factor) - attn_weights = attn_weights.reshape(bsz, num_heads, q_seq_len, k_seq_len) - - if not self.is_cross_attention: - # if only "normal" attention layer implements causal mask - query_length, key_length = query.size(-2), key.size(-2) - causal_mask = self.bias[:, :, key_length - query_length: key_length, :key_length].bool() - attn_weights = torch.where(causal_mask, attn_weights, self.masked_bias.to(attn_weights.dtype)) - - if attention_mask is not None: - # Apply the attention mask - attn_weights = attn_weights + attention_mask - - attn_weights = nn.functional.softmax(attn_weights, dim=-1) - - # Downcast (if necessary) back to V's dtype (if in mixed-precision) -- No-Op if otherwise - if attn_weights.dtype != torch.float32: - raise RuntimeError("Error with upcasting, attn_weights does not have dtype torch.float32") - attn_weights = attn_weights.type(value.dtype) - attn_weights = self.attn_dropout(attn_weights) - - # Mask heads if we want to - if head_mask is not None: - attn_weights = attn_weights * head_mask - - attn_output = torch.matmul(attn_weights, value) - - return attn_output, attn_weights - - def _split_heads(self, tensor, num_heads, attn_head_size): - """ - Splits hidden_size dim into attn_head_size and num_heads - """ - new_shape = tensor.size()[:-1] + (num_heads, attn_head_size) - tensor = tensor.view(new_shape) - return tensor.permute(0, 2, 1, 3) # (batch, head, seq_length, head_features) - - def _merge_heads(self, tensor, num_heads, attn_head_size): - """ - Merges attn_head_size dim and num_attn_heads dim into hidden_size - """ - tensor = tensor.permute(0, 2, 1, 3).contiguous() - new_shape = tensor.size()[:-2] + (num_heads * attn_head_size,) - return tensor.view(new_shape) - - def forward( - self, - hidden_states: Optional[Tuple[torch.FloatTensor]], - layer_past: Optional[Tuple[torch.Tensor]] = None, - attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - encoder_hidden_states: Optional[torch.Tensor] = None, - encoder_attention_mask: Optional[torch.FloatTensor] = None, - use_cache: Optional[bool] = False, - output_attentions: Optional[bool] = False, - ) -> Tuple[Union[torch.Tensor, Tuple[torch.Tensor]], ...]: - if encoder_hidden_states is not None: - if not hasattr(self, "q_attn"): - raise ValueError( - "If class is used as cross attention, the weights `q_attn` have to be defined. " - "Please make sure to instantiate class with `GPT2Attention(..., is_cross_attention=True)`." 
- ) - - query = self.q_attn(hidden_states) - key, value = self.c_attn(encoder_hidden_states).split(self.split_size, dim=2) - attention_mask = encoder_attention_mask - else: - query, key, value = self.c_attn(hidden_states).split(self.split_size, dim=2) - - query = self._split_heads(query, self.num_heads, self.head_dim) - key = self._split_heads(key, self.num_heads, self.head_dim) - value = self._split_heads(value, self.num_heads, self.head_dim) - - if layer_past is not None: - past_key, past_value = layer_past - key = torch.cat((past_key, key), dim=-2) - value = torch.cat((past_value, value), dim=-2) - - if use_cache is True: - present = (key, value) - else: - present = None - - if self.reorder_and_upcast_attn: - attn_output, attn_weights = self._upcast_and_reordered_attn(query, key, value, attention_mask, head_mask) - else: - attn_output, attn_weights = self._attn(query, key, value, attention_mask, head_mask) - - attn_output = self._merge_heads(attn_output, self.num_heads, self.head_dim) - attn_output = self.c_proj(attn_output) - attn_output = self.resid_dropout(attn_output) - - outputs = (attn_output, present) - if output_attentions: - outputs += (attn_weights,) - - return outputs # a, present, (attentions) - - -class SqReLU(nn.Module): - """ - See So: Primer: Searching for Efficient Transformers for Language Modeling (So., https://arxiv.org/abs/2109.08668). - """ - - def __init__(self): - super().__init__() - self.act = self._sqrelu_python - - def _sqrelu_python(self, input: torch.Tensor) -> torch.Tensor: - return torch.pow(nn.functional.relu(input), 2) - - def forward(self, input: torch.Tensor) -> torch.Tensor: - return self.act(input) - - -class GPT2MLP(nn.Module): - def __init__(self, intermediate_size, config, squared_relu=False): - super().__init__() - embed_dim = config.hidden_size - self.c_fc = Conv1D(intermediate_size, embed_dim) - self.c_proj = Conv1D(embed_dim, intermediate_size) - if squared_relu: - self.act = SqReLU() - else: - self.act = ACT2FN[config.activation_function] - self.dropout = nn.Dropout(config.resid_pdrop) - - def forward(self, hidden_states: Optional[Tuple[torch.FloatTensor]]) -> torch.FloatTensor: - hidden_states = self.c_fc(hidden_states) - hidden_states = self.act(hidden_states) - hidden_states = self.c_proj(hidden_states) - hidden_states = self.dropout(hidden_states) - return hidden_states - - -class GPT2Block(nn.Module): - def __init__(self, config, layer_idx=None): - super().__init__() - hidden_size = config.hidden_size - inner_dim = config.n_inner if config.n_inner is not None else 4 * hidden_size - - self.ln_1 = nn.LayerNorm(hidden_size, eps=config.layer_norm_epsilon) - self.attn = GPT2Attention(config, layer_idx=layer_idx) - self.ln_2 = nn.LayerNorm(hidden_size, eps=config.layer_norm_epsilon) - - self.add_cross_attention_freq = config.add_cross_attention_freq - if config.add_cross_attention and layer_idx % config.add_cross_attention_freq == 0: - self.crossattention = GPT2Attention(config, is_cross_attention=True, layer_idx=layer_idx) - self.ln_cross_attn = nn.LayerNorm(hidden_size, eps=config.layer_norm_epsilon) - self.mlp_crossattention = GPT2MLP(inner_dim, config, squared_relu=True) - self.ln_2_crossattention = nn.LayerNorm(hidden_size, eps=config.layer_norm_epsilon) - if config.is_tanh_gating: - self.alpha_cattn = nn.Parameter(torch.zeros([])) - self.alpha_dense = nn.Parameter(torch.zeros([])) - - self.mlp = GPT2MLP(inner_dim, config) - - def forward( - self, - hidden_states: Optional[Tuple[torch.FloatTensor]], - layer_past: 
Optional[Tuple[torch.Tensor]] = None, - attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - encoder_hidden_states: Optional[torch.Tensor] = None, - encoder_attention_mask: Optional[torch.FloatTensor] = None, - use_cache: Optional[bool] = False, - output_attentions: Optional[bool] = False, - ) -> Union[Tuple[torch.Tensor], Optional[Tuple[torch.Tensor, Tuple[torch.FloatTensor, ...]]]]: - if encoder_hidden_states is not None and self.attn.layer_idx % self.add_cross_attention_freq == 0: - # add one self-attention block for cross-attention - if not hasattr(self, "crossattention"): - raise ValueError( - f"If `encoder_hidden_states` are passed, {self} has to be instantiated with " - "cross-attention layers by setting `config.add_cross_attention=True`" - ) - residual = hidden_states - hidden_states = self.ln_cross_attn(hidden_states) - cross_attn_outputs = self.crossattention( - hidden_states, - attention_mask=attention_mask, - head_mask=head_mask, - encoder_hidden_states=encoder_hidden_states, - encoder_attention_mask=encoder_attention_mask, - output_attentions=output_attentions, - ) - attn_output = cross_attn_outputs[0] - if hasattr(self, "alpha_cattn"): - attn_output = torch.tanh(self.alpha_cattn) * attn_output - # residual connection - hidden_states = residual + attn_output - - residual = hidden_states - hidden_states = self.ln_2_crossattention(hidden_states) - feed_forward_hidden_states = self.mlp_crossattention(hidden_states) - if hasattr(self, "alpha_dense"): - feed_forward_hidden_states = torch.tanh(self.alpha_dense) * feed_forward_hidden_states - # residual connection - hidden_states = residual + feed_forward_hidden_states - - # Self-Attention - residual = hidden_states - hidden_states = self.ln_1(hidden_states) - attn_outputs = self.attn( - hidden_states, - layer_past=layer_past, - attention_mask=attention_mask, - head_mask=head_mask, - use_cache=use_cache, - output_attentions=output_attentions, - ) - attn_output = attn_outputs[0] # output_attn: a, present, (attentions) - outputs = attn_outputs[1:] - # residual connection - hidden_states = attn_output + residual - - # add cross attentions (follow the original order, not to mess things up) - if encoder_hidden_states is not None and self.attn.layer_idx % self.add_cross_attention_freq == 0: - outputs = outputs + cross_attn_outputs[2:] # add cross attentions if we output attention weights - - # FFN - residual = hidden_states - hidden_states = self.ln_2(hidden_states) - feed_forward_hidden_states = self.mlp(hidden_states) - # residual connection - hidden_states = residual + feed_forward_hidden_states - - if use_cache: - outputs = (hidden_states,) + outputs - else: - outputs = (hidden_states,) + outputs[1:] - - return outputs # hidden_states, present, (attentions, cross_attentions) - - -class GPT2PreTrainedModel(PreTrainedModel): - """ - An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained - models. 
- """ - - config_class = GPT2Config - load_tf_weights = load_tf_weights_in_gpt2 - base_model_prefix = "transformer" - is_parallelizable = True - supports_gradient_checkpointing = True - - def __init__(self, *inputs, **kwargs): - super().__init__(*inputs, **kwargs) - - def _init_weights(self, module): - """Initialize the weights.""" - if isinstance(module, (nn.Linear, Conv1D)): - # Slightly different from the TF version which uses truncated_normal for initialization - # cf https://github.com/pytorch/pytorch/pull/5617 - module.weight.data.normal_(mean=0.0, std=self.config.initializer_range) - if module.bias is not None: - module.bias.data.zero_() - elif isinstance(module, nn.Embedding): - module.weight.data.normal_(mean=0.0, std=self.config.initializer_range) - if module.padding_idx is not None: - module.weight.data[module.padding_idx].zero_() - elif isinstance(module, nn.LayerNorm): - module.bias.data.zero_() - module.weight.data.fill_(1.0) - - # Reinitialize selected weights subject to the OpenAI GPT-2 Paper Scheme: - # > A modified initialization which accounts for the accumulation on the residual path with model depth. Scale - # > the weights of residual layers at initialization by a factor of 1/√N where N is the # of residual layers. - # > -- GPT-2 :: https://openai.com/blog/better-language-models/ - # - # Reference (Megatron-LM): https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/model/gpt_model.py - for name, p in module.named_parameters(): - if "c_proj" in name and "weight" in name: - # Special Scaled Initialization --> There are 2 Layer Norms per Transformer Block - p.data.normal_(mean=0.0, std=(self.config.initializer_range / math.sqrt(2 * self.config.n_layer))) - - def _set_gradient_checkpointing(self, module, value=False): - if isinstance(module, GPT2Model): - module.gradient_checkpointing = value - - -@dataclass -class GPT2DoubleHeadsModelOutput(ModelOutput): - """ - Base class for outputs of models predicting if two sentences are consecutive or not. - - Args: - loss (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided): - Language modeling loss. - mc_loss (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `mc_labels` is provided): - Multiple choice classification loss. - logits (`torch.FloatTensor` of shape `(batch_size, num_choices, sequence_length, config.vocab_size)`): - Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). - mc_logits (`torch.FloatTensor` of shape `(batch_size, num_choices)`): - Prediction scores of the multiple choice classification head (scores for each choice before SoftMax). - past_key_values (`Tuple[Tuple[torch.Tensor]]`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`): - Tuple of length `config.n_layers`, containing tuples of tensors of shape `(batch_size, num_heads, - sequence_length, embed_size_per_head)`). - - Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see - `past_key_values` input) to speed up sequential decoding. - hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): - Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of - shape `(batch_size, sequence_length, hidden_size)`. - - Hidden-states of the model at the output of each layer plus the initial embedding outputs. 
- attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): - Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, - sequence_length)`. - - GPT2Attentions weights after the attention softmax, used to compute the weighted average in the - self-attention heads. - """ - - loss: Optional[torch.FloatTensor] = None - mc_loss: Optional[torch.FloatTensor] = None - logits: torch.FloatTensor = None - mc_logits: torch.FloatTensor = None - past_key_values: Optional[Tuple[Tuple[torch.FloatTensor]]] = None - hidden_states: Optional[Tuple[torch.FloatTensor]] = None - attentions: Optional[Tuple[torch.FloatTensor]] = None - - -GPT2_START_DOCSTRING = r""" - - This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the - library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads - etc.) - - This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. - Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage - and behavior. - - Parameters: - config ([`GPT2Config`]): Model configuration class with all the parameters of the model. - Initializing with a config file does not load the weights associated with the model, only the - configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. -""" - -GPT2_INPUTS_DOCSTRING = r""" - Args: - input_ids (`torch.LongTensor` of shape `(batch_size, input_ids_length)`): - `input_ids_length` = `sequence_length` if `past_key_values` is `None` else - `past_key_values[0][0].shape[-2]` (`sequence_length` of input past key value states). Indices of input - sequence tokens in the vocabulary. - - If `past_key_values` is used, only `input_ids` that do not have their past calculated should be passed as - `input_ids`. - - Indices can be obtained using [`GPT2Tokenizer`]. See [`PreTrainedTokenizer.encode`] and - [`PreTrainedTokenizer.__call__`] for details. - - [What are input IDs?](../glossary#input-ids) - past_key_values (`Tuple[Tuple[torch.Tensor]]` of length `config.n_layers`): - Contains precomputed hidden-states (key and values in the attention blocks) as computed by the model (see - `past_key_values` output below). Can be used to speed up sequential decoding. The `input_ids` which have - their past given to this model should not be passed as `input_ids` as they have already been computed. - attention_mask (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*): - Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - - - 1 for tokens that are **not masked**, - - 0 for tokens that are **masked**. - - If `past_key_values` is used, `attention_mask` needs to contain the masking strategy that was used for - `past_key_values`. In other words, the `attention_mask` always has to have the length: - `len(past_key_values) + len(input_ids)` - - [What are attention masks?](../glossary#attention-mask) - token_type_ids (`torch.LongTensor` of shape `(batch_size, input_ids_length)`, *optional*): - Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, - 1]`: - - - 0 corresponds to a *sentence A* token, - - 1 corresponds to a *sentence B* token. 
- - [What are token type IDs?](../glossary#token-type-ids) - position_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): - Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0, - config.max_position_embeddings - 1]`. - - [What are position IDs?](../glossary#position-ids) - head_mask (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*): - Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - - inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*): - Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This - is useful if you want more control over how to convert `input_ids` indices into associated vectors than the - model's internal embedding lookup matrix. - - If `past_key_values` is used, optionally only the last `inputs_embeds` have to be input (see - `past_key_values`). - use_cache (`bool`, *optional*): - If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see - `past_key_values`). - output_attentions (`bool`, *optional*): - Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned - tensors for more detail. - output_hidden_states (`bool`, *optional*): - Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for - more detail. - return_dict (`bool`, *optional*): - Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. -""" -PARALLELIZE_DOCSTRING = r""" - This is an experimental feature and is a subject to change at a moment's notice. - - Uses a device map to distribute attention modules of the model across several devices. If no device map is given, - it will evenly distribute blocks across all devices. - - Args: - device_map (`Dict[int, list]`, optional, defaults to None): - A dictionary that maps attention modules to devices. Note that the embedding module and LMHead are always - automatically mapped to the first device (for esoteric reasons). That means that the first device should - have fewer attention modules mapped to it than other devices. For reference, the gpt2 models have the - following number of attention modules: - - - gpt2: 12 - - gpt2-medium: 24 - - gpt2-large: 36 - - gpt2-xl: 48 - - Example: - - ```python - # Here is an example of a device map on a machine with 4 GPUs using gpt2-xl, which has a total of 48 attention modules: - model = GPT2LMHeadModel.from_pretrained("gpt2-xl") - device_map = { - 0: [0, 1, 2, 3, 4, 5, 6, 7, 8], - 1: [9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21], - 2: [22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34], - 3: [35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47], - } - model.parallelize(device_map) - ``` -""" -DEPARALLELIZE_DOCSTRING = r""" - Moves the model to cpu from a model parallel state. 
- - Example: - - ```python - # On a 4 GPU machine with gpt2-large: - model = GPT2LMHeadModel.from_pretrained("gpt2-large") - device_map = { - 0: [0, 1, 2, 3, 4, 5, 6, 7], - 1: [8, 9, 10, 11, 12, 13, 14, 15], - 2: [16, 17, 18, 19, 20, 21, 22, 23], - 3: [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], - } - model.parallelize(device_map) # Splits the model across several devices - model.deparallelize() # Put the model back on cpu and cleans memory by calling torch.cuda.empty_cache() - ``` -""" - - -@add_start_docstrings( - "The bare GPT2 Model transformer outputting raw hidden-states without any specific head on top.", - GPT2_START_DOCSTRING, -) -class GPT2Model(GPT2PreTrainedModel): - _keys_to_ignore_on_load_missing = ["attn.masked_bias"] - - def __init__(self, config): - super().__init__(config) - - self.embed_dim = config.hidden_size - - self.wte = nn.Embedding(config.vocab_size, self.embed_dim) - self.wpe = nn.Embedding(config.max_position_embeddings, self.embed_dim) - - self.drop = nn.Dropout(config.embd_pdrop) - self.h = nn.ModuleList([GPT2Block(config, layer_idx=i) for i in range(config.num_hidden_layers)]) - self.ln_f = nn.LayerNorm(self.embed_dim, eps=config.layer_norm_epsilon) - - # Model parallel - self.model_parallel = False - self.device_map = None - self.gradient_checkpointing = False - - # Initialize weights and apply final processing - self.post_init() - - @add_start_docstrings(PARALLELIZE_DOCSTRING) - def parallelize(self, device_map=None): - # Check validity of device_map - self.device_map = ( - get_device_map(len(self.h), range(torch.cuda.device_count())) if device_map is None else device_map - ) - assert_device_map(self.device_map, len(self.h)) - self.model_parallel = True - self.first_device = "cpu" if "cpu" in self.device_map.keys() else "cuda:" + str(min(self.device_map.keys())) - self.last_device = "cuda:" + str(max(self.device_map.keys())) - self.wte = self.wte.to(self.first_device) - self.wpe = self.wpe.to(self.first_device) - # Load onto devices - for k, v in self.device_map.items(): - for block in v: - cuda_device = "cuda:" + str(k) - self.h[block] = self.h[block].to(cuda_device) - # ln_f to last - self.ln_f = self.ln_f.to(self.last_device) - - @add_start_docstrings(DEPARALLELIZE_DOCSTRING) - def deparallelize(self): - self.model_parallel = False - self.device_map = None - self.first_device = "cpu" - self.last_device = "cpu" - self.wte = self.wte.to("cpu") - self.wpe = self.wpe.to("cpu") - for index in range(len(self.h)): - self.h[index] = self.h[index].to("cpu") - self.ln_f = self.ln_f.to("cpu") - torch.cuda.empty_cache() - - def get_input_embeddings(self): - return self.wte - - def set_input_embeddings(self, new_embeddings): - self.wte = new_embeddings - - def _prune_heads(self, heads_to_prune): - """ - Prunes heads of the model. 
heads_to_prune: dict of {layer_num: list of heads to prune in this layer} - """ - for layer, heads in heads_to_prune.items(): - self.h[layer].attn.prune_heads(heads) - - @add_start_docstrings_to_model_forward(GPT2_INPUTS_DOCSTRING) - @add_code_sample_docstrings( - processor_class=_TOKENIZER_FOR_DOC, - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=BaseModelOutputWithPastAndCrossAttentions, - config_class=_CONFIG_FOR_DOC, - ) - def forward( - self, - input_ids: Optional[torch.LongTensor] = None, - past_key_values: Optional[Tuple[Tuple[torch.Tensor]]] = None, - attention_mask: Optional[torch.FloatTensor] = None, - token_type_ids: Optional[torch.LongTensor] = None, - position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - inputs_embeds: Optional[torch.FloatTensor] = None, - encoder_hidden_states: Optional[torch.Tensor] = None, - encoder_attention_mask: Optional[torch.FloatTensor] = None, - use_cache: Optional[bool] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple, BaseModelOutputWithPastAndCrossAttentions]: - output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - use_cache = use_cache if use_cache is not None else self.config.use_cache - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - if input_ids is not None and inputs_embeds is not None: - raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time") - elif input_ids is not None: - input_shape = input_ids.size() - input_ids = input_ids.view(-1, input_shape[-1]) - batch_size = input_ids.shape[0] - elif inputs_embeds is not None: - input_shape = inputs_embeds.size()[:-1] - batch_size = inputs_embeds.shape[0] - else: - raise ValueError("You have to specify either input_ids or inputs_embeds") - - device = input_ids.device if input_ids is not None else inputs_embeds.device - - if token_type_ids is not None: - token_type_ids = token_type_ids.view(-1, input_shape[-1]) - if position_ids is not None: - position_ids = position_ids.view(-1, input_shape[-1]) - - if past_key_values is None: - past_length = 0 - past_key_values = tuple([None] * len(self.h)) - else: - past_length = past_key_values[0][0].size(-2) - if position_ids is None: - position_ids = torch.arange(past_length, input_shape[-1] + past_length, dtype=torch.long, device=device) - position_ids = position_ids.unsqueeze(0).view(-1, input_shape[-1]) - - # GPT2Attention mask. - if attention_mask is not None: - if batch_size <= 0: - raise ValueError("batch_size has to be defined and > 0") - attention_mask = attention_mask.view(batch_size, -1) - # We create a 3D attention mask from a 2D tensor mask. - # Sizes are [batch_size, 1, 1, to_seq_length] - # So we can broadcast to [batch_size, num_heads, from_seq_length, to_seq_length] - # this attention mask is more simple than the triangular masking of causal attention - # used in OpenAI GPT, we just need to prepare the broadcast dimension here. - attention_mask = attention_mask[:, None, None, :] - - # Since attention_mask is 1.0 for positions we want to attend and 0.0 for - # masked positions, this operation will create a tensor which is 0.0 for - # positions we want to attend and -10000.0 for masked positions. 
- # Since we are adding it to the raw scores before the softmax, this is - # effectively the same as removing these entirely. - attention_mask = attention_mask.to(dtype=self.dtype) # fp16 compatibility - attention_mask = (1.0 - attention_mask) * -10000.0 - - # If a 2D or 3D attention mask is provided for the cross-attention - # we need to make broadcastable to [batch_size, num_heads, seq_length, seq_length] - if self.config.add_cross_attention and encoder_hidden_states is not None: - encoder_batch_size, encoder_sequence_length, _ = encoder_hidden_states.size() - encoder_hidden_shape = (encoder_batch_size, encoder_sequence_length) - if encoder_attention_mask is None: - encoder_attention_mask = torch.ones(encoder_hidden_shape, device=device) - encoder_attention_mask = self.invert_attention_mask(encoder_attention_mask) - else: - encoder_attention_mask = None - - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # head_mask has shape n_layer x batch x n_heads x N x N - head_mask = self.get_head_mask(head_mask, self.config.n_layer) - - if inputs_embeds is None: - inputs_embeds = self.wte(input_ids) - position_embeds = self.wpe(position_ids) - hidden_states = inputs_embeds + position_embeds - - if token_type_ids is not None: - token_type_embeds = self.wte(token_type_ids) - hidden_states = hidden_states + token_type_embeds - - hidden_states = self.drop(hidden_states) - - output_shape = input_shape + (hidden_states.size(-1),) - - presents = () if use_cache else None - all_self_attentions = () if output_attentions else None - all_cross_attentions = () if output_attentions and self.config.add_cross_attention else None - all_hidden_states = () if output_hidden_states else None - for i, (block, layer_past) in enumerate(zip(self.h, past_key_values)): - - # Model parallel - if self.model_parallel: - torch.cuda.set_device(hidden_states.device) - # Ensure layer_past is on same device as hidden_states (might not be correct) - if layer_past is not None: - layer_past = tuple(past_state.to(hidden_states.device) for past_state in layer_past) - # Ensure that attention_mask is always on the same device as hidden_states - if attention_mask is not None: - attention_mask = attention_mask.to(hidden_states.device) - if isinstance(head_mask, torch.Tensor): - head_mask = head_mask.to(hidden_states.device) - if output_hidden_states: - all_hidden_states = all_hidden_states + (hidden_states,) - - if self.gradient_checkpointing and self.training: - - if use_cache: - logger.warning( - "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..." 
- ) - use_cache = False - - def create_custom_forward(module): - def custom_forward(*inputs): - # None for past_key_value - return module(*inputs, use_cache, output_attentions) - - return custom_forward - - outputs = torch.utils.checkpoint.checkpoint( - create_custom_forward(block), - hidden_states, - None, - attention_mask, - head_mask[i], - encoder_hidden_states, - encoder_attention_mask, - ) - else: - outputs = block( - hidden_states, - layer_past=layer_past, - attention_mask=attention_mask, - head_mask=head_mask[i], - encoder_hidden_states=encoder_hidden_states, - encoder_attention_mask=encoder_attention_mask, - use_cache=use_cache, - output_attentions=output_attentions, - ) - - hidden_states = outputs[0] - if use_cache is True: - presents = presents + (outputs[1],) - - if output_attentions: - all_self_attentions = all_self_attentions + (outputs[2 if use_cache else 1],) - if self.config.add_cross_attention: - all_cross_attentions = all_cross_attentions + (outputs[3 if use_cache else 2],) - - # Model Parallel: If it's the last layer for that device, put things on the next device - if self.model_parallel: - for k, v in self.device_map.items(): - if i == v[-1] and "cuda:" + str(k) != self.last_device: - hidden_states = hidden_states.to("cuda:" + str(k + 1)) - - hidden_states = self.ln_f(hidden_states) - - hidden_states = hidden_states.view(output_shape) - # Add last hidden state - if output_hidden_states: - all_hidden_states = all_hidden_states + (hidden_states,) - - if not return_dict: - return tuple( - v - for v in [hidden_states, presents, all_hidden_states, all_self_attentions, all_cross_attentions] - if v is not None - ) - - return BaseModelOutputWithPastAndCrossAttentions( - last_hidden_state=hidden_states, - past_key_values=presents, - hidden_states=all_hidden_states, - attentions=all_self_attentions, - cross_attentions=all_cross_attentions, - ) - - -@add_start_docstrings( - """ - The GPT2 Model transformer with a language modeling head on top (linear layer with weights tied to the input - embeddings). 
- """, - GPT2_START_DOCSTRING, -) -class GPT2LMHeadModel(GPT2PreTrainedModel): - _keys_to_ignore_on_load_missing = [r"attn.masked_bias", r"attn.bias", r"lm_head.weight"] - - def __init__(self, config): - super().__init__(config) - self.transformer = GPT2Model(config) - self.lm_head = nn.Linear(config.n_embd, config.vocab_size, bias=False) - - # Model parallel - self.model_parallel = False - self.device_map = None - - # Initialize weights and apply final processing - self.post_init() - - def freeze_lm_weights(self): - freeze_list, unfreeze_list = [], [] - for n, p in self.named_parameters(): - if 'crossattention' in n or 'cross_attn' in n or 'alpha_cattn' in n or 'alpha_dense' in n: - p.requires_grad = True - unfreeze_list.append(n) - else: - p.requires_grad = False - freeze_list.append(n) - print("Freeze the pretrained parts in LM: {}".format(freeze_list)) - print(" Learn the rest parts in LM: {}".format(unfreeze_list)) - - @add_start_docstrings(PARALLELIZE_DOCSTRING) - def parallelize(self, device_map=None): - self.device_map = ( - get_device_map(len(self.transformer.h), range(torch.cuda.device_count())) - if device_map is None - else device_map - ) - assert_device_map(self.device_map, len(self.transformer.h)) - self.transformer.parallelize(self.device_map) - self.lm_head = self.lm_head.to(self.transformer.first_device) - self.model_parallel = True - - @add_start_docstrings(DEPARALLELIZE_DOCSTRING) - def deparallelize(self): - self.transformer.deparallelize() - self.transformer = self.transformer.to("cpu") - self.lm_head = self.lm_head.to("cpu") - self.model_parallel = False - torch.cuda.empty_cache() - - def get_output_embeddings(self): - return self.lm_head - - def set_output_embeddings(self, new_embeddings): - self.lm_head = new_embeddings - - def prepare_inputs_for_generation(self, input_ids, past=None, **kwargs): - token_type_ids = kwargs.get("token_type_ids", None) - # only last token for inputs_ids if past is defined in kwargs - if past: - input_ids = input_ids[:, -1].unsqueeze(-1) - if token_type_ids is not None: - token_type_ids = token_type_ids[:, -1].unsqueeze(-1) - - attention_mask = kwargs.get("attention_mask", None) - position_ids = kwargs.get("position_ids", None) - - if attention_mask is not None and position_ids is None: - # create position_ids on the fly for batch generation - position_ids = attention_mask.long().cumsum(-1) - 1 - position_ids.masked_fill_(attention_mask == 0, 1) - if past: - position_ids = position_ids[:, -1].unsqueeze(-1) - else: - position_ids = None - return { - "input_ids": input_ids, - "past_key_values": past, - "use_cache": kwargs.get("use_cache"), - "position_ids": position_ids, - "attention_mask": attention_mask, - "token_type_ids": token_type_ids, - } - - @add_start_docstrings_to_model_forward(GPT2_INPUTS_DOCSTRING) - @add_code_sample_docstrings( - processor_class=_TOKENIZER_FOR_DOC, - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=CausalLMOutputWithCrossAttentions, - config_class=_CONFIG_FOR_DOC, - ) - def forward( - self, - input_ids: Optional[torch.LongTensor] = None, - past_key_values: Optional[Tuple[Tuple[torch.Tensor]]] = None, - attention_mask: Optional[torch.FloatTensor] = None, - token_type_ids: Optional[torch.LongTensor] = None, - position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - inputs_embeds: Optional[torch.FloatTensor] = None, - encoder_hidden_states: Optional[torch.Tensor] = None, - encoder_attention_mask: Optional[torch.FloatTensor] = None, - labels: Optional[torch.LongTensor] = 
None, - use_cache: Optional[bool] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple, CausalLMOutputWithCrossAttentions]: - r""" - labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): - Labels for language modeling. Note that the labels **are shifted** inside the model, i.e. you can set - `labels = input_ids` Indices are selected in `[-100, 0, ..., config.vocab_size]` All labels set to `-100` - are ignored (masked), the loss is only computed for labels in `[0, ..., config.vocab_size]` - """ - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - transformer_outputs = self.transformer( - input_ids, - past_key_values=past_key_values, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - encoder_hidden_states=encoder_hidden_states, - encoder_attention_mask=encoder_attention_mask, - use_cache=use_cache, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - hidden_states = transformer_outputs[0] - - # Set device for model parallelism - if self.model_parallel: - torch.cuda.set_device(self.transformer.first_device) - hidden_states = hidden_states.to(self.lm_head.weight.device) - - lm_logits = self.lm_head(hidden_states) - - loss = None - if labels is not None: - # Shift so that tokens < n predict n - shift_logits = lm_logits[..., :-1, :].contiguous() - shift_labels = labels[..., 1:].contiguous() - # Flatten the tokens - loss_fct = CrossEntropyLoss() - loss = loss_fct(shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1)) - - if not return_dict: - output = (lm_logits,) + transformer_outputs[1:] - return ((loss,) + output) if loss is not None else output - - return CausalLMOutputWithCrossAttentions( - loss=loss, - logits=lm_logits, - past_key_values=transformer_outputs.past_key_values, - hidden_states=transformer_outputs.hidden_states, - attentions=transformer_outputs.attentions, - cross_attentions=transformer_outputs.cross_attentions, - ) - - @staticmethod - def _reorder_cache(past: Tuple[Tuple[torch.Tensor]], beam_idx: torch.Tensor) -> Tuple[Tuple[torch.Tensor]]: - """ - This function is used to re-order the `past_key_values` cache if [`~PreTrainedModel.beam_search`] or - [`~PreTrainedModel.beam_sample`] is called. This is required to match `past_key_values` with the correct - beam_idx at every generation step. - """ - return tuple( - tuple(past_state.index_select(0, beam_idx.to(past_state.device)) for past_state in layer_past) - for layer_past in past - ) - - -@add_start_docstrings( - """ -The GPT2 Model transformer with a language modeling and a multiple-choice classification head on top e.g. for -RocStories/SWAG tasks. The two heads are two linear layers. The language modeling head has its weights tied to the -input embeddings, the classification head takes as input the input of a specified classification token index in the -input sequence). 
-""", - GPT2_START_DOCSTRING, -) -class GPT2DoubleHeadsModel(GPT2PreTrainedModel): - _keys_to_ignore_on_load_missing = [r"attn.masked_bias", r"attn.bias", r"lm_head.weight"] - - def __init__(self, config): - super().__init__(config) - config.num_labels = 1 - self.transformer = GPT2Model(config) - self.lm_head = nn.Linear(config.n_embd, config.vocab_size, bias=False) - self.multiple_choice_head = SequenceSummary(config) - - # Model parallel - self.model_parallel = False - self.device_map = None - - # Initialize weights and apply final processing - self.post_init() - - @add_start_docstrings(PARALLELIZE_DOCSTRING) - def parallelize(self, device_map=None): - self.device_map = ( - get_device_map(len(self.transformer.h), range(torch.cuda.device_count())) - if device_map is None - else device_map - ) - assert_device_map(self.device_map, len(self.transformer.h)) - self.transformer.parallelize(self.device_map) - self.lm_head = self.lm_head.to(self.transformer.first_device) - self.multiple_choice_head = self.multiple_choice_head.to(self.transformer.first_device) - self.model_parallel = True - - @add_start_docstrings(DEPARALLELIZE_DOCSTRING) - def deparallelize(self): - self.transformer.deparallelize() - self.transformer = self.transformer.to("cpu") - self.lm_head = self.lm_head.to("cpu") - self.multiple_choice_head = self.multiple_choice_head.to("cpu") - self.model_parallel = False - torch.cuda.empty_cache() - - def get_output_embeddings(self): - return self.lm_head - - def set_output_embeddings(self, new_embeddings): - self.lm_head = new_embeddings - - def prepare_inputs_for_generation(self, input_ids, past=None, **kwargs): - token_type_ids = kwargs.get("token_type_ids", None) - # only last token for inputs_ids if past is defined in kwargs - if past: - input_ids = input_ids[:, -1].unsqueeze(-1) - if token_type_ids is not None: - token_type_ids = token_type_ids[:, -1].unsqueeze(-1) - - attention_mask = kwargs.get("attention_mask", None) - position_ids = kwargs.get("position_ids", None) - - if attention_mask is not None and position_ids is None: - # create position_ids on the fly for batch generation - position_ids = attention_mask.long().cumsum(-1) - 1 - position_ids.masked_fill_(attention_mask == 0, 1) - if past: - position_ids = position_ids[:, -1].unsqueeze(-1) - else: - position_ids = None - - return { - "input_ids": input_ids, - "past_key_values": past, - "use_cache": kwargs.get("use_cache"), - "position_ids": position_ids, - "attention_mask": attention_mask, - "token_type_ids": token_type_ids, - } - - @add_start_docstrings_to_model_forward(GPT2_INPUTS_DOCSTRING) - @replace_return_docstrings(output_type=GPT2DoubleHeadsModelOutput, config_class=_CONFIG_FOR_DOC) - def forward( - self, - input_ids: Optional[torch.LongTensor] = None, - past_key_values: Optional[Tuple[Tuple[torch.Tensor]]] = None, - attention_mask: Optional[torch.FloatTensor] = None, - token_type_ids: Optional[torch.LongTensor] = None, - position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - inputs_embeds: Optional[torch.FloatTensor] = None, - mc_token_ids: Optional[torch.LongTensor] = None, - labels: Optional[torch.LongTensor] = None, - mc_labels: Optional[torch.LongTensor] = None, - use_cache: Optional[bool] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - **kwargs, - ) -> Union[Tuple, GPT2DoubleHeadsModelOutput]: - r""" - mc_token_ids (`torch.LongTensor` of shape `(batch_size, num_choices)`, 
*optional*, default to index of the last token of the input): - Index of the classification token in each input sequence. Selected in the range `[0, input_ids.size(-1) - - 1[`. - labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): - Labels for language modeling. Note that the labels **are shifted** inside the model, i.e. you can set - `labels = input_ids` Indices are selected in `[-100, 0, ..., config.vocab_size - 1]` All labels set to - `-100` are ignored (masked), the loss is only computed for labels in `[0, ..., config.vocab_size - 1]` - mc_labels (`torch.LongTensor` of shape `(batch_size)`, *optional*): - Labels for computing the multiple choice classification loss. Indices should be in `[0, ..., num_choices]` - where *num_choices* is the size of the second dimension of the input tensors. (see *input_ids* above) - - Return: - - Example: - - ```python - >>> import torch - >>> from transformers import GPT2Tokenizer, GPT2DoubleHeadsModel - - >>> tokenizer = GPT2Tokenizer.from_pretrained("gpt2") - >>> model = GPT2DoubleHeadsModel.from_pretrained("gpt2") - - >>> # Add a [CLS] to the vocabulary (we should train it also!) - >>> num_added_tokens = tokenizer.add_special_tokens({"cls_token": "[CLS]"}) - >>> # Update the model embeddings with the new vocabulary size - >>> embedding_layer = model.resize_token_embeddings(len(tokenizer)) - - >>> choices = ["Hello, my dog is cute [CLS]", "Hello, my cat is cute [CLS]"] - >>> encoded_choices = [tokenizer.encode(s) for s in choices] - >>> cls_token_location = [tokens.index(tokenizer.cls_token_id) for tokens in encoded_choices] - - >>> input_ids = torch.tensor(encoded_choices).unsqueeze(0) # Batch size: 1, number of choices: 2 - >>> mc_token_ids = torch.tensor([cls_token_location]) # Batch size: 1 - - >>> outputs = model(input_ids, mc_token_ids=mc_token_ids) - >>> lm_logits = outputs.logits - >>> mc_logits = outputs.mc_logits - ```""" - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - transformer_outputs = self.transformer( - input_ids, - past_key_values=past_key_values, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - use_cache=use_cache, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - hidden_states = transformer_outputs[0] - - # Set device for model parallelism - if self.model_parallel: - torch.cuda.set_device(self.transformer.first_device) - hidden_states = hidden_states.to(self.lm_head.weight.device) - - lm_logits = self.lm_head(hidden_states) - mc_logits = self.multiple_choice_head(hidden_states, mc_token_ids).squeeze(-1) - - mc_loss = None - if mc_labels is not None: - loss_fct = CrossEntropyLoss() - mc_loss = loss_fct(mc_logits.view(-1, mc_logits.size(-1)), mc_labels.view(-1)) - lm_loss = None - if labels is not None: - shift_logits = lm_logits[..., :-1, :].contiguous() - shift_labels = labels[..., 1:].contiguous() - loss_fct = CrossEntropyLoss() - lm_loss = loss_fct(shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1)) - - if not return_dict: - output = (lm_logits, mc_logits) + transformer_outputs[1:] - if mc_loss is not None: - output = (mc_loss,) + output - return ((lm_loss,) + output) if lm_loss is not None else output - - return GPT2DoubleHeadsModelOutput( - loss=lm_loss, - mc_loss=mc_loss, - logits=lm_logits, - mc_logits=mc_logits, - past_key_values=transformer_outputs.past_key_values, - 
hidden_states=transformer_outputs.hidden_states, - attentions=transformer_outputs.attentions, - ) - - @staticmethod - def _reorder_cache(past: Tuple[Tuple[torch.Tensor]], beam_idx: torch.Tensor) -> Tuple[Tuple[torch.Tensor]]: - """ - This function is used to re-order the `past_key_values` cache if [`~PreTrainedModel.beam_search`] or - [`~PreTrainedModel.beam_sample`] is called. This is required to match `past_key_values` with the correct - beam_idx at every generation step. - """ - return tuple( - tuple(past_state.index_select(0, beam_idx.to(past_state.device)) for past_state in layer_past) - for layer_past in past - ) - - -@add_start_docstrings( - """ - The GPT2 Model transformer with a sequence classification head on top (linear layer). - - [`GPT2ForSequenceClassification`] uses the last token in order to do the classification, as other causal models - (e.g. GPT-1) do. - - Since it does classification on the last token, it requires to know the position of the last token. If a - `pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If - no `pad_token_id` is defined, it simply takes the last value in each row of the batch. Since it cannot guess the - padding tokens when `inputs_embeds` are passed instead of `input_ids`, it does the same (take the last value in - each row of the batch). - """, - GPT2_START_DOCSTRING, -) -class GPT2ForSequenceClassification(GPT2PreTrainedModel): - _keys_to_ignore_on_load_missing = [r"h\.\d+\.attn\.masked_bias", r"lm_head\.weight"] - - def __init__(self, config): - super().__init__(config) - self.num_labels = config.num_labels - self.transformer = GPT2Model(config) - self.score = nn.Linear(config.n_embd, self.num_labels, bias=False) - - # Model parallel - self.model_parallel = False - self.device_map = None - - # Initialize weights and apply final processing - self.post_init() - - @add_start_docstrings_to_model_forward(GPT2_INPUTS_DOCSTRING) - @add_code_sample_docstrings( - processor_class=_TOKENIZER_FOR_DOC, - checkpoint="microsoft/DialogRPT-updown", - output_type=SequenceClassifierOutputWithPast, - config_class=_CONFIG_FOR_DOC, - expected_output="'LABEL_0'", - expected_loss=5.28, - ) - def forward( - self, - input_ids: Optional[torch.LongTensor] = None, - past_key_values: Optional[Tuple[Tuple[torch.Tensor]]] = None, - attention_mask: Optional[torch.FloatTensor] = None, - token_type_ids: Optional[torch.LongTensor] = None, - position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - inputs_embeds: Optional[torch.FloatTensor] = None, - labels: Optional[torch.LongTensor] = None, - use_cache: Optional[bool] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple, SequenceClassifierOutputWithPast]: - r""" - labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*): - Labels for computing the sequence classification/regression loss. Indices should be in `[0, ..., - config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If - `config.num_labels > 1` a classification loss is computed (Cross-Entropy). 
- """ - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - transformer_outputs = self.transformer( - input_ids, - past_key_values=past_key_values, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - use_cache=use_cache, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - hidden_states = transformer_outputs[0] - logits = self.score(hidden_states) - - if input_ids is not None: - batch_size, sequence_length = input_ids.shape[:2] - else: - batch_size, sequence_length = inputs_embeds.shape[:2] - - assert ( - self.config.pad_token_id is not None or batch_size == 1 - ), "Cannot handle batch sizes > 1 if no padding token is defined." - if self.config.pad_token_id is None: - sequence_lengths = -1 - else: - if input_ids is not None: - sequence_lengths = torch.ne(input_ids, self.config.pad_token_id).sum(-1) - 1 - else: - sequence_lengths = -1 - logger.warning( - f"{self.__class__.__name__} will not detect padding tokens in `inputs_embeds`. Results may be " - f"unexpected if using padding tokens in conjunction with `inputs_embeds.`" - ) - - pooled_logits = logits[torch.arange(batch_size, device=self.device), sequence_lengths] - - loss = None - if labels is not None: - if self.config.problem_type is None: - if self.num_labels == 1: - self.config.problem_type = "regression" - elif self.num_labels > 1 and (labels.dtype == torch.long or labels.dtype == torch.int): - self.config.problem_type = "single_label_classification" - else: - self.config.problem_type = "multi_label_classification" - - if self.config.problem_type == "regression": - loss_fct = MSELoss() - if self.num_labels == 1: - loss = loss_fct(pooled_logits.squeeze(), labels.squeeze()) - else: - loss = loss_fct(pooled_logits, labels) - elif self.config.problem_type == "single_label_classification": - loss_fct = CrossEntropyLoss() - loss = loss_fct(pooled_logits.view(-1, self.num_labels), labels.view(-1)) - elif self.config.problem_type == "multi_label_classification": - loss_fct = BCEWithLogitsLoss() - loss = loss_fct(pooled_logits, labels) - if not return_dict: - output = (pooled_logits,) + transformer_outputs[1:] - return ((loss,) + output) if loss is not None else output - - return SequenceClassifierOutputWithPast( - loss=loss, - logits=pooled_logits, - past_key_values=transformer_outputs.past_key_values, - hidden_states=transformer_outputs.hidden_states, - attentions=transformer_outputs.attentions, - ) - - -@add_start_docstrings( - """ - GPT2 Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for - Named-Entity-Recognition (NER) tasks. 
- """, - GPT2_START_DOCSTRING, -) -class GPT2ForTokenClassification(GPT2PreTrainedModel): - def __init__(self, config): - super().__init__(config) - self.num_labels = config.num_labels - - self.transformer = GPT2Model(config) - if hasattr(config, "classifier_dropout") and config.classifier_dropout is not None: - classifier_dropout = config.classifier_dropout - elif hasattr(config, "hidden_dropout") and config.hidden_dropout is not None: - classifier_dropout = config.hidden_dropout - else: - classifier_dropout = 0.1 - self.dropout = nn.Dropout(classifier_dropout) - self.classifier = nn.Linear(config.hidden_size, config.num_labels) - - # Model parallel - self.model_parallel = False - self.device_map = None - - # Initialize weights and apply final processing - self.post_init() - - @add_start_docstrings_to_model_forward(GPT2_INPUTS_DOCSTRING) - # fmt: off - @add_code_sample_docstrings( - processor_class=_TOKENIZER_FOR_DOC, - checkpoint="brad1141/gpt2-finetuned-comp2", - output_type=TokenClassifierOutput, - config_class=_CONFIG_FOR_DOC, - expected_loss=0.25, - expected_output=["Lead", "Lead", "Lead", "Position", "Lead", "Lead", "Lead", "Lead", "Lead", "Lead", "Lead", "Lead"], - ) - # fmt: on - def forward( - self, - input_ids: Optional[torch.LongTensor] = None, - past_key_values: Optional[Tuple[Tuple[torch.Tensor]]] = None, - attention_mask: Optional[torch.FloatTensor] = None, - token_type_ids: Optional[torch.LongTensor] = None, - position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - inputs_embeds: Optional[torch.FloatTensor] = None, - labels: Optional[torch.LongTensor] = None, - use_cache: Optional[bool] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple, TokenClassifierOutput]: - r""" - labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*): - Labels for computing the sequence classification/regression loss. Indices should be in `[0, ..., - config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If - `config.num_labels > 1` a classification loss is computed (Cross-Entropy). 
- """ - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - transformer_outputs = self.transformer( - input_ids, - past_key_values=past_key_values, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - use_cache=use_cache, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - hidden_states = transformer_outputs[0] - hidden_states = self.dropout(hidden_states) - logits = self.classifier(hidden_states) - - loss = None - if labels is not None: - loss_fct = CrossEntropyLoss() - loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1)) - - if not return_dict: - output = (logits,) + transformer_outputs[2:] - return ((loss,) + output) if loss is not None else output - - return TokenClassifierOutput( - loss=loss, - logits=logits, - hidden_states=transformer_outputs.hidden_states, - attentions=transformer_outputs.attentions, - ) diff --git a/spaces/naver/SuperFeatures/how/utils/logging.py b/spaces/naver/SuperFeatures/how/utils/logging.py deleted file mode 100644 index 5ba61eacf69fb168c001c572348314d7dd607311..0000000000000000000000000000000000000000 --- a/spaces/naver/SuperFeatures/how/utils/logging.py +++ /dev/null @@ -1,63 +0,0 @@ -"""Logging-related functionality""" - -import time -import logging - -# Logging - -def init_logger(log_path): - """Return a logger instance which logs to stdout and, if log_path is not None, also to a file""" - logger = logging.getLogger("HOW") - logger.setLevel(logging.DEBUG) - - stdout_handler = logging.StreamHandler() - stdout_handler.setLevel(logging.INFO) - stdout_handler.setFormatter(logging.Formatter('%(name)s %(levelname)s: %(message)s')) - logger.addHandler(stdout_handler) - - if log_path: - file_handler = logging.FileHandler(log_path) - file_handler.setLevel(logging.DEBUG) - formatter = logging.Formatter('%(asctime)s %(name)s %(levelname)s: %(message)s') - file_handler.setFormatter(formatter) - logger.addHandler(file_handler) - - return logger - - -# Stopwatch - -class LoggingStopwatch: - """Stopwatch context that produces one message when entered and another one when exited, - with the time spent in the context embedded in the exiting message. - - :param str message: Message to be logged at the start and finish. If the first word - of the message ends with 'ing', convert to passive for finish message. - :param callable log_start: Will be called with given message at the start - :param callable log_finish: Will be called with built message at the finish. 
If None, use - log_start - """ - - def __init__(self, message, log_start, log_finish=None): - self.message = message - self.log_start = log_start - self.log_finish = log_finish if log_finish is not None else log_start - self.time0 = None - - def __enter__(self): - self.time0 = time.time() - if self.log_start: - self.log_start(self.message.capitalize()) - - def __exit__(self, exc_type, exc_val, exc_tb): - # Build message - words = self.message.split(" ") - secs = "%.1fs" % (time.time() - self.time0) - if words[0].endswith("ing"): - words += [words.pop(0).replace("ing", "ed"), "in", secs] - else: - words += ["(%.1f)" % secs] - - # Log message - if self.log_finish: - self.log_finish(" ".join(words).capitalize()) diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Download TOP Sun Java Virtual Machine (jvm) Version 1.4.2 03l.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Download TOP Sun Java Virtual Machine (jvm) Version 1.4.2 03l.md deleted file mode 100644 index 2c7f0f3730f9dc64ccfbc7c330b221c76a4c6a68..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Download TOP Sun Java Virtual Machine (jvm) Version 1.4.2 03l.md +++ /dev/null @@ -1,37 +0,0 @@ -
-

# How to Download Sun Java Virtual Machine (jvm) Version 1.4.2 03l

-

If you are looking for an older version of the Java platform, you may want to download Sun Java Virtual Machine (jvm) Version 1.4.2 03l. This version of the Java platform was released by Sun Microsystems in 2004 and is no longer supported by Oracle. However, you can still access it from the Oracle Java Archive page[^1^].

-

In this article, we will show you how to download and install Sun Java Virtual Machine (jvm) Version 1.4.2 03l on your Windows computer. We will also explain some of the features and benefits of this version of the Java platform.

-



-

## What is Sun Java Virtual Machine (jvm) Version 1.4.2 03l?

-

Sun Java Virtual Machine (jvm) Version 1.4.2 03l is a software package that allows you to run applications and applets written in the Java programming language. It consists of two components: the Java Runtime Environment (JRE) and the Java Development Kit (JDK).

-

The JRE is the component that provides the basic functionality for running Java applications and applets. It includes the Java virtual machine, the core libraries, and the supporting files.

-

The JDK is the component that provides the tools for developing and testing Java applications and applets. It includes the JRE, as well as the compiler, debugger, and other utilities.

-

## Why use Sun Java Virtual Machine (jvm) Version 1.4.2 03l?

-

Sun Java Virtual Machine (jvm) Version 1.4.2 03l is an older release of the Java platform that may be required by some legacy applications or systems. Some of the features and benefits of this version are:

-
- It supports standards-based, interoperable applications, applets, and web services.
- It offers enhanced performance and scalability, as well as improved reliability and serviceability.
- It introduces new features such as XML processing, logging, assertions, regular expressions, preferences, chained exceptions, IPv6 support, and more.
- It is compatible with the Windows XP, Windows 2000, Windows NT 4.0, Windows ME, Windows 98, and Windows 95 operating systems.

## How to download Sun Java Virtual Machine (jvm) Version 1.4.2 03l?

-

To download Sun Java Virtual Machine (jvm) Version 1.4.2 03l, you need to have an oracle.com account. If you don't have one, you can register for one for free on the Oracle website.

-

Once you have an oracle.com account, follow these steps:

-
1. Go to the Oracle Java Archive page[^1^] and scroll down to the section titled "Java SE Development Kit 1.4.2_30".
2. Select the file that matches your operating system and processor architecture. For example, if you have a Windows computer with a 32-bit processor, select "j2sdk-1_4_2_30-windows-i586-p.exe".
3. Click on the file name to start the download. You may need to accept the license agreement and sign in with your oracle.com account credentials.
4. Save the file to your preferred location on your computer.
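
If you script the retrieval, or simply want to double-check the file you just saved, a small Python sketch like the one below can help. The installer name is only a placeholder for whatever you downloaded; the script prints the file size and SHA-256 digest so you can compare them against any checksum you keep on record.

```python
import hashlib
import os

# Placeholder: adjust to the path where you saved the installer.
installer_path = "j2sdk-1_4_2_30-windows-i586-p.exe"

# Read the file in chunks so even large installers stay memory-friendly.
digest = hashlib.sha256()
with open(installer_path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        digest.update(chunk)

print(f"File:    {installer_path}")
print(f"Size:    {os.path.getsize(installer_path)} bytes")
print(f"SHA-256: {digest.hexdigest()}")
```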

## How to install Sun Java Virtual Machine (jvm) Version 1.4.2 03l?

-

To install Sun Java Virtual Machine (jvm) Version 1.4.2 03l on your Windows computer, follow these steps:

-
1. Locate the file that you downloaded in the previous step and double-click on it to launch the installer.
2. Follow the instructions on the screen to complete the installation process.
3. Restart your computer if prompted, so the changes take effect.

Congratulations! You have successfully downloaded and installed Sun Java Virtual Machine (jvm) Version 1.4.2 03l on your Windows computer.
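
To confirm that the new runtime is actually reachable from the command line, you can open a Command Prompt and run `java -version`. The small Python sketch below wraps the same check; it assumes the installer put `java.exe` on your PATH.

```python
import subprocess

# `java -version` reports the version on stderr, so capture both streams.
result = subprocess.run(
    ["java", "-version"],
    capture_output=True,
    text=True,
)

output = (result.stderr or result.stdout).strip()
if result.returncode == 0 and "1.4.2" in output:
    print("Java 1.4.2 is installed and on the PATH:")
else:
    print("Java 1.4.2 was not detected; check your installation and PATH.")
print(output)
```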

-

-
-
\ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Tamil Movies Midhunam.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Tamil Movies Midhunam.md deleted file mode 100644 index b494b6106af8a8f6d119ab232e918dfd645ef372..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Tamil Movies Midhunam.md +++ /dev/null @@ -1,16 +0,0 @@ - -

# Mithunam: A Heartwarming Tale of an Old Couple

-

Mithunam is a 2013 Tamil movie directed by Tanikella Bharani and starring SP Balasubrahmanyam and Lakshmi as an old couple living in a village. The movie is based on a novel by Sri Ramana and explores the themes of love, companionship, and solitude in old age.

-




-

The movie revolves around Appadasu (SP Balasubrahmanyam) and Bucchi (Lakshmi), who have been married for over 50 years and have no children. They live in their own secluded world, away from the hustle and bustle of modern life. They fill their retired days with raw quarrels and ripe loves, cooking, gardening, reading, and singing. They also have a friendly relationship with their neighbors and the village kids, who often visit them for stories and snacks.

-

The movie showcases the simple joys and sorrows of the couple, who have accepted each other's flaws and strengths. They support each other through sickness and health, happiness and sadness, life and death. They also share a deep bond with nature and their surroundings, which reflect their moods and emotions.

-

Mithunam is a rare gem of a movie that celebrates the beauty of old age and the power of love. It is a refreshing contrast to the typical commercial movies that focus on glamour, violence, and romance. The movie has received critical acclaim and won several awards, including the Nandi Award for Best Feature Film. It is also one of the few movies that features SP Balasubrahmanyam as an actor, apart from being a legendary singer.

-

If you are looking for a movie that will touch your heart and make you smile, Mithunam is a perfect choice. You can watch it on Prime Video[^1^] [^2^] or YouTube[^3^].

-

- -

The movie has received an excellent response from critics and audiences alike, who have praised the performances of the lead actors, the direction, the screenplay, the music, and the cinematography. It has been described as a rare gem that celebrates the beauty of old age and the power of love, and its quality has been compared to that of world cinema on the international circuit.

-

Some of the highlights of the movie are the dialogues, which are witty, humorous, and poetic; the songs, which are melodious and meaningful; and the scenes, which are realistic and touching. The movie also has a subtle message about the importance of living in harmony with nature and respecting one's culture and traditions.

-

The movie is not without its flaws, however. Some viewers have found it too slow-paced, too simplistic, or too sentimental, and some have criticized the ending, which is abrupt and tragic. It also has limited appeal for viewers looking for more action, drama, or romance.

-

Nevertheless, Mithunam is a movie that deserves to be watched and appreciated for its honesty, sincerity, and originality. It is a movie that will make you laugh, cry, and think. It is a movie that will remind you of the value of life and love.

-
-
\ No newline at end of file diff --git a/spaces/njanakiev/gradio-openai-clip-grad-cam/gradcam/app.py b/spaces/njanakiev/gradio-openai-clip-grad-cam/gradcam/app.py deleted file mode 100644 index d06367d034de2293bb4aad620498355b61832552..0000000000000000000000000000000000000000 --- a/spaces/njanakiev/gradio-openai-clip-grad-cam/gradcam/app.py +++ /dev/null @@ -1,58 +0,0 @@ -import gradio as gr -import clip -import torch - -import utils - -#clip_model = "RN50x4" -clip_model = "RN50x64" -device = "cuda" if torch.cuda.is_available() else "cpu" -model, preprocess = clip.load(clip_model, device=device, jit=False) -model.eval() - - -def grad_cam_fn(text, img, saliency_layer): - resize = model.visual.input_resolution - img = img.resize((resize, resize)) - - text_input = clip.tokenize([text]).to(device) - text_feature = model.encode_text(text_input).float() - image_input = preprocess(img).unsqueeze(0).to(device) - - attn_map = utils.gradCAM( - model.visual, - image_input, - text_feature, - getattr(model.visual, saliency_layer) - ) - attn_map = attn_map.squeeze().detach().cpu().numpy() - attn_map = utils.getAttMap(img, attn_map) - - return attn_map - - -interface = gr.Interface( - fn=grad_cam_fn, - inputs=[ - gr.inputs.Textbox( - label="Target Text", - lines=1), - gr.inputs.Image( - label='Input Image', - image_mode="RGB", - type='pil', - shape=(512, 512)), - gr.inputs.Dropdown( - ["layer4", "layer3", "layer2", "layer1"], - default="layer4", - label="Saliency Layer") - ], - outputs=gr.outputs.Image( - type="pil", - label="Attention Map"), - examples=[ - ['a cat lying on the floor', 'assets/cat_dog.jpg', 'layer4'], - ['a dog sitting', 'assets/cat_dog.jpg', 'layer4'] - ], - description="OpenAI CLIP Grad CAM") -interface.launch() diff --git a/spaces/ntt123/vietTTS/app.py b/spaces/ntt123/vietTTS/app.py deleted file mode 100644 index c0eb25c63615796326f9520e7fe61e04dbb0cd2a..0000000000000000000000000000000000000000 --- a/spaces/ntt123/vietTTS/app.py +++ /dev/null @@ -1,49 +0,0 @@ -from vietTTS.hifigan.mel2wave import mel2wave -from vietTTS.nat.text2mel import text2mel -from vietTTS import nat_normalize_text -import numpy as np -import gradio as gr -import os - - -def text_to_speech(text): - # prevent too long text - if len(text) > 500: - text = text[:500] - text = nat_normalize_text(text) - mel = text2mel( - text, - "lexicon.txt", - 0.2, - "acoustic_latest_ckpt.pickle", - "duration_latest_ckpt.pickle", - ) - wave = mel2wave(mel, "config.json", "hk_hifi.pickle") - return (wave * (2**15)).astype(np.int16) - - -def speak(text): - y = text_to_speech(text) - return 16_000, y - - -title = "vietTTS" -description = "A vietnamese text-to-speech demo." - -gr.Interface( - fn=speak, - inputs="text", - outputs="audio", - title = title, - examples = [ - "Trăm năm trong cõi người ta, chữ tài chữ mệnh khéo là ghét nhau.", - "Đoạn trường tân thanh, thường được biết đến với cái tên đơn giản là Truyện Kiều, là một truyện thơ của đại thi hào Nguyễn Du", - "Lục Vân Tiên quê ở huyện Đông Thành, khôi ngô tuấn tú, tài kiêm văn võ. Nghe tin triều đình mở khoa thi, Vân Tiên từ giã thầy xuống núi đua tài.", - "Lê Quý Đôn, tên thuở nhỏ là Lê Danh Phương, là vị quan thời Lê trung hưng, cũng là nhà thơ và được mệnh danh là nhà bác học lớn của Việt Nam trong thời phong kiến", - "Tất cả mọi người đều sinh ra có quyền bình đẳng. Tạo hóa cho họ những quyền không ai có thể xâm phạm được; trong những quyền ấy, có quyền được sống, quyền tự do và quyền mưu cầu hạnh phúc." 
- ], - description=description, - theme="default", - allow_screenshot=False, - allow_flagging="never", -).launch(debug=False) diff --git a/spaces/nyx-ai/stylegan2-flax-tpu/training_steps.py b/spaces/nyx-ai/stylegan2-flax-tpu/training_steps.py deleted file mode 100644 index 06bf18df9ef6232c65a5818e31d1a8aedd5fbe25..0000000000000000000000000000000000000000 --- a/spaces/nyx-ai/stylegan2-flax-tpu/training_steps.py +++ /dev/null @@ -1,219 +0,0 @@ -import jax -import jax.numpy as jnp -import functools - - -def main_step_G(state_G, state_D, batch, z_latent1, z_latent2, metrics, mixing_prob, rng): - - def loss_fn(params): - w_latent1, new_state_G = state_G.apply_mapping({'params': params['mapping'], 'moving_stats': state_G.moving_stats}, - z_latent1, - batch['label'], - mutable=['moving_stats']) - w_latent2 = state_G.apply_mapping({'params': params['mapping'], 'moving_stats': state_G.moving_stats}, - z_latent2, - batch['label'], - skip_w_avg_update=True) - - # style mixing - cutoff_rng, layer_select_rng, synth_rng = jax.random.split(rng, num=3) - num_layers = w_latent1.shape[1] - layer_idx = jnp.arange(num_layers)[jnp.newaxis, :, jnp.newaxis] - mixing_cutoff = jax.lax.cond(jax.random.uniform(cutoff_rng, (), minval=0.0, maxval=1.0) < mixing_prob, - lambda _: jax.random.randint(layer_select_rng, (), 1, num_layers, dtype=jnp.int32), - lambda _: num_layers, - operand=None) - mixing_cond = jnp.broadcast_to(layer_idx < mixing_cutoff, w_latent1.shape) - w_latent = jnp.where(mixing_cond, w_latent1, w_latent2) - - image_gen = state_G.apply_synthesis({'params': params['synthesis'], 'noise_consts': state_G.noise_consts}, - w_latent, - rng=synth_rng) - - fake_logits = state_D.apply_fn(state_D.params, image_gen, batch['label']) - loss = jnp.mean(jax.nn.softplus(-fake_logits)) - return loss, (fake_logits, image_gen, new_state_G) - - dynamic_scale = state_G.dynamic_scale_main - - if dynamic_scale: - grad_fn = dynamic_scale.value_and_grad(loss_fn, has_aux=True, axis_name='batch') - dynamic_scale, is_fin, aux, grads = grad_fn(state_G.params) - else: - grad_fn = jax.value_and_grad(loss_fn, has_aux=True) - aux, grads = grad_fn(state_G.params) - grads = jax.lax.pmean(grads, axis_name='batch') - - loss = aux[0] - _, image_gen, new_state = aux[1] - metrics['G_loss'] = loss - metrics['image_gen'] = image_gen - - new_state_G = state_G.apply_gradients(grads=grads, moving_stats=new_state['moving_stats']) - - if dynamic_scale: - new_state_G = new_state_G.replace(opt_state=jax.tree_multimap(functools.partial(jnp.where, is_fin), - new_state_G.opt_state, - state_G.opt_state), - params=jax.tree_multimap(functools.partial(jnp.where, is_fin), - new_state_G.params, - state_G.params)) - metrics['G_scale'] = dynamic_scale.scale - - return new_state_G, metrics - - -def regul_step_G(state_G, batch, z_latent, pl_noise, pl_mean, metrics, config, rng): - - def loss_fn(params): - w_latent, new_state_G = state_G.apply_mapping({'params': params['mapping'], 'moving_stats': state_G.moving_stats}, - z_latent, - batch['label'], - mutable=['moving_stats']) - - pl_grads = jax.grad(lambda *args: jnp.sum(state_G.apply_synthesis(*args) * pl_noise), argnums=1)({'params': params['synthesis'], - 'noise_consts': state_G.noise_consts}, - w_latent, - 'random', - rng) - pl_lengths = jnp.sqrt(jnp.mean(jnp.sum(jnp.square(pl_grads), axis=2), axis=1)) - pl_mean_new = pl_mean + config.pl_decay * (jnp.mean(pl_lengths) - pl_mean) - pl_penalty = jnp.square(pl_lengths - pl_mean_new) * config.pl_weight - loss = jnp.mean(pl_penalty) * config.G_reg_interval - - return 
loss, pl_mean_new - - dynamic_scale = state_G.dynamic_scale_reg - - if dynamic_scale: - grad_fn = dynamic_scale.value_and_grad(loss_fn, has_aux=True) - dynamic_scale, is_fin, aux, grads = grad_fn(state_G.params) - else: - grad_fn = jax.value_and_grad(loss_fn, has_aux=True) - aux, grads = grad_fn(state_G.params) - grads = jax.lax.pmean(grads, axis_name='batch') - - loss = aux[0] - pl_mean_new = aux[1] - - metrics['G_regul_loss'] = loss - new_state_G = state_G.apply_gradients(grads=grads) - - if dynamic_scale: - new_state_G = new_state_G.replace(opt_state=jax.tree_multimap(functools.partial(jnp.where, is_fin), - new_state_G.opt_state, - state_G.opt_state), - params=jax.tree_multimap(functools.partial(jnp.where, is_fin), - new_state_G.params, - state_G.params)) - metrics['G_regul_scale'] = dynamic_scale.scale - - return new_state_G, metrics, pl_mean_new - - -def main_step_D(state_G, state_D, batch, z_latent1, z_latent2, metrics, mixing_prob, rng): - - def loss_fn(params): - w_latent1 = state_G.apply_mapping({'params': state_G.params['mapping'], 'moving_stats': state_G.moving_stats}, - z_latent1, - batch['label'], - train=False) - - w_latent2 = state_G.apply_mapping({'params': state_G.params['mapping'], 'moving_stats': state_G.moving_stats}, - z_latent2, - batch['label'], - train=False) - - # style mixing - cutoff_rng, layer_select_rng, synth_rng = jax.random.split(rng, num=3) - num_layers = w_latent1.shape[1] - layer_idx = jnp.arange(num_layers)[jnp.newaxis, :, jnp.newaxis] - mixing_cutoff = jax.lax.cond(jax.random.uniform(cutoff_rng, (), minval=0.0, maxval=1.0) < mixing_prob, - lambda _: jax.random.randint(layer_select_rng, (), 1, num_layers, dtype=jnp.int32), - lambda _: num_layers, - operand=None) - mixing_cond = jnp.broadcast_to(layer_idx < mixing_cutoff, w_latent1.shape) - w_latent = jnp.where(mixing_cond, w_latent1, w_latent2) - - image_gen = state_G.apply_synthesis({'params': state_G.params['synthesis'], 'noise_consts': state_G.noise_consts}, - w_latent, - rng=synth_rng) - - fake_logits = state_D.apply_fn(params, image_gen, batch['label']) - real_logits = state_D.apply_fn(params, batch['image'], batch['label']) - - loss_fake = jax.nn.softplus(fake_logits) - loss_real = jax.nn.softplus(-real_logits) - loss = jnp.mean(loss_fake + loss_real) - - return loss, (fake_logits, real_logits) - - dynamic_scale = state_D.dynamic_scale_main - - if dynamic_scale: - grad_fn = dynamic_scale.value_and_grad(loss_fn, has_aux=True) - dynamic_scale, is_fin, aux, grads = grad_fn(state_D.params) - else: - grad_fn = jax.value_and_grad(loss_fn, has_aux=True) - aux, grads = grad_fn(state_D.params) - grads = jax.lax.pmean(grads, axis_name='batch') - - loss = aux[0] - fake_logits, real_logits = aux[1] - metrics['D_loss'] = loss - metrics['fake_logits'] = jnp.mean(fake_logits) - metrics['real_logits'] = jnp.mean(real_logits) - - new_state_D = state_D.apply_gradients(grads=grads) - - if dynamic_scale: - new_state_D = new_state_D.replace(opt_state=jax.tree_multimap(functools.partial(jnp.where, is_fin), - new_state_D.opt_state, - state_D.opt_state), - params=jax.tree_multimap(functools.partial(jnp.where, is_fin), - new_state_D.params, - state_D.params)) - metrics['D_scale'] = dynamic_scale.scale - - return new_state_D, metrics - - -def regul_step_D(state_D, batch, metrics, config): - - def loss_fn(params): - r1_grads = jax.grad(lambda *args: jnp.sum(state_D.apply_fn(*args)), argnums=1)(params, batch['image'], batch['label']) - r1_penalty = jnp.sum(jnp.square(r1_grads), axis=(1, 2, 3)) * (config.r1_gamma / 2) * 
config.D_reg_interval - loss = jnp.mean(r1_penalty) - return loss, None - - dynamic_scale = state_D.dynamic_scale_reg - - if dynamic_scale: - grad_fn = dynamic_scale.value_and_grad(loss_fn, has_aux=True) - dynamic_scale, is_fin, aux, grads = grad_fn(state_D.params) - else: - grad_fn = jax.value_and_grad(loss_fn, has_aux=True) - aux, grads = grad_fn(state_D.params) - grads = jax.lax.pmean(grads, axis_name='batch') - - loss = aux[0] - metrics['D_regul_loss'] = loss - - new_state_D = state_D.apply_gradients(grads=grads) - - if dynamic_scale: - new_state_D = new_state_D.replace(opt_state=jax.tree_multimap(functools.partial(jnp.where, is_fin), - new_state_D.opt_state, - state_D.opt_state), - params=jax.tree_multimap(functools.partial(jnp.where, is_fin), - new_state_D.params, - state_D.params)) - metrics['D_regul_scale'] = dynamic_scale.scale - - return new_state_D, metrics - - -def eval_step_G(generator, params, z_latent, labels, truncation): - image_gen = generator.apply(params, z_latent, labels, truncation_psi=truncation, train=False, noise_mode='const') - return image_gen - diff --git a/spaces/omlab/vlchecklist_demo/models/vilt/modules/vilt_module.py b/spaces/omlab/vlchecklist_demo/models/vilt/modules/vilt_module.py deleted file mode 100644 index 06b27f60b42b76b13e4897c9f38c392d995c490f..0000000000000000000000000000000000000000 --- a/spaces/omlab/vlchecklist_demo/models/vilt/modules/vilt_module.py +++ /dev/null @@ -1,257 +0,0 @@ -import torch -import torch.nn as nn -import pytorch_lightning as pl -import models.vilt.modules.vision_transformer as vit - -from transformers.models.bert.modeling_bert import BertConfig, BertEmbeddings -from models.vilt.modules import heads, objectives, vilt_utils - - -class ViLTransformer(pl.LightningModule): - def __init__(self, config): - super().__init__() - self.save_hyperparameters() - - bert_config = BertConfig( - vocab_size=config["vocab_size"], - hidden_size=config["hidden_size"], - num_hidden_layers=config["num_layers"], - num_attention_heads=config["num_heads"], - intermediate_size=config["hidden_size"] * config["mlp_ratio"], - max_position_embeddings=config["max_text_len"], - hidden_dropout_prob=config["drop_rate"], - attention_probs_dropout_prob=config["drop_rate"], - ) - - self.text_embeddings = BertEmbeddings(bert_config) - self.text_embeddings.apply(objectives.init_weights) - - self.token_type_embeddings = nn.Embedding(2, config["hidden_size"]) - self.token_type_embeddings.apply(objectives.init_weights) - - if self.hparams.config["load_path"] == "": - self.transformer = getattr(vit, self.hparams.config["vit"])( - pretrained=True, config=self.hparams.config - ) - else: - self.transformer = getattr(vit, self.hparams.config["vit"])( - pretrained=False, config=self.hparams.config - ) - - self.pooler = heads.Pooler(config["hidden_size"]) - self.pooler.apply(objectives.init_weights) - - if config["loss_names"]["mlm"] > 0: - self.mlm_score = heads.MLMHead(bert_config) - self.mlm_score.apply(objectives.init_weights) - - if config["loss_names"]["itm"] > 0: - self.itm_score = heads.ITMHead(config["hidden_size"]) - self.itm_score.apply(objectives.init_weights) - - if config["loss_names"]["mpp"] > 0: - self.mpp_score = heads.MPPHead(bert_config) - self.mpp_score.apply(objectives.init_weights) - - # ===================== Downstream ===================== # - if ( - self.hparams.config["load_path"] != "" - and not self.hparams.config["test_only"] - ): - ckpt = torch.load(self.hparams.config["load_path"], map_location="cpu") - state_dict = ckpt["state_dict"] - 
self.load_state_dict(state_dict, strict=False) - - hs = self.hparams.config["hidden_size"] - if self.hparams.config["loss_names"]["ipm"] > 0: - self.ipm_score = nn.Linear(hs, 2) - - if self.hparams.config["loss_names"]["vqa"] > 0: - vs = self.hparams.config["vqav2_label_size"] - self.vqa_classifier = nn.Sequential( - nn.Linear(hs, hs * 2), - nn.LayerNorm(hs * 2), - nn.GELU(), - nn.Linear(hs * 2, vs), - ) - self.vqa_classifier.apply(objectives.init_weights) - - if self.hparams.config["loss_names"]["nlvr2"] > 0: - self.nlvr2_classifier = nn.Sequential( - nn.Linear(hs * 2, hs * 2), - nn.LayerNorm(hs * 2), - nn.GELU(), - nn.Linear(hs * 2, 2), - ) - self.nlvr2_classifier.apply(objectives.init_weights) - emb_data = self.token_type_embeddings.weight.data - self.token_type_embeddings = nn.Embedding(3, hs) - self.token_type_embeddings.apply(objectives.init_weights) - self.token_type_embeddings.weight.data[0, :] = emb_data[0, :] - self.token_type_embeddings.weight.data[1, :] = emb_data[1, :] - self.token_type_embeddings.weight.data[2, :] = emb_data[1, :] - - if self.hparams.config["loss_names"]["irtr"] > 0: - self.rank_output = nn.Linear(hs, 1) - self.rank_output.weight.data = self.itm_score.fc.weight.data[1:, :] - self.rank_output.bias.data = self.itm_score.fc.bias.data[1:] - self.margin = 0.2 - for p in self.itm_score.parameters(): - p.requires_grad = False - - vilt_utils.set_metrics(self) - self.current_tasks = list() - - # ===================== load downstream (test_only) ====================== - - if self.hparams.config["load_path"] != "" and self.hparams.config["test_only"]: - ckpt = torch.load(self.hparams.config["load_path"], map_location="cpu") - state_dict = ckpt["state_dict"] - self.load_state_dict(state_dict, strict=False) - - def infer( - self, - batch, - mask_text=False, - mask_image=False, - image_token_type_idx=1, - image_embeds=None, - image_masks=None, - ): - if f"image_{image_token_type_idx - 1}" in batch: - imgkey = f"image_{image_token_type_idx - 1}" - else: - imgkey = "image" - - do_mlm = "_mlm" if mask_text else "" - text_ids = batch[f"text_ids{do_mlm}"] - text_labels = batch[f"text_labels{do_mlm}"] - text_masks = batch[f"text_masks"] - text_embeds = self.text_embeddings(text_ids) - image_emb = [] - if image_embeds is None and image_masks is None: - for img in batch[imgkey]: - a = self.transformer.visual_embed( - img, - max_image_len=self.hparams.config["max_image_len"], - mask_it=mask_image, - ) - image_emb.append(a) - else: - patch_index, image_labels = ( - None, - None, - ) - list_text_embeds, list_image_embeds, list_image_masks = [],[], [] - for i, (image_embeds, image_masks, patch_index, image_labels) in enumerate(image_emb): - _text_embeds, image_embeds = ( - torch.stack([text_embeds[i,:,:]],dim=0) + self.token_type_embeddings(torch.zeros_like(torch.stack([text_masks[i,:]], dim=0))), - image_embeds - + self.token_type_embeddings( - torch.full_like(image_masks, image_token_type_idx) - ), - ) - list_text_embeds.append(_text_embeds) - list_image_embeds.append(image_embeds) - list_image_masks.append(image_masks) - text_embeds = torch.cat(list_text_embeds, dim=0) - image_embeds = torch.cat(list_image_embeds, dim=0) - co_embeds = torch.cat([text_embeds, image_embeds], dim=1) - co_masks = torch.cat([text_masks, torch.cat(list_image_masks,dim=0)], dim=1) - - x = co_embeds - - for i, blk in enumerate(self.transformer.blocks): - x, _attn = blk(x, mask=co_masks) - - x = self.transformer.norm(x) - text_feats, image_feats = ( - x[:, : text_embeds.shape[1]], - x[:, 
text_embeds.shape[1] :], - ) - cls_feats = self.pooler(x) - - ret = { - "text_feats": text_feats, - "image_feats": image_feats, - "cls_feats": cls_feats, - "raw_cls_feats": x[:, 0], - "image_labels": image_labels, - "image_masks": image_masks, - "text_labels": text_labels, - "text_ids": text_ids, - "text_masks": text_masks, - "patch_index": patch_index, - } - - return ret - - def forward(self, batch): - ret = dict() - if len(self.current_tasks) == 0: - ret.update(self.infer(batch)) - return ret - - # Masked Language Modeling - if "mlm" in self.current_tasks: - ret.update(objectives.compute_mlm(self, batch)) - - # Masked Patch Prediction - if "mpp" in self.current_tasks: - ret.update(objectives.compute_mpp(self, batch)) - - # Image Text Matching - if "itm" in self.current_tasks: - ret.update(objectives.compute_itm_wpa(self, batch)) - - # Visual Question Answering - if "vqa" in self.current_tasks: - ret.update(objectives.compute_vqa(self, batch)) - - # Natural Language for Visual Reasoning 2 - if "nlvr2" in self.current_tasks: - ret.update(objectives.compute_nlvr2(self, batch)) - - # Image Retrieval and Text Retrieval - if "irtr" in self.current_tasks: - ret.update(objectives.compute_irtr(self, batch)) - - return ret - - def training_step(self, batch, batch_idx): - vilt_utils.set_task(self) - output = self(batch) - total_loss = sum([v for k, v in output.items() if "loss" in k]) - - return total_loss - - def training_epoch_end(self, outs): - vilt_utils.epoch_wrapup(self) - - def validation_step(self, batch, batch_idx): - vilt_utils.set_task(self) - output = self(batch) - - def validation_epoch_end(self, outs): - vilt_utils.epoch_wrapup(self) - - def test_step(self, batch, batch_idx): - vilt_utils.set_task(self) - output = self(batch) - ret = dict() - - if self.hparams.config["loss_names"]["vqa"] > 0: - ret.update(objectives.vqa_test_step(self, batch, output)) - - return ret - - def test_epoch_end(self, outs): - model_name = self.hparams.config["load_path"].split("/")[-1][:-5] - - if self.hparams.config["loss_names"]["vqa"] > 0: - objectives.vqa_test_wrapup(outs, model_name) - vilt_utils.epoch_wrapup(self) - - def configure_optimizers(self): - return vilt_utils.set_schedule(self) - diff --git a/spaces/ongxuanhong/listing-content-with-ai/app.py b/spaces/ongxuanhong/listing-content-with-ai/app.py deleted file mode 100644 index c75ecf29383579095c8504b9445dc90a5ca30934..0000000000000000000000000000000000000000 --- a/spaces/ongxuanhong/listing-content-with-ai/app.py +++ /dev/null @@ -1,138 +0,0 @@ -import os - -import streamlit as st - -from langchain.llms import Replicate -from langchain.tools import DuckDuckGoSearchRun -from langchain.chains import LLMChain - -from langchain.prompts import PromptTemplate - -# Creating Session State Variable -if "API_Key" not in st.session_state: - st.session_state["API_Key"] = "" - - -with st.sidebar: - # Sidebar to capture the Replicate API key - st.sidebar.image("figs/logo_yes4all.png", width=300, use_column_width=True) - st.sidebar.title("Together we change the global e-commerce") - - if "REPLICATE_API_TOKEN" in st.secrets: - st.success("API key already provided!", icon="✅") - replicate_api = st.secrets["REPLICATE_API_TOKEN"] - else: - st.session_state["API_Key"] = st.sidebar.text_input( - "🗝 What's your Replicate API key?", type="password" - ) - replicate_api = st.session_state["API_Key"] - if not (replicate_api.startswith("r8_") and len(replicate_api) == 40): - st.warning("Please enter your credentials!", icon="⚠️") - else: - st.success("Let's start 
generating content!", icon="👉") - os.environ["REPLICATE_API_TOKEN"] = replicate_api - -st.title("Listing content with AI") -st.markdown( - """ - Generate Product Description for your products instantly!\n - Provide product name and keywords related to that product. Click on 'Generate Description' button and multi-paragraph rich text product description will be genrated instantly.\n - Note: Generated product description is SEO compliant and can be used to populate product information. - """ -) -# Captures User Inputs -mode = { - "Expert": "Written from the perspective of a product or industry expert, this tone sounds professional by using scientific and objective language, and statements of fact.", - "Supportive": "Written from the point of view of someone that empathizes with the customer and wants to help them, this tone sounds supportive, and uses language that is friendly, approachable, and straightforward.", - "Persuasive": "Written from the perspective of someone who passionately believes in the value of the product, this tone sounds persuasive and inspiring, using language that appeals to the senses and inspires strong emotions in the buyer.", - "Daring": "Written from the perspective of someone who challenges the buyer to be bold and adventurous, this tone uses strong action words to motivate and inspire.", - "Playful": "Written from the perspective of someone who doesn’t take themselves or life too seriously, this tone uses fun or quirky language, including humor and slang expressions.", - "Sophisticated": "Written in the style of a luxury brand selling premium products, this tone sounds sophisticated and refined, using language that appeals to a buyer’s desire for high quality and taste.", -} -writing_mode = st.selectbox( - "**Choose your writing mode**", - ("Expert", "Supportive", "Persuasive", "Daring", "Playful", "Sophisticated"), -) -if writing_mode: - st.info(mode[writing_mode]) - -product_name = st.text_input( - "**Please provide product name**", - key="product_name", - placeholder="Jasper Deluxe TV Console Table", -) - -product_keywords = st.text_input( - "**Please provide the keywords of the product**", - key="product_keywords", - placeholder="tv stand walnut solid wood modern console blue", -) - -description_length = st.slider( - "Expected Description Length ✨", 300, 1000, 500, step=100, key="description_length" -) -creativity = st.slider("Creativity ✨ - (0 LOW || 1 HIGH)", 0.01, 1.0, 0.4, step=0.1) - -submit = st.button("Generate description for me") - -if submit and product_name and product_keywords: - if st.session_state["API_Key"]: - # Display Search Engine Result - search = DuckDuckGoSearchRun() - search_result = search.run(product_name) - - # Init LLM model - llm = Replicate( - model="meta/llama-2-13b-chat:f4e2de70d66816a838a89eeeb621910adffb0dd0baba3976c96980970978018d", - model_kwargs={ - "temperature": creativity, - "max_new_tokens": description_length, - "system_prompt": mode[writing_mode], - }, - ) - - # Display Title - st.subheader("Suggested titles") - with st.spinner("Wait a second..."): - # Template for generating 'Title' - title_template = PromptTemplate( - input_variables=["product_name"], - template="List of 3 products title for `{product_name}` on Amazon Selling.", - ) - title_chain = LLMChain(llm=llm, prompt=title_template, verbose=True) - product_title = title_chain.run(product_name) - st.markdown(product_title) - - st.subheader("Suggested product description") - with st.spinner("Wait a second..."): - # Template for generating 'Product description' 
using search engine - description_template = PromptTemplate( - input_variables=[ - "product_keywords", - "DuckDuckGo_Search", - ], - template=""" - Using this search data {DuckDuckGo_Search}\n\n - Come up with list of 3 product descriptions based on these keywords: ```{product_keywords}```\n\n - Output examples: - **Product descriptions 1**: \n\n - **Product descriptions 2**: \n\n - And so on... - """, - ) - description_chain = LLMChain( - llm=llm, prompt=description_template, verbose=True - ) - - description = description_chain.run( - product_keywords=product_keywords, - DuckDuckGo_Search=search_result, - description_length=description_length, - ) - st.markdown(description) - - # Let's generate the description - st.success("Hope you like this description") - - else: - st.error("Ooopssss!!! Please provide Replicate API key.....") diff --git a/spaces/openlamm/LAMM/model/utils/__init__.py b/spaces/openlamm/LAMM/model/utils/__init__.py deleted file mode 100644 index fe6365755ddd81c314e0bb08b4eafbc8d80910fe..0000000000000000000000000000000000000000 --- a/spaces/openlamm/LAMM/model/utils/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .pcl_utils import * diff --git a/spaces/osanseviero/ChatGPT_MANY_LANGS/README.md b/spaces/osanseviero/ChatGPT_MANY_LANGS/README.md deleted file mode 100644 index 54d935c200d90b4e24727758e4582ae3322e65ed..0000000000000000000000000000000000000000 --- a/spaces/osanseviero/ChatGPT_MANY_LANGS/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: ChatGPT HF -emoji: 🌖 -colorFrom: indigo -colorTo: yellow -sdk: gradio -sdk_version: 3.12.0 -app_file: app.py -pinned: false -license: apache-2.0 -duplicated_from: Xhaheen/ChatGPT_HF ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/overlordx/starlight/README.md b/spaces/overlordx/starlight/README.md deleted file mode 100644 index a963471f2740e96943f24c0df804aad85425f2be..0000000000000000000000000000000000000000 --- a/spaces/overlordx/starlight/README.md +++ /dev/null @@ -1,25 +0,0 @@ ---- -title: Starlight -emoji: 📈 -colorFrom: yellow -colorTo: gray -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false -license: mit ---- -# Why use starlight -## Are you a large law firm looking to streamline your workflow and maximize efficiency? -## Are you looking for ways to increase your productivity without sacrificing quality? - -XXX - -YYY - -* Our AI-driven summaries and document preparation tools can help you quickly and accurately prepare documents, saving your time and your client's money. -* We understand that large law firms are under constant pressure to deliver high-quality results in a timely manner. That’s why we offer the latest in AI-driven document services, allowing you stay one step ahead, and freeing up time for you to focus on other important tasks. -* Take your law firm to the next level with starlight. -* Let us help you streamline your workflow and maximize efficiency, so you can stay ahead of the competition. -* Contact us today to learn more. 
- diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/en/api/pipelines/text_to_video_zero.md b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/en/api/pipelines/text_to_video_zero.md deleted file mode 100644 index b64d72db0187a4619751ec777d3b7c40f938ec6f..0000000000000000000000000000000000000000 --- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/en/api/pipelines/text_to_video_zero.md +++ /dev/null @@ -1,260 +0,0 @@ - - -# Text2Video-Zero - -[Text2Video-Zero: Text-to-Image Diffusion Models are Zero-Shot Video Generators](https://huggingface.co/papers/2303.13439) is by -Levon Khachatryan, -Andranik Movsisyan, -Vahram Tadevosyan, -Roberto Henschel, -[Zhangyang Wang](https://www.ece.utexas.edu/people/faculty/atlas-wang), Shant Navasardyan, [Humphrey Shi](https://www.humphreyshi.com). - -Text2Video-Zero enables zero-shot video generation using either: -1. A textual prompt -2. A prompt combined with guidance from poses or edges -3. Video Instruct-Pix2Pix (instruction-guided video editing) - -Results are temporally consistent and closely follow the guidance and textual prompts. - -![teaser-img](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/t2v_zero_teaser.png) - -The abstract from the paper is: - -*Recent text-to-video generation approaches rely on computationally heavy training and require large-scale video datasets. In this paper, we introduce a new task of zero-shot text-to-video generation and propose a low-cost approach (without any training or optimization) by leveraging the power of existing text-to-image synthesis methods (e.g., Stable Diffusion), making them suitable for the video domain. -Our key modifications include (i) enriching the latent codes of the generated frames with motion dynamics to keep the global scene and the background time consistent; and (ii) reprogramming frame-level self-attention using a new cross-frame attention of each frame on the first frame, to preserve the context, appearance, and identity of the foreground object. -Experiments show that this leads to low overhead, yet high-quality and remarkably consistent video generation. Moreover, our approach is not limited to text-to-video synthesis but is also applicable to other tasks such as conditional and content-specialized video generation, and Video Instruct-Pix2Pix, i.e., instruction-guided video editing. -As experiments show, our method performs comparably or sometimes better than recent approaches, despite not being trained on additional video data.* - -You can find additional information about Text-to-Video Zero on the [project page](https://text2video-zero.github.io/), [paper](https://arxiv.org/abs/2303.13439), and [original codebase](https://github.com/Picsart-AI-Research/Text2Video-Zero). - -## Usage example - -### Text-To-Video - -To generate a video from prompt, run the following python command -```python -import torch -import imageio -from diffusers import TextToVideoZeroPipeline - -model_id = "runwayml/stable-diffusion-v1-5" -pipe = TextToVideoZeroPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") - -prompt = "A panda is playing guitar on times square" -result = pipe(prompt=prompt).images -result = [(r * 255).astype("uint8") for r in result] -imageio.mimsave("video.mp4", result, fps=4) -``` -You can change these parameters in the pipeline call: -* Motion field strength (see the [paper](https://arxiv.org/abs/2303.13439), Sect. 
3.3.1): - * `motion_field_strength_x` and `motion_field_strength_y`. Default: `motion_field_strength_x=12`, `motion_field_strength_y=12` -* `T` and `T'` (see the [paper](https://arxiv.org/abs/2303.13439), Sect. 3.3.1) - * `t0` and `t1` in the range `{0, ..., num_inference_steps}`. Default: `t0=45`, `t1=48` -* Video length: - * `video_length`, the number of frames video_length to be generated. Default: `video_length=8` - -We an also generate longer videos by doing the processing in a chunk-by-chunk manner: -```python -import torch -import imageio -from diffusers import TextToVideoZeroPipeline -import numpy as np - -model_id = "runwayml/stable-diffusion-v1-5" -pipe = TextToVideoZeroPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") -seed = 0 -video_length = 8 -chunk_size = 4 -prompt = "A panda is playing guitar on times square" - -# Generate the video chunk-by-chunk -result = [] -chunk_ids = np.arange(0, video_length, chunk_size - 1) -generator = torch.Generator(device="cuda") -for i in range(len(chunk_ids)): - print(f"Processing chunk {i + 1} / {len(chunk_ids)}") - ch_start = chunk_ids[i] - ch_end = video_length if i == len(chunk_ids) - 1 else chunk_ids[i + 1] - # Attach the first frame for Cross Frame Attention - frame_ids = [0] + list(range(ch_start, ch_end)) - # Fix the seed for the temporal consistency - generator.manual_seed(seed) - output = pipe(prompt=prompt, video_length=len(frame_ids), generator=generator, frame_ids=frame_ids) - result.append(output.images[1:]) - -# Concatenate chunks and save -result = np.concatenate(result) -result = [(r * 255).astype("uint8") for r in result] -imageio.mimsave("video.mp4", result, fps=4) -``` - - -### Text-To-Video with Pose Control -To generate a video from prompt with additional pose control - -1. Download a demo video - - ```python - from huggingface_hub import hf_hub_download - - filename = "__assets__/poses_skeleton_gifs/dance1_corr.mp4" - repo_id = "PAIR/Text2Video-Zero" - video_path = hf_hub_download(repo_type="space", repo_id=repo_id, filename=filename) - ``` - - -2. Read video containing extracted pose images - ```python - from PIL import Image - import imageio - - reader = imageio.get_reader(video_path, "ffmpeg") - frame_count = 8 - pose_images = [Image.fromarray(reader.get_data(i)) for i in range(frame_count)] - ``` - To extract pose from actual video, read [ControlNet documentation](./stable_diffusion/controlnet). - -3. 
Run `StableDiffusionControlNetPipeline` with our custom attention processor - - ```python - import torch - from diffusers import StableDiffusionControlNetPipeline, ControlNetModel - from diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero import CrossFrameAttnProcessor - - model_id = "runwayml/stable-diffusion-v1-5" - controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16) - pipe = StableDiffusionControlNetPipeline.from_pretrained( - model_id, controlnet=controlnet, torch_dtype=torch.float16 - ).to("cuda") - - # Set the attention processor - pipe.unet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2)) - pipe.controlnet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2)) - - # fix latents for all frames - latents = torch.randn((1, 4, 64, 64), device="cuda", dtype=torch.float16).repeat(len(pose_images), 1, 1, 1) - - prompt = "Darth Vader dancing in a desert" - result = pipe(prompt=[prompt] * len(pose_images), image=pose_images, latents=latents).images - imageio.mimsave("video.mp4", result, fps=4) - ``` - - -### Text-To-Video with Edge Control - -To generate a video from prompt with additional pose control, -follow the steps described above for pose-guided generation using [Canny edge ControlNet model](https://huggingface.co/lllyasviel/sd-controlnet-canny). - - -### Video Instruct-Pix2Pix - -To perform text-guided video editing (with [InstructPix2Pix](./stable_diffusion/pix2pix)): - -1. Download a demo video - - ```python - from huggingface_hub import hf_hub_download - - filename = "__assets__/pix2pix video/camel.mp4" - repo_id = "PAIR/Text2Video-Zero" - video_path = hf_hub_download(repo_type="space", repo_id=repo_id, filename=filename) - ``` - -2. Read video from path - ```python - from PIL import Image - import imageio - - reader = imageio.get_reader(video_path, "ffmpeg") - frame_count = 8 - video = [Image.fromarray(reader.get_data(i)) for i in range(frame_count)] - ``` - -3. Run `StableDiffusionInstructPix2PixPipeline` with our custom attention processor - ```python - import torch - from diffusers import StableDiffusionInstructPix2PixPipeline - from diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero import CrossFrameAttnProcessor - - model_id = "timbrooks/instruct-pix2pix" - pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") - pipe.unet.set_attn_processor(CrossFrameAttnProcessor(batch_size=3)) - - prompt = "make it Van Gogh Starry Night style" - result = pipe(prompt=[prompt] * len(video), image=video).images - imageio.mimsave("edited_video.mp4", result, fps=4) - ``` - - -### DreamBooth specialization - -Methods **Text-To-Video**, **Text-To-Video with Pose Control** and **Text-To-Video with Edge Control** -can run with custom [DreamBooth](../training/dreambooth) models, as shown below for -[Canny edge ControlNet model](https://huggingface.co/lllyasviel/sd-controlnet-canny) and -[Avatar style DreamBooth](https://huggingface.co/PAIR/text2video-zero-controlnet-canny-avatar) model - -1. Download a demo video - - ```python - from huggingface_hub import hf_hub_download - - filename = "__assets__/canny_videos_mp4/girl_turning.mp4" - repo_id = "PAIR/Text2Video-Zero" - video_path = hf_hub_download(repo_type="space", repo_id=repo_id, filename=filename) - ``` - -2. 
Read video from path - ```python - from PIL import Image - import imageio - - reader = imageio.get_reader(video_path, "ffmpeg") - frame_count = 8 - canny_edges = [Image.fromarray(reader.get_data(i)) for i in range(frame_count)] - ``` - -3. Run `StableDiffusionControlNetPipeline` with custom trained DreamBooth model - ```python - import torch - from diffusers import StableDiffusionControlNetPipeline, ControlNetModel - from diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero import CrossFrameAttnProcessor - - # set model id to custom model - model_id = "PAIR/text2video-zero-controlnet-canny-avatar" - controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16) - pipe = StableDiffusionControlNetPipeline.from_pretrained( - model_id, controlnet=controlnet, torch_dtype=torch.float16 - ).to("cuda") - - # Set the attention processor - pipe.unet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2)) - pipe.controlnet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2)) - - # fix latents for all frames - latents = torch.randn((1, 4, 64, 64), device="cuda", dtype=torch.float16).repeat(len(canny_edges), 1, 1, 1) - - prompt = "oil painting of a beautiful girl avatar style" - result = pipe(prompt=[prompt] * len(canny_edges), image=canny_edges, latents=latents).images - imageio.mimsave("video.mp4", result, fps=4) - ``` - -You can filter out some available DreamBooth-trained models with [this link](https://huggingface.co/models?search=dreambooth). - - -## TextToVideoZeroPipeline -[[autodoc]] TextToVideoZeroPipeline - - all - - __call__ - -## TextToVideoPipelineOutput -[[autodoc]] pipelines.text_to_video_synthesis.pipeline_text_to_video_zero.TextToVideoPipelineOutput \ No newline at end of file diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/_distutils_hack/override.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/_distutils_hack/override.py deleted file mode 100644 index 2cc433a4a55e3b41fa31089918fb62096092f89f..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/_distutils_hack/override.py +++ /dev/null @@ -1 +0,0 @@ -__import__('_distutils_hack').do_override() diff --git a/spaces/posak/Tune-A-Video-Training-UI/app_upload.py b/spaces/posak/Tune-A-Video-Training-UI/app_upload.py deleted file mode 100644 index f672f555512b456d95d8f674fa832b1c9bf34309..0000000000000000000000000000000000000000 --- a/spaces/posak/Tune-A-Video-Training-UI/app_upload.py +++ /dev/null @@ -1,106 +0,0 @@ -#!/usr/bin/env python - -from __future__ import annotations - -import pathlib - -import gradio as gr -import slugify - -from constants import MODEL_LIBRARY_ORG_NAME, UploadTarget -from uploader import Uploader -from utils import find_exp_dirs - - -class ModelUploader(Uploader): - def upload_model( - self, - folder_path: str, - repo_name: str, - upload_to: str, - private: bool, - delete_existing_repo: bool, - input_token: str | None = None, - ) -> str: - if not folder_path: - raise ValueError - if not repo_name: - repo_name = pathlib.Path(folder_path).name - repo_name = slugify.slugify(repo_name) - - if upload_to == UploadTarget.PERSONAL_PROFILE.value: - organization = '' - elif upload_to == UploadTarget.MODEL_LIBRARY.value: - organization = MODEL_LIBRARY_ORG_NAME - else: - raise ValueError - - return self.upload(folder_path, - repo_name, - organization=organization, - private=private, - delete_existing_repo=delete_existing_repo, - input_token=input_token) 
- - -def load_local_model_list() -> dict: - choices = find_exp_dirs() - return gr.update(choices=choices, value=choices[0] if choices else None) - - -def create_upload_demo(hf_token: str | None) -> gr.Blocks: - uploader = ModelUploader(hf_token) - model_dirs = find_exp_dirs() - - with gr.Blocks() as demo: - with gr.Box(): - gr.Markdown('Local Models') - reload_button = gr.Button('Reload Model List') - model_dir = gr.Dropdown( - label='Model names', - choices=model_dirs, - value=model_dirs[0] if model_dirs else None) - with gr.Box(): - gr.Markdown('Upload Settings') - with gr.Row(): - use_private_repo = gr.Checkbox(label='Private', value=True) - delete_existing_repo = gr.Checkbox( - label='Delete existing repo of the same name', value=False) - upload_to = gr.Radio(label='Upload to', - choices=[_.value for _ in UploadTarget], - value=UploadTarget.MODEL_LIBRARY.value) - model_name = gr.Textbox(label='Model Name') - input_token = gr.Text(label='Hugging Face Write Token', - placeholder='', - visible=False if hf_token else True) - upload_button = gr.Button('Upload') - gr.Markdown(f''' - - You can upload your trained model to your personal profile (i.e. https://huggingface.co/{{your_username}}/{{model_name}}) or to the public [Tune-A-Video Library](https://huggingface.co/{MODEL_LIBRARY_ORG_NAME}) (i.e. https://huggingface.co/{MODEL_LIBRARY_ORG_NAME}/{{model_name}}). - ''') - with gr.Box(): - gr.Markdown('Output message') - output_message = gr.Markdown() - - reload_button.click(fn=load_local_model_list, - inputs=None, - outputs=model_dir) - upload_button.click(fn=uploader.upload_model, - inputs=[ - model_dir, - model_name, - upload_to, - use_private_repo, - delete_existing_repo, - input_token, - ], - outputs=output_message) - - return demo - - -if __name__ == '__main__': - import os - - hf_token = os.getenv('HF_TOKEN') - demo = create_upload_demo(hf_token) - demo.queue(max_size=1).launch(share=False) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/PIL/ImImagePlugin.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/PIL/ImImagePlugin.py deleted file mode 100644 index b42ba7cac7083d1b3748a001011478709267d933..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/PIL/ImImagePlugin.py +++ /dev/null @@ -1,371 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# IFUNC IM file handling for PIL -# -# history: -# 1995-09-01 fl Created. -# 1997-01-03 fl Save palette images -# 1997-01-08 fl Added sequence support -# 1997-01-23 fl Added P and RGB save support -# 1997-05-31 fl Read floating point images -# 1997-06-22 fl Save floating point images -# 1997-08-27 fl Read and save 1-bit images -# 1998-06-25 fl Added support for RGB+LUT images -# 1998-07-02 fl Added support for YCC images -# 1998-07-15 fl Renamed offset attribute to avoid name clash -# 1998-12-29 fl Added I;16 support -# 2001-02-17 fl Use 're' instead of 'regex' (Python 2.1) (0.7) -# 2003-09-26 fl Added LA/PA support -# -# Copyright (c) 1997-2003 by Secret Labs AB. -# Copyright (c) 1995-2001 by Fredrik Lundh. -# -# See the README file for information on usage and redistribution. -# - - -import os -import re - -from . 
import Image, ImageFile, ImagePalette - -# -------------------------------------------------------------------- -# Standard tags - -COMMENT = "Comment" -DATE = "Date" -EQUIPMENT = "Digitalization equipment" -FRAMES = "File size (no of images)" -LUT = "Lut" -NAME = "Name" -SCALE = "Scale (x,y)" -SIZE = "Image size (x*y)" -MODE = "Image type" - -TAGS = { - COMMENT: 0, - DATE: 0, - EQUIPMENT: 0, - FRAMES: 0, - LUT: 0, - NAME: 0, - SCALE: 0, - SIZE: 0, - MODE: 0, -} - -OPEN = { - # ifunc93/p3cfunc formats - "0 1 image": ("1", "1"), - "L 1 image": ("1", "1"), - "Greyscale image": ("L", "L"), - "Grayscale image": ("L", "L"), - "RGB image": ("RGB", "RGB;L"), - "RLB image": ("RGB", "RLB"), - "RYB image": ("RGB", "RLB"), - "B1 image": ("1", "1"), - "B2 image": ("P", "P;2"), - "B4 image": ("P", "P;4"), - "X 24 image": ("RGB", "RGB"), - "L 32 S image": ("I", "I;32"), - "L 32 F image": ("F", "F;32"), - # old p3cfunc formats - "RGB3 image": ("RGB", "RGB;T"), - "RYB3 image": ("RGB", "RYB;T"), - # extensions - "LA image": ("LA", "LA;L"), - "PA image": ("LA", "PA;L"), - "RGBA image": ("RGBA", "RGBA;L"), - "RGBX image": ("RGBX", "RGBX;L"), - "CMYK image": ("CMYK", "CMYK;L"), - "YCC image": ("YCbCr", "YCbCr;L"), -} - -# ifunc95 extensions -for i in ["8", "8S", "16", "16S", "32", "32F"]: - OPEN[f"L {i} image"] = ("F", f"F;{i}") - OPEN[f"L*{i} image"] = ("F", f"F;{i}") -for i in ["16", "16L", "16B"]: - OPEN[f"L {i} image"] = (f"I;{i}", f"I;{i}") - OPEN[f"L*{i} image"] = (f"I;{i}", f"I;{i}") -for i in ["32S"]: - OPEN[f"L {i} image"] = ("I", f"I;{i}") - OPEN[f"L*{i} image"] = ("I", f"I;{i}") -for i in range(2, 33): - OPEN[f"L*{i} image"] = ("F", f"F;{i}") - - -# -------------------------------------------------------------------- -# Read IM directory - -split = re.compile(rb"^([A-Za-z][^:]*):[ \t]*(.*)[ \t]*$") - - -def number(s): - try: - return int(s) - except ValueError: - return float(s) - - -## -# Image plugin for the IFUNC IM file format. - - -class ImImageFile(ImageFile.ImageFile): - format = "IM" - format_description = "IFUNC Image Memory" - _close_exclusive_fp_after_loading = False - - def _open(self): - # Quick rejection: if there's not an LF among the first - # 100 bytes, this is (probably) not a text header. - - if b"\n" not in self.fp.read(100): - msg = "not an IM file" - raise SyntaxError(msg) - self.fp.seek(0) - - n = 0 - - # Default values - self.info[MODE] = "L" - self.info[SIZE] = (512, 512) - self.info[FRAMES] = 1 - - self.rawmode = "L" - - while True: - s = self.fp.read(1) - - # Some versions of IFUNC uses \n\r instead of \r\n... - if s == b"\r": - continue - - if not s or s == b"\0" or s == b"\x1A": - break - - # FIXME: this may read whole file if not a text file - s = s + self.fp.readline() - - if len(s) > 100: - msg = "not an IM file" - raise SyntaxError(msg) - - if s[-2:] == b"\r\n": - s = s[:-2] - elif s[-1:] == b"\n": - s = s[:-1] - - try: - m = split.match(s) - except re.error as e: - msg = "not an IM file" - raise SyntaxError(msg) from e - - if m: - k, v = m.group(1, 2) - - # Don't know if this is the correct encoding, - # but a decent guess (I guess) - k = k.decode("latin-1", "replace") - v = v.decode("latin-1", "replace") - - # Convert value as appropriate - if k in [FRAMES, SCALE, SIZE]: - v = v.replace("*", ",") - v = tuple(map(number, v.split(","))) - if len(v) == 1: - v = v[0] - elif k == MODE and v in OPEN: - v, self.rawmode = OPEN[v] - - # Add to dictionary. Note that COMMENT tags are - # combined into a list of strings. 
- if k == COMMENT: - if k in self.info: - self.info[k].append(v) - else: - self.info[k] = [v] - else: - self.info[k] = v - - if k in TAGS: - n += 1 - - else: - msg = "Syntax error in IM header: " + s.decode("ascii", "replace") - raise SyntaxError(msg) - - if not n: - msg = "Not an IM file" - raise SyntaxError(msg) - - # Basic attributes - self._size = self.info[SIZE] - self._mode = self.info[MODE] - - # Skip forward to start of image data - while s and s[:1] != b"\x1A": - s = self.fp.read(1) - if not s: - msg = "File truncated" - raise SyntaxError(msg) - - if LUT in self.info: - # convert lookup table to palette or lut attribute - palette = self.fp.read(768) - greyscale = 1 # greyscale palette - linear = 1 # linear greyscale palette - for i in range(256): - if palette[i] == palette[i + 256] == palette[i + 512]: - if palette[i] != i: - linear = 0 - else: - greyscale = 0 - if self.mode in ["L", "LA", "P", "PA"]: - if greyscale: - if not linear: - self.lut = list(palette[:256]) - else: - if self.mode in ["L", "P"]: - self._mode = self.rawmode = "P" - elif self.mode in ["LA", "PA"]: - self._mode = "PA" - self.rawmode = "PA;L" - self.palette = ImagePalette.raw("RGB;L", palette) - elif self.mode == "RGB": - if not greyscale or not linear: - self.lut = list(palette) - - self.frame = 0 - - self.__offset = offs = self.fp.tell() - - self._fp = self.fp # FIXME: hack - - if self.rawmode[:2] == "F;": - # ifunc95 formats - try: - # use bit decoder (if necessary) - bits = int(self.rawmode[2:]) - if bits not in [8, 16, 32]: - self.tile = [("bit", (0, 0) + self.size, offs, (bits, 8, 3, 0, -1))] - return - except ValueError: - pass - - if self.rawmode in ["RGB;T", "RYB;T"]: - # Old LabEye/3PC files. Would be very surprised if anyone - # ever stumbled upon such a file ;-) - size = self.size[0] * self.size[1] - self.tile = [ - ("raw", (0, 0) + self.size, offs, ("G", 0, -1)), - ("raw", (0, 0) + self.size, offs + size, ("R", 0, -1)), - ("raw", (0, 0) + self.size, offs + 2 * size, ("B", 0, -1)), - ] - else: - # LabEye/IFUNC files - self.tile = [("raw", (0, 0) + self.size, offs, (self.rawmode, 0, -1))] - - @property - def n_frames(self): - return self.info[FRAMES] - - @property - def is_animated(self): - return self.info[FRAMES] > 1 - - def seek(self, frame): - if not self._seek_check(frame): - return - - self.frame = frame - - if self.mode == "1": - bits = 1 - else: - bits = 8 * len(self.mode) - - size = ((self.size[0] * bits + 7) // 8) * self.size[1] - offs = self.__offset + frame * size - - self.fp = self._fp - - self.tile = [("raw", (0, 0) + self.size, offs, (self.rawmode, 0, -1))] - - def tell(self): - return self.frame - - -# -# -------------------------------------------------------------------- -# Save IM files - - -SAVE = { - # mode: (im type, raw mode) - "1": ("0 1", "1"), - "L": ("Greyscale", "L"), - "LA": ("LA", "LA;L"), - "P": ("Greyscale", "P"), - "PA": ("LA", "PA;L"), - "I": ("L 32S", "I;32S"), - "I;16": ("L 16", "I;16"), - "I;16L": ("L 16L", "I;16L"), - "I;16B": ("L 16B", "I;16B"), - "F": ("L 32F", "F;32F"), - "RGB": ("RGB", "RGB;L"), - "RGBA": ("RGBA", "RGBA;L"), - "RGBX": ("RGBX", "RGBX;L"), - "CMYK": ("CMYK", "CMYK;L"), - "YCbCr": ("YCC", "YCbCr;L"), -} - - -def _save(im, fp, filename): - try: - image_type, rawmode = SAVE[im.mode] - except KeyError as e: - msg = f"Cannot save {im.mode} images as IM" - raise ValueError(msg) from e - - frames = im.encoderinfo.get("frames", 1) - - fp.write(f"Image type: {image_type} image\r\n".encode("ascii")) - if filename: - # Each line must be 100 characters or 
less, - # or: SyntaxError("not an IM file") - # 8 characters are used for "Name: " and "\r\n" - # Keep just the filename, ditch the potentially overlong path - name, ext = os.path.splitext(os.path.basename(filename)) - name = "".join([name[: 92 - len(ext)], ext]) - - fp.write(f"Name: {name}\r\n".encode("ascii")) - fp.write(("Image size (x*y): %d*%d\r\n" % im.size).encode("ascii")) - fp.write(f"File size (no of images): {frames}\r\n".encode("ascii")) - if im.mode in ["P", "PA"]: - fp.write(b"Lut: 1\r\n") - fp.write(b"\000" * (511 - fp.tell()) + b"\032") - if im.mode in ["P", "PA"]: - im_palette = im.im.getpalette("RGB", "RGB;L") - colors = len(im_palette) // 3 - palette = b"" - for i in range(3): - palette += im_palette[colors * i : colors * (i + 1)] - palette += b"\x00" * (256 - colors) - fp.write(palette) # 768 bytes - ImageFile._save(im, fp, [("raw", (0, 0) + im.size, 0, (rawmode, 0, -1))]) - - -# -# -------------------------------------------------------------------- -# Registry - - -Image.register_open(ImImageFile.format, ImImageFile) -Image.register_save(ImImageFile.format, _save) - -Image.register_extension(ImImageFile.format, ".im") diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/anyio/_core/_fileio.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/anyio/_core/_fileio.py deleted file mode 100644 index 35e8e8af6c11dd6690a8382af6a23d1391fff9dc..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/anyio/_core/_fileio.py +++ /dev/null @@ -1,603 +0,0 @@ -from __future__ import annotations - -import os -import pathlib -import sys -from dataclasses import dataclass -from functools import partial -from os import PathLike -from typing import ( - IO, - TYPE_CHECKING, - Any, - AnyStr, - AsyncIterator, - Callable, - Generic, - Iterable, - Iterator, - Sequence, - cast, - overload, -) - -from .. import to_thread -from ..abc import AsyncResource - -if sys.version_info >= (3, 8): - from typing import Final -else: - from typing_extensions import Final - -if TYPE_CHECKING: - from _typeshed import OpenBinaryMode, OpenTextMode, ReadableBuffer, WriteableBuffer -else: - ReadableBuffer = OpenBinaryMode = OpenTextMode = WriteableBuffer = object - - -class AsyncFile(AsyncResource, Generic[AnyStr]): - """ - An asynchronous file object. - - This class wraps a standard file object and provides async friendly versions of the following - blocking methods (where available on the original file object): - - * read - * read1 - * readline - * readlines - * readinto - * readinto1 - * write - * writelines - * truncate - * seek - * tell - * flush - - All other methods are directly passed through. - - This class supports the asynchronous context manager protocol which closes the underlying file - at the end of the context block. - - This class also supports asynchronous iteration:: - - async with await open_file(...) 
as f: - async for line in f: - print(line) - """ - - def __init__(self, fp: IO[AnyStr]) -> None: - self._fp: Any = fp - - def __getattr__(self, name: str) -> object: - return getattr(self._fp, name) - - @property - def wrapped(self) -> IO[AnyStr]: - """The wrapped file object.""" - return self._fp - - async def __aiter__(self) -> AsyncIterator[AnyStr]: - while True: - line = await self.readline() - if line: - yield line - else: - break - - async def aclose(self) -> None: - return await to_thread.run_sync(self._fp.close) - - async def read(self, size: int = -1) -> AnyStr: - return await to_thread.run_sync(self._fp.read, size) - - async def read1(self: AsyncFile[bytes], size: int = -1) -> bytes: - return await to_thread.run_sync(self._fp.read1, size) - - async def readline(self) -> AnyStr: - return await to_thread.run_sync(self._fp.readline) - - async def readlines(self) -> list[AnyStr]: - return await to_thread.run_sync(self._fp.readlines) - - async def readinto(self: AsyncFile[bytes], b: WriteableBuffer) -> bytes: - return await to_thread.run_sync(self._fp.readinto, b) - - async def readinto1(self: AsyncFile[bytes], b: WriteableBuffer) -> bytes: - return await to_thread.run_sync(self._fp.readinto1, b) - - @overload - async def write(self: AsyncFile[bytes], b: ReadableBuffer) -> int: - ... - - @overload - async def write(self: AsyncFile[str], b: str) -> int: - ... - - async def write(self, b: ReadableBuffer | str) -> int: - return await to_thread.run_sync(self._fp.write, b) - - @overload - async def writelines( - self: AsyncFile[bytes], lines: Iterable[ReadableBuffer] - ) -> None: - ... - - @overload - async def writelines(self: AsyncFile[str], lines: Iterable[str]) -> None: - ... - - async def writelines(self, lines: Iterable[ReadableBuffer] | Iterable[str]) -> None: - return await to_thread.run_sync(self._fp.writelines, lines) - - async def truncate(self, size: int | None = None) -> int: - return await to_thread.run_sync(self._fp.truncate, size) - - async def seek(self, offset: int, whence: int | None = os.SEEK_SET) -> int: - return await to_thread.run_sync(self._fp.seek, offset, whence) - - async def tell(self) -> int: - return await to_thread.run_sync(self._fp.tell) - - async def flush(self) -> None: - return await to_thread.run_sync(self._fp.flush) - - -@overload -async def open_file( - file: str | PathLike[str] | int, - mode: OpenBinaryMode, - buffering: int = ..., - encoding: str | None = ..., - errors: str | None = ..., - newline: str | None = ..., - closefd: bool = ..., - opener: Callable[[str, int], int] | None = ..., -) -> AsyncFile[bytes]: - ... - - -@overload -async def open_file( - file: str | PathLike[str] | int, - mode: OpenTextMode = ..., - buffering: int = ..., - encoding: str | None = ..., - errors: str | None = ..., - newline: str | None = ..., - closefd: bool = ..., - opener: Callable[[str, int], int] | None = ..., -) -> AsyncFile[str]: - ... - - -async def open_file( - file: str | PathLike[str] | int, - mode: str = "r", - buffering: int = -1, - encoding: str | None = None, - errors: str | None = None, - newline: str | None = None, - closefd: bool = True, - opener: Callable[[str, int], int] | None = None, -) -> AsyncFile[Any]: - """ - Open a file asynchronously. - - The arguments are exactly the same as for the builtin :func:`open`. 
- - :return: an asynchronous file object - - """ - fp = await to_thread.run_sync( - open, file, mode, buffering, encoding, errors, newline, closefd, opener - ) - return AsyncFile(fp) - - -def wrap_file(file: IO[AnyStr]) -> AsyncFile[AnyStr]: - """ - Wrap an existing file as an asynchronous file. - - :param file: an existing file-like object - :return: an asynchronous file object - - """ - return AsyncFile(file) - - -@dataclass(eq=False) -class _PathIterator(AsyncIterator["Path"]): - iterator: Iterator[PathLike[str]] - - async def __anext__(self) -> Path: - nextval = await to_thread.run_sync(next, self.iterator, None, cancellable=True) - if nextval is None: - raise StopAsyncIteration from None - - return Path(cast("PathLike[str]", nextval)) - - -class Path: - """ - An asynchronous version of :class:`pathlib.Path`. - - This class cannot be substituted for :class:`pathlib.Path` or :class:`pathlib.PurePath`, but - it is compatible with the :class:`os.PathLike` interface. - - It implements the Python 3.10 version of :class:`pathlib.Path` interface, except for the - deprecated :meth:`~pathlib.Path.link_to` method. - - Any methods that do disk I/O need to be awaited on. These methods are: - - * :meth:`~pathlib.Path.absolute` - * :meth:`~pathlib.Path.chmod` - * :meth:`~pathlib.Path.cwd` - * :meth:`~pathlib.Path.exists` - * :meth:`~pathlib.Path.expanduser` - * :meth:`~pathlib.Path.group` - * :meth:`~pathlib.Path.hardlink_to` - * :meth:`~pathlib.Path.home` - * :meth:`~pathlib.Path.is_block_device` - * :meth:`~pathlib.Path.is_char_device` - * :meth:`~pathlib.Path.is_dir` - * :meth:`~pathlib.Path.is_fifo` - * :meth:`~pathlib.Path.is_file` - * :meth:`~pathlib.Path.is_mount` - * :meth:`~pathlib.Path.lchmod` - * :meth:`~pathlib.Path.lstat` - * :meth:`~pathlib.Path.mkdir` - * :meth:`~pathlib.Path.open` - * :meth:`~pathlib.Path.owner` - * :meth:`~pathlib.Path.read_bytes` - * :meth:`~pathlib.Path.read_text` - * :meth:`~pathlib.Path.readlink` - * :meth:`~pathlib.Path.rename` - * :meth:`~pathlib.Path.replace` - * :meth:`~pathlib.Path.rmdir` - * :meth:`~pathlib.Path.samefile` - * :meth:`~pathlib.Path.stat` - * :meth:`~pathlib.Path.touch` - * :meth:`~pathlib.Path.unlink` - * :meth:`~pathlib.Path.write_bytes` - * :meth:`~pathlib.Path.write_text` - - Additionally, the following methods return an async iterator yielding :class:`~.Path` objects: - - * :meth:`~pathlib.Path.glob` - * :meth:`~pathlib.Path.iterdir` - * :meth:`~pathlib.Path.rglob` - """ - - __slots__ = "_path", "__weakref__" - - __weakref__: Any - - def __init__(self, *args: str | PathLike[str]) -> None: - self._path: Final[pathlib.Path] = pathlib.Path(*args) - - def __fspath__(self) -> str: - return self._path.__fspath__() - - def __str__(self) -> str: - return self._path.__str__() - - def __repr__(self) -> str: - return f"{self.__class__.__name__}({self.as_posix()!r})" - - def __bytes__(self) -> bytes: - return self._path.__bytes__() - - def __hash__(self) -> int: - return self._path.__hash__() - - def __eq__(self, other: object) -> bool: - target = other._path if isinstance(other, Path) else other - return self._path.__eq__(target) - - def __lt__(self, other: Path) -> bool: - target = other._path if isinstance(other, Path) else other - return self._path.__lt__(target) - - def __le__(self, other: Path) -> bool: - target = other._path if isinstance(other, Path) else other - return self._path.__le__(target) - - def __gt__(self, other: Path) -> bool: - target = other._path if isinstance(other, Path) else other - return self._path.__gt__(target) - - def 
__ge__(self, other: Path) -> bool: - target = other._path if isinstance(other, Path) else other - return self._path.__ge__(target) - - def __truediv__(self, other: Any) -> Path: - return Path(self._path / other) - - def __rtruediv__(self, other: Any) -> Path: - return Path(other) / self - - @property - def parts(self) -> tuple[str, ...]: - return self._path.parts - - @property - def drive(self) -> str: - return self._path.drive - - @property - def root(self) -> str: - return self._path.root - - @property - def anchor(self) -> str: - return self._path.anchor - - @property - def parents(self) -> Sequence[Path]: - return tuple(Path(p) for p in self._path.parents) - - @property - def parent(self) -> Path: - return Path(self._path.parent) - - @property - def name(self) -> str: - return self._path.name - - @property - def suffix(self) -> str: - return self._path.suffix - - @property - def suffixes(self) -> list[str]: - return self._path.suffixes - - @property - def stem(self) -> str: - return self._path.stem - - async def absolute(self) -> Path: - path = await to_thread.run_sync(self._path.absolute) - return Path(path) - - def as_posix(self) -> str: - return self._path.as_posix() - - def as_uri(self) -> str: - return self._path.as_uri() - - def match(self, path_pattern: str) -> bool: - return self._path.match(path_pattern) - - def is_relative_to(self, *other: str | PathLike[str]) -> bool: - try: - self.relative_to(*other) - return True - except ValueError: - return False - - async def chmod(self, mode: int, *, follow_symlinks: bool = True) -> None: - func = partial(os.chmod, follow_symlinks=follow_symlinks) - return await to_thread.run_sync(func, self._path, mode) - - @classmethod - async def cwd(cls) -> Path: - path = await to_thread.run_sync(pathlib.Path.cwd) - return cls(path) - - async def exists(self) -> bool: - return await to_thread.run_sync(self._path.exists, cancellable=True) - - async def expanduser(self) -> Path: - return Path(await to_thread.run_sync(self._path.expanduser, cancellable=True)) - - def glob(self, pattern: str) -> AsyncIterator[Path]: - gen = self._path.glob(pattern) - return _PathIterator(gen) - - async def group(self) -> str: - return await to_thread.run_sync(self._path.group, cancellable=True) - - async def hardlink_to(self, target: str | pathlib.Path | Path) -> None: - if isinstance(target, Path): - target = target._path - - await to_thread.run_sync(os.link, target, self) - - @classmethod - async def home(cls) -> Path: - home_path = await to_thread.run_sync(pathlib.Path.home) - return cls(home_path) - - def is_absolute(self) -> bool: - return self._path.is_absolute() - - async def is_block_device(self) -> bool: - return await to_thread.run_sync(self._path.is_block_device, cancellable=True) - - async def is_char_device(self) -> bool: - return await to_thread.run_sync(self._path.is_char_device, cancellable=True) - - async def is_dir(self) -> bool: - return await to_thread.run_sync(self._path.is_dir, cancellable=True) - - async def is_fifo(self) -> bool: - return await to_thread.run_sync(self._path.is_fifo, cancellable=True) - - async def is_file(self) -> bool: - return await to_thread.run_sync(self._path.is_file, cancellable=True) - - async def is_mount(self) -> bool: - return await to_thread.run_sync(os.path.ismount, self._path, cancellable=True) - - def is_reserved(self) -> bool: - return self._path.is_reserved() - - async def is_socket(self) -> bool: - return await to_thread.run_sync(self._path.is_socket, cancellable=True) - - async def is_symlink(self) -> bool: - 
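-        # As with the other query methods above, the blocking pathlib call runs in a
-        # worker thread; cancellable=True lets the awaiting task be cancelled without
-        # waiting for that thread to finish.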
return await to_thread.run_sync(self._path.is_symlink, cancellable=True) - - def iterdir(self) -> AsyncIterator[Path]: - gen = self._path.iterdir() - return _PathIterator(gen) - - def joinpath(self, *args: str | PathLike[str]) -> Path: - return Path(self._path.joinpath(*args)) - - async def lchmod(self, mode: int) -> None: - await to_thread.run_sync(self._path.lchmod, mode) - - async def lstat(self) -> os.stat_result: - return await to_thread.run_sync(self._path.lstat, cancellable=True) - - async def mkdir( - self, mode: int = 0o777, parents: bool = False, exist_ok: bool = False - ) -> None: - await to_thread.run_sync(self._path.mkdir, mode, parents, exist_ok) - - @overload - async def open( - self, - mode: OpenBinaryMode, - buffering: int = ..., - encoding: str | None = ..., - errors: str | None = ..., - newline: str | None = ..., - ) -> AsyncFile[bytes]: - ... - - @overload - async def open( - self, - mode: OpenTextMode = ..., - buffering: int = ..., - encoding: str | None = ..., - errors: str | None = ..., - newline: str | None = ..., - ) -> AsyncFile[str]: - ... - - async def open( - self, - mode: str = "r", - buffering: int = -1, - encoding: str | None = None, - errors: str | None = None, - newline: str | None = None, - ) -> AsyncFile[Any]: - fp = await to_thread.run_sync( - self._path.open, mode, buffering, encoding, errors, newline - ) - return AsyncFile(fp) - - async def owner(self) -> str: - return await to_thread.run_sync(self._path.owner, cancellable=True) - - async def read_bytes(self) -> bytes: - return await to_thread.run_sync(self._path.read_bytes) - - async def read_text( - self, encoding: str | None = None, errors: str | None = None - ) -> str: - return await to_thread.run_sync(self._path.read_text, encoding, errors) - - def relative_to(self, *other: str | PathLike[str]) -> Path: - return Path(self._path.relative_to(*other)) - - async def readlink(self) -> Path: - target = await to_thread.run_sync(os.readlink, self._path) - return Path(cast(str, target)) - - async def rename(self, target: str | pathlib.PurePath | Path) -> Path: - if isinstance(target, Path): - target = target._path - - await to_thread.run_sync(self._path.rename, target) - return Path(target) - - async def replace(self, target: str | pathlib.PurePath | Path) -> Path: - if isinstance(target, Path): - target = target._path - - await to_thread.run_sync(self._path.replace, target) - return Path(target) - - async def resolve(self, strict: bool = False) -> Path: - func = partial(self._path.resolve, strict=strict) - return Path(await to_thread.run_sync(func, cancellable=True)) - - def rglob(self, pattern: str) -> AsyncIterator[Path]: - gen = self._path.rglob(pattern) - return _PathIterator(gen) - - async def rmdir(self) -> None: - await to_thread.run_sync(self._path.rmdir) - - async def samefile( - self, other_path: str | bytes | int | pathlib.Path | Path - ) -> bool: - if isinstance(other_path, Path): - other_path = other_path._path - - return await to_thread.run_sync( - self._path.samefile, other_path, cancellable=True - ) - - async def stat(self, *, follow_symlinks: bool = True) -> os.stat_result: - func = partial(os.stat, follow_symlinks=follow_symlinks) - return await to_thread.run_sync(func, self._path, cancellable=True) - - async def symlink_to( - self, - target: str | pathlib.Path | Path, - target_is_directory: bool = False, - ) -> None: - if isinstance(target, Path): - target = target._path - - await to_thread.run_sync(self._path.symlink_to, target, target_is_directory) - - async def touch(self, mode: int 
= 0o666, exist_ok: bool = True) -> None: - await to_thread.run_sync(self._path.touch, mode, exist_ok) - - async def unlink(self, missing_ok: bool = False) -> None: - try: - await to_thread.run_sync(self._path.unlink) - except FileNotFoundError: - if not missing_ok: - raise - - def with_name(self, name: str) -> Path: - return Path(self._path.with_name(name)) - - def with_stem(self, stem: str) -> Path: - return Path(self._path.with_name(stem + self._path.suffix)) - - def with_suffix(self, suffix: str) -> Path: - return Path(self._path.with_suffix(suffix)) - - async def write_bytes(self, data: bytes) -> int: - return await to_thread.run_sync(self._path.write_bytes, data) - - async def write_text( - self, - data: str, - encoding: str | None = None, - errors: str | None = None, - newline: str | None = None, - ) -> int: - # Path.write_text() does not support the "newline" parameter before Python 3.10 - def sync_write_text() -> int: - with self._path.open( - "w", encoding=encoding, errors=errors, newline=newline - ) as fp: - return fp.write(data) - - return await to_thread.run_sync(sync_write_text) - - -PathLike.register(Path) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/exceptiongroup/_version.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/exceptiongroup/_version.py deleted file mode 100644 index 760ce26b66fae11dd809174e2d91cb9873410474..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/exceptiongroup/_version.py +++ /dev/null @@ -1,4 +0,0 @@ -# file generated by setuptools_scm -# don't change, don't track in version control -__version__ = version = '1.1.3' -__version_tuple__ = version_tuple = (1, 1, 3) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/pens/momentsPen.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/pens/momentsPen.py deleted file mode 100644 index dab0d10e2c63b2552cf44005fdd5d2ecea3dfe12..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/pens/momentsPen.py +++ /dev/null @@ -1,882 +0,0 @@ -from fontTools.pens.basePen import BasePen, OpenContourError - -try: - import cython - - COMPILED = cython.compiled -except (AttributeError, ImportError): - # if cython not installed, use mock module with no-op decorators and types - from fontTools.misc import cython - - COMPILED = False - - -__all__ = ["MomentsPen"] - - -class MomentsPen(BasePen): - def __init__(self, glyphset=None): - BasePen.__init__(self, glyphset) - - self.area = 0 - self.momentX = 0 - self.momentY = 0 - self.momentXX = 0 - self.momentXY = 0 - self.momentYY = 0 - - def _moveTo(self, p0): - self.__startPoint = p0 - - def _closePath(self): - p0 = self._getCurrentPoint() - if p0 != self.__startPoint: - self._lineTo(self.__startPoint) - - def _endPath(self): - p0 = self._getCurrentPoint() - if p0 != self.__startPoint: - # Green theorem is not defined on open contours. 
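-            # MomentsPen integrates the outline using Green's theorem, turning each
-            # area/moment integral into a line integral along the contour; that is
-            # only valid for closed contours, hence the error below.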
- raise OpenContourError("Green theorem is not defined on open contours.") - - @cython.locals(r0=cython.double) - @cython.locals(r1=cython.double) - @cython.locals(r2=cython.double) - @cython.locals(r3=cython.double) - @cython.locals(r4=cython.double) - @cython.locals(r5=cython.double) - @cython.locals(r6=cython.double) - @cython.locals(r7=cython.double) - @cython.locals(r8=cython.double) - @cython.locals(r9=cython.double) - @cython.locals(r10=cython.double) - @cython.locals(r11=cython.double) - @cython.locals(r12=cython.double) - @cython.locals(x0=cython.double, y0=cython.double) - @cython.locals(x1=cython.double, y1=cython.double) - def _lineTo(self, p1): - x0, y0 = self._getCurrentPoint() - x1, y1 = p1 - - r0 = x1 * y0 - r1 = x1 * y1 - r2 = x1**2 - r3 = r2 * y1 - r4 = y0 - y1 - r5 = r4 * x0 - r6 = x0**2 - r7 = 2 * y0 - r8 = y0**2 - r9 = y1**2 - r10 = x1**3 - r11 = y0**3 - r12 = y1**3 - - self.area += -r0 / 2 - r1 / 2 + x0 * (y0 + y1) / 2 - self.momentX += -r2 * y0 / 6 - r3 / 3 - r5 * x1 / 6 + r6 * (r7 + y1) / 6 - self.momentY += ( - -r0 * y1 / 6 - r8 * x1 / 6 - r9 * x1 / 6 + x0 * (r8 + r9 + y0 * y1) / 6 - ) - self.momentXX += ( - -r10 * y0 / 12 - - r10 * y1 / 4 - - r2 * r5 / 12 - - r4 * r6 * x1 / 12 - + x0**3 * (3 * y0 + y1) / 12 - ) - self.momentXY += ( - -r2 * r8 / 24 - - r2 * r9 / 8 - - r3 * r7 / 24 - + r6 * (r7 * y1 + 3 * r8 + r9) / 24 - - x0 * x1 * (r8 - r9) / 12 - ) - self.momentYY += ( - -r0 * r9 / 12 - - r1 * r8 / 12 - - r11 * x1 / 12 - - r12 * x1 / 12 - + x0 * (r11 + r12 + r8 * y1 + r9 * y0) / 12 - ) - - @cython.locals(r0=cython.double) - @cython.locals(r1=cython.double) - @cython.locals(r2=cython.double) - @cython.locals(r3=cython.double) - @cython.locals(r4=cython.double) - @cython.locals(r5=cython.double) - @cython.locals(r6=cython.double) - @cython.locals(r7=cython.double) - @cython.locals(r8=cython.double) - @cython.locals(r9=cython.double) - @cython.locals(r10=cython.double) - @cython.locals(r11=cython.double) - @cython.locals(r12=cython.double) - @cython.locals(r13=cython.double) - @cython.locals(r14=cython.double) - @cython.locals(r15=cython.double) - @cython.locals(r16=cython.double) - @cython.locals(r17=cython.double) - @cython.locals(r18=cython.double) - @cython.locals(r19=cython.double) - @cython.locals(r20=cython.double) - @cython.locals(r21=cython.double) - @cython.locals(r22=cython.double) - @cython.locals(r23=cython.double) - @cython.locals(r24=cython.double) - @cython.locals(r25=cython.double) - @cython.locals(r26=cython.double) - @cython.locals(r27=cython.double) - @cython.locals(r28=cython.double) - @cython.locals(r29=cython.double) - @cython.locals(r30=cython.double) - @cython.locals(r31=cython.double) - @cython.locals(r32=cython.double) - @cython.locals(r33=cython.double) - @cython.locals(r34=cython.double) - @cython.locals(r35=cython.double) - @cython.locals(r36=cython.double) - @cython.locals(r37=cython.double) - @cython.locals(r38=cython.double) - @cython.locals(r39=cython.double) - @cython.locals(r40=cython.double) - @cython.locals(r41=cython.double) - @cython.locals(r42=cython.double) - @cython.locals(r43=cython.double) - @cython.locals(r44=cython.double) - @cython.locals(r45=cython.double) - @cython.locals(r46=cython.double) - @cython.locals(r47=cython.double) - @cython.locals(r48=cython.double) - @cython.locals(r49=cython.double) - @cython.locals(r50=cython.double) - @cython.locals(r51=cython.double) - @cython.locals(r52=cython.double) - @cython.locals(r53=cython.double) - @cython.locals(x0=cython.double, y0=cython.double) - 
@cython.locals(x1=cython.double, y1=cython.double) - @cython.locals(x2=cython.double, y2=cython.double) - def _qCurveToOne(self, p1, p2): - x0, y0 = self._getCurrentPoint() - x1, y1 = p1 - x2, y2 = p2 - - r0 = 2 * y1 - r1 = r0 * x2 - r2 = x2 * y2 - r3 = 3 * r2 - r4 = 2 * x1 - r5 = 3 * y0 - r6 = x1**2 - r7 = x2**2 - r8 = 4 * y1 - r9 = 10 * y2 - r10 = 2 * y2 - r11 = r4 * x2 - r12 = x0**2 - r13 = 10 * y0 - r14 = r4 * y2 - r15 = x2 * y0 - r16 = 4 * x1 - r17 = r0 * x1 + r2 - r18 = r2 * r8 - r19 = y1**2 - r20 = 2 * r19 - r21 = y2**2 - r22 = r21 * x2 - r23 = 5 * r22 - r24 = y0**2 - r25 = y0 * y2 - r26 = 5 * r24 - r27 = x1**3 - r28 = x2**3 - r29 = 30 * y1 - r30 = 6 * y1 - r31 = 10 * r7 * x1 - r32 = 5 * y2 - r33 = 12 * r6 - r34 = 30 * x1 - r35 = x1 * y1 - r36 = r3 + 20 * r35 - r37 = 12 * x1 - r38 = 20 * r6 - r39 = 8 * r6 * y1 - r40 = r32 * r7 - r41 = 60 * y1 - r42 = 20 * r19 - r43 = 4 * r19 - r44 = 15 * r21 - r45 = 12 * x2 - r46 = 12 * y2 - r47 = 6 * x1 - r48 = 8 * r19 * x1 + r23 - r49 = 8 * y1**3 - r50 = y2**3 - r51 = y0**3 - r52 = 10 * y1 - r53 = 12 * y1 - - self.area += ( - -r1 / 6 - - r3 / 6 - + x0 * (r0 + r5 + y2) / 6 - + x1 * y2 / 3 - - y0 * (r4 + x2) / 6 - ) - self.momentX += ( - -r11 * (-r10 + y1) / 30 - + r12 * (r13 + r8 + y2) / 30 - + r6 * y2 / 15 - - r7 * r8 / 30 - - r7 * r9 / 30 - + x0 * (r14 - r15 - r16 * y0 + r17) / 30 - - y0 * (r11 + 2 * r6 + r7) / 30 - ) - self.momentY += ( - -r18 / 30 - - r20 * x2 / 30 - - r23 / 30 - - r24 * (r16 + x2) / 30 - + x0 * (r0 * y2 + r20 + r21 + r25 + r26 + r8 * y0) / 30 - + x1 * y2 * (r10 + y1) / 15 - - y0 * (r1 + r17) / 30 - ) - self.momentXX += ( - r12 * (r1 - 5 * r15 - r34 * y0 + r36 + r9 * x1) / 420 - + 2 * r27 * y2 / 105 - - r28 * r29 / 420 - - r28 * y2 / 4 - - r31 * (r0 - 3 * y2) / 420 - - r6 * x2 * (r0 - r32) / 105 - + x0**3 * (r30 + 21 * y0 + y2) / 84 - - x0 - * ( - r0 * r7 - + r15 * r37 - - r2 * r37 - - r33 * y2 - + r38 * y0 - - r39 - - r40 - + r5 * r7 - ) - / 420 - - y0 * (8 * r27 + 5 * r28 + r31 + r33 * x2) / 420 - ) - self.momentXY += ( - r12 * (r13 * y2 + 3 * r21 + 105 * r24 + r41 * y0 + r42 + r46 * y1) / 840 - - r16 * x2 * (r43 - r44) / 840 - - r21 * r7 / 8 - - r24 * (r38 + r45 * x1 + 3 * r7) / 840 - - r41 * r7 * y2 / 840 - - r42 * r7 / 840 - + r6 * y2 * (r32 + r8) / 210 - + x0 - * ( - -r15 * r8 - + r16 * r25 - + r18 - + r21 * r47 - - r24 * r34 - - r26 * x2 - + r35 * r46 - + r48 - ) - / 420 - - y0 * (r16 * r2 + r30 * r7 + r35 * r45 + r39 + r40) / 420 - ) - self.momentYY += ( - -r2 * r42 / 420 - - r22 * r29 / 420 - - r24 * (r14 + r36 + r52 * x2) / 420 - - r49 * x2 / 420 - - r50 * x2 / 12 - - r51 * (r47 + x2) / 84 - + x0 - * ( - r19 * r46 - + r21 * r5 - + r21 * r52 - + r24 * r29 - + r25 * r53 - + r26 * y2 - + r42 * y0 - + r49 - + 5 * r50 - + 35 * r51 - ) - / 420 - + x1 * y2 * (r43 + r44 + r9 * y1) / 210 - - y0 * (r19 * r45 + r2 * r53 - r21 * r4 + r48) / 420 - ) - - @cython.locals(r0=cython.double) - @cython.locals(r1=cython.double) - @cython.locals(r2=cython.double) - @cython.locals(r3=cython.double) - @cython.locals(r4=cython.double) - @cython.locals(r5=cython.double) - @cython.locals(r6=cython.double) - @cython.locals(r7=cython.double) - @cython.locals(r8=cython.double) - @cython.locals(r9=cython.double) - @cython.locals(r10=cython.double) - @cython.locals(r11=cython.double) - @cython.locals(r12=cython.double) - @cython.locals(r13=cython.double) - @cython.locals(r14=cython.double) - @cython.locals(r15=cython.double) - @cython.locals(r16=cython.double) - @cython.locals(r17=cython.double) - @cython.locals(r18=cython.double) - 
@cython.locals(r19=cython.double) - @cython.locals(r20=cython.double) - @cython.locals(r21=cython.double) - @cython.locals(r22=cython.double) - @cython.locals(r23=cython.double) - @cython.locals(r24=cython.double) - @cython.locals(r25=cython.double) - @cython.locals(r26=cython.double) - @cython.locals(r27=cython.double) - @cython.locals(r28=cython.double) - @cython.locals(r29=cython.double) - @cython.locals(r30=cython.double) - @cython.locals(r31=cython.double) - @cython.locals(r32=cython.double) - @cython.locals(r33=cython.double) - @cython.locals(r34=cython.double) - @cython.locals(r35=cython.double) - @cython.locals(r36=cython.double) - @cython.locals(r37=cython.double) - @cython.locals(r38=cython.double) - @cython.locals(r39=cython.double) - @cython.locals(r40=cython.double) - @cython.locals(r41=cython.double) - @cython.locals(r42=cython.double) - @cython.locals(r43=cython.double) - @cython.locals(r44=cython.double) - @cython.locals(r45=cython.double) - @cython.locals(r46=cython.double) - @cython.locals(r47=cython.double) - @cython.locals(r48=cython.double) - @cython.locals(r49=cython.double) - @cython.locals(r50=cython.double) - @cython.locals(r51=cython.double) - @cython.locals(r52=cython.double) - @cython.locals(r53=cython.double) - @cython.locals(r54=cython.double) - @cython.locals(r55=cython.double) - @cython.locals(r56=cython.double) - @cython.locals(r57=cython.double) - @cython.locals(r58=cython.double) - @cython.locals(r59=cython.double) - @cython.locals(r60=cython.double) - @cython.locals(r61=cython.double) - @cython.locals(r62=cython.double) - @cython.locals(r63=cython.double) - @cython.locals(r64=cython.double) - @cython.locals(r65=cython.double) - @cython.locals(r66=cython.double) - @cython.locals(r67=cython.double) - @cython.locals(r68=cython.double) - @cython.locals(r69=cython.double) - @cython.locals(r70=cython.double) - @cython.locals(r71=cython.double) - @cython.locals(r72=cython.double) - @cython.locals(r73=cython.double) - @cython.locals(r74=cython.double) - @cython.locals(r75=cython.double) - @cython.locals(r76=cython.double) - @cython.locals(r77=cython.double) - @cython.locals(r78=cython.double) - @cython.locals(r79=cython.double) - @cython.locals(r80=cython.double) - @cython.locals(r81=cython.double) - @cython.locals(r82=cython.double) - @cython.locals(r83=cython.double) - @cython.locals(r84=cython.double) - @cython.locals(r85=cython.double) - @cython.locals(r86=cython.double) - @cython.locals(r87=cython.double) - @cython.locals(r88=cython.double) - @cython.locals(r89=cython.double) - @cython.locals(r90=cython.double) - @cython.locals(r91=cython.double) - @cython.locals(r92=cython.double) - @cython.locals(r93=cython.double) - @cython.locals(r94=cython.double) - @cython.locals(r95=cython.double) - @cython.locals(r96=cython.double) - @cython.locals(r97=cython.double) - @cython.locals(r98=cython.double) - @cython.locals(r99=cython.double) - @cython.locals(r100=cython.double) - @cython.locals(r101=cython.double) - @cython.locals(r102=cython.double) - @cython.locals(r103=cython.double) - @cython.locals(r104=cython.double) - @cython.locals(r105=cython.double) - @cython.locals(r106=cython.double) - @cython.locals(r107=cython.double) - @cython.locals(r108=cython.double) - @cython.locals(r109=cython.double) - @cython.locals(r110=cython.double) - @cython.locals(r111=cython.double) - @cython.locals(r112=cython.double) - @cython.locals(r113=cython.double) - @cython.locals(r114=cython.double) - @cython.locals(r115=cython.double) - @cython.locals(r116=cython.double) - 
@cython.locals(r117=cython.double) - @cython.locals(r118=cython.double) - @cython.locals(r119=cython.double) - @cython.locals(r120=cython.double) - @cython.locals(r121=cython.double) - @cython.locals(r122=cython.double) - @cython.locals(r123=cython.double) - @cython.locals(r124=cython.double) - @cython.locals(r125=cython.double) - @cython.locals(r126=cython.double) - @cython.locals(r127=cython.double) - @cython.locals(r128=cython.double) - @cython.locals(r129=cython.double) - @cython.locals(r130=cython.double) - @cython.locals(r131=cython.double) - @cython.locals(r132=cython.double) - @cython.locals(x0=cython.double, y0=cython.double) - @cython.locals(x1=cython.double, y1=cython.double) - @cython.locals(x2=cython.double, y2=cython.double) - @cython.locals(x3=cython.double, y3=cython.double) - def _curveToOne(self, p1, p2, p3): - x0, y0 = self._getCurrentPoint() - x1, y1 = p1 - x2, y2 = p2 - x3, y3 = p3 - - r0 = 6 * y2 - r1 = r0 * x3 - r2 = 10 * y3 - r3 = r2 * x3 - r4 = 3 * y1 - r5 = 6 * x1 - r6 = 3 * x2 - r7 = 6 * y1 - r8 = 3 * y2 - r9 = x2**2 - r10 = 45 * r9 - r11 = r10 * y3 - r12 = x3**2 - r13 = r12 * y2 - r14 = r12 * y3 - r15 = 7 * y3 - r16 = 15 * x3 - r17 = r16 * x2 - r18 = x1**2 - r19 = 9 * r18 - r20 = x0**2 - r21 = 21 * y1 - r22 = 9 * r9 - r23 = r7 * x3 - r24 = 9 * y2 - r25 = r24 * x2 + r3 - r26 = 9 * x2 - r27 = x2 * y3 - r28 = -r26 * y1 + 15 * r27 - r29 = 3 * x1 - r30 = 45 * x1 - r31 = 12 * x3 - r32 = 45 * r18 - r33 = 5 * r12 - r34 = r8 * x3 - r35 = 105 * y0 - r36 = 30 * y0 - r37 = r36 * x2 - r38 = 5 * x3 - r39 = 15 * y3 - r40 = 5 * y3 - r41 = r40 * x3 - r42 = x2 * y2 - r43 = 18 * r42 - r44 = 45 * y1 - r45 = r41 + r43 + r44 * x1 - r46 = y2 * y3 - r47 = r46 * x3 - r48 = y2**2 - r49 = 45 * r48 - r50 = r49 * x3 - r51 = y3**2 - r52 = r51 * x3 - r53 = y1**2 - r54 = 9 * r53 - r55 = y0**2 - r56 = 21 * x1 - r57 = 6 * x2 - r58 = r16 * y2 - r59 = r39 * y2 - r60 = 9 * r48 - r61 = r6 * y3 - r62 = 3 * y3 - r63 = r36 * y2 - r64 = y1 * y3 - r65 = 45 * r53 - r66 = 5 * r51 - r67 = x2**3 - r68 = x3**3 - r69 = 630 * y2 - r70 = 126 * x3 - r71 = x1**3 - r72 = 126 * x2 - r73 = 63 * r9 - r74 = r73 * x3 - r75 = r15 * x3 + 15 * r42 - r76 = 630 * x1 - r77 = 14 * x3 - r78 = 21 * r27 - r79 = 42 * x1 - r80 = 42 * x2 - r81 = x1 * y2 - r82 = 63 * r42 - r83 = x1 * y1 - r84 = r41 + r82 + 378 * r83 - r85 = x2 * x3 - r86 = r85 * y1 - r87 = r27 * x3 - r88 = 27 * r9 - r89 = r88 * y2 - r90 = 42 * r14 - r91 = 90 * x1 - r92 = 189 * r18 - r93 = 378 * r18 - r94 = r12 * y1 - r95 = 252 * x1 * x2 - r96 = r79 * x3 - r97 = 30 * r85 - r98 = r83 * x3 - r99 = 30 * x3 - r100 = 42 * x3 - r101 = r42 * x1 - r102 = r10 * y2 + 14 * r14 + 126 * r18 * y1 + r81 * r99 - r103 = 378 * r48 - r104 = 18 * y1 - r105 = r104 * y2 - r106 = y0 * y1 - r107 = 252 * y2 - r108 = r107 * y0 - r109 = y0 * y3 - r110 = 42 * r64 - r111 = 378 * r53 - r112 = 63 * r48 - r113 = 27 * x2 - r114 = r27 * y2 - r115 = r113 * r48 + 42 * r52 - r116 = x3 * y3 - r117 = 54 * r42 - r118 = r51 * x1 - r119 = r51 * x2 - r120 = r48 * x1 - r121 = 21 * x3 - r122 = r64 * x1 - r123 = r81 * y3 - r124 = 30 * r27 * y1 + r49 * x2 + 14 * r52 + 126 * r53 * x1 - r125 = y2**3 - r126 = y3**3 - r127 = y1**3 - r128 = y0**3 - r129 = r51 * y2 - r130 = r112 * y3 + r21 * r51 - r131 = 189 * r53 - r132 = 90 * y2 - - self.area += ( - -r1 / 20 - - r3 / 20 - - r4 * (x2 + x3) / 20 - + x0 * (r7 + r8 + 10 * y0 + y3) / 20 - + 3 * x1 * (y2 + y3) / 20 - + 3 * x2 * y3 / 10 - - y0 * (r5 + r6 + x3) / 20 - ) - self.momentX += ( - r11 / 840 - - r13 / 8 - - r14 / 3 - - r17 * (-r15 + r8) / 840 - + r19 * (r8 + 2 * y3) 
/ 840 - + r20 * (r0 + r21 + 56 * y0 + y3) / 168 - + r29 * (-r23 + r25 + r28) / 840 - - r4 * (10 * r12 + r17 + r22) / 840 - + x0 - * ( - 12 * r27 - + r30 * y2 - + r34 - - r35 * x1 - - r37 - - r38 * y0 - + r39 * x1 - - r4 * x3 - + r45 - ) - / 840 - - y0 * (r17 + r30 * x2 + r31 * x1 + r32 + r33 + 18 * r9) / 840 - ) - self.momentY += ( - -r4 * (r25 + r58) / 840 - - r47 / 8 - - r50 / 840 - - r52 / 6 - - r54 * (r6 + 2 * x3) / 840 - - r55 * (r56 + r57 + x3) / 168 - + x0 - * ( - r35 * y1 - + r40 * y0 - + r44 * y2 - + 18 * r48 - + 140 * r55 - + r59 - + r63 - + 12 * r64 - + r65 - + r66 - ) - / 840 - + x1 * (r24 * y1 + 10 * r51 + r59 + r60 + r7 * y3) / 280 - + x2 * y3 * (r15 + r8) / 56 - - y0 * (r16 * y1 + r31 * y2 + r44 * x2 + r45 + r61 - r62 * x1) / 840 - ) - self.momentXX += ( - -r12 * r72 * (-r40 + r8) / 9240 - + 3 * r18 * (r28 + r34 - r38 * y1 + r75) / 3080 - + r20 - * ( - r24 * x3 - - r72 * y0 - - r76 * y0 - - r77 * y0 - + r78 - + r79 * y3 - + r80 * y1 - + 210 * r81 - + r84 - ) - / 9240 - - r29 - * ( - r12 * r21 - + 14 * r13 - + r44 * r9 - - r73 * y3 - + 54 * r86 - - 84 * r87 - - r89 - - r90 - ) - / 9240 - - r4 * (70 * r12 * x2 + 27 * r67 + 42 * r68 + r74) / 9240 - + 3 * r67 * y3 / 220 - - r68 * r69 / 9240 - - r68 * y3 / 4 - - r70 * r9 * (-r62 + y2) / 9240 - + 3 * r71 * (r24 + r40) / 3080 - + x0**3 * (r24 + r44 + 165 * y0 + y3) / 660 - + x0 - * ( - r100 * r27 - + 162 * r101 - + r102 - + r11 - + 63 * r18 * y3 - + r27 * r91 - - r33 * y0 - - r37 * x3 - + r43 * x3 - - r73 * y0 - - r88 * y1 - + r92 * y2 - - r93 * y0 - - 9 * r94 - - r95 * y0 - - r96 * y0 - - r97 * y1 - - 18 * r98 - + r99 * x1 * y3 - ) - / 9240 - - y0 - * ( - r12 * r56 - + r12 * r80 - + r32 * x3 - + 45 * r67 - + 14 * r68 - + 126 * r71 - + r74 - + r85 * r91 - + 135 * r9 * x1 - + r92 * x2 - ) - / 9240 - ) - self.momentXY += ( - -r103 * r12 / 18480 - - r12 * r51 / 8 - - 3 * r14 * y2 / 44 - + 3 * r18 * (r105 + r2 * y1 + 18 * r46 + 15 * r48 + 7 * r51) / 6160 - + r20 - * ( - 1260 * r106 - + r107 * y1 - + r108 - + 28 * r109 - + r110 - + r111 - + r112 - + 30 * r46 - + 2310 * r55 - + r66 - ) - / 18480 - - r54 * (7 * r12 + 18 * r85 + 15 * r9) / 18480 - - r55 * (r33 + r73 + r93 + r95 + r96 + r97) / 18480 - - r7 * (42 * r13 + r82 * x3 + 28 * r87 + r89 + r90) / 18480 - - 3 * r85 * (r48 - r66) / 220 - + 3 * r9 * y3 * (r62 + 2 * y2) / 440 - + x0 - * ( - -r1 * y0 - - 84 * r106 * x2 - + r109 * r56 - + 54 * r114 - + r117 * y1 - + 15 * r118 - + 21 * r119 - + 81 * r120 - + r121 * r46 - + 54 * r122 - + 60 * r123 - + r124 - - r21 * x3 * y0 - + r23 * y3 - - r54 * x3 - - r55 * r72 - - r55 * r76 - - r55 * r77 - + r57 * y0 * y3 - + r60 * x3 - + 84 * r81 * y0 - + 189 * r81 * y1 - ) - / 9240 - + x1 - * ( - r104 * r27 - - r105 * x3 - - r113 * r53 - + 63 * r114 - + r115 - - r16 * r53 - + 28 * r47 - + r51 * r80 - ) - / 3080 - - y0 - * ( - 54 * r101 - + r102 - + r116 * r5 - + r117 * x3 - + 21 * r13 - - r19 * y3 - + r22 * y3 - + r78 * x3 - + 189 * r83 * x2 - + 60 * r86 - + 81 * r9 * y1 - + 15 * r94 - + 54 * r98 - ) - / 9240 - ) - self.momentYY += ( - -r103 * r116 / 9240 - - r125 * r70 / 9240 - - r126 * x3 / 12 - - 3 * r127 * (r26 + r38) / 3080 - - r128 * (r26 + r30 + x3) / 660 - - r4 * (r112 * x3 + r115 - 14 * r119 + 84 * r47) / 9240 - - r52 * r69 / 9240 - - r54 * (r58 + r61 + r75) / 9240 - - r55 - * (r100 * y1 + r121 * y2 + r26 * y3 + r79 * y2 + r84 + 210 * x2 * y1) - / 9240 - + x0 - * ( - r108 * y1 - + r110 * y0 - + r111 * y0 - + r112 * y0 - + 45 * r125 - + 14 * r126 - + 126 * r127 - + 770 * r128 - + 42 * r129 - + r130 - + r131 * y2 - + r132 * r64 - + 135 * r48 * 
y1 - + 630 * r55 * y1 - + 126 * r55 * y2 - + 14 * r55 * y3 - + r63 * y3 - + r65 * y3 - + r66 * y0 - ) - / 9240 - + x1 - * ( - 27 * r125 - + 42 * r126 - + 70 * r129 - + r130 - + r39 * r53 - + r44 * r48 - + 27 * r53 * y2 - + 54 * r64 * y2 - ) - / 3080 - + 3 * x2 * y3 * (r48 + r66 + r8 * y3) / 220 - - y0 - * ( - r100 * r46 - + 18 * r114 - - 9 * r118 - - 27 * r120 - - 18 * r122 - - 30 * r123 - + r124 - + r131 * x2 - + r132 * x3 * y1 - + 162 * r42 * y1 - + r50 - + 63 * r53 * x3 - + r64 * r99 - ) - / 9240 - ) - - -if __name__ == "__main__": - from fontTools.misc.symfont import x, y, printGreenPen - - printGreenPen( - "MomentsPen", - [ - ("area", 1), - ("momentX", x), - ("momentY", y), - ("momentXX", x**2), - ("momentXY", x * y), - ("momentYY", y**2), - ], - ) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/ttLib/tables/_o_p_b_d.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/ttLib/tables/_o_p_b_d.py deleted file mode 100644 index b22af216bb2e2ddb8af1cd3f991d4ede69471076..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/ttLib/tables/_o_p_b_d.py +++ /dev/null @@ -1,6 +0,0 @@ -from .otBase import BaseTTXConverter - - -# https://developer.apple.com/fonts/TrueType-Reference-Manual/RM06/Chap6opbd.html -class table__o_p_b_d(BaseTTXConverter): - pass diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/table.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/table.py deleted file mode 100644 index d42cdf878d618b62b7d329d317b05e0f6bd87705..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/table.py +++ /dev/null @@ -1,831 +0,0 @@ -# Original code by: -# John Gill -# Copyright 2004 John Gill and John Hunter -# -# Subsequent changes: -# The Matplotlib development team -# Copyright The Matplotlib development team - -""" -Tables drawing. - -.. note:: - The table implementation in Matplotlib is lightly maintained. For a more - featureful table implementation, you may wish to try `blume - `_. - -Use the factory function `~matplotlib.table.table` to create a ready-made -table from texts. If you need more control, use the `.Table` class and its -methods. - -The table consists of a grid of cells, which are indexed by (row, column). -The cell (0, 0) is positioned at the top left. - -Thanks to John Gill for providing the class and table. -""" - -import numpy as np - -from . import _api, _docstring -from .artist import Artist, allow_rasterization -from .patches import Rectangle -from .text import Text -from .transforms import Bbox -from .path import Path - - -class Cell(Rectangle): - """ - A cell is a `.Rectangle` with some associated `.Text`. - - As a user, you'll most likely not creates cells yourself. Instead, you - should use either the `~matplotlib.table.table` factory function or - `.Table.add_cell`. - """ - - PAD = 0.1 - """Padding between text and rectangle.""" - - _edges = 'BRTL' - _edge_aliases = {'open': '', - 'closed': _edges, # default - 'horizontal': 'BT', - 'vertical': 'RL' - } - - def __init__(self, xy, width, height, *, - edgecolor='k', facecolor='w', - fill=True, - text='', - loc=None, - fontproperties=None, - visible_edges='closed', - ): - """ - Parameters - ---------- - xy : 2-tuple - The position of the bottom left corner of the cell. - width : float - The cell width. - height : float - The cell height. 
- edgecolor : color - The color of the cell border. - facecolor : color - The cell facecolor. - fill : bool - Whether the cell background is filled. - text : str - The cell text. - loc : {'left', 'center', 'right'}, default: 'right' - The alignment of the text within the cell. - fontproperties : dict - A dict defining the font properties of the text. Supported keys and - values are the keyword arguments accepted by `.FontProperties`. - visible_edges : str, default: 'closed' - The cell edges to be drawn with a line: a substring of 'BRTL' - (bottom, right, top, left), or one of 'open' (no edges drawn), - 'closed' (all edges drawn), 'horizontal' (bottom and top), - 'vertical' (right and left). - """ - - # Call base - super().__init__(xy, width=width, height=height, fill=fill, - edgecolor=edgecolor, facecolor=facecolor) - self.set_clip_on(False) - self.visible_edges = visible_edges - - # Create text object - if loc is None: - loc = 'right' - self._loc = loc - self._text = Text(x=xy[0], y=xy[1], clip_on=False, - text=text, fontproperties=fontproperties, - horizontalalignment=loc, verticalalignment='center') - - @_api.rename_parameter("3.8", "trans", "t") - def set_transform(self, t): - super().set_transform(t) - # the text does not get the transform! - self.stale = True - - def set_figure(self, fig): - super().set_figure(fig) - self._text.set_figure(fig) - - def get_text(self): - """Return the cell `.Text` instance.""" - return self._text - - def set_fontsize(self, size): - """Set the text fontsize.""" - self._text.set_fontsize(size) - self.stale = True - - def get_fontsize(self): - """Return the cell fontsize.""" - return self._text.get_fontsize() - - def auto_set_font_size(self, renderer): - """Shrink font size until the text fits into the cell width.""" - fontsize = self.get_fontsize() - required = self.get_required_width(renderer) - while fontsize > 1 and required > self.get_width(): - fontsize -= 1 - self.set_fontsize(fontsize) - required = self.get_required_width(renderer) - - return fontsize - - @allow_rasterization - def draw(self, renderer): - if not self.get_visible(): - return - # draw the rectangle - super().draw(renderer) - # position the text - self._set_text_position(renderer) - self._text.draw(renderer) - self.stale = False - - def _set_text_position(self, renderer): - """Set text up so it is drawn in the right place.""" - bbox = self.get_window_extent(renderer) - # center vertically - y = bbox.y0 + bbox.height / 2 - # position horizontally - loc = self._text.get_horizontalalignment() - if loc == 'center': - x = bbox.x0 + bbox.width / 2 - elif loc == 'left': - x = bbox.x0 + bbox.width * self.PAD - else: # right. - x = bbox.x0 + bbox.width * (1 - self.PAD) - self._text.set_position((x, y)) - - def get_text_bounds(self, renderer): - """ - Return the text bounds as *(x, y, width, height)* in table coordinates. - """ - return (self._text.get_window_extent(renderer) - .transformed(self.get_data_transform().inverted()) - .bounds) - - def get_required_width(self, renderer): - """Return the minimal required width for the cell.""" - l, b, w, h = self.get_text_bounds(renderer) - return w * (1.0 + (2.0 * self.PAD)) - - @_docstring.dedent_interpd - def set_text_props(self, **kwargs): - """ - Update the text properties. - - Valid keyword arguments are: - - %(Text:kwdoc)s - """ - self._text._internal_update(kwargs) - self.stale = True - - @property - def visible_edges(self): - """ - The cell edges to be drawn with a line. 
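-
-        For example, ``cell.visible_edges = 'BT'`` draws only the bottom and top
-        edges, which is the same as the 'horizontal' alias.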
- - Reading this property returns a substring of 'BRTL' (bottom, right, - top, left'). - - When setting this property, you can use a substring of 'BRTL' or one - of {'open', 'closed', 'horizontal', 'vertical'}. - """ - return self._visible_edges - - @visible_edges.setter - def visible_edges(self, value): - if value is None: - self._visible_edges = self._edges - elif value in self._edge_aliases: - self._visible_edges = self._edge_aliases[value] - else: - if any(edge not in self._edges for edge in value): - raise ValueError('Invalid edge param {}, must only be one of ' - '{} or string of {}'.format( - value, - ", ".join(self._edge_aliases), - ", ".join(self._edges))) - self._visible_edges = value - self.stale = True - - def get_path(self): - """Return a `.Path` for the `.visible_edges`.""" - codes = [Path.MOVETO] - codes.extend( - Path.LINETO if edge in self._visible_edges else Path.MOVETO - for edge in self._edges) - if Path.MOVETO not in codes[1:]: # All sides are visible - codes[-1] = Path.CLOSEPOLY - return Path( - [[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0], [0.0, 0.0]], - codes, - readonly=True - ) - - -CustomCell = Cell # Backcompat. alias. - - -class Table(Artist): - """ - A table of cells. - - The table consists of a grid of cells, which are indexed by (row, column). - - For a simple table, you'll have a full grid of cells with indices from - (0, 0) to (num_rows-1, num_cols-1), in which the cell (0, 0) is positioned - at the top left. However, you can also add cells with negative indices. - You don't have to add a cell to every grid position, so you can create - tables that have holes. - - *Note*: You'll usually not create an empty table from scratch. Instead use - `~matplotlib.table.table` to create a table from data. - """ - codes = {'best': 0, - 'upper right': 1, # default - 'upper left': 2, - 'lower left': 3, - 'lower right': 4, - 'center left': 5, - 'center right': 6, - 'lower center': 7, - 'upper center': 8, - 'center': 9, - 'top right': 10, - 'top left': 11, - 'bottom left': 12, - 'bottom right': 13, - 'right': 14, - 'left': 15, - 'top': 16, - 'bottom': 17, - } - """Possible values where to place the table relative to the Axes.""" - - FONTSIZE = 10 - - AXESPAD = 0.02 - """The border between the Axes and the table edge in Axes units.""" - - def __init__(self, ax, loc=None, bbox=None, **kwargs): - """ - Parameters - ---------- - ax : `~matplotlib.axes.Axes` - The `~.axes.Axes` to plot the table into. - loc : str - The position of the cell with respect to *ax*. This must be one of - the `~.Table.codes`. - bbox : `.Bbox` or [xmin, ymin, width, height], optional - A bounding box to draw the table into. If this is not *None*, this - overrides *loc*. - - Other Parameters - ---------------- - **kwargs - `.Artist` properties. - """ - - super().__init__() - - if isinstance(loc, str): - if loc not in self.codes: - raise ValueError( - "Unrecognized location {!r}. Valid locations are\n\t{}" - .format(loc, '\n\t'.join(self.codes))) - loc = self.codes[loc] - self.set_figure(ax.figure) - self._axes = ax - self._loc = loc - self._bbox = bbox - - # use axes coords - ax._unstale_viewLim() - self.set_transform(ax.transAxes) - - self._cells = {} - self._edges = None - self._autoColumns = [] - self._autoFontsize = True - self._internal_update(kwargs) - - self.set_clip_on(False) - - def add_cell(self, row, col, *args, **kwargs): - """ - Create a cell and add it to the table. - - Parameters - ---------- - row : int - Row index. - col : int - Column index. 
- *args, **kwargs - All other parameters are passed on to `Cell`. - - Returns - ------- - `.Cell` - The created cell. - - """ - xy = (0, 0) - cell = Cell(xy, visible_edges=self.edges, *args, **kwargs) - self[row, col] = cell - return cell - - def __setitem__(self, position, cell): - """ - Set a custom cell in a given position. - """ - _api.check_isinstance(Cell, cell=cell) - try: - row, col = position[0], position[1] - except Exception as err: - raise KeyError('Only tuples length 2 are accepted as ' - 'coordinates') from err - cell.set_figure(self.figure) - cell.set_transform(self.get_transform()) - cell.set_clip_on(False) - self._cells[row, col] = cell - self.stale = True - - def __getitem__(self, position): - """Retrieve a custom cell from a given position.""" - return self._cells[position] - - @property - def edges(self): - """ - The default value of `~.Cell.visible_edges` for newly added - cells using `.add_cell`. - - Notes - ----- - This setting does currently only affect newly created cells using - `.add_cell`. - - To change existing cells, you have to set their edges explicitly:: - - for c in tab.get_celld().values(): - c.visible_edges = 'horizontal' - - """ - return self._edges - - @edges.setter - def edges(self, value): - self._edges = value - self.stale = True - - def _approx_text_height(self): - return (self.FONTSIZE / 72.0 * self.figure.dpi / - self._axes.bbox.height * 1.2) - - @allow_rasterization - def draw(self, renderer): - # docstring inherited - - # Need a renderer to do hit tests on mouseevent; assume the last one - # will do - if renderer is None: - renderer = self.figure._get_renderer() - if renderer is None: - raise RuntimeError('No renderer defined') - - if not self.get_visible(): - return - renderer.open_group('table', gid=self.get_gid()) - self._update_positions(renderer) - - for key in sorted(self._cells): - self._cells[key].draw(renderer) - - renderer.close_group('table') - self.stale = False - - def _get_grid_bbox(self, renderer): - """ - Get a bbox, in axes coordinates for the cells. - - Only include those in the range (0, 0) to (maxRow, maxCol). - """ - boxes = [cell.get_window_extent(renderer) - for (row, col), cell in self._cells.items() - if row >= 0 and col >= 0] - bbox = Bbox.union(boxes) - return bbox.transformed(self.get_transform().inverted()) - - def contains(self, mouseevent): - # docstring inherited - if self._different_canvas(mouseevent): - return False, {} - # TODO: Return index of the cell containing the cursor so that the user - # doesn't have to bind to each one individually. - renderer = self.figure._get_renderer() - if renderer is not None: - boxes = [cell.get_window_extent(renderer) - for (row, col), cell in self._cells.items() - if row >= 0 and col >= 0] - bbox = Bbox.union(boxes) - return bbox.contains(mouseevent.x, mouseevent.y), {} - else: - return False, {} - - def get_children(self): - """Return the Artists contained by the table.""" - return list(self._cells.values()) - - def get_window_extent(self, renderer=None): - # docstring inherited - if renderer is None: - renderer = self.figure._get_renderer() - self._update_positions(renderer) - boxes = [cell.get_window_extent(renderer) - for cell in self._cells.values()] - return Bbox.union(boxes) - - def _do_cell_alignment(self): - """ - Calculate row heights and column widths; position cells accordingly. 
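-
-        Each row is given the height of its tallest cell and each column the width
-        of its widest cell; cells are then placed at the resulting cumulative
-        offsets, with row 0 at the top.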
- """ - # Calculate row/column widths - widths = {} - heights = {} - for (row, col), cell in self._cells.items(): - height = heights.setdefault(row, 0.0) - heights[row] = max(height, cell.get_height()) - width = widths.setdefault(col, 0.0) - widths[col] = max(width, cell.get_width()) - - # work out left position for each column - xpos = 0 - lefts = {} - for col in sorted(widths): - lefts[col] = xpos - xpos += widths[col] - - ypos = 0 - bottoms = {} - for row in sorted(heights, reverse=True): - bottoms[row] = ypos - ypos += heights[row] - - # set cell positions - for (row, col), cell in self._cells.items(): - cell.set_x(lefts[col]) - cell.set_y(bottoms[row]) - - def auto_set_column_width(self, col): - """ - Automatically set the widths of given columns to optimal sizes. - - Parameters - ---------- - col : int or sequence of ints - The indices of the columns to auto-scale. - """ - col1d = np.atleast_1d(col) - if not np.issubdtype(col1d.dtype, np.integer): - _api.warn_deprecated("3.8", name="col", - message="%(name)r must be an int or sequence of ints. " - "Passing other types is deprecated since %(since)s " - "and will be removed %(removal)s.") - return - for cell in col1d: - self._autoColumns.append(cell) - - self.stale = True - - def _auto_set_column_width(self, col, renderer): - """Automatically set width for column.""" - cells = [cell for key, cell in self._cells.items() if key[1] == col] - max_width = max((cell.get_required_width(renderer) for cell in cells), - default=0) - for cell in cells: - cell.set_width(max_width) - - def auto_set_font_size(self, value=True): - """Automatically set font size.""" - self._autoFontsize = value - self.stale = True - - def _auto_set_font_size(self, renderer): - - if len(self._cells) == 0: - return - fontsize = next(iter(self._cells.values())).get_fontsize() - cells = [] - for key, cell in self._cells.items(): - # ignore auto-sized columns - if key[1] in self._autoColumns: - continue - size = cell.auto_set_font_size(renderer) - fontsize = min(fontsize, size) - cells.append(cell) - - # now set all fontsizes equal - for cell in self._cells.values(): - cell.set_fontsize(fontsize) - - def scale(self, xscale, yscale): - """Scale column widths by *xscale* and row heights by *yscale*.""" - for c in self._cells.values(): - c.set_width(c.get_width() * xscale) - c.set_height(c.get_height() * yscale) - - def set_fontsize(self, size): - """ - Set the font size, in points, of the cell text. - - Parameters - ---------- - size : float - - Notes - ----- - As long as auto font size has not been disabled, the value will be - clipped such that the text fits horizontally into the cell. - - You can disable this behavior using `.auto_set_font_size`. - - >>> the_table.auto_set_font_size(False) - >>> the_table.set_fontsize(20) - - However, there is no automatic scaling of the row height so that the - text may exceed the cell boundary. 
- """ - for cell in self._cells.values(): - cell.set_fontsize(size) - self.stale = True - - def _offset(self, ox, oy): - """Move all the artists by ox, oy (axes coords).""" - for c in self._cells.values(): - x, y = c.get_x(), c.get_y() - c.set_x(x + ox) - c.set_y(y + oy) - - def _update_positions(self, renderer): - # called from renderer to allow more precise estimates of - # widths and heights with get_window_extent - - # Do any auto width setting - for col in self._autoColumns: - self._auto_set_column_width(col, renderer) - - if self._autoFontsize: - self._auto_set_font_size(renderer) - - # Align all the cells - self._do_cell_alignment() - - bbox = self._get_grid_bbox(renderer) - l, b, w, h = bbox.bounds - - if self._bbox is not None: - # Position according to bbox - if isinstance(self._bbox, Bbox): - rl, rb, rw, rh = self._bbox.bounds - else: - rl, rb, rw, rh = self._bbox - self.scale(rw / w, rh / h) - ox = rl - l - oy = rb - b - self._do_cell_alignment() - else: - # Position using loc - (BEST, UR, UL, LL, LR, CL, CR, LC, UC, C, - TR, TL, BL, BR, R, L, T, B) = range(len(self.codes)) - # defaults for center - ox = (0.5 - w / 2) - l - oy = (0.5 - h / 2) - b - if self._loc in (UL, LL, CL): # left - ox = self.AXESPAD - l - if self._loc in (BEST, UR, LR, R, CR): # right - ox = 1 - (l + w + self.AXESPAD) - if self._loc in (BEST, UR, UL, UC): # upper - oy = 1 - (b + h + self.AXESPAD) - if self._loc in (LL, LR, LC): # lower - oy = self.AXESPAD - b - if self._loc in (LC, UC, C): # center x - ox = (0.5 - w / 2) - l - if self._loc in (CL, CR, C): # center y - oy = (0.5 - h / 2) - b - - if self._loc in (TL, BL, L): # out left - ox = - (l + w) - if self._loc in (TR, BR, R): # out right - ox = 1.0 - l - if self._loc in (TR, TL, T): # out top - oy = 1.0 - b - if self._loc in (BL, BR, B): # out bottom - oy = - (b + h) - - self._offset(ox, oy) - - def get_celld(self): - r""" - Return a dict of cells in the table mapping *(row, column)* to - `.Cell`\s. - - Notes - ----- - You can also directly index into the Table object to access individual - cells:: - - cell = table[row, col] - - """ - return self._cells - - -@_docstring.dedent_interpd -def table(ax, - cellText=None, cellColours=None, - cellLoc='right', colWidths=None, - rowLabels=None, rowColours=None, rowLoc='left', - colLabels=None, colColours=None, colLoc='center', - loc='bottom', bbox=None, edges='closed', - **kwargs): - """ - Add a table to an `~.axes.Axes`. - - At least one of *cellText* or *cellColours* must be specified. These - parameters must be 2D lists, in which the outer lists define the rows and - the inner list define the column values per row. Each row must have the - same number of elements. - - The table can optionally have row and column headers, which are configured - using *rowLabels*, *rowColours*, *rowLoc* and *colLabels*, *colColours*, - *colLoc* respectively. - - For finer grained control over tables, use the `.Table` class and add it to - the axes with `.Axes.add_table`. - - Parameters - ---------- - cellText : 2D list of str, optional - The texts to place into the table cells. - - *Note*: Line breaks in the strings are currently not accounted for and - will result in the text exceeding the cell boundaries. - - cellColours : 2D list of colors, optional - The background colors of the cells. - - cellLoc : {'left', 'center', 'right'}, default: 'right' - The alignment of the text within the cells. - - colWidths : list of float, optional - The column widths in units of the axes. 
If not given, all columns will - have a width of *1 / ncols*. - - rowLabels : list of str, optional - The text of the row header cells. - - rowColours : list of colors, optional - The colors of the row header cells. - - rowLoc : {'left', 'center', 'right'}, default: 'left' - The text alignment of the row header cells. - - colLabels : list of str, optional - The text of the column header cells. - - colColours : list of colors, optional - The colors of the column header cells. - - colLoc : {'left', 'center', 'right'}, default: 'left' - The text alignment of the column header cells. - - loc : str, optional - The position of the cell with respect to *ax*. This must be one of - the `~.Table.codes`. - - bbox : `.Bbox` or [xmin, ymin, width, height], optional - A bounding box to draw the table into. If this is not *None*, this - overrides *loc*. - - edges : substring of 'BRTL' or {'open', 'closed', 'horizontal', 'vertical'} - The cell edges to be drawn with a line. See also - `~.Cell.visible_edges`. - - Returns - ------- - `~matplotlib.table.Table` - The created table. - - Other Parameters - ---------------- - **kwargs - `.Table` properties. - - %(Table:kwdoc)s - """ - - if cellColours is None and cellText is None: - raise ValueError('At least one argument from "cellColours" or ' - '"cellText" must be provided to create a table.') - - # Check we have some cellText - if cellText is None: - # assume just colours are needed - rows = len(cellColours) - cols = len(cellColours[0]) - cellText = [[''] * cols] * rows - - rows = len(cellText) - cols = len(cellText[0]) - for row in cellText: - if len(row) != cols: - raise ValueError(f"Each row in 'cellText' must have {cols} " - "columns") - - if cellColours is not None: - if len(cellColours) != rows: - raise ValueError(f"'cellColours' must have {rows} rows") - for row in cellColours: - if len(row) != cols: - raise ValueError("Each row in 'cellColours' must have " - f"{cols} columns") - else: - cellColours = ['w' * cols] * rows - - # Set colwidths if not given - if colWidths is None: - colWidths = [1.0 / cols] * cols - - # Fill in missing information for column - # and row labels - rowLabelWidth = 0 - if rowLabels is None: - if rowColours is not None: - rowLabels = [''] * rows - rowLabelWidth = colWidths[0] - elif rowColours is None: - rowColours = 'w' * rows - - if rowLabels is not None: - if len(rowLabels) != rows: - raise ValueError(f"'rowLabels' must be of length {rows}") - - # If we have column labels, need to shift - # the text and colour arrays down 1 row - offset = 1 - if colLabels is None: - if colColours is not None: - colLabels = [''] * cols - else: - offset = 0 - elif colColours is None: - colColours = 'w' * cols - - # Set up cell colours if not given - if cellColours is None: - cellColours = ['w' * cols] * rows - - # Now create the table - table = Table(ax, loc, bbox, **kwargs) - table.edges = edges - height = table._approx_text_height() - - # Add the cells - for row in range(rows): - for col in range(cols): - table.add_cell(row + offset, col, - width=colWidths[col], height=height, - text=cellText[row][col], - facecolor=cellColours[row][col], - loc=cellLoc) - # Do column labels - if colLabels is not None: - for col in range(cols): - table.add_cell(0, col, - width=colWidths[col], height=height, - text=colLabels[col], facecolor=colColours[col], - loc=colLoc) - - # Do row labels - if rowLabels is not None: - for row in range(rows): - table.add_cell(row + offset, -1, - width=rowLabelWidth or 1e-15, height=height, - text=rowLabels[row], 
facecolor=rowColours[row], - loc=rowLoc) - if rowLabelWidth == 0: - table.auto_set_column_width(-1) - - ax.add_table(table) - return table diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/tests/test_backend_gtk3.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/tests/test_backend_gtk3.py deleted file mode 100644 index 6a95f47e1dddb368c2dbd2e534f20ee2baaae505..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/tests/test_backend_gtk3.py +++ /dev/null @@ -1,51 +0,0 @@ -from matplotlib import pyplot as plt - -import pytest - - -pytest.importorskip("matplotlib.backends.backend_gtk3agg") - - -@pytest.mark.backend("gtk3agg", skip_on_importerror=True) -def test_correct_key(): - pytest.xfail("test_widget_send_event is not triggering key_press_event") - - from gi.repository import Gdk, Gtk # type: ignore - fig = plt.figure() - buf = [] - - def send(event): - for key, mod in [ - (Gdk.KEY_a, Gdk.ModifierType.SHIFT_MASK), - (Gdk.KEY_a, 0), - (Gdk.KEY_a, Gdk.ModifierType.CONTROL_MASK), - (Gdk.KEY_agrave, 0), - (Gdk.KEY_Control_L, Gdk.ModifierType.MOD1_MASK), - (Gdk.KEY_Alt_L, Gdk.ModifierType.CONTROL_MASK), - (Gdk.KEY_agrave, - Gdk.ModifierType.CONTROL_MASK - | Gdk.ModifierType.MOD1_MASK - | Gdk.ModifierType.MOD4_MASK), - (0xfd16, 0), # KEY_3270_Play. - (Gdk.KEY_BackSpace, 0), - (Gdk.KEY_BackSpace, Gdk.ModifierType.CONTROL_MASK), - ]: - # This is not actually really the right API: it depends on the - # actual keymap (e.g. on Azerty, shift+agrave -> 0). - Gtk.test_widget_send_key(fig.canvas, key, mod) - - def receive(event): - buf.append(event.key) - if buf == [ - "A", "a", "ctrl+a", - "\N{LATIN SMALL LETTER A WITH GRAVE}", - "alt+control", "ctrl+alt", - "ctrl+alt+super+\N{LATIN SMALL LETTER A WITH GRAVE}", - # (No entry for KEY_3270_Play.) 
- "backspace", "ctrl+backspace", - ]: - plt.close(fig) - - fig.canvas.mpl_connect("draw_event", send) - fig.canvas.mpl_connect("key_press_event", receive) - plt.show() diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/f2py/__main__.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/f2py/__main__.py deleted file mode 100644 index 936a753a2796896667aa782277be41b40af061d3..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/f2py/__main__.py +++ /dev/null @@ -1,5 +0,0 @@ -# See: -# https://web.archive.org/web/20140822061353/http://cens.ioc.ee/projects/f2py2e -from numpy.f2py.f2py2e import main - -main() diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/arrays/sparse/test_unary.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/arrays/sparse/test_unary.py deleted file mode 100644 index c00a73773fdd4795e3d5d7f030a591a060dc3bfc..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/arrays/sparse/test_unary.py +++ /dev/null @@ -1,79 +0,0 @@ -import operator - -import numpy as np -import pytest - -import pandas as pd -import pandas._testing as tm -from pandas.core.arrays import SparseArray - - -@pytest.mark.filterwarnings("ignore:invalid value encountered in cast:RuntimeWarning") -@pytest.mark.parametrize("fill_value", [0, np.nan]) -@pytest.mark.parametrize("op", [operator.pos, operator.neg]) -def test_unary_op(op, fill_value): - arr = np.array([0, 1, np.nan, 2]) - sparray = SparseArray(arr, fill_value=fill_value) - result = op(sparray) - expected = SparseArray(op(arr), fill_value=op(fill_value)) - tm.assert_sp_array_equal(result, expected) - - -@pytest.mark.parametrize("fill_value", [True, False]) -def test_invert(fill_value): - arr = np.array([True, False, False, True]) - sparray = SparseArray(arr, fill_value=fill_value) - result = ~sparray - expected = SparseArray(~arr, fill_value=not fill_value) - tm.assert_sp_array_equal(result, expected) - - result = ~pd.Series(sparray) - expected = pd.Series(expected) - tm.assert_series_equal(result, expected) - - result = ~pd.DataFrame({"A": sparray}) - expected = pd.DataFrame({"A": expected}) - tm.assert_frame_equal(result, expected) - - -class TestUnaryMethods: - @pytest.mark.filterwarnings( - "ignore:invalid value encountered in cast:RuntimeWarning" - ) - def test_neg_operator(self): - arr = SparseArray([-1, -2, np.nan, 3], fill_value=np.nan, dtype=np.int8) - res = -arr - exp = SparseArray([1, 2, np.nan, -3], fill_value=np.nan, dtype=np.int8) - tm.assert_sp_array_equal(exp, res) - - arr = SparseArray([-1, -2, 1, 3], fill_value=-1, dtype=np.int8) - res = -arr - exp = SparseArray([1, 2, -1, -3], fill_value=1, dtype=np.int8) - tm.assert_sp_array_equal(exp, res) - - @pytest.mark.filterwarnings( - "ignore:invalid value encountered in cast:RuntimeWarning" - ) - def test_abs_operator(self): - arr = SparseArray([-1, -2, np.nan, 3], fill_value=np.nan, dtype=np.int8) - res = abs(arr) - exp = SparseArray([1, 2, np.nan, 3], fill_value=np.nan, dtype=np.int8) - tm.assert_sp_array_equal(exp, res) - - arr = SparseArray([-1, -2, 1, 3], fill_value=-1, dtype=np.int8) - res = abs(arr) - exp = SparseArray([1, 2, 1, 3], fill_value=1, dtype=np.int8) - tm.assert_sp_array_equal(exp, res) - - def test_invert_operator(self): - arr = SparseArray([False, True, False, True], fill_value=False, dtype=np.bool_) - exp = SparseArray( - 
np.invert([False, True, False, True]), fill_value=True, dtype=np.bool_ - ) - res = ~arr - tm.assert_sp_array_equal(exp, res) - - arr = SparseArray([0, 1, 0, 2, 3, 0], fill_value=0, dtype=np.int32) - res = ~arr - exp = SparseArray([-1, -2, -1, -3, -4, -1], fill_value=-1, dtype=np.int32) - tm.assert_sp_array_equal(exp, res) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/io/formats/test_eng_formatting.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/io/formats/test_eng_formatting.py deleted file mode 100644 index 2f18623559557f09d7774305b8cd0c626aa23da3..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/io/formats/test_eng_formatting.py +++ /dev/null @@ -1,234 +0,0 @@ -import numpy as np - -from pandas import DataFrame -import pandas._testing as tm - -import pandas.io.formats.format as fmt - - -class TestEngFormatter: - def test_eng_float_formatter(self): - df = DataFrame({"A": [1.41, 141.0, 14100, 1410000.0]}) - - fmt.set_eng_float_format() - result = df.to_string() - expected = ( - " A\n" - "0 1.410E+00\n" - "1 141.000E+00\n" - "2 14.100E+03\n" - "3 1.410E+06" - ) - assert result == expected - - fmt.set_eng_float_format(use_eng_prefix=True) - result = df.to_string() - expected = " A\n0 1.410\n1 141.000\n2 14.100k\n3 1.410M" - assert result == expected - - fmt.set_eng_float_format(accuracy=0) - result = df.to_string() - expected = " A\n0 1E+00\n1 141E+00\n2 14E+03\n3 1E+06" - assert result == expected - - tm.reset_display_options() - - def compare(self, formatter, input, output): - formatted_input = formatter(input) - assert formatted_input == output - - def compare_all(self, formatter, in_out): - """ - Parameters: - ----------- - formatter: EngFormatter under test - in_out: list of tuples. Each tuple = (number, expected_formatting) - - It is tested if 'formatter(number) == expected_formatting'. - *number* should be >= 0 because formatter(-number) == fmt is also - tested. 
*fmt* is derived from *expected_formatting* - """ - for input, output in in_out: - self.compare(formatter, input, output) - self.compare(formatter, -input, "-" + output[1:]) - - def test_exponents_with_eng_prefix(self): - formatter = fmt.EngFormatter(accuracy=3, use_eng_prefix=True) - f = np.sqrt(2) - in_out = [ - (f * 10**-24, " 1.414y"), - (f * 10**-23, " 14.142y"), - (f * 10**-22, " 141.421y"), - (f * 10**-21, " 1.414z"), - (f * 10**-20, " 14.142z"), - (f * 10**-19, " 141.421z"), - (f * 10**-18, " 1.414a"), - (f * 10**-17, " 14.142a"), - (f * 10**-16, " 141.421a"), - (f * 10**-15, " 1.414f"), - (f * 10**-14, " 14.142f"), - (f * 10**-13, " 141.421f"), - (f * 10**-12, " 1.414p"), - (f * 10**-11, " 14.142p"), - (f * 10**-10, " 141.421p"), - (f * 10**-9, " 1.414n"), - (f * 10**-8, " 14.142n"), - (f * 10**-7, " 141.421n"), - (f * 10**-6, " 1.414u"), - (f * 10**-5, " 14.142u"), - (f * 10**-4, " 141.421u"), - (f * 10**-3, " 1.414m"), - (f * 10**-2, " 14.142m"), - (f * 10**-1, " 141.421m"), - (f * 10**0, " 1.414"), - (f * 10**1, " 14.142"), - (f * 10**2, " 141.421"), - (f * 10**3, " 1.414k"), - (f * 10**4, " 14.142k"), - (f * 10**5, " 141.421k"), - (f * 10**6, " 1.414M"), - (f * 10**7, " 14.142M"), - (f * 10**8, " 141.421M"), - (f * 10**9, " 1.414G"), - (f * 10**10, " 14.142G"), - (f * 10**11, " 141.421G"), - (f * 10**12, " 1.414T"), - (f * 10**13, " 14.142T"), - (f * 10**14, " 141.421T"), - (f * 10**15, " 1.414P"), - (f * 10**16, " 14.142P"), - (f * 10**17, " 141.421P"), - (f * 10**18, " 1.414E"), - (f * 10**19, " 14.142E"), - (f * 10**20, " 141.421E"), - (f * 10**21, " 1.414Z"), - (f * 10**22, " 14.142Z"), - (f * 10**23, " 141.421Z"), - (f * 10**24, " 1.414Y"), - (f * 10**25, " 14.142Y"), - (f * 10**26, " 141.421Y"), - ] - self.compare_all(formatter, in_out) - - def test_exponents_without_eng_prefix(self): - formatter = fmt.EngFormatter(accuracy=4, use_eng_prefix=False) - f = np.pi - in_out = [ - (f * 10**-24, " 3.1416E-24"), - (f * 10**-23, " 31.4159E-24"), - (f * 10**-22, " 314.1593E-24"), - (f * 10**-21, " 3.1416E-21"), - (f * 10**-20, " 31.4159E-21"), - (f * 10**-19, " 314.1593E-21"), - (f * 10**-18, " 3.1416E-18"), - (f * 10**-17, " 31.4159E-18"), - (f * 10**-16, " 314.1593E-18"), - (f * 10**-15, " 3.1416E-15"), - (f * 10**-14, " 31.4159E-15"), - (f * 10**-13, " 314.1593E-15"), - (f * 10**-12, " 3.1416E-12"), - (f * 10**-11, " 31.4159E-12"), - (f * 10**-10, " 314.1593E-12"), - (f * 10**-9, " 3.1416E-09"), - (f * 10**-8, " 31.4159E-09"), - (f * 10**-7, " 314.1593E-09"), - (f * 10**-6, " 3.1416E-06"), - (f * 10**-5, " 31.4159E-06"), - (f * 10**-4, " 314.1593E-06"), - (f * 10**-3, " 3.1416E-03"), - (f * 10**-2, " 31.4159E-03"), - (f * 10**-1, " 314.1593E-03"), - (f * 10**0, " 3.1416E+00"), - (f * 10**1, " 31.4159E+00"), - (f * 10**2, " 314.1593E+00"), - (f * 10**3, " 3.1416E+03"), - (f * 10**4, " 31.4159E+03"), - (f * 10**5, " 314.1593E+03"), - (f * 10**6, " 3.1416E+06"), - (f * 10**7, " 31.4159E+06"), - (f * 10**8, " 314.1593E+06"), - (f * 10**9, " 3.1416E+09"), - (f * 10**10, " 31.4159E+09"), - (f * 10**11, " 314.1593E+09"), - (f * 10**12, " 3.1416E+12"), - (f * 10**13, " 31.4159E+12"), - (f * 10**14, " 314.1593E+12"), - (f * 10**15, " 3.1416E+15"), - (f * 10**16, " 31.4159E+15"), - (f * 10**17, " 314.1593E+15"), - (f * 10**18, " 3.1416E+18"), - (f * 10**19, " 31.4159E+18"), - (f * 10**20, " 314.1593E+18"), - (f * 10**21, " 3.1416E+21"), - (f * 10**22, " 31.4159E+21"), - (f * 10**23, " 314.1593E+21"), - (f * 10**24, " 3.1416E+24"), - (f * 10**25, " 31.4159E+24"), - (f * 10**26, " 
314.1593E+24"), - ] - self.compare_all(formatter, in_out) - - def test_rounding(self): - formatter = fmt.EngFormatter(accuracy=3, use_eng_prefix=True) - in_out = [ - (5.55555, " 5.556"), - (55.5555, " 55.556"), - (555.555, " 555.555"), - (5555.55, " 5.556k"), - (55555.5, " 55.556k"), - (555555, " 555.555k"), - ] - self.compare_all(formatter, in_out) - - formatter = fmt.EngFormatter(accuracy=1, use_eng_prefix=True) - in_out = [ - (5.55555, " 5.6"), - (55.5555, " 55.6"), - (555.555, " 555.6"), - (5555.55, " 5.6k"), - (55555.5, " 55.6k"), - (555555, " 555.6k"), - ] - self.compare_all(formatter, in_out) - - formatter = fmt.EngFormatter(accuracy=0, use_eng_prefix=True) - in_out = [ - (5.55555, " 6"), - (55.5555, " 56"), - (555.555, " 556"), - (5555.55, " 6k"), - (55555.5, " 56k"), - (555555, " 556k"), - ] - self.compare_all(formatter, in_out) - - formatter = fmt.EngFormatter(accuracy=3, use_eng_prefix=True) - result = formatter(0) - assert result == " 0.000" - - def test_nan(self): - # Issue #11981 - - formatter = fmt.EngFormatter(accuracy=1, use_eng_prefix=True) - result = formatter(np.nan) - assert result == "NaN" - - df = DataFrame( - { - "a": [1.5, 10.3, 20.5], - "b": [50.3, 60.67, 70.12], - "c": [100.2, 101.33, 120.33], - } - ) - pt = df.pivot_table(values="a", index="b", columns="c") - fmt.set_eng_float_format(accuracy=1) - result = pt.to_string() - assert "NaN" in result - tm.reset_display_options() - - def test_inf(self): - # Issue #11981 - - formatter = fmt.EngFormatter(accuracy=1, use_eng_prefix=True) - result = formatter(np.inf) - assert result == "inf" diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/html5lib/filters/lint.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/html5lib/filters/lint.py deleted file mode 100644 index fcc07eec5b2e5ea926fa8b2af199e14c9cac50dd..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/html5lib/filters/lint.py +++ /dev/null @@ -1,93 +0,0 @@ -from __future__ import absolute_import, division, unicode_literals - -from pip._vendor.six import text_type - -from . import base -from ..constants import namespaces, voidElements - -from ..constants import spaceCharacters -spaceCharacters = "".join(spaceCharacters) - - -class Filter(base.Filter): - """Lints the token stream for errors - - If it finds any errors, it'll raise an ``AssertionError``. 
- - """ - def __init__(self, source, require_matching_tags=True): - """Creates a Filter - - :arg source: the source token stream - - :arg require_matching_tags: whether or not to require matching tags - - """ - super(Filter, self).__init__(source) - self.require_matching_tags = require_matching_tags - - def __iter__(self): - open_elements = [] - for token in base.Filter.__iter__(self): - type = token["type"] - if type in ("StartTag", "EmptyTag"): - namespace = token["namespace"] - name = token["name"] - assert namespace is None or isinstance(namespace, text_type) - assert namespace != "" - assert isinstance(name, text_type) - assert name != "" - assert isinstance(token["data"], dict) - if (not namespace or namespace == namespaces["html"]) and name in voidElements: - assert type == "EmptyTag" - else: - assert type == "StartTag" - if type == "StartTag" and self.require_matching_tags: - open_elements.append((namespace, name)) - for (namespace, name), value in token["data"].items(): - assert namespace is None or isinstance(namespace, text_type) - assert namespace != "" - assert isinstance(name, text_type) - assert name != "" - assert isinstance(value, text_type) - - elif type == "EndTag": - namespace = token["namespace"] - name = token["name"] - assert namespace is None or isinstance(namespace, text_type) - assert namespace != "" - assert isinstance(name, text_type) - assert name != "" - if (not namespace or namespace == namespaces["html"]) and name in voidElements: - assert False, "Void element reported as EndTag token: %(tag)s" % {"tag": name} - elif self.require_matching_tags: - start = open_elements.pop() - assert start == (namespace, name) - - elif type == "Comment": - data = token["data"] - assert isinstance(data, text_type) - - elif type in ("Characters", "SpaceCharacters"): - data = token["data"] - assert isinstance(data, text_type) - assert data != "" - if type == "SpaceCharacters": - assert data.strip(spaceCharacters) == "" - - elif type == "Doctype": - name = token["name"] - assert name is None or isinstance(name, text_type) - assert token["publicId"] is None or isinstance(name, text_type) - assert token["systemId"] is None or isinstance(name, text_type) - - elif type == "Entity": - assert isinstance(token["name"], text_type) - - elif type == "SerializerError": - assert isinstance(token["data"], text_type) - - else: - assert False, "Unknown token type: %(type)s" % {"type": type} - - yield token diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/dsls.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/dsls.py deleted file mode 100644 index f607515140e7b20964205533c32048806d7f4374..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/dsls.py +++ /dev/null @@ -1,981 +0,0 @@ -""" - pygments.lexers.dsls - ~~~~~~~~~~~~~~~~~~~~ - - Lexers for various domain-specific languages. - - :copyright: Copyright 2006-2023 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. 
-""" - -import re - -from pygments.lexer import ExtendedRegexLexer, RegexLexer, bygroups, words, \ - include, default, this, using, combined -from pygments.token import Text, Comment, Operator, Keyword, Name, String, \ - Number, Punctuation, Whitespace - -__all__ = ['ProtoBufLexer', 'ZeekLexer', 'PuppetLexer', 'RslLexer', - 'MscgenLexer', 'VGLLexer', 'AlloyLexer', 'PanLexer', - 'CrmshLexer', 'ThriftLexer', 'FlatlineLexer', 'SnowballLexer'] - - -class ProtoBufLexer(RegexLexer): - """ - Lexer for Protocol Buffer definition files. - - .. versionadded:: 1.4 - """ - - name = 'Protocol Buffer' - url = 'https://developers.google.com/protocol-buffers/' - aliases = ['protobuf', 'proto'] - filenames = ['*.proto'] - - tokens = { - 'root': [ - (r'[ \t]+', Whitespace), - (r'[,;{}\[\]()<>]', Punctuation), - (r'/(\\\n)?/(\n|(.|\n)*?[^\\]\n)', Comment.Single), - (r'/(\\\n)?\*(.|\n)*?\*(\\\n)?/', Comment.Multiline), - (words(( - 'import', 'option', 'optional', 'required', 'repeated', - 'reserved', 'default', 'packed', 'ctype', 'extensions', 'to', - 'max', 'rpc', 'returns', 'oneof', 'syntax'), prefix=r'\b', suffix=r'\b'), - Keyword), - (words(( - 'int32', 'int64', 'uint32', 'uint64', 'sint32', 'sint64', - 'fixed32', 'fixed64', 'sfixed32', 'sfixed64', - 'float', 'double', 'bool', 'string', 'bytes'), suffix=r'\b'), - Keyword.Type), - (r'(true|false)\b', Keyword.Constant), - (r'(package)(\s+)', bygroups(Keyword.Namespace, Whitespace), 'package'), - (r'(message|extend)(\s+)', - bygroups(Keyword.Declaration, Whitespace), 'message'), - (r'(enum|group|service)(\s+)', - bygroups(Keyword.Declaration, Whitespace), 'type'), - (r'\".*?\"', String), - (r'\'.*?\'', String), - (r'(\d+\.\d*|\.\d+|\d+)[eE][+-]?\d+[LlUu]*', Number.Float), - (r'(\d+\.\d*|\.\d+|\d+[fF])[fF]?', Number.Float), - (r'(\-?(inf|nan))\b', Number.Float), - (r'0x[0-9a-fA-F]+[LlUu]*', Number.Hex), - (r'0[0-7]+[LlUu]*', Number.Oct), - (r'\d+[LlUu]*', Number.Integer), - (r'[+-=]', Operator), - (r'([a-zA-Z_][\w.]*)([ \t]*)(=)', - bygroups(Name.Attribute, Whitespace, Operator)), - (r'[a-zA-Z_][\w.]*', Name), - ], - 'package': [ - (r'[a-zA-Z_]\w*', Name.Namespace, '#pop'), - default('#pop'), - ], - 'message': [ - (r'[a-zA-Z_]\w*', Name.Class, '#pop'), - default('#pop'), - ], - 'type': [ - (r'[a-zA-Z_]\w*', Name, '#pop'), - default('#pop'), - ], - } - - -class ThriftLexer(RegexLexer): - """ - For Thrift interface definitions. - - .. 
versionadded:: 2.1 - """ - name = 'Thrift' - url = 'https://thrift.apache.org/' - aliases = ['thrift'] - filenames = ['*.thrift'] - mimetypes = ['application/x-thrift'] - - tokens = { - 'root': [ - include('whitespace'), - include('comments'), - (r'"', String.Double, combined('stringescape', 'dqs')), - (r'\'', String.Single, combined('stringescape', 'sqs')), - (r'(namespace)(\s+)', - bygroups(Keyword.Namespace, Whitespace), 'namespace'), - (r'(enum|union|struct|service|exception)(\s+)', - bygroups(Keyword.Declaration, Whitespace), 'class'), - (r'((?:(?:[^\W\d]|\$)[\w.\[\]$<>]*\s+)+?)' # return arguments - r'((?:[^\W\d]|\$)[\w$]*)' # method name - r'(\s*)(\()', # signature start - bygroups(using(this), Name.Function, Whitespace, Operator)), - include('keywords'), - include('numbers'), - (r'[&=]', Operator), - (r'[:;,{}()<>\[\]]', Punctuation), - (r'[a-zA-Z_](\.\w|\w)*', Name), - ], - 'whitespace': [ - (r'\n', Whitespace), - (r'\s+', Whitespace), - ], - 'comments': [ - (r'#.*$', Comment), - (r'//.*?\n', Comment), - (r'/\*[\w\W]*?\*/', Comment.Multiline), - ], - 'stringescape': [ - (r'\\([\\nrt"\'])', String.Escape), - ], - 'dqs': [ - (r'"', String.Double, '#pop'), - (r'[^\\"\n]+', String.Double), - ], - 'sqs': [ - (r"'", String.Single, '#pop'), - (r'[^\\\'\n]+', String.Single), - ], - 'namespace': [ - (r'[a-z*](\.\w|\w)*', Name.Namespace, '#pop'), - default('#pop'), - ], - 'class': [ - (r'[a-zA-Z_]\w*', Name.Class, '#pop'), - default('#pop'), - ], - 'keywords': [ - (r'(async|oneway|extends|throws|required|optional)\b', Keyword), - (r'(true|false)\b', Keyword.Constant), - (r'(const|typedef)\b', Keyword.Declaration), - (words(( - 'cpp_namespace', 'cpp_include', 'cpp_type', 'java_package', - 'cocoa_prefix', 'csharp_namespace', 'delphi_namespace', - 'php_namespace', 'py_module', 'perl_package', - 'ruby_namespace', 'smalltalk_category', 'smalltalk_prefix', - 'xsd_all', 'xsd_optional', 'xsd_nillable', 'xsd_namespace', - 'xsd_attrs', 'include'), suffix=r'\b'), - Keyword.Namespace), - (words(( - 'void', 'bool', 'byte', 'i16', 'i32', 'i64', 'double', - 'string', 'binary', 'map', 'list', 'set', 'slist', - 'senum'), suffix=r'\b'), - Keyword.Type), - (words(( - 'BEGIN', 'END', '__CLASS__', '__DIR__', '__FILE__', - '__FUNCTION__', '__LINE__', '__METHOD__', '__NAMESPACE__', - 'abstract', 'alias', 'and', 'args', 'as', 'assert', 'begin', - 'break', 'case', 'catch', 'class', 'clone', 'continue', - 'declare', 'def', 'default', 'del', 'delete', 'do', 'dynamic', - 'elif', 'else', 'elseif', 'elsif', 'end', 'enddeclare', - 'endfor', 'endforeach', 'endif', 'endswitch', 'endwhile', - 'ensure', 'except', 'exec', 'finally', 'float', 'for', - 'foreach', 'function', 'global', 'goto', 'if', 'implements', - 'import', 'in', 'inline', 'instanceof', 'interface', 'is', - 'lambda', 'module', 'native', 'new', 'next', 'nil', 'not', - 'or', 'pass', 'public', 'print', 'private', 'protected', - 'raise', 'redo', 'rescue', 'retry', 'register', 'return', - 'self', 'sizeof', 'static', 'super', 'switch', 'synchronized', - 'then', 'this', 'throw', 'transient', 'try', 'undef', - 'unless', 'unsigned', 'until', 'use', 'var', 'virtual', - 'volatile', 'when', 'while', 'with', 'xor', 'yield'), - prefix=r'\b', suffix=r'\b'), - Keyword.Reserved), - ], - 'numbers': [ - (r'[+-]?(\d+\.\d+([eE][+-]?\d+)?|\.?\d+[eE][+-]?\d+)', Number.Float), - (r'[+-]?0x[0-9A-Fa-f]+', Number.Hex), - (r'[+-]?[0-9]+', Number.Integer), - ], - } - - -class ZeekLexer(RegexLexer): - """ - For Zeek scripts. - - .. 
versionadded:: 2.5 - """ - name = 'Zeek' - url = 'https://www.zeek.org/' - aliases = ['zeek', 'bro'] - filenames = ['*.zeek', '*.bro'] - - _hex = r'[0-9a-fA-F]' - _float = r'((\d*\.?\d+)|(\d+\.?\d*))([eE][-+]?\d+)?' - _h = r'[A-Za-z0-9][-A-Za-z0-9]*' - - tokens = { - 'root': [ - include('whitespace'), - include('comments'), - include('directives'), - include('attributes'), - include('types'), - include('keywords'), - include('literals'), - include('operators'), - include('punctuation'), - (r'((?:[A-Za-z_]\w*)(?:::(?:[A-Za-z_]\w*))*)(?=\s*\()', - Name.Function), - include('identifiers'), - ], - - 'whitespace': [ - (r'\n', Whitespace), - (r'\s+', Whitespace), - (r'(\\)(\n)', bygroups(Text, Whitespace)), - ], - - 'comments': [ - (r'#.*$', Comment), - ], - - 'directives': [ - (r'@(load-plugin|load-sigs|load|unload)\b.*$', Comment.Preproc), - (r'@(DEBUG|DIR|FILENAME|deprecated|if|ifdef|ifndef|else|endif)\b', Comment.Preproc), - (r'(@prefixes)(\s*)((\+?=).*)$', bygroups(Comment.Preproc, - Whitespace, Comment.Preproc)), - ], - - 'attributes': [ - (words(('redef', 'priority', 'log', 'optional', 'default', 'add_func', - 'delete_func', 'expire_func', 'read_expire', 'write_expire', - 'create_expire', 'synchronized', 'persistent', 'rotate_interval', - 'rotate_size', 'encrypt', 'raw_output', 'mergeable', 'error_handler', - 'type_column', 'deprecated'), - prefix=r'&', suffix=r'\b'), - Keyword.Pseudo), - ], - - 'types': [ - (words(('any', - 'enum', 'record', 'set', 'table', 'vector', - 'function', 'hook', 'event', - 'addr', 'bool', 'count', 'double', 'file', 'int', 'interval', - 'pattern', 'port', 'string', 'subnet', 'time'), - suffix=r'\b'), - Keyword.Type), - - (r'(opaque)(\s+)(of)(\s+)((?:[A-Za-z_]\w*)(?:::(?:[A-Za-z_]\w*))*)\b', - bygroups(Keyword.Type, Whitespace, Operator.Word, Whitespace, Keyword.Type)), - - (r'(type)(\s+)((?:[A-Za-z_]\w*)(?:::(?:[A-Za-z_]\w*))*)(\s*)(:)(\s*)\b(record|enum)\b', - bygroups(Keyword, Whitespace, Name.Class, Whitespace, Operator, Whitespace, Keyword.Type)), - - (r'(type)(\s+)((?:[A-Za-z_]\w*)(?:::(?:[A-Za-z_]\w*))*)(\s*)(:)', - bygroups(Keyword, Whitespace, Name, Whitespace, Operator)), - - (r'(redef)(\s+)(record|enum)(\s+)((?:[A-Za-z_]\w*)(?:::(?:[A-Za-z_]\w*))*)\b', - bygroups(Keyword, Whitespace, Keyword.Type, Whitespace, Name.Class)), - ], - - 'keywords': [ - (words(('redef', 'export', 'if', 'else', 'for', 'while', - 'return', 'break', 'next', 'continue', 'fallthrough', - 'switch', 'default', 'case', - 'add', 'delete', - 'when', 'timeout', 'schedule'), - suffix=r'\b'), - Keyword), - (r'(print)\b', Keyword), - (r'(global|local|const|option)\b', Keyword.Declaration), - (r'(module)(\s+)(([A-Za-z_]\w*)(?:::([A-Za-z_]\w*))*)\b', - bygroups(Keyword.Namespace, Whitespace, Name.Namespace)), - ], - - 'literals': [ - (r'"', String, 'string'), - - # Not the greatest match for patterns, but generally helps - # disambiguate between start of a pattern and just a division - # operator. - (r'/(?=.*/)', String.Regex, 'regex'), - - (r'(T|F)\b', Keyword.Constant), - - # Port - (r'\d{1,5}/(udp|tcp|icmp|unknown)\b', Number), - - # IPv4 Address - (r'(\d{1,3}.){3}(\d{1,3})\b', Number), - - # IPv6 Address - (r'\[([0-9a-fA-F]{0,4}:){2,7}([0-9a-fA-F]{0,4})?((\d{1,3}.){3}(\d{1,3}))?\]', Number), - - # Numeric - (r'0[xX]' + _hex + r'+\b', Number.Hex), - (_float + r'\s*(day|hr|min|sec|msec|usec)s?\b', Number.Float), - (_float + r'\b', Number.Float), - (r'(\d+)\b', Number.Integer), - - # Hostnames - (_h + r'(\.' 
+ _h + r')+', String), - ], - - 'operators': [ - (r'[!%*/+<=>~|&^-]', Operator), - (r'([-+=&|]{2}|[+=!><-]=)', Operator), - (r'(in|as|is|of)\b', Operator.Word), - (r'\??\$', Operator), - ], - - 'punctuation': [ - (r'[{}()\[\],;.]', Punctuation), - # The "ternary if", which uses '?' and ':', could instead be - # treated as an Operator, but colons are more frequently used to - # separate field/identifier names from their types, so the (often) - # less-prominent Punctuation is used even with '?' for consistency. - (r'[?:]', Punctuation), - ], - - 'identifiers': [ - (r'([a-zA-Z_]\w*)(::)', bygroups(Name, Punctuation)), - (r'[a-zA-Z_]\w*', Name) - ], - - 'string': [ - (r'\\.', String.Escape), - (r'%-?[0-9]*(\.[0-9]+)?[DTd-gsx]', String.Escape), - (r'"', String, '#pop'), - (r'.', String), - ], - - 'regex': [ - (r'\\.', String.Escape), - (r'/', String.Regex, '#pop'), - (r'.', String.Regex), - ], - } - - -BroLexer = ZeekLexer - - -class PuppetLexer(RegexLexer): - """ - For Puppet configuration DSL. - - .. versionadded:: 1.6 - """ - name = 'Puppet' - url = 'https://puppet.com/' - aliases = ['puppet'] - filenames = ['*.pp'] - - tokens = { - 'root': [ - include('comments'), - include('keywords'), - include('names'), - include('numbers'), - include('operators'), - include('strings'), - - (r'[]{}:(),;[]', Punctuation), - (r'\s+', Whitespace), - ], - - 'comments': [ - (r'(\s*)(#.*)$', bygroups(Whitespace, Comment)), - (r'/(\\\n)?[*](.|\n)*?[*](\\\n)?/', Comment.Multiline), - ], - - 'operators': [ - (r'(=>|\?|<|>|=|\+|-|/|\*|~|!|\|)', Operator), - (r'(in|and|or|not)\b', Operator.Word), - ], - - 'names': [ - (r'[a-zA-Z_]\w*', Name.Attribute), - (r'(\$\S+)(\[)(\S+)(\])', bygroups(Name.Variable, Punctuation, - String, Punctuation)), - (r'\$\S+', Name.Variable), - ], - - 'numbers': [ - # Copypasta from the Python lexer - (r'(\d+\.\d*|\d*\.\d+)([eE][+-]?[0-9]+)?j?', Number.Float), - (r'\d+[eE][+-]?[0-9]+j?', Number.Float), - (r'0[0-7]+j?', Number.Oct), - (r'0[xX][a-fA-F0-9]+', Number.Hex), - (r'\d+L', Number.Integer.Long), - (r'\d+j?', Number.Integer) - ], - - 'keywords': [ - # Left out 'group' and 'require' - # Since they're often used as attributes - (words(( - 'absent', 'alert', 'alias', 'audit', 'augeas', 'before', 'case', - 'check', 'class', 'computer', 'configured', 'contained', - 'create_resources', 'crit', 'cron', 'debug', 'default', - 'define', 'defined', 'directory', 'else', 'elsif', 'emerg', - 'err', 'exec', 'extlookup', 'fail', 'false', 'file', - 'filebucket', 'fqdn_rand', 'generate', 'host', 'if', 'import', - 'include', 'info', 'inherits', 'inline_template', 'installed', - 'interface', 'k5login', 'latest', 'link', 'loglevel', - 'macauthorization', 'mailalias', 'maillist', 'mcx', 'md5', - 'mount', 'mounted', 'nagios_command', 'nagios_contact', - 'nagios_contactgroup', 'nagios_host', 'nagios_hostdependency', - 'nagios_hostescalation', 'nagios_hostextinfo', 'nagios_hostgroup', - 'nagios_service', 'nagios_servicedependency', 'nagios_serviceescalation', - 'nagios_serviceextinfo', 'nagios_servicegroup', 'nagios_timeperiod', - 'node', 'noop', 'notice', 'notify', 'package', 'present', 'purged', - 'realize', 'regsubst', 'resources', 'role', 'router', 'running', - 'schedule', 'scheduled_task', 'search', 'selboolean', 'selmodule', - 'service', 'sha1', 'shellquote', 'split', 'sprintf', - 'ssh_authorized_key', 'sshkey', 'stage', 'stopped', 'subscribe', - 'tag', 'tagged', 'template', 'tidy', 'true', 'undef', 'unmounted', - 'user', 'versioncmp', 'vlan', 'warning', 'yumrepo', 'zfs', 'zone', - 'zpool'), 
prefix='(?i)', suffix=r'\b'), - Keyword), - ], - - 'strings': [ - (r'"([^"])*"', String), - (r"'(\\'|[^'])*'", String), - ], - - } - - -class RslLexer(RegexLexer): - """ - RSL is the formal specification - language used in RAISE (Rigorous Approach to Industrial Software Engineering) - method. - - .. versionadded:: 2.0 - """ - name = 'RSL' - url = 'http://en.wikipedia.org/wiki/RAISE' - aliases = ['rsl'] - filenames = ['*.rsl'] - mimetypes = ['text/rsl'] - - flags = re.MULTILINE | re.DOTALL - - tokens = { - 'root': [ - (words(( - 'Bool', 'Char', 'Int', 'Nat', 'Real', 'Text', 'Unit', 'abs', - 'all', 'always', 'any', 'as', 'axiom', 'card', 'case', 'channel', - 'chaos', 'class', 'devt_relation', 'dom', 'elems', 'else', 'elif', - 'end', 'exists', 'extend', 'false', 'for', 'hd', 'hide', 'if', - 'in', 'is', 'inds', 'initialise', 'int', 'inter', 'isin', 'len', - 'let', 'local', 'ltl_assertion', 'object', 'of', 'out', 'post', - 'pre', 'read', 'real', 'rng', 'scheme', 'skip', 'stop', 'swap', - 'then', 'theory', 'test_case', 'tl', 'transition_system', 'true', - 'type', 'union', 'until', 'use', 'value', 'variable', 'while', - 'with', 'write', '~isin', '-inflist', '-infset', '-list', - '-set'), prefix=r'\b', suffix=r'\b'), - Keyword), - (r'(variable|value)\b', Keyword.Declaration), - (r'--.*?\n', Comment), - (r'<:.*?:>', Comment), - (r'\{!.*?!\}', Comment), - (r'/\*.*?\*/', Comment), - (r'^([ \t]*)([\w]+)([ \t]*)(:[^:])', bygroups(Whitespace, - Name.Function, Whitespace, Name.Function)), - (r'(^[ \t]*)([\w]+)([ \t]*)(\([\w\s,]*\))([ \t]*)(is|as)', - bygroups(Whitespace, Name.Function, Whitespace, Text, - Whitespace, Keyword)), - (r'\b[A-Z]\w*\b', Keyword.Type), - (r'(true|false)\b', Keyword.Constant), - (r'".*"', String), - (r'\'.\'', String.Char), - (r'(><|->|-m->|/\\|<=|<<=|<\.|\|\||\|\^\||-~->|-~m->|\\/|>=|>>|' - r'\.>|\+\+|-\\|<->|=>|:-|~=|\*\*|<<|>>=|\+>|!!|\|=\||#)', - Operator), - (r'[0-9]+\.[0-9]+([eE][0-9]+)?[fd]?', Number.Float), - (r'0x[0-9a-f]+', Number.Hex), - (r'[0-9]+', Number.Integer), - (r'\s+', Whitespace), - (r'.', Text), - ], - } - - def analyse_text(text): - """ - Check for the most common text in the beginning of a RSL file. - """ - if re.search(r'scheme\s*.*?=\s*class\s*type', text, re.I) is not None: - return 1.0 - - -class MscgenLexer(RegexLexer): - """ - For Mscgen files. - - .. versionadded:: 1.6 - """ - name = 'Mscgen' - url = 'http://www.mcternan.me.uk/mscgen/' - aliases = ['mscgen', 'msc'] - filenames = ['*.msc'] - - _var = r'(\w+|"(?:\\"|[^"])*")' - - tokens = { - 'root': [ - (r'msc\b', Keyword.Type), - # Options - (r'(hscale|HSCALE|width|WIDTH|wordwraparcs|WORDWRAPARCS' - r'|arcgradient|ARCGRADIENT)\b', Name.Property), - # Operators - (r'(abox|ABOX|rbox|RBOX|box|BOX|note|NOTE)\b', Operator.Word), - (r'(\.|-|\|){3}', Keyword), - (r'(?:-|=|\.|:){2}' - r'|<<=>>|<->|<=>|<<>>|<:>' - r'|->|=>>|>>|=>|:>|-x|-X' - r'|<-|<<=|<<|<=|<:|x-|X-|=', Operator), - # Names - (r'\*', Name.Builtin), - (_var, Name.Variable), - # Other - (r'\[', Punctuation, 'attrs'), - (r'\{|\}|,|;', Punctuation), - include('comments') - ], - 'attrs': [ - (r'\]', Punctuation, '#pop'), - (_var + r'(\s*)(=)(\s*)' + _var, - bygroups(Name.Attribute, Whitespace, Operator, Whitespace, - String)), - (r',', Punctuation), - include('comments') - ], - 'comments': [ - (r'(?://|#).*?\n', Comment.Single), - (r'/\*(?:.|\n)*?\*/', Comment.Multiline), - (r'[ \t\r\n]+', Whitespace) - ] - } - - -class VGLLexer(RegexLexer): - """ - For SampleManager VGL source code. - - .. 
versionadded:: 1.6 - """ - name = 'VGL' - url = 'http://www.thermoscientific.com/samplemanager' - aliases = ['vgl'] - filenames = ['*.rpf'] - - flags = re.MULTILINE | re.DOTALL | re.IGNORECASE - - tokens = { - 'root': [ - (r'\{[^}]*\}', Comment.Multiline), - (r'declare', Keyword.Constant), - (r'(if|then|else|endif|while|do|endwhile|and|or|prompt|object' - r'|create|on|line|with|global|routine|value|endroutine|constant' - r'|global|set|join|library|compile_option|file|exists|create|copy' - r'|delete|enable|windows|name|notprotected)(?! *[=<>.,()])', - Keyword), - (r'(true|false|null|empty|error|locked)', Keyword.Constant), - (r'[~^*#!%&\[\]()<>|+=:;,./?-]', Operator), - (r'"[^"]*"', String), - (r'(\.)([a-z_$][\w$]*)', bygroups(Operator, Name.Attribute)), - (r'[0-9][0-9]*(\.[0-9]+(e[+\-]?[0-9]+)?)?', Number), - (r'[a-z_$][\w$]*', Name), - (r'[\r\n]+', Whitespace), - (r'\s+', Whitespace) - ] - } - - -class AlloyLexer(RegexLexer): - """ - For Alloy source code. - - .. versionadded:: 2.0 - """ - - name = 'Alloy' - url = 'http://alloy.mit.edu' - aliases = ['alloy'] - filenames = ['*.als'] - mimetypes = ['text/x-alloy'] - - flags = re.MULTILINE | re.DOTALL - - iden_rex = r'[a-zA-Z_][\w]*"*' - string_rex = r'"\b(\\\\|\\[^\\]|[^"\\])*"' - text_tuple = (r'[^\S\n]+', Whitespace) - - tokens = { - 'sig': [ - (r'(extends)\b', Keyword, '#pop'), - (iden_rex, Name), - text_tuple, - (r',', Punctuation), - (r'\{', Operator, '#pop'), - ], - 'module': [ - text_tuple, - (iden_rex, Name, '#pop'), - ], - 'fun': [ - text_tuple, - (r'\{', Operator, '#pop'), - (iden_rex, Name, '#pop'), - ], - 'fact': [ - include('fun'), - (string_rex, String, '#pop'), - ], - 'root': [ - (r'--.*?$', Comment.Single), - (r'//.*?$', Comment.Single), - (r'/\*.*?\*/', Comment.Multiline), - text_tuple, - (r'(module|open)(\s+)', bygroups(Keyword.Namespace, Whitespace), - 'module'), - (r'(sig|enum)(\s+)', bygroups(Keyword.Declaration, Whitespace), 'sig'), - (r'(iden|univ|none)\b', Keyword.Constant), - (r'(int|Int)\b', Keyword.Type), - (r'(var|this|abstract|extends|set|seq|one|lone|let)\b', Keyword), - (r'(all|some|no|sum|disj|when|else)\b', Keyword), - (r'(run|check|for|but|exactly|expect|as|steps)\b', Keyword), - (r'(always|after|eventually|until|release)\b', Keyword), # future time operators - (r'(historically|before|once|since|triggered)\b', Keyword), # past time operators - (r'(and|or|implies|iff|in)\b', Operator.Word), - (r'(fun|pred|assert)(\s+)', bygroups(Keyword, Whitespace), 'fun'), - (r'(fact)(\s+)', bygroups(Keyword, Whitespace), 'fact'), - (r'!|#|&&|\+\+|<<|>>|>=|<=>|<=|\.\.|\.|->', Operator), - (r'[-+/*%=<>&!^|~{}\[\]().\';]', Operator), - (iden_rex, Name), - (r'[:,]', Punctuation), - (r'[0-9]+', Number.Integer), - (string_rex, String), - (r'\n', Whitespace), - ] - } - - -class PanLexer(RegexLexer): - """ - Lexer for pan source files. - - Based on tcsh lexer. - - .. 
versionadded:: 2.0 - """ - - name = 'Pan' - url = 'https://github.com/quattor/pan/' - aliases = ['pan'] - filenames = ['*.pan'] - - tokens = { - 'root': [ - include('basic'), - (r'\(', Keyword, 'paren'), - (r'\{', Keyword, 'curly'), - include('data'), - ], - 'basic': [ - (words(( - 'if', 'for', 'with', 'else', 'type', 'bind', 'while', 'valid', 'final', - 'prefix', 'unique', 'object', 'foreach', 'include', 'template', - 'function', 'variable', 'structure', 'extensible', 'declaration'), - prefix=r'\b', suffix=r'\b'), - Keyword), - (words(( - 'file_contents', 'format', 'index', 'length', 'match', 'matches', - 'replace', 'splice', 'split', 'substr', 'to_lowercase', 'to_uppercase', - 'debug', 'error', 'traceback', 'deprecated', 'base64_decode', - 'base64_encode', 'digest', 'escape', 'unescape', 'append', 'create', - 'first', 'nlist', 'key', 'list', 'merge', 'next', 'prepend', 'is_boolean', - 'is_defined', 'is_double', 'is_list', 'is_long', 'is_nlist', 'is_null', - 'is_number', 'is_property', 'is_resource', 'is_string', 'to_boolean', - 'to_double', 'to_long', 'to_string', 'clone', 'delete', 'exists', - 'path_exists', 'if_exists', 'return', 'value'), - prefix=r'\b', suffix=r'\b'), - Name.Builtin), - (r'#.*', Comment), - (r'\\[\w\W]', String.Escape), - (r'(\b\w+)(\s*)(=)', bygroups(Name.Variable, Whitespace, Operator)), - (r'[\[\]{}()=]+', Operator), - (r'<<\s*(\'?)\\?(\w+)[\w\W]+?\2', String), - (r';', Punctuation), - ], - 'data': [ - (r'(?s)"(\\\\|\\[0-7]+|\\.|[^"\\])*"', String.Double), - (r"(?s)'(\\\\|\\[0-7]+|\\.|[^'\\])*'", String.Single), - (r'\s+', Whitespace), - (r'[^=\s\[\]{}()$"\'`\\;#]+', Text), - (r'\d+(?= |\Z)', Number), - ], - 'curly': [ - (r'\}', Keyword, '#pop'), - (r':-', Keyword), - (r'\w+', Name.Variable), - (r'[^}:"\'`$]+', Punctuation), - (r':', Punctuation), - include('root'), - ], - 'paren': [ - (r'\)', Keyword, '#pop'), - include('root'), - ], - } - - -class CrmshLexer(RegexLexer): - """ - Lexer for crmsh configuration files for Pacemaker clusters. - - .. 
versionadded:: 2.1 - """ - name = 'Crmsh' - url = 'http://crmsh.github.io/' - aliases = ['crmsh', 'pcmk'] - filenames = ['*.crmsh', '*.pcmk'] - mimetypes = [] - - elem = words(( - 'node', 'primitive', 'group', 'clone', 'ms', 'location', - 'colocation', 'order', 'fencing_topology', 'rsc_ticket', - 'rsc_template', 'property', 'rsc_defaults', - 'op_defaults', 'acl_target', 'acl_group', 'user', 'role', - 'tag'), suffix=r'(?![\w#$-])') - sub = words(( - 'params', 'meta', 'operations', 'op', 'rule', - 'attributes', 'utilization'), suffix=r'(?![\w#$-])') - acl = words(('read', 'write', 'deny'), suffix=r'(?![\w#$-])') - bin_rel = words(('and', 'or'), suffix=r'(?![\w#$-])') - un_ops = words(('defined', 'not_defined'), suffix=r'(?![\w#$-])') - date_exp = words(('in_range', 'date', 'spec', 'in'), suffix=r'(?![\w#$-])') - acl_mod = (r'(?:tag|ref|reference|attribute|type|xpath)') - bin_ops = (r'(?:lt|gt|lte|gte|eq|ne)') - val_qual = (r'(?:string|version|number)') - rsc_role_action = (r'(?:Master|Started|Slave|Stopped|' - r'start|promote|demote|stop)') - - tokens = { - 'root': [ - (r'^(#.*)(\n)?', bygroups(Comment, Whitespace)), - # attr=value (nvpair) - (r'([\w#$-]+)(=)("(?:""|[^"])*"|\S+)', - bygroups(Name.Attribute, Punctuation, String)), - # need this construct, otherwise numeric node ids - # are matched as scores - # elem id: - (r'(node)(\s+)([\w#$-]+)(:)', - bygroups(Keyword, Whitespace, Name, Punctuation)), - # scores - (r'([+-]?([0-9]+|inf)):', Number), - # keywords (elements and other) - (elem, Keyword), - (sub, Keyword), - (acl, Keyword), - # binary operators - (r'(?:%s:)?(%s)(?![\w#$-])' % (val_qual, bin_ops), Operator.Word), - # other operators - (bin_rel, Operator.Word), - (un_ops, Operator.Word), - (date_exp, Operator.Word), - # builtin attributes (e.g. #uname) - (r'#[a-z]+(?![\w#$-])', Name.Builtin), - # acl_mod:blah - (r'(%s)(:)("(?:""|[^"])*"|\S+)' % acl_mod, - bygroups(Keyword, Punctuation, Name)), - # rsc_id[:(role|action)] - # NB: this matches all other identifiers - (r'([\w#$-]+)(?:(:)(%s))?(?![\w#$-])' % rsc_role_action, - bygroups(Name, Punctuation, Operator.Word)), - # punctuation - (r'(\\(?=\n)|[\[\](){}/:@])', Punctuation), - (r'\s+|\n', Whitespace), - ], - } - - -class FlatlineLexer(RegexLexer): - """ - Lexer for Flatline expressions. - - .. 
versionadded:: 2.2 - """ - name = 'Flatline' - url = 'https://github.com/bigmlcom/flatline' - aliases = ['flatline'] - filenames = [] - mimetypes = ['text/x-flatline'] - - special_forms = ('let',) - - builtins = ( - "!=", "*", "+", "-", "<", "<=", "=", ">", ">=", "abs", "acos", "all", - "all-but", "all-with-defaults", "all-with-numeric-default", "and", - "asin", "atan", "avg", "avg-window", "bin-center", "bin-count", "call", - "category-count", "ceil", "cond", "cond-window", "cons", "cos", "cosh", - "count", "diff-window", "div", "ensure-value", "ensure-weighted-value", - "epoch", "epoch-day", "epoch-fields", "epoch-hour", "epoch-millisecond", - "epoch-minute", "epoch-month", "epoch-second", "epoch-weekday", - "epoch-year", "exp", "f", "field", "field-prop", "fields", "filter", - "first", "floor", "head", "if", "in", "integer", "language", "length", - "levenshtein", "linear-regression", "list", "ln", "log", "log10", "map", - "matches", "matches?", "max", "maximum", "md5", "mean", "median", "min", - "minimum", "missing", "missing-count", "missing?", "missing_count", - "mod", "mode", "normalize", "not", "nth", "occurrences", "or", - "percentile", "percentile-label", "population", "population-fraction", - "pow", "preferred", "preferred?", "quantile-label", "rand", "rand-int", - "random-value", "re-quote", "real", "replace", "replace-first", "rest", - "round", "row-number", "segment-label", "sha1", "sha256", "sin", "sinh", - "sqrt", "square", "standard-deviation", "standard_deviation", "str", - "subs", "sum", "sum-squares", "sum-window", "sum_squares", "summary", - "summary-no", "summary-str", "tail", "tan", "tanh", "to-degrees", - "to-radians", "variance", "vectorize", "weighted-random-value", "window", - "winnow", "within-percentiles?", "z-score", - ) - - valid_name = r'(?!#)[\w!$%*+<=>?/.#-]+' - - tokens = { - 'root': [ - # whitespaces - usually not relevant - (r'[,]+', Text), - (r'\s+', Whitespace), - - # numbers - (r'-?\d+\.\d+', Number.Float), - (r'-?\d+', Number.Integer), - (r'0x-?[a-f\d]+', Number.Hex), - - # strings, symbols and characters - (r'"(\\\\|\\[^\\]|[^"\\])*"', String), - (r"\\(.|[a-z]+)", String.Char), - - # expression template placeholder - (r'_', String.Symbol), - - # highlight the special forms - (words(special_forms, suffix=' '), Keyword), - - # highlight the builtins - (words(builtins, suffix=' '), Name.Builtin), - - # the remaining functions - (r'(?<=\()' + valid_name, Name.Function), - - # find the remaining variables - (valid_name, Name.Variable), - - # parentheses - (r'(\(|\))', Punctuation), - ], - } - - -class SnowballLexer(ExtendedRegexLexer): - """ - Lexer for Snowball source code. - - .. 
versionadded:: 2.2 - """ - - name = 'Snowball' - url = 'http://snowballstem.org/' - aliases = ['snowball'] - filenames = ['*.sbl'] - - _ws = r'\n\r\t ' - - def __init__(self, **options): - self._reset_stringescapes() - ExtendedRegexLexer.__init__(self, **options) - - def _reset_stringescapes(self): - self._start = "'" - self._end = "'" - - def _string(do_string_first): - def callback(lexer, match, ctx): - s = match.start() - text = match.group() - string = re.compile(r'([^%s]*)(.)' % re.escape(lexer._start)).match - escape = re.compile(r'([^%s]*)(.)' % re.escape(lexer._end)).match - pos = 0 - do_string = do_string_first - while pos < len(text): - if do_string: - match = string(text, pos) - yield s + match.start(1), String.Single, match.group(1) - if match.group(2) == "'": - yield s + match.start(2), String.Single, match.group(2) - ctx.stack.pop() - break - yield s + match.start(2), String.Escape, match.group(2) - pos = match.end() - match = escape(text, pos) - yield s + match.start(), String.Escape, match.group() - if match.group(2) != lexer._end: - ctx.stack[-1] = 'escape' - break - pos = match.end() - do_string = True - ctx.pos = s + match.end() - return callback - - def _stringescapes(lexer, match, ctx): - lexer._start = match.group(3) - lexer._end = match.group(5) - return bygroups(Keyword.Reserved, Whitespace, String.Escape, Whitespace, - String.Escape)(lexer, match, ctx) - - tokens = { - 'root': [ - (words(('len', 'lenof'), suffix=r'\b'), Operator.Word), - include('root1'), - ], - 'root1': [ - (r'[%s]+' % _ws, Whitespace), - (r'\d+', Number.Integer), - (r"'", String.Single, 'string'), - (r'[()]', Punctuation), - (r'/\*[\w\W]*?\*/', Comment.Multiline), - (r'//.*', Comment.Single), - (r'[!*+\-/<=>]=|[-=]>|<[+-]|[$*+\-/<=>?\[\]]', Operator), - (words(('as', 'get', 'hex', 'among', 'define', 'decimal', - 'backwardmode'), suffix=r'\b'), - Keyword.Reserved), - (words(('strings', 'booleans', 'integers', 'routines', 'externals', - 'groupings'), suffix=r'\b'), - Keyword.Reserved, 'declaration'), - (words(('do', 'or', 'and', 'for', 'hop', 'non', 'not', 'set', 'try', - 'fail', 'goto', 'loop', 'next', 'test', 'true', - 'false', 'unset', 'atmark', 'attach', 'delete', 'gopast', - 'insert', 'repeat', 'sizeof', 'tomark', 'atleast', - 'atlimit', 'reverse', 'setmark', 'tolimit', 'setlimit', - 'backwards', 'substring'), suffix=r'\b'), - Operator.Word), - (words(('size', 'limit', 'cursor', 'maxint', 'minint'), - suffix=r'\b'), - Name.Builtin), - (r'(stringdef\b)([%s]*)([^%s]+)' % (_ws, _ws), - bygroups(Keyword.Reserved, Whitespace, String.Escape)), - (r'(stringescapes\b)([%s]*)(.)([%s]*)(.)' % (_ws, _ws), - _stringescapes), - (r'[A-Za-z]\w*', Name), - ], - 'declaration': [ - (r'\)', Punctuation, '#pop'), - (words(('len', 'lenof'), suffix=r'\b'), Name, - ('root1', 'declaration')), - include('root1'), - ], - 'string': [ - (r"[^']*'", _string(True)), - ], - 'escape': [ - (r"[^']*'", _string(False)), - ], - } - - def get_tokens_unprocessed(self, text=None, context=None): - self._reset_stringescapes() - return ExtendedRegexLexer.get_tokens_unprocessed(self, text, context) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/uvicorn/protocols/http/auto.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/uvicorn/protocols/http/auto.py deleted file mode 100644 index 1aa99674462a7c7a4d0ff5dfbee16efe532d1400..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/uvicorn/protocols/http/auto.py +++ /dev/null @@ -1,14 +0,0 
@@ -import asyncio -from typing import Type - -AutoHTTPProtocol: Type[asyncio.Protocol] -try: - import httptools # noqa -except ImportError: # pragma: no cover - from uvicorn.protocols.http.h11_impl import H11Protocol - - AutoHTTPProtocol = H11Protocol -else: # pragma: no cover - from uvicorn.protocols.http.httptools_impl import HttpToolsProtocol - - AutoHTTPProtocol = HttpToolsProtocol diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/yaml/constructor.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/yaml/constructor.py deleted file mode 100644 index 619acd3070a4845c653fcf22a626e05158035bc2..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/yaml/constructor.py +++ /dev/null @@ -1,748 +0,0 @@ - -__all__ = [ - 'BaseConstructor', - 'SafeConstructor', - 'FullConstructor', - 'UnsafeConstructor', - 'Constructor', - 'ConstructorError' -] - -from .error import * -from .nodes import * - -import collections.abc, datetime, base64, binascii, re, sys, types - -class ConstructorError(MarkedYAMLError): - pass - -class BaseConstructor: - - yaml_constructors = {} - yaml_multi_constructors = {} - - def __init__(self): - self.constructed_objects = {} - self.recursive_objects = {} - self.state_generators = [] - self.deep_construct = False - - def check_data(self): - # If there are more documents available? - return self.check_node() - - def check_state_key(self, key): - """Block special attributes/methods from being set in a newly created - object, to prevent user-controlled methods from being called during - deserialization""" - if self.get_state_keys_blacklist_regexp().match(key): - raise ConstructorError(None, None, - "blacklisted key '%s' in instance state found" % (key,), None) - - def get_data(self): - # Construct and return the next document. - if self.check_node(): - return self.construct_document(self.get_node()) - - def get_single_data(self): - # Ensure that the stream contains a single document and construct it. 
- node = self.get_single_node() - if node is not None: - return self.construct_document(node) - return None - - def construct_document(self, node): - data = self.construct_object(node) - while self.state_generators: - state_generators = self.state_generators - self.state_generators = [] - for generator in state_generators: - for dummy in generator: - pass - self.constructed_objects = {} - self.recursive_objects = {} - self.deep_construct = False - return data - - def construct_object(self, node, deep=False): - if node in self.constructed_objects: - return self.constructed_objects[node] - if deep: - old_deep = self.deep_construct - self.deep_construct = True - if node in self.recursive_objects: - raise ConstructorError(None, None, - "found unconstructable recursive node", node.start_mark) - self.recursive_objects[node] = None - constructor = None - tag_suffix = None - if node.tag in self.yaml_constructors: - constructor = self.yaml_constructors[node.tag] - else: - for tag_prefix in self.yaml_multi_constructors: - if tag_prefix is not None and node.tag.startswith(tag_prefix): - tag_suffix = node.tag[len(tag_prefix):] - constructor = self.yaml_multi_constructors[tag_prefix] - break - else: - if None in self.yaml_multi_constructors: - tag_suffix = node.tag - constructor = self.yaml_multi_constructors[None] - elif None in self.yaml_constructors: - constructor = self.yaml_constructors[None] - elif isinstance(node, ScalarNode): - constructor = self.__class__.construct_scalar - elif isinstance(node, SequenceNode): - constructor = self.__class__.construct_sequence - elif isinstance(node, MappingNode): - constructor = self.__class__.construct_mapping - if tag_suffix is None: - data = constructor(self, node) - else: - data = constructor(self, tag_suffix, node) - if isinstance(data, types.GeneratorType): - generator = data - data = next(generator) - if self.deep_construct: - for dummy in generator: - pass - else: - self.state_generators.append(generator) - self.constructed_objects[node] = data - del self.recursive_objects[node] - if deep: - self.deep_construct = old_deep - return data - - def construct_scalar(self, node): - if not isinstance(node, ScalarNode): - raise ConstructorError(None, None, - "expected a scalar node, but found %s" % node.id, - node.start_mark) - return node.value - - def construct_sequence(self, node, deep=False): - if not isinstance(node, SequenceNode): - raise ConstructorError(None, None, - "expected a sequence node, but found %s" % node.id, - node.start_mark) - return [self.construct_object(child, deep=deep) - for child in node.value] - - def construct_mapping(self, node, deep=False): - if not isinstance(node, MappingNode): - raise ConstructorError(None, None, - "expected a mapping node, but found %s" % node.id, - node.start_mark) - mapping = {} - for key_node, value_node in node.value: - key = self.construct_object(key_node, deep=deep) - if not isinstance(key, collections.abc.Hashable): - raise ConstructorError("while constructing a mapping", node.start_mark, - "found unhashable key", key_node.start_mark) - value = self.construct_object(value_node, deep=deep) - mapping[key] = value - return mapping - - def construct_pairs(self, node, deep=False): - if not isinstance(node, MappingNode): - raise ConstructorError(None, None, - "expected a mapping node, but found %s" % node.id, - node.start_mark) - pairs = [] - for key_node, value_node in node.value: - key = self.construct_object(key_node, deep=deep) - value = self.construct_object(value_node, deep=deep) - pairs.append((key, 
value)) - return pairs - - @classmethod - def add_constructor(cls, tag, constructor): - if not 'yaml_constructors' in cls.__dict__: - cls.yaml_constructors = cls.yaml_constructors.copy() - cls.yaml_constructors[tag] = constructor - - @classmethod - def add_multi_constructor(cls, tag_prefix, multi_constructor): - if not 'yaml_multi_constructors' in cls.__dict__: - cls.yaml_multi_constructors = cls.yaml_multi_constructors.copy() - cls.yaml_multi_constructors[tag_prefix] = multi_constructor - -class SafeConstructor(BaseConstructor): - - def construct_scalar(self, node): - if isinstance(node, MappingNode): - for key_node, value_node in node.value: - if key_node.tag == 'tag:yaml.org,2002:value': - return self.construct_scalar(value_node) - return super().construct_scalar(node) - - def flatten_mapping(self, node): - merge = [] - index = 0 - while index < len(node.value): - key_node, value_node = node.value[index] - if key_node.tag == 'tag:yaml.org,2002:merge': - del node.value[index] - if isinstance(value_node, MappingNode): - self.flatten_mapping(value_node) - merge.extend(value_node.value) - elif isinstance(value_node, SequenceNode): - submerge = [] - for subnode in value_node.value: - if not isinstance(subnode, MappingNode): - raise ConstructorError("while constructing a mapping", - node.start_mark, - "expected a mapping for merging, but found %s" - % subnode.id, subnode.start_mark) - self.flatten_mapping(subnode) - submerge.append(subnode.value) - submerge.reverse() - for value in submerge: - merge.extend(value) - else: - raise ConstructorError("while constructing a mapping", node.start_mark, - "expected a mapping or list of mappings for merging, but found %s" - % value_node.id, value_node.start_mark) - elif key_node.tag == 'tag:yaml.org,2002:value': - key_node.tag = 'tag:yaml.org,2002:str' - index += 1 - else: - index += 1 - if merge: - node.value = merge + node.value - - def construct_mapping(self, node, deep=False): - if isinstance(node, MappingNode): - self.flatten_mapping(node) - return super().construct_mapping(node, deep=deep) - - def construct_yaml_null(self, node): - self.construct_scalar(node) - return None - - bool_values = { - 'yes': True, - 'no': False, - 'true': True, - 'false': False, - 'on': True, - 'off': False, - } - - def construct_yaml_bool(self, node): - value = self.construct_scalar(node) - return self.bool_values[value.lower()] - - def construct_yaml_int(self, node): - value = self.construct_scalar(node) - value = value.replace('_', '') - sign = +1 - if value[0] == '-': - sign = -1 - if value[0] in '+-': - value = value[1:] - if value == '0': - return 0 - elif value.startswith('0b'): - return sign*int(value[2:], 2) - elif value.startswith('0x'): - return sign*int(value[2:], 16) - elif value[0] == '0': - return sign*int(value, 8) - elif ':' in value: - digits = [int(part) for part in value.split(':')] - digits.reverse() - base = 1 - value = 0 - for digit in digits: - value += digit*base - base *= 60 - return sign*value - else: - return sign*int(value) - - inf_value = 1e300 - while inf_value != inf_value*inf_value: - inf_value *= inf_value - nan_value = -inf_value/inf_value # Trying to make a quiet NaN (like C99). 
- - def construct_yaml_float(self, node): - value = self.construct_scalar(node) - value = value.replace('_', '').lower() - sign = +1 - if value[0] == '-': - sign = -1 - if value[0] in '+-': - value = value[1:] - if value == '.inf': - return sign*self.inf_value - elif value == '.nan': - return self.nan_value - elif ':' in value: - digits = [float(part) for part in value.split(':')] - digits.reverse() - base = 1 - value = 0.0 - for digit in digits: - value += digit*base - base *= 60 - return sign*value - else: - return sign*float(value) - - def construct_yaml_binary(self, node): - try: - value = self.construct_scalar(node).encode('ascii') - except UnicodeEncodeError as exc: - raise ConstructorError(None, None, - "failed to convert base64 data into ascii: %s" % exc, - node.start_mark) - try: - if hasattr(base64, 'decodebytes'): - return base64.decodebytes(value) - else: - return base64.decodestring(value) - except binascii.Error as exc: - raise ConstructorError(None, None, - "failed to decode base64 data: %s" % exc, node.start_mark) - - timestamp_regexp = re.compile( - r'''^(?P[0-9][0-9][0-9][0-9]) - -(?P[0-9][0-9]?) - -(?P[0-9][0-9]?) - (?:(?:[Tt]|[ \t]+) - (?P[0-9][0-9]?) - :(?P[0-9][0-9]) - :(?P[0-9][0-9]) - (?:\.(?P[0-9]*))? - (?:[ \t]*(?PZ|(?P[-+])(?P[0-9][0-9]?) - (?::(?P[0-9][0-9]))?))?)?$''', re.X) - - def construct_yaml_timestamp(self, node): - value = self.construct_scalar(node) - match = self.timestamp_regexp.match(node.value) - values = match.groupdict() - year = int(values['year']) - month = int(values['month']) - day = int(values['day']) - if not values['hour']: - return datetime.date(year, month, day) - hour = int(values['hour']) - minute = int(values['minute']) - second = int(values['second']) - fraction = 0 - tzinfo = None - if values['fraction']: - fraction = values['fraction'][:6] - while len(fraction) < 6: - fraction += '0' - fraction = int(fraction) - if values['tz_sign']: - tz_hour = int(values['tz_hour']) - tz_minute = int(values['tz_minute'] or 0) - delta = datetime.timedelta(hours=tz_hour, minutes=tz_minute) - if values['tz_sign'] == '-': - delta = -delta - tzinfo = datetime.timezone(delta) - elif values['tz']: - tzinfo = datetime.timezone.utc - return datetime.datetime(year, month, day, hour, minute, second, fraction, - tzinfo=tzinfo) - - def construct_yaml_omap(self, node): - # Note: we do not check for duplicate keys, because it's too - # CPU-expensive. - omap = [] - yield omap - if not isinstance(node, SequenceNode): - raise ConstructorError("while constructing an ordered map", node.start_mark, - "expected a sequence, but found %s" % node.id, node.start_mark) - for subnode in node.value: - if not isinstance(subnode, MappingNode): - raise ConstructorError("while constructing an ordered map", node.start_mark, - "expected a mapping of length 1, but found %s" % subnode.id, - subnode.start_mark) - if len(subnode.value) != 1: - raise ConstructorError("while constructing an ordered map", node.start_mark, - "expected a single mapping item, but found %d items" % len(subnode.value), - subnode.start_mark) - key_node, value_node = subnode.value[0] - key = self.construct_object(key_node) - value = self.construct_object(value_node) - omap.append((key, value)) - - def construct_yaml_pairs(self, node): - # Note: the same code as `construct_yaml_omap`. 
- pairs = [] - yield pairs - if not isinstance(node, SequenceNode): - raise ConstructorError("while constructing pairs", node.start_mark, - "expected a sequence, but found %s" % node.id, node.start_mark) - for subnode in node.value: - if not isinstance(subnode, MappingNode): - raise ConstructorError("while constructing pairs", node.start_mark, - "expected a mapping of length 1, but found %s" % subnode.id, - subnode.start_mark) - if len(subnode.value) != 1: - raise ConstructorError("while constructing pairs", node.start_mark, - "expected a single mapping item, but found %d items" % len(subnode.value), - subnode.start_mark) - key_node, value_node = subnode.value[0] - key = self.construct_object(key_node) - value = self.construct_object(value_node) - pairs.append((key, value)) - - def construct_yaml_set(self, node): - data = set() - yield data - value = self.construct_mapping(node) - data.update(value) - - def construct_yaml_str(self, node): - return self.construct_scalar(node) - - def construct_yaml_seq(self, node): - data = [] - yield data - data.extend(self.construct_sequence(node)) - - def construct_yaml_map(self, node): - data = {} - yield data - value = self.construct_mapping(node) - data.update(value) - - def construct_yaml_object(self, node, cls): - data = cls.__new__(cls) - yield data - if hasattr(data, '__setstate__'): - state = self.construct_mapping(node, deep=True) - data.__setstate__(state) - else: - state = self.construct_mapping(node) - data.__dict__.update(state) - - def construct_undefined(self, node): - raise ConstructorError(None, None, - "could not determine a constructor for the tag %r" % node.tag, - node.start_mark) - -SafeConstructor.add_constructor( - 'tag:yaml.org,2002:null', - SafeConstructor.construct_yaml_null) - -SafeConstructor.add_constructor( - 'tag:yaml.org,2002:bool', - SafeConstructor.construct_yaml_bool) - -SafeConstructor.add_constructor( - 'tag:yaml.org,2002:int', - SafeConstructor.construct_yaml_int) - -SafeConstructor.add_constructor( - 'tag:yaml.org,2002:float', - SafeConstructor.construct_yaml_float) - -SafeConstructor.add_constructor( - 'tag:yaml.org,2002:binary', - SafeConstructor.construct_yaml_binary) - -SafeConstructor.add_constructor( - 'tag:yaml.org,2002:timestamp', - SafeConstructor.construct_yaml_timestamp) - -SafeConstructor.add_constructor( - 'tag:yaml.org,2002:omap', - SafeConstructor.construct_yaml_omap) - -SafeConstructor.add_constructor( - 'tag:yaml.org,2002:pairs', - SafeConstructor.construct_yaml_pairs) - -SafeConstructor.add_constructor( - 'tag:yaml.org,2002:set', - SafeConstructor.construct_yaml_set) - -SafeConstructor.add_constructor( - 'tag:yaml.org,2002:str', - SafeConstructor.construct_yaml_str) - -SafeConstructor.add_constructor( - 'tag:yaml.org,2002:seq', - SafeConstructor.construct_yaml_seq) - -SafeConstructor.add_constructor( - 'tag:yaml.org,2002:map', - SafeConstructor.construct_yaml_map) - -SafeConstructor.add_constructor(None, - SafeConstructor.construct_undefined) - -class FullConstructor(SafeConstructor): - # 'extend' is blacklisted because it is used by - # construct_python_object_apply to add `listitems` to a newly generate - # python instance - def get_state_keys_blacklist(self): - return ['^extend$', '^__.*__$'] - - def get_state_keys_blacklist_regexp(self): - if not hasattr(self, 'state_keys_blacklist_regexp'): - self.state_keys_blacklist_regexp = re.compile('(' + '|'.join(self.get_state_keys_blacklist()) + ')') - return self.state_keys_blacklist_regexp - - def construct_python_str(self, node): - return 
self.construct_scalar(node) - - def construct_python_unicode(self, node): - return self.construct_scalar(node) - - def construct_python_bytes(self, node): - try: - value = self.construct_scalar(node).encode('ascii') - except UnicodeEncodeError as exc: - raise ConstructorError(None, None, - "failed to convert base64 data into ascii: %s" % exc, - node.start_mark) - try: - if hasattr(base64, 'decodebytes'): - return base64.decodebytes(value) - else: - return base64.decodestring(value) - except binascii.Error as exc: - raise ConstructorError(None, None, - "failed to decode base64 data: %s" % exc, node.start_mark) - - def construct_python_long(self, node): - return self.construct_yaml_int(node) - - def construct_python_complex(self, node): - return complex(self.construct_scalar(node)) - - def construct_python_tuple(self, node): - return tuple(self.construct_sequence(node)) - - def find_python_module(self, name, mark, unsafe=False): - if not name: - raise ConstructorError("while constructing a Python module", mark, - "expected non-empty name appended to the tag", mark) - if unsafe: - try: - __import__(name) - except ImportError as exc: - raise ConstructorError("while constructing a Python module", mark, - "cannot find module %r (%s)" % (name, exc), mark) - if name not in sys.modules: - raise ConstructorError("while constructing a Python module", mark, - "module %r is not imported" % name, mark) - return sys.modules[name] - - def find_python_name(self, name, mark, unsafe=False): - if not name: - raise ConstructorError("while constructing a Python object", mark, - "expected non-empty name appended to the tag", mark) - if '.' in name: - module_name, object_name = name.rsplit('.', 1) - else: - module_name = 'builtins' - object_name = name - if unsafe: - try: - __import__(module_name) - except ImportError as exc: - raise ConstructorError("while constructing a Python object", mark, - "cannot find module %r (%s)" % (module_name, exc), mark) - if module_name not in sys.modules: - raise ConstructorError("while constructing a Python object", mark, - "module %r is not imported" % module_name, mark) - module = sys.modules[module_name] - if not hasattr(module, object_name): - raise ConstructorError("while constructing a Python object", mark, - "cannot find %r in the module %r" - % (object_name, module.__name__), mark) - return getattr(module, object_name) - - def construct_python_name(self, suffix, node): - value = self.construct_scalar(node) - if value: - raise ConstructorError("while constructing a Python name", node.start_mark, - "expected the empty value, but found %r" % value, node.start_mark) - return self.find_python_name(suffix, node.start_mark) - - def construct_python_module(self, suffix, node): - value = self.construct_scalar(node) - if value: - raise ConstructorError("while constructing a Python module", node.start_mark, - "expected the empty value, but found %r" % value, node.start_mark) - return self.find_python_module(suffix, node.start_mark) - - def make_python_instance(self, suffix, node, - args=None, kwds=None, newobj=False, unsafe=False): - if not args: - args = [] - if not kwds: - kwds = {} - cls = self.find_python_name(suffix, node.start_mark) - if not (unsafe or isinstance(cls, type)): - raise ConstructorError("while constructing a Python instance", node.start_mark, - "expected a class, but found %r" % type(cls), - node.start_mark) - if newobj and isinstance(cls, type): - return cls.__new__(cls, *args, **kwds) - else: - return cls(*args, **kwds) - - def set_python_instance_state(self, 
instance, state, unsafe=False): - if hasattr(instance, '__setstate__'): - instance.__setstate__(state) - else: - slotstate = {} - if isinstance(state, tuple) and len(state) == 2: - state, slotstate = state - if hasattr(instance, '__dict__'): - if not unsafe and state: - for key in state.keys(): - self.check_state_key(key) - instance.__dict__.update(state) - elif state: - slotstate.update(state) - for key, value in slotstate.items(): - if not unsafe: - self.check_state_key(key) - setattr(instance, key, value) - - def construct_python_object(self, suffix, node): - # Format: - # !!python/object:module.name { ... state ... } - instance = self.make_python_instance(suffix, node, newobj=True) - yield instance - deep = hasattr(instance, '__setstate__') - state = self.construct_mapping(node, deep=deep) - self.set_python_instance_state(instance, state) - - def construct_python_object_apply(self, suffix, node, newobj=False): - # Format: - # !!python/object/apply # (or !!python/object/new) - # args: [ ... arguments ... ] - # kwds: { ... keywords ... } - # state: ... state ... - # listitems: [ ... listitems ... ] - # dictitems: { ... dictitems ... } - # or short format: - # !!python/object/apply [ ... arguments ... ] - # The difference between !!python/object/apply and !!python/object/new - # is how an object is created, check make_python_instance for details. - if isinstance(node, SequenceNode): - args = self.construct_sequence(node, deep=True) - kwds = {} - state = {} - listitems = [] - dictitems = {} - else: - value = self.construct_mapping(node, deep=True) - args = value.get('args', []) - kwds = value.get('kwds', {}) - state = value.get('state', {}) - listitems = value.get('listitems', []) - dictitems = value.get('dictitems', {}) - instance = self.make_python_instance(suffix, node, args, kwds, newobj) - if state: - self.set_python_instance_state(instance, state) - if listitems: - instance.extend(listitems) - if dictitems: - for key in dictitems: - instance[key] = dictitems[key] - return instance - - def construct_python_object_new(self, suffix, node): - return self.construct_python_object_apply(suffix, node, newobj=True) - -FullConstructor.add_constructor( - 'tag:yaml.org,2002:python/none', - FullConstructor.construct_yaml_null) - -FullConstructor.add_constructor( - 'tag:yaml.org,2002:python/bool', - FullConstructor.construct_yaml_bool) - -FullConstructor.add_constructor( - 'tag:yaml.org,2002:python/str', - FullConstructor.construct_python_str) - -FullConstructor.add_constructor( - 'tag:yaml.org,2002:python/unicode', - FullConstructor.construct_python_unicode) - -FullConstructor.add_constructor( - 'tag:yaml.org,2002:python/bytes', - FullConstructor.construct_python_bytes) - -FullConstructor.add_constructor( - 'tag:yaml.org,2002:python/int', - FullConstructor.construct_yaml_int) - -FullConstructor.add_constructor( - 'tag:yaml.org,2002:python/long', - FullConstructor.construct_python_long) - -FullConstructor.add_constructor( - 'tag:yaml.org,2002:python/float', - FullConstructor.construct_yaml_float) - -FullConstructor.add_constructor( - 'tag:yaml.org,2002:python/complex', - FullConstructor.construct_python_complex) - -FullConstructor.add_constructor( - 'tag:yaml.org,2002:python/list', - FullConstructor.construct_yaml_seq) - -FullConstructor.add_constructor( - 'tag:yaml.org,2002:python/tuple', - FullConstructor.construct_python_tuple) - -FullConstructor.add_constructor( - 'tag:yaml.org,2002:python/dict', - FullConstructor.construct_yaml_map) - -FullConstructor.add_multi_constructor( - 
'tag:yaml.org,2002:python/name:', - FullConstructor.construct_python_name) - -class UnsafeConstructor(FullConstructor): - - def find_python_module(self, name, mark): - return super(UnsafeConstructor, self).find_python_module(name, mark, unsafe=True) - - def find_python_name(self, name, mark): - return super(UnsafeConstructor, self).find_python_name(name, mark, unsafe=True) - - def make_python_instance(self, suffix, node, args=None, kwds=None, newobj=False): - return super(UnsafeConstructor, self).make_python_instance( - suffix, node, args, kwds, newobj, unsafe=True) - - def set_python_instance_state(self, instance, state): - return super(UnsafeConstructor, self).set_python_instance_state( - instance, state, unsafe=True) - -UnsafeConstructor.add_multi_constructor( - 'tag:yaml.org,2002:python/module:', - UnsafeConstructor.construct_python_module) - -UnsafeConstructor.add_multi_constructor( - 'tag:yaml.org,2002:python/object:', - UnsafeConstructor.construct_python_object) - -UnsafeConstructor.add_multi_constructor( - 'tag:yaml.org,2002:python/object/new:', - UnsafeConstructor.construct_python_object_new) - -UnsafeConstructor.add_multi_constructor( - 'tag:yaml.org,2002:python/object/apply:', - UnsafeConstructor.construct_python_object_apply) - -# Constructor is same as UnsafeConstructor. Need to leave this in place in case -# people have extended it directly. -class Constructor(UnsafeConstructor): - pass diff --git a/spaces/r3gm/RVC_HF/infer/lib/uvr5_pack/lib_v5/nets.py b/spaces/r3gm/RVC_HF/infer/lib/uvr5_pack/lib_v5/nets.py deleted file mode 100644 index 5da3948c2f2e9edcc3cdac49bdf9f738e403de40..0000000000000000000000000000000000000000 --- a/spaces/r3gm/RVC_HF/infer/lib/uvr5_pack/lib_v5/nets.py +++ /dev/null @@ -1,123 +0,0 @@ -import layers -import torch -import torch.nn.functional as F -from torch import nn - -from . 
import spec_utils - - -class BaseASPPNet(nn.Module): - def __init__(self, nin, ch, dilations=(4, 8, 16)): - super(BaseASPPNet, self).__init__() - self.enc1 = layers.Encoder(nin, ch, 3, 2, 1) - self.enc2 = layers.Encoder(ch, ch * 2, 3, 2, 1) - self.enc3 = layers.Encoder(ch * 2, ch * 4, 3, 2, 1) - self.enc4 = layers.Encoder(ch * 4, ch * 8, 3, 2, 1) - - self.aspp = layers.ASPPModule(ch * 8, ch * 16, dilations) - - self.dec4 = layers.Decoder(ch * (8 + 16), ch * 8, 3, 1, 1) - self.dec3 = layers.Decoder(ch * (4 + 8), ch * 4, 3, 1, 1) - self.dec2 = layers.Decoder(ch * (2 + 4), ch * 2, 3, 1, 1) - self.dec1 = layers.Decoder(ch * (1 + 2), ch, 3, 1, 1) - - def __call__(self, x): - h, e1 = self.enc1(x) - h, e2 = self.enc2(h) - h, e3 = self.enc3(h) - h, e4 = self.enc4(h) - - h = self.aspp(h) - - h = self.dec4(h, e4) - h = self.dec3(h, e3) - h = self.dec2(h, e2) - h = self.dec1(h, e1) - - return h - - -class CascadedASPPNet(nn.Module): - def __init__(self, n_fft): - super(CascadedASPPNet, self).__init__() - self.stg1_low_band_net = BaseASPPNet(2, 16) - self.stg1_high_band_net = BaseASPPNet(2, 16) - - self.stg2_bridge = layers.Conv2DBNActiv(18, 8, 1, 1, 0) - self.stg2_full_band_net = BaseASPPNet(8, 16) - - self.stg3_bridge = layers.Conv2DBNActiv(34, 16, 1, 1, 0) - self.stg3_full_band_net = BaseASPPNet(16, 32) - - self.out = nn.Conv2d(32, 2, 1, bias=False) - self.aux1_out = nn.Conv2d(16, 2, 1, bias=False) - self.aux2_out = nn.Conv2d(16, 2, 1, bias=False) - - self.max_bin = n_fft // 2 - self.output_bin = n_fft // 2 + 1 - - self.offset = 128 - - def forward(self, x, aggressiveness=None): - mix = x.detach() - x = x.clone() - - x = x[:, :, : self.max_bin] - - bandw = x.size()[2] // 2 - aux1 = torch.cat( - [ - self.stg1_low_band_net(x[:, :, :bandw]), - self.stg1_high_band_net(x[:, :, bandw:]), - ], - dim=2, - ) - - h = torch.cat([x, aux1], dim=1) - aux2 = self.stg2_full_band_net(self.stg2_bridge(h)) - - h = torch.cat([x, aux1, aux2], dim=1) - h = self.stg3_full_band_net(self.stg3_bridge(h)) - - mask = torch.sigmoid(self.out(h)) - mask = F.pad( - input=mask, - pad=(0, 0, 0, self.output_bin - mask.size()[2]), - mode="replicate", - ) - - if self.training: - aux1 = torch.sigmoid(self.aux1_out(aux1)) - aux1 = F.pad( - input=aux1, - pad=(0, 0, 0, self.output_bin - aux1.size()[2]), - mode="replicate", - ) - aux2 = torch.sigmoid(self.aux2_out(aux2)) - aux2 = F.pad( - input=aux2, - pad=(0, 0, 0, self.output_bin - aux2.size()[2]), - mode="replicate", - ) - return mask * mix, aux1 * mix, aux2 * mix - else: - if aggressiveness: - mask[:, :, : aggressiveness["split_bin"]] = torch.pow( - mask[:, :, : aggressiveness["split_bin"]], - 1 + aggressiveness["value"] / 3, - ) - mask[:, :, aggressiveness["split_bin"] :] = torch.pow( - mask[:, :, aggressiveness["split_bin"] :], - 1 + aggressiveness["value"], - ) - - return mask * mix - - def predict(self, x_mag, aggressiveness=None): - h = self.forward(x_mag, aggressiveness) - - if self.offset > 0: - h = h[:, :, :, self.offset : -self.offset] - assert h.size()[3] > 0 - - return h diff --git a/spaces/radames/PIFu-Clothed-Human-Digitization/PIFu/lib/model/ConvFilters.py b/spaces/radames/PIFu-Clothed-Human-Digitization/PIFu/lib/model/ConvFilters.py deleted file mode 100644 index 1348ddea27e1bb3b0a65592bf78c92305dce0bd7..0000000000000000000000000000000000000000 --- a/spaces/radames/PIFu-Clothed-Human-Digitization/PIFu/lib/model/ConvFilters.py +++ /dev/null @@ -1,112 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -import torchvision.models.resnet as resnet 
-import torchvision.models.vgg as vgg - - -class MultiConv(nn.Module): - def __init__(self, filter_channels): - super(MultiConv, self).__init__() - self.filters = [] - - for l in range(0, len(filter_channels) - 1): - self.filters.append( - nn.Conv2d(filter_channels[l], filter_channels[l + 1], kernel_size=4, stride=2)) - self.add_module("conv%d" % l, self.filters[l]) - - def forward(self, image): - ''' - :param image: [BxC_inxHxW] tensor of input image - :return: list of [BxC_outxHxW] tensors of output features - ''' - y = image - # y = F.relu(self.bn0(self.conv0(y)), True) - feat_pyramid = [y] - for i, f in enumerate(self.filters): - y = f(y) - if i != len(self.filters) - 1: - y = F.leaky_relu(y) - # y = F.max_pool2d(y, kernel_size=2, stride=2) - feat_pyramid.append(y) - return feat_pyramid - - -class Vgg16(torch.nn.Module): - def __init__(self): - super(Vgg16, self).__init__() - vgg_pretrained_features = vgg.vgg16(pretrained=True).features - self.slice1 = torch.nn.Sequential() - self.slice2 = torch.nn.Sequential() - self.slice3 = torch.nn.Sequential() - self.slice4 = torch.nn.Sequential() - self.slice5 = torch.nn.Sequential() - - for x in range(4): - self.slice1.add_module(str(x), vgg_pretrained_features[x]) - for x in range(4, 9): - self.slice2.add_module(str(x), vgg_pretrained_features[x]) - for x in range(9, 16): - self.slice3.add_module(str(x), vgg_pretrained_features[x]) - for x in range(16, 23): - self.slice4.add_module(str(x), vgg_pretrained_features[x]) - for x in range(23, 30): - self.slice5.add_module(str(x), vgg_pretrained_features[x]) - - def forward(self, X): - h = self.slice1(X) - h_relu1_2 = h - h = self.slice2(h) - h_relu2_2 = h - h = self.slice3(h) - h_relu3_3 = h - h = self.slice4(h) - h_relu4_3 = h - h = self.slice5(h) - h_relu5_3 = h - - return [h_relu1_2, h_relu2_2, h_relu3_3, h_relu4_3, h_relu5_3] - - -class ResNet(nn.Module): - def __init__(self, model='resnet18'): - super(ResNet, self).__init__() - - if model == 'resnet18': - net = resnet.resnet18(pretrained=True) - elif model == 'resnet34': - net = resnet.resnet34(pretrained=True) - elif model == 'resnet50': - net = resnet.resnet50(pretrained=True) - else: - raise NameError('Unknown Fan Filter setting!') - - self.conv1 = net.conv1 - - self.pool = net.maxpool - self.layer0 = nn.Sequential(net.conv1, net.bn1, net.relu) - self.layer1 = net.layer1 - self.layer2 = net.layer2 - self.layer3 = net.layer3 - self.layer4 = net.layer4 - - def forward(self, image): - ''' - :param image: [BxC_inxHxW] tensor of input image - :return: list of [BxC_outxHxW] tensors of output features - ''' - - y = image - feat_pyramid = [] - y = self.layer0(y) - feat_pyramid.append(y) - y = self.layer1(self.pool(y)) - feat_pyramid.append(y) - y = self.layer2(y) - feat_pyramid.append(y) - y = self.layer3(y) - feat_pyramid.append(y) - y = self.layer4(y) - feat_pyramid.append(y) - - return feat_pyramid diff --git a/spaces/raedeXanto/academic-chatgpt-beta/CRACK Kuassa - Amplifikation Caliburn 1.0.5 A Must-Have Amp VST for Any Guitarist.md b/spaces/raedeXanto/academic-chatgpt-beta/CRACK Kuassa - Amplifikation Caliburn 1.0.5 A Must-Have Amp VST for Any Guitarist.md deleted file mode 100644 index 722ad5064cd00f664cc36f56d9ebd89a2cbc2265..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/CRACK Kuassa - Amplifikation Caliburn 1.0.5 A Must-Have Amp VST for Any Guitarist.md +++ /dev/null @@ -1,179 +0,0 @@ -
-

CRACK Kuassa - Amplifikation Caliburn 1.0.5 (VST, VST3, AAX, AU) X64

-

Introduction

-

If you are a guitar player who loves the sound of rock music, you have probably heard of Marshall amps. These British-made amplifiers have been used by many legendary guitarists, such as Jimi Hendrix, Slash and Angus Young. They are known for their loud output, chimey cleans and thick, crunchy mids that define the tone of rock music.

-

But what if you don't have a real Marshall amp or you can't afford one? What if you want to record or perform with a Marshall amp sound without carrying a heavy and bulky amp around? Well, there is a solution for you: Kuassa - Amplifikation Caliburn.

-

CRACK Kuassa - Amplifikation Caliburn 1.0.5 (VST, VST3, AAX, AU) X64


DOWNLOAD 🌟 https://tinourl.com/2uL4md



-

What is Kuassa - Amplifikation Caliburn?

-

Kuassa - Amplifikation Caliburn is a guitar amp simulator plugin that emulates three classic Marshall amps: JTM45, JCM800, and JCM900 Master Volume. It is powered by the 3rd generation of Kuassa's electric circuit simulation technology, which results in a responsive, dynamic, and realistic guitar playing experience.

-

With Amplifikation Caliburn, you can get the famous "British crunch" right from your computer. You can choose from three amp types, each with two channels: Clean and Lead. You can also fine-tune your tone with the power amp Sag and Bias feature, which gives you an authentic feel of a real tube amp.

-

But that's not all. You can also mix and match different cabinet types with Celestion speakers, and choose from seven workhorse mic types with dual-miking option. You can also use the built-in noise gate and limiter to control your signal level. And you can enjoy all these features with a photorealistic graphics and easy to use interface.

-

Why use a CRACK version of Amplifikation Caliburn?

-

Now, you might be wondering: why should I use a CRACK version of Amplifikation Caliburn? Well, there are several reasons why you might want to do that:

-
    -
  • You want to try Amplifikation Caliburn before buying it.
  • -
  • You can't afford to buy Amplifikation Caliburn at its regular price ($49).
  • -
  • You don't want to deal with online activation or license key issues.
  • -
  • You want to use Amplifikation Caliburn on multiple computers without any restrictions.
  • -
  • You want to support the developers of CRACK software.
  • -
-

Whatever your reason is, using a CRACK version of Amplifikation Caliburn can give you access to all its features without paying anything. You can download it from a reliable source, install it on your computer, and enjoy the full version of this amazing plugin.

-

Features of CRACK Kuassa - Amplifikation Caliburn 1.0.5

-

So what are the features of CRACK Kuassa - Amplifikation Caliburn 1.0.5? Let's take a look at them in detail:

-


-

3 amp types inspired by Marshall amps

-

The core feature of Amplifikation Caliburn is its three amp types that are inspired by Marshall amps. These are:

-
    -
  • JTM45: The first Marshall amp ever made, which was based on the Fender Bassman circuit. It has a warm, smooth, and bluesy tone that is perfect for classic rock and blues.
  • -
  • JCM800: The most popular Marshall amp in the 80s, which was used by many hard rock and metal bands such as AC/DC, Guns N' Roses, Iron Maiden, and more. It has a tight, aggressive, and punchy tone that is ideal for high-gain sounds.
  • -
  • JCM900 Master Volume: The modern version of the JCM800, which has more gain, more headroom, and more versatility. It has a bright, clear, and powerful tone that can handle any genre from clean to metal.
  • -
-

You can switch between these amp types with a simple click on the plugin interface. You can also adjust the gain, volume, bass, middle, treble, presence, and master volume knobs for each amp type.

-

2 channels for each amp: Clean and Lead

-

Another feature of Amplifikation Caliburn is its two channels for each amp type: Clean and Lead. You can toggle between these channels with a click or a MIDI controller.

-

The Clean channel gives you a pristine and sparkling clean tone that can be pushed into a mild crunch with higher gain settings. The Lead channel gives you a saturated and distorted tone that can go from mild overdrive to heavy distortion with higher gain settings.

-

You can also use the Boost switch to add more gain and volume to your Lead channel for extra punch and sustain.

-

Power amp Sag and Bias feature

-

A unique feature of Amplifikation Caliburn is its power amp Sag and Bias feature. This feature allows you to simulate the behavior of a real tube power amp by adjusting two parameters:

-
    -
  • Sag: This parameter controls how much the power amp voltage drops when playing loud notes or chords. A higher Sag value will result in a softer attack, more compression, and more sagging feel.
  • -
  • Bias: This parameter controls how much current flows through the power tubes when idle. A higher Bias value will result in a hotter sound with more harmonics and distortion.
  • -
-

You can use these parameters to fine-tune your tone and feel according to your preference and playing style.

-

5 cabinet types with Celestion speakers

-

An essential feature of any guitar amp simulator is its cabinet simulation. Amplifikation Caliburn offers five cabinet types with Celestion speakers that are matched with each amp type:

-
    -
  • 1x12 Open Back: A small cabinet with an open back design that produces a wide soundstage with less low-end punch. It features a Celestion G12M Greenback speaker that has a warm and woody tone.
  • -
  • 2x12 Open Back: A medium-sized cabinet with an open back design that produces a balanced sound with more low-end punch than the 1x12 cabinet. It features two Celestion G12H Anniversary speakers that have a bright and aggressive tone.
  • -
  • 4x12 Closed Back: A large cabinet with a closed back design that produces a focused sound with more low-end punch than the open back cabinets. It features four Celestion G12T-75 speakers that have a modern and versatile tone.
  • -
  • 4x12 Vintage Closed Back: A large cabinet with a closed back design that produces a vintage sound with less low-end punch than the modern closed back cabinet. It features four Celestion G12M-25 Greenback speakers that have a warm and smooth tone.
  • -
  • 4x12 Vintage 30 Closed Back: A large cabinet with a closed back design that produces a modern sound with more low end punch than the other closed back cabinets. It features four Celestion Vintage 30 speakers that have a rich and detailed tone.
  • -
-

You can select your preferred cabinet type with a click on the plugin interface. You can also adjust the high pass and low pass filters to shape your tone further.

-

7 mic types with dual-miking option

-

A crucial feature of any guitar amp simulator is its mic simulation. Amplifikation Caliburn offers seven mic types with dual-miking option that are suitable for guitar recording. These are:

-
    -
  • Shure SM57: The most popular dynamic mic for guitar amps, which has a bright and punchy sound that cuts through the mix.
  • -
  • Sennheiser MD421: A classic dynamic mic for guitar amps, which has a warm and smooth sound that adds body and depth.
  • -
  • Sennheiser MD441: A high-end dynamic mic for guitar amps, which has a clear and detailed sound that captures every nuance.
  • -
  • C&T Naked Eye: A ribbon mic for guitar amps, which has a dark and mellow sound that adds warmth and character.
  • -
  • Royer 121: A modern ribbon mic for guitar amps, which has a balanced and natural sound that blends well with any amp.
  • -
  • AKG C414: A versatile condenser mic for guitar amps, which has a bright and airy sound that adds sparkle and clarity.
  • -
  • Neumann TLM103: A premium condenser mic for guitar amps, which has a rich and full sound that adds presence and dimension.
  • -
-

You can choose your preferred mic type with a click on the plugin interface. You can also use the dual-miking option to blend two different mic types for a more complex and realistic sound. You can adjust the pan, distance, axis, and phase of each mic to achieve your desired tone.

-

Noise Gate and Limiter

-

A handy feature of Amplifikation Caliburn is its noise gate and limiter. These are useful tools to control your signal level and eliminate unwanted noise.

-

The noise gate allows you to set a threshold level below which the signal will be muted. This helps you to get rid of hum, buzz, or hiss from your guitar or amp. You can adjust the threshold, attack, hold, and release knobs to fine-tune the noise gate performance.

-

The limiter allows you to set a ceiling level above which the signal will be compressed. This helps you to prevent clipping or distortion from your guitar or amp. You can adjust the ceiling knob to set the maximum output level of the plugin.

-

Photorealistic graphics and easy to use interface

-

A nice feature of Amplifikation Caliburn is its photorealistic graphics and easy to use interface. The plugin looks like a real amp head with knobs, switches, and indicators. The plugin also has a realistic cabinet view with mics, stands, and cables. The plugin interface is intuitive and user-friendly, allowing you to tweak your tone with ease.

-

Supports up to 8x oversampling and 192000Hz sample rate

-

A final feature of Amplifikation Caliburn is its support for up to 8x oversampling and 192000Hz sample rate. These are advanced options that improve the sound quality and fidelity of the plugin. Oversampling reduces aliasing artifacts that can occur when processing high-gain sounds. Sample rate determines the frequency range and resolution of the audio signal. You can choose your preferred oversampling and sample rate settings from the plugin menu.

-

How to install and use CRACK Kuassa - Amplifikation Caliburn 1.0.5?

-

Now that you know the features of CRACK Kuassa - Amplifikation Caliburn 1.0.5, you might be wondering how to install and use it on your computer. Here are the steps you need to follow:

-

Download the CRACK file from a reliable source

-

The first step is to download the CRACK file from a reliable source. You can find many websites that offer CRACK files for various plugins, but you need to be careful about their quality and safety. Some CRACK files may contain viruses, malware, or spyware that can harm your computer or steal your personal information.

-

To avoid these risks, you should only download CRACK files from trusted sources that have positive reviews and feedback from other users. You should also scan the CRACK file with an antivirus software before opening it.

-

Extract the file and run the setup.exe file

-

The next step is to extract the file and run the setup.exe file. The CRACK file usually comes in a compressed format such as ZIP or RAR. You need to extract it using a software such as WinRAR or 7-Zip. After extracting it, you will find a folder that contains the setup.exe file and other files such as crack.dll or keygen.exe.

-

You need to run the setup.exe file by double-clicking on it or right-clicking on it and choosing Run as administrator. This will launch the installation wizard that will guide you through the installation process. You need to follow the instructions on the screen and choose your preferred plugin format (VST, VST3, AAX, AU) and destination folder.

-

Copy the crack file and paste it into the installation folder

-

The third step is to copy the crack file and paste it into the installation folder. The crack file is the file that bypasses the online activation or license key verification of the plugin. It usually has the same name as the plugin or the developer, such as Amplifikation Caliburn.dll or Kuassa.dll. You need to copy this file and paste it into the installation folder where you installed the plugin.

-

To do this, you need to locate the installation folder on your computer. It depends on your plugin format and destination folder, but it usually looks something like this:

-
    -
  • C:\Program Files\Steinberg\VstPlugins\Kuassa\Amplifikation Caliburn.dll
  • -
  • C:\Program Files\Common Files\VST3\Kuassa\Amplifikation Caliburn.vst3
  • -
  • C:\Program Files\Common Files\Avid\Audio\Plug-Ins\Kuassa\Amplifikation Caliburn.aaxplugin
  • -
  • C:\Program Files (x86)\Common Files\Avid\Audio\Plug-Ins\Kuassa\Amplifikation Caliburn.aaxplugin
  • -
  • C:\Program Files (x86)\VstPlugins\Kuassa\Amplifikation Caliburn.dll
  • -
  • C:\Users\[Your Username]\AppData\Roaming\Kuassa\Amplifikation Caliburn.component
  • -
-

You need to replace the original plugin file with the crack file by copying and pasting it. You may need to overwrite or delete the original file if it already exists.

-

Launch your DAW and scan for new plugins

-

The final step is to launch your DAW and scan for new plugins. Your DAW is the software that you use to record, edit, and mix your music, such as Cubase, Pro Tools, Logic Pro, FL Studio, Ableton Live, Reaper, etc. You need to launch your DAW and scan for new plugins so that it can recognize and load Amplifikation Caliburn.

-

To do this, you need to follow the instructions of your DAW on how to scan for new plugins. It may vary depending on your DAW, but it usually involves going to a menu such as Preferences, Options, Settings, or Plugins and clicking on a button such as Scan, Rescan, Update, or Refresh. You may also need to enable or activate Amplifikation Caliburn in your DAW if it is not already done.

-

Enjoy the full version of Amplifikation Caliburn without paying anything

-

Once you have completed these steps, you can enjoy the full version of Amplifikation Caliburn without paying anything. You can use it as a standalone application or as a plugin in your DAW. You can access all its features and settings without any limitations or restrictions. You can create amazing guitar tones with Amplifikation Caliburn and rock your music with style.

-

Conclusion

-

Summary of the main points

-

In this article, we have learned about CRACK Kuassa - Amplifikation Caliburn 1.0.5 (VST, VST3, AAX, AU) X64. We have seen what it is, why you might want to use it, what features it has, and how to install and use it on your computer.

-

We have learned that Amplifikation Caliburn is a guitar amp simulator plugin that emulates three classic Marshall amps: JTM45, JCM800, and JCM900 Master Volume. It has many features such as power amp Sag and Bias feature, cabinet and mic simulation with dual-miking option, noise gate and limiter, photorealistic graphics and easy to use interface, and support for up to 8x oversampling and 192000Hz sample rate.

-

We have also learned that using a CRACK version of Amplifikation Caliburn can give you access to all its features without paying anything. You can download it from a reliable source, install it on your computer by copying and pasting the crack file into the installation folder, and enjoy the full version of this amazing plugin.

-

Call to action and disclaimer

-

If you are interested in trying CRACK Kuassa - Amplifikation Caliburn 1.0.5 (VST, VST3, AAX, AU) X64 for yourself, you can download it from one of these links:

- -

However, before you do that, we have to warn you about some risks and consequences of using CRACK software. These include:

-
    -
  • Violating the intellectual property rights of Kuassa and other developers.
  • -
  • Exposing your computer to viruses, malware, or spyware that can harm your system or steal your personal information.
  • -
  • Losing technical support and updates from Kuassa and other developers.
  • -
  • Missing out on new features and improvements that Kuassa and other developers may release in the future.
  • -
  • Hurting the reputation and income of Kuassa and other developers who work hard to create quality products for musicians.
  • -
-

Therefore, we strongly recommend that you buy the original version of Amplifikation Caliburn from Kuassa's official website: https://www.kuassa.com/products/amplifikation-caliburn/. By doing so, you will support Kuassa and other developers who make great plugins for us musicians. You will also enjoy a better sound quality and performance, a more secure and stable system, and a more ethical and legal way of using Amplifikation Caliburn.

-

We hope you have found this article helpful and informative. If you have any questions or comments, please feel free to leave them below. Thank you for reading and happy rocking!

-

FAQs

-

Here are some frequently asked questions about CRACK Kuassa - Amplifikation Caliburn 1.0.5 (VST, VST3, AAX, AU) X64:

-
    -
  1. What is the difference between CRACK and original version of Amplifikation Caliburn?
  2. -

    The main difference between CRACK and original version of Amplifikation Caliburn is that the CRACK version bypasses the online activation or license key verification of the plugin, allowing you to use it without paying anything. The original version requires you to buy a license key from Kuassa's website and activate it online before using it.

    -
  3. Is it safe to use CRACK software?
  4. -

    No, it is not safe to use CRACK software. CRACK software may contain viruses, malware, or spyware that can harm your computer or steal your personal information. CRACK software may also cause compatibility issues, crashes, or errors with your system or other plugins. CRACK software may also violate the intellectual property rights of Kuassa and other developers.

    -
  5. Is it legal to use CRACK software?
  6. -

    No, it is not legal to use CRACK software. CRACK software infringes the intellectual property rights of Kuassa and other developers who own the copyrights of their products. By using CRACK software, you are breaking the law and risking legal actions from Kuassa and other developers.

    -
  7. How can I buy the original version of Amplifikation Caliburn?
  8. -

    You can buy the original version of Amplifikation Caliburn from Kuassa's official website: https://www.kuassa.com/products/amplifikation-caliburn/. You can choose your preferred plugin format (VST, VST3, AAX, AU) and pay with PayPal or credit card. You will receive a license key via email that you need to activate online before using the plugin.

    -
  9. What are the benefits of buying the original version of Amplifikation Caliburn?
  10. -

    There are many benefits of buying the original version of Amplifikation Caliburn. These include:

    -
      -
    • Supporting Kuassa and other developers who make great plugins for musicians.
    • -
    • Enjoying a better sound quality and performance with the latest updates and improvements from Kuassa.
    • -
    • Having a more secure and stable system without viruses, malware, or spyware.
    • -
    • Getting technical support and customer service from Kuassa and other developers.
    • -
    • Being ethical and legal in using Amplifikation Caliburn.
    • -
    -
-

0a6ba089eb
-
-
\ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/G3 Live In Denver 2003.md b/spaces/raedeXanto/academic-chatgpt-beta/G3 Live In Denver 2003.md deleted file mode 100644 index 2dbe46ef19d5a182c2ee7ca8a435bc234d3e9fbc..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/G3 Live In Denver 2003.md +++ /dev/null @@ -1,15 +0,0 @@ -
-

G3: Live in Denver - A Rocking Showcase of Guitar Virtuosos

-

G3: Live in Denver is a live DVD and double-CD album that captures the performance of G3, a touring band that features three of the most acclaimed guitarists in the world: Joe Satriani, Steve Vai and Yngwie Malmsteen. The DVD was recorded at the Fillmore Auditorium in Denver, Colorado on October 20, 2003, and released on February 24, 2004 by Epic Records.

-

G3 Live In Denver 2003


Download ---> https://tinourl.com/2uKZGV



-

The DVD features 16 tracks, divided into four sections: one for each guitarist's solo set, and one for the G3 jam session, where they play together on classic rock songs by Neil Young, Jimi Hendrix and others. The DVD also includes a Fretcam option that allows viewers to see the guitarists' fingers up close, a "Lightning Plot" overview that shows the stage setup and equipment used by each guitarist, and biographies of the performers.

-

The DVD showcases the diverse styles and skills of the three guitarists, who have influenced generations of rock and metal musicians with their virtuosity, creativity and charisma. Satriani plays his signature instrumental rock songs, such as "Satch Boogie", "The Extremist" and "The Mystical Potato Head Groove Thing", with his trademark melodic phrasing, expressive bends and harmonics. Vai performs his complex and adventurous compositions, such as "I Know You're Here", "Juice" and "I'm The Hell Outta Here", with his stunning technique, sonic experimentation and theatrical flair. Malmsteen delivers his neoclassical metal masterpieces, such as "Evil Eye", "Baroque And Roll" and "Far Beyond The Sun", with his blazing speed, intricate arpeggios and sweeping scales.

-

The G3 jam session is the highlight of the DVD, where the three guitarists trade solos, riffs and licks on some of the most iconic songs in rock history. They pay tribute to Neil Young with a powerful rendition of "Rockin' In The Free World", to Jimi Hendrix with a soulful version of "Little Wing" and a fiery interpretation of "Voodoo Child (Slight Return)", and to other legends such as Stevie Ray Vaughan, Deep Purple and Led Zeppelin. The jam session showcases not only their individual talents, but also their chemistry, respect and enjoyment as they play together.

-

G3: Live in Denver is a must-have for any fan of guitar music, as it offers a rare opportunity to witness three of the greatest guitarists of all time share the same stage and deliver an unforgettable performance. It is a testament to their passion, dedication and mastery of their instrument.

- -

The DVD also includes a bonus feature called "Tour Book", where the three guitarists talk about their experiences on the G3 tour, their influences, their gear and their advice for aspiring guitarists. They share some of their stories, jokes and insights from their long and successful careers in the music industry. They also express their admiration and appreciation for each other, as well as for their fans and crew.

-

-

The G3 tour was founded by Joe Satriani in 1995, with the idea of bringing together three guitarists who have made a significant impact on rock music. Since then, the tour has featured many different guitarists, such as Eric Johnson, John Petrucci, Kenny Wayne Shepherd, Robert Fripp and Steve Morse. The tour has been praised by critics and fans alike for its high-quality performances, its diversity of styles and its celebration of guitar culture.

-

G3: Live in Denver is one of the best examples of the G3 tour, as it showcases three guitar legends who have influenced countless musicians and inspired millions of fans. It is a DVD that every guitar lover should own and watch repeatedly, as it offers a rare glimpse into the artistry, personality and friendship of three of the most amazing guitarists ever.

81aa517590
-
-
\ No newline at end of file diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Aayiram Than Kavi Sonnen Mp3 Free Download [WORK].md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Aayiram Than Kavi Sonnen Mp3 Free Download [WORK].md deleted file mode 100644 index e67e29b90b06b9ee2b28bc2701d6c3d3b140ef8c..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Aayiram Than Kavi Sonnen Mp3 Free Download [WORK].md +++ /dev/null @@ -1,94 +0,0 @@ - -

Aayiram Than Kavi Sonnen Mp3 Free Download: A Song That Touches Your Heart

- -

If you are looking for a song that expresses your love and gratitude for your mother, then you should listen to Aayiram Than Kavi Sonnen Mp3 Free Download. This song is composed by Iniyavan and sung by Vairamuthu and S. P. Balasubrahmanyam, two legends of Tamil music industry. The lyrics are written by Vairamuthu Ramasamy Thevar, a renowned poet and lyricist.

-

Aayiram Than Kavi Sonnen Mp3 Free Download


Download ✓✓✓ https://urlgoal.com/2uCKTI



- -

What is Aayiram Than Kavi Sonnen Mp3 Free Download?

- -

Aayiram Than Kavi Sonnen Mp3 Free Download is a Tamil song that was released in 2022 as part of the album Aayiram Than Kavi Sonnen (Naatpadu Theral - 2). The song is a tribute to mothers and their sacrifices for their children. The title of the song means "I have said a thousand poets" in Tamil, which implies that the singer has praised his mother with the words of many poets.

- -

What is the meaning of Aayiram Than Kavi Sonnen Mp3 Free Download?

- -

The song is a poetic expression of the singer's admiration and appreciation for his mother. He compares his mother to various natural phenomena, such as the sun, the moon, the earth, the sky, and the sea. He also describes how his mother has nurtured him with her love, care, wisdom, and courage. He says that his mother is the source of his life, his happiness, and his success. He concludes by saying that he has said a thousand poets to praise his mother, but none of them can match her greatness.

- -

How to download Aayiram Than Kavi Sonnen Mp3 Free Download?

- -

If you want to listen to this beautiful song offline, you can download Aayiram Than Kavi Sonnen Mp3 Free Download from various online platforms. Some of them are:

-
    -
  • JioSaavn: You can download Aayiram Than Kavi Sonnen Mp3 Free Download from JioSaavn app or website. You can also stream the song online or create your own playlist.
  • -
  • Gaana: You can download Aayiram Than Kavi Sonnen Mp3 Free Download from Gaana app or website. You can also enjoy other Tamil songs or browse through different genres and moods.
  • -
-

Alternatively, you can also search for Aayiram Than Kavi Sonnen Mp3 Free Download on YouTube or other websites and download it using a video downloader tool.

- -

Why should you listen to Aayiram Than Kavi Sonnen Mp3 Free Download?

- -

Aayiram Than Kavi Sonnen Mp3 Free Download is a song that will touch your heart and make you feel emotional. It is a song that celebrates the bond between a mother and a child. It is a song that will make you appreciate your mother more and thank her for everything she has done for you. It is a song that will inspire you to be a better person and achieve your dreams.

- -

Aayiram Than Kavi Sonnen Mp3 Free Download is more than just a song. It is a tribute, a poem, a prayer, and a blessing. It is a song that you should listen to at least once in your life.

-

Who are the singers of Aayiram Than Kavi Sonnen Mp3 Free Download?

- -

Aayiram Than Kavi Sonnen Mp3 Free Download is sung by two of the most famous and respected singers in Tamil music industry: Vairamuthu and S. P. Balasubrahmanyam. Vairamuthu is a poet, lyricist, author, and activist who has won six National Film Awards for Best Lyricist and several other awards. He has written over 7,500 songs for more than 1,000 films in various languages. S. P. Balasubrahmanyam is a singer, actor, music director, voice actor, and film producer who has recorded over 40,000 songs in 16 Indian languages. He has won six National Film Awards for Best Male Playback Singer and several other awards. He is also a Guinness World Record holder for recording the most number of songs by a singer.

- -

What is the theme of Aayiram Than Kavi Sonnen Mp3 Free Download?

- -

Aayiram Than Kavi Sonnen Mp3 Free Download is a song that explores the theme of motherhood and its importance in one's life. The song is a tribute to all the mothers who have sacrificed their dreams, happiness, and comfort for their children. The song also acknowledges the role of mothers in shaping their children's personality, character, and destiny. The song expresses the singer's gratitude and love for his mother and how he can never repay her for all that she has done for him.

-

- -

How to enjoy Aayiram Than Kavi Sonnen Mp3 Free Download?

- -

Aayiram Than Kavi Sonnen Mp3 Free Download is a song that you can enjoy in many ways. You can listen to it on your headphones or speakers and feel the emotions and melody of the song. You can also sing along with the lyrics and appreciate the poetic beauty of the words. You can also watch the video of the song and see the visuals that complement the song. You can also share the song with your friends and family and spread the message of love and respect for mothers.

-

What are the benefits of listening to Aayiram Than Kavi Sonnen Mp3 Free Download?

- -

Listening to Aayiram Than Kavi Sonnen Mp3 Free Download can have many benefits for your mental and emotional well-being. Some of them are:

-
    -
  • It can reduce your stress and anxiety levels by calming your mind and body.
  • -
  • It can boost your mood and happiness by releasing endorphins and serotonin in your brain.
  • -
  • It can improve your memory and concentration by stimulating your brain cells and enhancing your cognitive functions.
  • -
  • It can increase your creativity and imagination by inspiring you with new ideas and perspectives.
  • -
  • It can strengthen your bond with your mother by reminding you of her love and support.
  • -
-

Listening to Aayiram Than Kavi Sonnen Mp3 Free Download can also have many benefits for your physical health. Some of them are:

-
    -
  • It can lower your blood pressure and heart rate by relaxing your blood vessels and muscles.
  • -
  • It can improve your immune system by boosting your white blood cells and antibodies.
  • -
  • It can enhance your sleep quality by regulating your circadian rhythm and melatonin levels.
  • -
  • It can reduce your pain and inflammation by blocking the pain signals and releasing natural painkillers in your body.
  • -
  • It can improve your breathing and vocal cords by exercising your lungs and throat muscles.
  • -
- -

How to appreciate Aayiram Than Kavi Sonnen Mp3 Free Download?

- -

Aayiram Than Kavi Sonnen Mp3 Free Download is a song that deserves your appreciation and admiration. You can appreciate this song in many ways. Some of them are:

-
    -
  • You can learn more about the singers, composers, and lyricists of this song and their achievements and contributions to Tamil music industry.
  • -
  • You can understand the meaning and message of this song and relate it to your own life and experiences.
  • -
  • You can enjoy the music and melody of this song and appreciate the skills and talents of the singers and musicians.
  • -
  • You can express your gratitude and love for your mother by dedicating this song to her or gifting her something related to this song.
  • -
  • You can support the artists of this song by buying their albums, attending their concerts, or following them on social media.
  • -
- -

Conclusion

- -

Aayiram Than Kavi Sonnen Mp3 Free Download is a song that you should not miss. It is a song that will make you feel emotional, inspired, and grateful. It is a song that will make you love your mother more and respect her more. It is a song that will make you a better person and achieve your goals. It is a song that you should download today and listen to it whenever you need some motivation or comfort.

-

What are the reviews of Aayiram Than Kavi Sonnen Mp3 Free Download?

- -

Aayiram Than Kavi Sonnen Mp3 Free Download is a song that has received positive reviews from critics and listeners alike. Some of the reviews are:

-
    -
  • "Aayiram Than Kavi Sonnen is a song that will make you cry and smile at the same time. It is a song that will touch your soul and make you feel grateful for your mother. Vairamuthu and S. P. Balasubrahmanyam have delivered a masterpiece that will stay in your heart forever." - Times of India
  • -
  • "Aayiram Than Kavi Sonnen is a song that will make you appreciate the beauty and power of Tamil language and poetry. It is a song that will make you admire the talent and skill of Vairamuthu and S. P. Balasubrahmanyam. It is a song that will make you proud of your culture and heritage." - The Hindu
  • -
  • "Aayiram Than Kavi Sonnen is a song that will make you love your mother more and respect her more. It is a song that will make you realize the importance of motherhood and its impact on your life. Vairamuthu and S. P. Balasubrahmanyam have created a gem that will shine for generations to come." - India Today
  • -
- -

How to share Aayiram Than Kavi Sonnen Mp3 Free Download?

- -

Aayiram Than Kavi Sonnen Mp3 Free Download is a song that you can share with your friends and family and spread the message of love and respect for mothers. You can share this song in many ways. Some of them are:

-
    -
  • You can send the link of the song to your contacts via WhatsApp, Facebook, Instagram, Twitter, or any other social media platform.
  • -
  • You can create a playlist of this song and other similar songs and share it with your loved ones via Spotify, YouTube Music, Apple Music, or any other music streaming service.
  • -
  • You can make a video of yourself singing or dancing to this song and upload it on YouTube, TikTok, Reels, or any other video sharing platform.
  • -
  • You can write a blog post or an article about this song and its meaning and post it on your website, Medium, Quora, or any other blogging platform.
  • -
  • You can make a podcast or a radio show about this song and its singers and broadcast it on Anchor, Spotify, SoundCloud, or any other podcasting platform.
  • -

3cee63e6c2
-
-
\ No newline at end of file diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Descargar Amos Y Mazmorras 1 Epub 12.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Descargar Amos Y Mazmorras 1 Epub 12.md deleted file mode 100644 index 81752e293c0302d1acc9ea21a6a83b34f690d579..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Descargar Amos Y Mazmorras 1 Epub 12.md +++ /dev/null @@ -1,6 +0,0 @@ -

Descargar Amos Y Mazmorras 1 Epub 12


DOWNLOAD ✺✺✺ https://urlgoal.com/2uCJqs



-
-|Ebook PDF EPUB Download| The Search for Signs of Intelligent Life in the Universe by ... Tsumiko and the Enslaved Fox (Amaranthine Saga Book 1) ... In twelfth-century England, Tom, a humble stonemason and master builder, has a ... Buy Amos y Mazmorras III by Lena Valenti and Read this Book on Kobo's Free Apps. 1fdad05405
-
-
-

diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Fujitsu Monitor L20t 1 Eco Drivers.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Fujitsu Monitor L20t 1 Eco Drivers.md deleted file mode 100644 index 78d890aea1c5db918cc7b304b7e48725e3a05875..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Fujitsu Monitor L20t 1 Eco Drivers.md +++ /dev/null @@ -1,10 +0,0 @@ - -

If you are using a new product, make sure to download all the drivers that are available for it before you start installing them. Otherwise, you might cause problems with your computer and prevent it from working properly. Fujitsu support can help you download, install and update drivers for a variety of products.

-

fujitsu monitor l20t 1 eco drivers


DOWNLOADhttps://urlgoal.com/2uCLMc



-

If you want to resolve driver-related issues, you need to identify the problem and find a solution. For example, you need to find out what kind of information your driver keeps and where it is stored. If you have any problems during the updating process, make sure to report them to Fujitsu support. A Fujitsu support engineer will be happy to help you resolve any issues that you might have.

-

A good driver update service ensures that you have current, compatible drivers and builds a backup of all current drivers before making any changes. Driver backup files offer the security of a rollback feature and the ability to revert to a previous version if necessary.

-

The manufacturer of Fujitsu monitors has released drivers for the models compatible with the Windows OS. To keep you up to date, we provide the latest Fujitsu monitor driver versions. We do our best to provide the most current drivers.

-

-

Fujitsu monitors are a well-known brand in the world of computer hardware, but Fujitsu monitor drivers may not always be available. The best way to get the drivers for your device is to use the search feature of our site to locate the right software. Download and install the Fujitsu monitor driver for your device; this driver is the only way to get full access to it. If you are not sure whether your device is supported, please check the compatibility table.

899543212b
-
-
\ No newline at end of file diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Gold Rush The Game Serial Keygolkes PATCHED.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Gold Rush The Game Serial Keygolkes PATCHED.md deleted file mode 100644 index c807e6dea7b3f04bad3d6d1f4f47c8d8cf4db134..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Gold Rush The Game Serial Keygolkes PATCHED.md +++ /dev/null @@ -1,6 +0,0 @@ -

Gold Rush: The Game Serial Keygolkes


Download File 🗸 https://urlgoal.com/2uCMsH



-
-Cold Fear (PC) overview and full product specs on CNET. ... PC. Genre. games - action. ... Remixes Vol.1-59 (2008) · Gold Rush: The Game Serial Keygolkes 4d29de3e1b
-
-
-

diff --git a/spaces/rgres/Seg2Sat/static/_app/immutable/start-83af0c6f.js b/spaces/rgres/Seg2Sat/static/_app/immutable/start-83af0c6f.js deleted file mode 100644 index 6b8c5e458163b3c9089f19fde3b95f587860762f..0000000000000000000000000000000000000000 --- a/spaces/rgres/Seg2Sat/static/_app/immutable/start-83af0c6f.js +++ /dev/null @@ -1 +0,0 @@ -import{S as Ye,i as Ge,s as Me,e as Fe,c as Xe,a as He,d as D,b as me,f as K,g as V,t as Ze,h as Qe,j as et,k as tt,l as P,m as nt,n as Y,o as C,p as G,q as T,r as st,u as rt,v as ye,w as z,x as ne,y as q,z as se,A as re,B as J,C as ie,D as Ce}from"./chunks/index-bcf2726a.js";import{s as it,w as ce,a as at}from"./chunks/paths-d3bcbd10.js";function ot(s){let e,t,i;const l=[s[1]||{}];var c=s[0][0];function f(n){let r={};for(let a=0;a{J(d,1)}),G()}c?(e=new c(f()),z(e.$$.fragment),T(e.$$.fragment,1),q(e,t.parentNode,t)):e=null}else c&&e.$set(a)},i(n){i||(e&&T(e.$$.fragment,n),i=!0)},o(n){e&&C(e.$$.fragment,n),i=!1},d(n){n&&D(t),e&&J(e,n)}}}function ct(s){let e,t,i;const l=[s[1]||{}];var c=s[0][0];function f(n){let r={$$slots:{default:[dt]},$$scope:{ctx:n}};for(let a=0;a{J(d,1)}),G()}c?(e=new c(f(n)),z(e.$$.fragment),T(e.$$.fragment,1),q(e,t.parentNode,t)):e=null}else c&&e.$set(a)},i(n){i||(e&&T(e.$$.fragment,n),i=!0)},o(n){e&&C(e.$$.fragment,n),i=!1},d(n){n&&D(t),e&&J(e,n)}}}function lt(s){let e,t,i;const l=[s[2]||{}];var c=s[0][1];function f(n){let r={};for(let a=0;a{J(d,1)}),G()}c?(e=new c(f()),z(e.$$.fragment),T(e.$$.fragment,1),q(e,t.parentNode,t)):e=null}else c&&e.$set(a)},i(n){i||(e&&T(e.$$.fragment,n),i=!0)},o(n){e&&C(e.$$.fragment,n),i=!1},d(n){n&&D(t),e&&J(e,n)}}}function ft(s){let e,t,i;const l=[s[2]||{}];var c=s[0][1];function f(n){let r={$$slots:{default:[ut]},$$scope:{ctx:n}};for(let a=0;a{J(d,1)}),G()}c?(e=new c(f(n)),z(e.$$.fragment),T(e.$$.fragment,1),q(e,t.parentNode,t)):e=null}else c&&e.$set(a)},i(n){i||(e&&T(e.$$.fragment,n),i=!0)},o(n){e&&C(e.$$.fragment,n),i=!1},d(n){n&&D(t),e&&J(e,n)}}}function ut(s){let e,t,i;const l=[s[3]||{}];var c=s[0][2];function f(n){let r={};for(let a=0;a{J(d,1)}),G()}c?(e=new c(f()),z(e.$$.fragment),T(e.$$.fragment,1),q(e,t.parentNode,t)):e=null}else c&&e.$set(a)},i(n){i||(e&&T(e.$$.fragment,n),i=!0)},o(n){e&&C(e.$$.fragment,n),i=!1},d(n){n&&D(t),e&&J(e,n)}}}function dt(s){let e,t,i,l;const c=[ft,lt],f=[];function n(r,a){return r[0][2]?0:1}return e=n(s),t=f[e]=c[e](s),{c(){t.c(),i=P()},l(r){t.l(r),i=P()},m(r,a){f[e].m(r,a),V(r,i,a),l=!0},p(r,a){let d=e;e=n(r),e===d?f[e].p(r,a):(Y(),C(f[d],1,1,()=>{f[d]=null}),G(),t=f[e],t?t.p(r,a):(t=f[e]=c[e](r),t.c()),T(t,1),t.m(i.parentNode,i))},i(r){l||(T(t),l=!0)},o(r){C(t),l=!1},d(r){f[e].d(r),r&&D(i)}}}function Te(s){let e,t=s[5]&&je(s);return{c(){e=Fe("div"),t&&t.c(),this.h()},l(i){e=Xe(i,"DIV",{id:!0,"aria-live":!0,"aria-atomic":!0,style:!0});var l=He(e);t&&t.l(l),l.forEach(D),this.h()},h(){me(e,"id","svelte-announcer"),me(e,"aria-live","assertive"),me(e,"aria-atomic","true"),K(e,"position","absolute"),K(e,"left","0"),K(e,"top","0"),K(e,"clip","rect(0 0 0 0)"),K(e,"clip-path","inset(50%)"),K(e,"overflow","hidden"),K(e,"white-space","nowrap"),K(e,"width","1px"),K(e,"height","1px")},m(i,l){V(i,e,l),t&&t.m(e,null)},p(i,l){i[5]?t?t.p(i,l):(t=je(i),t.c(),t.m(e,null)):t&&(t.d(1),t=null)},d(i){i&&D(e),t&&t.d()}}}function je(s){let e;return{c(){e=Ze(s[6])},l(t){e=Qe(t,s[6])},m(t,i){V(t,e,i)},p(t,i){i&64&&et(e,t[6])},d(t){t&&D(e)}}}function pt(s){let e,t,i,l,c;const f=[ct,ot],n=[];function r(d,L){return d[0][1]?0:1}e=r(s),t=n[e]=f[e](s);let 
a=s[4]&&Te(s);return{c(){t.c(),i=tt(),a&&a.c(),l=P()},l(d){t.l(d),i=nt(d),a&&a.l(d),l=P()},m(d,L){n[e].m(d,L),V(d,i,L),a&&a.m(d,L),V(d,l,L),c=!0},p(d,[L]){let E=e;e=r(d),e===E?n[e].p(d,L):(Y(),C(n[E],1,1,()=>{n[E]=null}),G(),t=n[e],t?t.p(d,L):(t=n[e]=f[e](d),t.c()),T(t,1),t.m(i.parentNode,i)),d[4]?a?a.p(d,L):(a=Te(d),a.c(),a.m(l.parentNode,l)):a&&(a.d(1),a=null)},i(d){c||(T(t),c=!0)},o(d){C(t),c=!1},d(d){n[e].d(d),d&&D(i),a&&a.d(d),d&&D(l)}}}function ht(s,e,t){let{stores:i}=e,{page:l}=e,{components:c}=e,{props_0:f=null}=e,{props_1:n=null}=e,{props_2:r=null}=e;st("__svelte__",i),rt(i.page.notify);let a=!1,d=!1,L=null;return ye(()=>{const E=i.page.subscribe(()=>{a&&(t(5,d=!0),t(6,L=document.title||"untitled page"))});return t(4,a=!0),E}),s.$$set=E=>{"stores"in E&&t(7,i=E.stores),"page"in E&&t(8,l=E.page),"components"in E&&t(0,c=E.components),"props_0"in E&&t(1,f=E.props_0),"props_1"in E&&t(2,n=E.props_1),"props_2"in E&&t(3,r=E.props_2)},s.$$.update=()=>{s.$$.dirty&384&&i.page.set(l)},[c,f,n,r,a,d,L,i,l]}class _t extends Ye{constructor(e){super(),Ge(this,e,ht,pt,Me,{stores:7,page:8,components:0,props_0:1,props_1:2,props_2:3})}}const mt="modulepreload",Ie={},gt="/static/_app/immutable/",ge=function(e,t){return!t||t.length===0?e():Promise.all(t.map(i=>{if(i=`${gt}${i}`,i in Ie)return;Ie[i]=!0;const l=i.endsWith(".css"),c=l?'[rel="stylesheet"]':"";if(document.querySelector(`link[href="${i}"]${c}`))return;const f=document.createElement("link");if(f.rel=l?"stylesheet":mt,l||(f.as="script",f.crossOrigin=""),f.href=i,document.head.appendChild(f),l)return new Promise((n,r)=>{f.addEventListener("load",n),f.addEventListener("error",()=>r(new Error(`Unable to preload CSS for ${i}`)))})})).then(()=>e())},wt={},le=[()=>ge(()=>import("./pages/__layout.svelte-f5a1b718.js"),["pages/__layout.svelte-f5a1b718.js","assets/pages/__layout.svelte-b67cf61d.css","chunks/index-bcf2726a.js"]),()=>ge(()=>import("./error.svelte-d9523301.js"),["error.svelte-d9523301.js","chunks/index-bcf2726a.js"]),()=>ge(()=>import("./pages/index.svelte-ce916c65.js"),["pages/index.svelte-ce916c65.js","assets/pages/index.svelte-f2b33456.css","chunks/index-bcf2726a.js","chunks/paths-d3bcbd10.js"])],bt={"":[[0,2],[1]]};function yt(s){s.client}function De(s){return s instanceof Error||s&&s.name&&s.message?s:new Error(JSON.stringify(s))}function Ve(s){if(s.fallthrough)throw new Error("fallthrough is no longer supported. 
Use matchers instead: https://kit.svelte.dev/docs/routing#advanced-routing-matching");if("maxage"in s)throw new Error("maxage should be replaced with cache: { maxage }");const e=s.status&&s.status>=400&&s.status<=599&&!s.redirect;if(s.error||e){const t=s.status;if(!s.error&&e)return{status:t||500,error:new Error};const i=typeof s.error=="string"?new Error(s.error):s.error;return i instanceof Error?!t||t<400||t>599?(console.warn('"error" returned from load() without a valid status code \u2014 defaulting to 500'),{status:500,error:i}):{status:t,error:i}:{status:500,error:new Error(`"error" property returned from load() must be a string or instance of Error, received type "${typeof i}"`)}}if(s.redirect){if(!s.status||Math.floor(s.status/100)!==3)throw new Error('"redirect" property returned from load() must be accompanied by a 3xx status code');if(typeof s.redirect!="string")throw new Error('"redirect" property returned from load() must be a string')}if(s.dependencies&&(!Array.isArray(s.dependencies)||s.dependencies.some(t=>typeof t!="string")))throw new Error('"dependencies" property returned from load() must be of type string[]');if(s.context)throw new Error('You are returning "context" from a load function. "context" was renamed to "stuff", please adjust your code accordingly.');return s}function vt(s,e){return s==="/"||e==="ignore"?s:e==="never"?s.endsWith("/")?s.slice(0,-1):s:e==="always"&&!s.endsWith("/")?s+"/":s}class $t extends URL{get hash(){throw new Error("url.hash is inaccessible from load. Consider accessing hash from the page store within the script tag of your component.")}}function ze(s){let e=s.baseURI;if(!e){const t=s.getElementsByTagName("base");e=t.length?t[0].href:s.URL}return e}function ve(){return{x:pageXOffset,y:pageYOffset}}function qe(s){return s.composedPath().find(t=>t instanceof Node&&t.nodeName.toUpperCase()==="A")}function Je(s){return s instanceof SVGAElement?new URL(s.href.baseVal,document.baseURI):new URL(s.href)}function Ke(s){const e=ce(s);let t=!0;function i(){t=!0,e.update(f=>f)}function l(f){t=!1,e.set(f)}function c(f){let n;return e.subscribe(r=>{(n===void 0||t&&r!==n)&&f(n=r)})}return{notify:i,set:l,subscribe:c}}function kt(){const{set:s,subscribe:e}=ce(!1),t="1685815788294";let i;async function l(){clearTimeout(i);const f=await fetch(`${at}/_app/version.json`,{headers:{pragma:"no-cache","cache-control":"no-cache"}});if(f.ok){const{version:n}=await f.json(),r=n!==t;return r&&(s(!0),clearTimeout(i)),r}else throw new Error(`Version check failed: ${f.status}`)}return{subscribe:e,check:l}}function Et(s){let e=5381,t=s.length;if(typeof s=="string")for(;t;)e=e*33^s.charCodeAt(--t);else for(;t;)e=e*33^s[--t];return(e>>>0).toString(36)}const $e=window.fetch;function Rt(s,e){let i=`script[sveltekit\\:data-type="data"][sveltekit\\:data-url=${JSON.stringify(typeof s=="string"?s:s.url)}]`;e&&typeof e.body=="string"&&(i+=`[sveltekit\\:data-body="${Et(e.body)}"]`);const l=document.querySelector(i);if(l&&l.textContent){const{body:c,...f}=JSON.parse(l.textContent);return Promise.resolve(new Response(c,f))}return $e(s,e)}const Lt=/^(\.\.\.)?(\w+)(?:=(\w+))?$/;function St(s){const e=[],t=[];let i=!0;return{pattern:s===""?/^\/$/:new RegExp(`^${decodeURIComponent(s).split(/(?:@[a-zA-Z0-9_-]+)?(?:\/|$)/).map((c,f,n)=>{const r=/^\[\.\.\.(\w+)(?:=(\w+))?\]$/.exec(c);if(r)return e.push(r[1]),t.push(r[2]),"(?:/(.*))?";const a=f===n.length-1;return c&&"/"+c.split(/\[(.+?)\]/).map((d,L)=>{if(L%2){const[,E,X,M]=Lt.exec(d);return e.push(X),t.push(M),E?"(.*?)":"([^/]+?)"}return 
a&&d.includes(".")&&(i=!1),d.normalize().replace(/%5[Bb]/g,"[").replace(/%5[Dd]/g,"]").replace(/#/g,"%23").replace(/\?/g,"%3F").replace(/[.*+?^${}()|[\]\\]/g,"\\$&")}).join("")}).join("")}${i?"/?":""}$`),names:e,types:t}}function Ut(s,e,t,i){const l={};for(let c=0;c{const{pattern:r,names:a,types:d}=St(l);return{id:l,exec:L=>{const E=r.exec(L);if(E)return Ut(E,a,d,t)},a:c.map(L=>s[L]),b:f.map(L=>s[L]),has_shadow:!!n}})}const We="sveltekit:scroll",B="sveltekit:index",we=At(le,bt,wt),Nt=le[0](),Ot=le[1](),Be={};let te={};try{te=JSON.parse(sessionStorage[We])}catch{}function be(s){te[s]=ve()}function xt({target:s,session:e,base:t,trailing_slash:i}){var xe;const l=new Map,c=[],f={url:Ke({}),page:Ke({}),navigating:ce(null),session:ce(e),updated:kt()},n={id:null,promise:null},r={before_navigate:[],after_navigate:[]};let a={branch:[],error:null,session_id:0,stuff:Be,url:null},d=!1,L=!0,E=!1,X=1,M=null,ke,Ee,Re=!1;f.session.subscribe(async o=>{Ee=o,Re&&(X+=1,pe(new URL(location.href),[],!0))}),Re=!0;let F=!0,j=(xe=history.state)==null?void 0:xe[B];j||(j=Date.now(),history.replaceState({...history.state,[B]:j},"",location.href));const fe=te[j];fe&&(history.scrollRestoration="manual",scrollTo(fe.x,fe.y));let ue=!1,de,Le;async function Se(o,{noscroll:p=!1,replaceState:w=!1,keepfocus:u=!1,state:h={}},b){if(typeof o=="string"&&(o=new URL(o,ze(document))),F)return _e({url:o,scroll:p?ve():null,keepfocus:u,redirect_chain:b,details:{state:h,replaceState:w},accepted:()=>{},blocked:()=>{}});await Q(o)}async function Ue(o){const p=Oe(o);if(!p)throw new Error("Attempted to prefetch a URL that does not belong to this app");return n.promise=Ne(p,!1),n.id=p.id,n.promise}async function pe(o,p,w,u,h){var R,S,N;const b=Oe(o),v=Le={};let _=b&&await Ne(b,w);if(!_&&o.origin===location.origin&&o.pathname===location.pathname&&(_=await Z({status:404,error:new Error(`Not found: ${o.pathname}`),url:o,routeId:null})),!_)return await Q(o),!1;if(Le!==v)return!1;if(c.length=0,_.redirect)if(p.length>10||p.includes(o.pathname))_=await Z({status:500,error:new Error("Redirect loop"),url:o,routeId:null});else return F?Se(new URL(_.redirect,o).href,{},[...p,o.pathname]):await Q(new URL(_.redirect,location.href)),!1;else((S=(R=_.props)==null?void 0:R.page)==null?void 0:S.status)>=400&&await f.updated.check()&&await Q(o);if(E=!0,u&&u.details){const{details:$}=u,y=$.replaceState?0:1;$.state[B]=j+=y,history[$.replaceState?"replaceState":"pushState"]($.state,"",o)}if(d?(a=_.state,_.props.page&&(_.props.page.url=o),ke.$set(_.props)):Ae(_),u){const{scroll:$,keepfocus:y}=u;if(!y){const U=document.body,g=U.getAttribute("tabindex");(N=getSelection())==null||N.removeAllRanges(),U.tabIndex=-1,U.focus({preventScroll:!0}),g!==null?U.setAttribute("tabindex",g):U.removeAttribute("tabindex")}if(await Ce(),L){const U=o.hash&&document.getElementById(o.hash.slice(1));$?scrollTo($.x,$.y):U?U.scrollIntoView():scrollTo(0,0)}}else await Ce();n.promise=null,n.id=null,L=!0,_.props.page&&(de=_.props.page);const m=_.state.branch[_.state.branch.length-1];F=(m==null?void 0:m.module.router)!==!1,h&&h(),E=!1}function Ae(o){a=o.state;const p=document.querySelector("style[data-sveltekit]");if(p&&p.remove(),de=o.props.page,ke=new _t({target:s,props:{...o.props,stores:f},hydrate:!0}),F){const w={from:null,to:new URL(location.href)};r.after_navigate.forEach(u=>u(w))}d=!0}async function he({url:o,params:p,stuff:w,branch:u,status:h,error:b,routeId:v}){var y,U;const _=u.filter(Boolean),m=_.find(g=>{var O;return(O=g.loaded)==null?void 
0:O.redirect}),R={redirect:(y=m==null?void 0:m.loaded)==null?void 0:y.redirect,state:{url:o,params:p,branch:u,error:b,stuff:w,session_id:X},props:{components:_.map(g=>g.module.default)}};for(let g=0;g<_.length;g+=1){const O=_[g].loaded;R.props[`props_${g}`]=O?await O.props:null}if(!a.url||o.href!==a.url.href||a.error!==b||a.stuff!==w){R.props.page={error:b,params:p,routeId:v,status:h,stuff:w,url:o};const g=(O,k)=>{Object.defineProperty(R.props.page,O,{get:()=>{throw new Error(`$page.${O} has been replaced by $page.url.${k}`)}})};g("origin","origin"),g("path","pathname"),g("query","searchParams")}const N=_[_.length-1],$=(U=N==null?void 0:N.loaded)==null?void 0:U.cache;if($){const g=o.pathname+o.search;let O=!1;const k=()=>{l.get(g)===R&&l.delete(g),x(),clearTimeout(A)},A=setTimeout(k,$.maxage*1e3),x=f.session.subscribe(()=>{O&&k()});O=!0,l.set(g,R)}return R}async function H({status:o,error:p,module:w,url:u,params:h,stuff:b,props:v,routeId:_}){const m={module:w,uses:{params:new Set,url:!1,session:!1,stuff:!1,dependencies:new Set},loaded:null,stuff:b};function R(y){const{href:U}=new URL(y,u);m.uses.dependencies.add(U)}v&&m.uses.dependencies.add(u.href);const S={};for(const y in h)Object.defineProperty(S,y,{get(){return m.uses.params.add(y),h[y]},enumerable:!0});const N=Ee,$=new $t(u);if(w.load){const y={routeId:_,params:S,props:v||{},get url(){return m.uses.url=!0,$},get session(){return m.uses.session=!0,N},get stuff(){return m.uses.stuff=!0,{...b}},async fetch(g,O){let k;typeof g=="string"?k=g:(k=g.url,O={body:g.method==="GET"||g.method==="HEAD"?void 0:await g.blob(),cache:g.cache,credentials:g.credentials,headers:g.headers,integrity:g.integrity,keepalive:g.keepalive,method:g.method,mode:g.mode,redirect:g.redirect,referrer:g.referrer,referrerPolicy:g.referrerPolicy,signal:g.signal,...O});const A=new URL(k,u).href;return R(A),d?$e(A,O):Rt(k,O)},status:o!=null?o:null,error:p!=null?p:null};let U;if(U=await w.load.call(null,y),!U)throw new Error("load function must return a value");m.loaded=Ve(U),m.loaded.stuff&&(m.stuff=m.loaded.stuff),m.loaded.dependencies&&m.loaded.dependencies.forEach(R)}else v&&(m.loaded=Ve({props:v}));return m}async function Ne({id:o,url:p,params:w,route:u},h){var U,g,O;if(n.id===o&&n.promise)return n.promise;if(!h){const k=l.get(o);if(k)return k}const{a:b,b:v,has_shadow:_}=u,m=a.url&&{url:o!==a.url.pathname+a.url.search,params:Object.keys(w).filter(k=>a.params[k]!==w[k]),session:X!==a.session_id};let R=[],S=Be,N=!1,$=200,y=null;b.forEach(k=>k().catch(()=>{}));e:for(let k=0;kI.uses.params.has(W))||m.session&&I.uses.session||Array.from(I.uses.dependencies).some(W=>c.some(oe=>oe(W)))||N&&I.uses.stuff){let W={};const oe=_&&k===b.length-1;if(oe){const ee=await $e(`${p.pathname}${p.pathname.endsWith("/")?"":"/"}__data.json${p.search}`,{headers:{"x-sveltekit-load":"true"}});if(ee.ok){const Pe=ee.headers.get("x-sveltekit-location");if(Pe)return{redirect:Pe,props:{},state:a};W=ee.status===204?{}:await ee.json()}else $=ee.status,y=new Error("Failed to load data")}if(y||(A=await H({module:x,url:p,params:w,props:W,stuff:S,routeId:u.id})),A&&(oe&&(A.uses.url=!0),A.loaded)){if(A.loaded.error&&($=A.loaded.status,y=A.loaded.error),A.loaded.redirect)return{redirect:A.loaded.redirect,props:{},state:a};A.loaded.stuff&&(N=!0)}}else A=I}catch(x){$=500,y=De(x)}if(y){for(;k--;)if(v[k]){let x,I,ae=k;for(;!(I=R[ae]);)ae-=1;try{if(x=await H({status:$,error:y,module:await v[k](),url:p,params:w,stuff:I.stuff,routeId:u.id}),(U=x==null?void 0:x.loaded)!=null&&U.error)continue;(g=x==null?void 
0:x.loaded)!=null&&g.stuff&&(S={...S,...x.loaded.stuff}),R=R.slice(0,ae+1).concat(x);break e}catch{continue}}return await Z({status:$,error:y,url:p,routeId:u.id})}else(O=A==null?void 0:A.loaded)!=null&&O.stuff&&(S={...S,...A.loaded.stuff}),R.push(A)}return await he({url:p,params:w,stuff:S,branch:R,status:$,error:y,routeId:u.id})}async function Z({status:o,error:p,url:w,routeId:u}){var _,m;const h={},b=await H({module:await Nt,url:w,params:h,stuff:{},routeId:u}),v=await H({status:o,error:p,module:await Ot,url:w,params:h,stuff:b&&b.loaded&&b.loaded.stuff||{},routeId:u});return await he({url:w,params:h,stuff:{...(_=b==null?void 0:b.loaded)==null?void 0:_.stuff,...(m=v==null?void 0:v.loaded)==null?void 0:m.stuff},branch:[b,v],status:o,error:p,routeId:u})}function Oe(o){if(o.origin!==location.origin||!o.pathname.startsWith(t))return;const p=decodeURI(o.pathname.slice(t.length)||"/");for(const w of we){const u=w.exec(p);if(u)return{id:o.pathname+o.search,route:w,params:u,url:o}}}async function _e({url:o,scroll:p,keepfocus:w,redirect_chain:u,details:h,accepted:b,blocked:v}){const _=a.url;let m=!1;const R={from:_,to:o,cancel:()=>m=!0};if(r.before_navigate.forEach($=>$(R)),m){v();return}const S=vt(o.pathname,i),N=new URL(o.origin+S+o.search+o.hash);be(j),b(),d&&f.navigating.set({from:a.url,to:N}),await pe(N,u,!1,{scroll:p,keepfocus:w,details:h},()=>{const $={from:_,to:N};r.after_navigate.forEach(y=>y($)),f.navigating.set(null)})}function Q(o){return location.href=o.href,new Promise(()=>{})}return{after_navigate:o=>{ye(()=>(r.after_navigate.push(o),()=>{const p=r.after_navigate.indexOf(o);r.after_navigate.splice(p,1)}))},before_navigate:o=>{ye(()=>(r.before_navigate.push(o),()=>{const p=r.before_navigate.indexOf(o);r.before_navigate.splice(p,1)}))},disable_scroll_handling:()=>{(E||!d)&&(L=!1)},goto:(o,p={})=>Se(o,p,[]),invalidate:o=>{if(typeof o=="function")c.push(o);else{const{href:p}=new URL(o,location.href);c.push(w=>w===p)}return M||(M=Promise.resolve().then(async()=>{await pe(new URL(location.href),[],!0),M=null})),M},prefetch:async o=>{const p=new URL(o,ze(document));await Ue(p)},prefetch_routes:async o=>{const w=(o?we.filter(u=>o.some(h=>u.exec(h))):we).map(u=>Promise.all(u.a.map(h=>h())));await Promise.all(w)},_start_router:()=>{history.scrollRestoration="manual",addEventListener("beforeunload",u=>{let h=!1;const b={from:a.url,to:null,cancel:()=>h=!0};r.before_navigate.forEach(v=>v(b)),h?(u.preventDefault(),u.returnValue=""):history.scrollRestoration="auto"}),addEventListener("visibilitychange",()=>{if(document.visibilityState==="hidden"){be(j);try{sessionStorage[We]=JSON.stringify(te)}catch{}}});const o=u=>{const h=qe(u);h&&h.href&&h.hasAttribute("sveltekit:prefetch")&&Ue(Je(h))};let p;const w=u=>{clearTimeout(p),p=setTimeout(()=>{var h;(h=u.target)==null||h.dispatchEvent(new CustomEvent("sveltekit:trigger_prefetch",{bubbles:!0}))},20)};addEventListener("touchstart",o),addEventListener("mousemove",w),addEventListener("sveltekit:trigger_prefetch",o),addEventListener("click",u=>{if(!F||u.button||u.which!==1||u.metaKey||u.ctrlKey||u.shiftKey||u.altKey||u.defaultPrevented)return;const h=qe(u);if(!h||!h.href)return;const b=h instanceof SVGAElement,v=Je(h);if(!b&&v.origin==="null")return;const _=(h.getAttribute("rel")||"").split(/\s+/);if(h.hasAttribute("download")||_.includes("external")||h.hasAttribute("sveltekit:reload")||(b?h.target.baseVal:h.target))return;const[m,R]=v.href.split("#");if(R!==void 
0&&m===location.href.split("#")[0]){ue=!0,be(j),f.page.set({...de,url:v}),f.page.notify();return}_e({url:v,scroll:h.hasAttribute("sveltekit:noscroll")?ve():null,keepfocus:!1,redirect_chain:[],details:{state:{},replaceState:v.href===location.href},accepted:()=>u.preventDefault(),blocked:()=>u.preventDefault()})}),addEventListener("popstate",u=>{if(u.state&&F){if(u.state[B]===j)return;_e({url:new URL(location.href),scroll:te[u.state[B]],keepfocus:!1,redirect_chain:[],details:null,accepted:()=>{j=u.state[B]},blocked:()=>{const h=j-u.state[B];history.go(h)}})}}),addEventListener("hashchange",()=>{ue&&(ue=!1,history.replaceState({...history.state,[B]:++j},"",location.href))})},_hydrate:async({status:o,error:p,nodes:w,params:u,routeId:h})=>{const b=new URL(location.href),v=[];let _={},m,R;try{for(let S=0;SPrompt Segment Anything (zero-shot instance segmentation demo) -Github link: [Link](https://github.com/RockeyCoss/Prompt-Segment-Anything) -You can select the model you want to use from the "Model" dropdown menu and click "Submit" to segment the image you uploaded to the "Input Image" box. -""" -if SPACE_ID is not None: - description += f'\n

For faster inference without waiting in queue, you may duplicate the space and upgrade to GPU in settings. Duplicate Space

' - - -def main(): - with gr.Blocks() as demo: - gr.Markdown(description) - with gr.Column(): - with gr.Row(): - with gr.Column(): - input_img = gr.Image(type="numpy", label="Input Image") - model_type = gr.Dropdown(choices=list(config_dict.keys()), - value=list(config_dict.keys())[0], - label='Model', - multiselect=False) - with gr.Row(): - clear_btn = gr.Button(value="Clear") - submit_btn = gr.Button(value="Submit") - output_img = gr.Image(type="numpy", label="Output") - gr.Examples( - examples=[["./assets/img1.jpg", "r50-hdetr_sam-vit-b"], - ["./assets/img2.jpg", "r50-hdetr_sam-vit-b"], - ["./assets/img3.jpg", "r50-hdetr_sam-vit-b"], - ["./assets/img4.jpg", "r50-hdetr_sam-vit-b"]], - inputs=[input_img, model_type], - outputs=output_img, - fn=inference - ) - - submit_btn.click(inference, - inputs=[input_img, model_type], - outputs=output_img) - clear_btn.click(lambda: [None, None], None, [input_img, output_img], queue=False) - - demo.queue() - demo.launch() - - -if __name__ == '__main__': - main() diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/core/post_processing/merge_augs.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/core/post_processing/merge_augs.py deleted file mode 100644 index 2ac4603a1aea9e463e35d7041a0bf00bd3b13529..0000000000000000000000000000000000000000 --- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/core/post_processing/merge_augs.py +++ /dev/null @@ -1,154 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy -import warnings - -import numpy as np -import torch -from mmcv import ConfigDict -from mmcv.ops import nms - -from ..bbox import bbox_mapping_back - - -def merge_aug_proposals(aug_proposals, img_metas, cfg): - """Merge augmented proposals (multiscale, flip, etc.) - - Args: - aug_proposals (list[Tensor]): proposals from different testing - schemes, shape (n, 5). Note that they are not rescaled to the - original image size. - - img_metas (list[dict]): list of image info dict where each dict has: - 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - `mmdet/datasets/pipelines/formatting.py:Collect`. - - cfg (dict): rpn test config. - - Returns: - Tensor: shape (n, 4), proposals corresponding to original image scale. - """ - - cfg = copy.deepcopy(cfg) - - # deprecate arguments warning - if 'nms' not in cfg or 'max_num' in cfg or 'nms_thr' in cfg: - warnings.warn( - 'In rpn_proposal or test_cfg, ' - 'nms_thr has been moved to a dict named nms as ' - 'iou_threshold, max_num has been renamed as max_per_img, ' - 'name of original arguments and the way to specify ' - 'iou_threshold of NMS will be deprecated.') - if 'nms' not in cfg: - cfg.nms = ConfigDict(dict(type='nms', iou_threshold=cfg.nms_thr)) - if 'max_num' in cfg: - if 'max_per_img' in cfg: - assert cfg.max_num == cfg.max_per_img, f'You set max_num and ' \ - f'max_per_img at the same time, but get {cfg.max_num} ' \ - f'and {cfg.max_per_img} respectively' \ - f'Please delete max_num which will be deprecated.' - else: - cfg.max_per_img = cfg.max_num - if 'nms_thr' in cfg: - assert cfg.nms.iou_threshold == cfg.nms_thr, f'You set ' \ - f'iou_threshold in nms and ' \ - f'nms_thr at the same time, but get ' \ - f'{cfg.nms.iou_threshold} and {cfg.nms_thr}' \ - f' respectively. Please delete the nms_thr ' \ - f'which will be deprecated.' 
- - recovered_proposals = [] - for proposals, img_info in zip(aug_proposals, img_metas): - img_shape = img_info['img_shape'] - scale_factor = img_info['scale_factor'] - flip = img_info['flip'] - flip_direction = img_info['flip_direction'] - _proposals = proposals.clone() - _proposals[:, :4] = bbox_mapping_back(_proposals[:, :4], img_shape, - scale_factor, flip, - flip_direction) - recovered_proposals.append(_proposals) - aug_proposals = torch.cat(recovered_proposals, dim=0) - merged_proposals, _ = nms(aug_proposals[:, :4].contiguous(), - aug_proposals[:, -1].contiguous(), - cfg.nms.iou_threshold) - scores = merged_proposals[:, 4] - _, order = scores.sort(0, descending=True) - num = min(cfg.max_per_img, merged_proposals.shape[0]) - order = order[:num] - merged_proposals = merged_proposals[order, :] - return merged_proposals - - -def merge_aug_bboxes(aug_bboxes, aug_scores, img_metas, rcnn_test_cfg): - """Merge augmented detection bboxes and scores. - - Args: - aug_bboxes (list[Tensor]): shape (n, 4*#class) - aug_scores (list[Tensor] or None): shape (n, #class) - img_shapes (list[Tensor]): shape (3, ). - rcnn_test_cfg (dict): rcnn test config. - - Returns: - tuple: (bboxes, scores) - """ - recovered_bboxes = [] - for bboxes, img_info in zip(aug_bboxes, img_metas): - img_shape = img_info[0]['img_shape'] - scale_factor = img_info[0]['scale_factor'] - flip = img_info[0]['flip'] - flip_direction = img_info[0]['flip_direction'] - bboxes = bbox_mapping_back(bboxes, img_shape, scale_factor, flip, - flip_direction) - recovered_bboxes.append(bboxes) - bboxes = torch.stack(recovered_bboxes).mean(dim=0) - if aug_scores is None: - return bboxes - else: - scores = torch.stack(aug_scores).mean(dim=0) - return bboxes, scores - - -def merge_aug_scores(aug_scores): - """Merge augmented bbox scores.""" - if isinstance(aug_scores[0], torch.Tensor): - return torch.mean(torch.stack(aug_scores), dim=0) - else: - return np.mean(aug_scores, axis=0) - - -def merge_aug_masks(aug_masks, img_metas, rcnn_test_cfg, weights=None): - """Merge augmented mask prediction. - - Args: - aug_masks (list[ndarray]): shape (n, #class, h, w) - img_shapes (list[ndarray]): shape (3, ). - rcnn_test_cfg (dict): rcnn test config. - - Returns: - tuple: (bboxes, scores) - """ - recovered_masks = [] - for mask, img_info in zip(aug_masks, img_metas): - flip = img_info[0]['flip'] - if flip: - flip_direction = img_info[0]['flip_direction'] - if flip_direction == 'horizontal': - mask = mask[:, :, :, ::-1] - elif flip_direction == 'vertical': - mask = mask[:, :, ::-1, :] - elif flip_direction == 'diagonal': - mask = mask[:, :, :, ::-1] - mask = mask[:, :, ::-1, :] - else: - raise ValueError( - f"Invalid flipping direction '{flip_direction}'") - recovered_masks.append(mask) - - if weights is None: - merged_masks = np.mean(recovered_masks, axis=0) - else: - merged_masks = np.average( - np.array(recovered_masks), axis=0, weights=np.array(weights)) - return merged_masks diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/detectors/nasfcos.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/detectors/nasfcos.py deleted file mode 100644 index a34c2280f59f93139e716b54ef1799fc0941149f..0000000000000000000000000000000000000000 --- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/detectors/nasfcos.py +++ /dev/null @@ -1,22 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from ..builder import DETECTORS -from .single_stage import SingleStageDetector - - -@DETECTORS.register_module() -class NASFCOS(SingleStageDetector): - """NAS-FCOS: Fast Neural Architecture Search for Object Detection. - - https://arxiv.org/abs/1906.0442 - """ - - def __init__(self, - backbone, - neck, - bbox_head, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - super(NASFCOS, self).__init__(backbone, neck, bbox_head, train_cfg, - test_cfg, pretrained, init_cfg) diff --git a/spaces/rorallitri/biomedical-language-models/logs/Isocpeur Bold Font A Great Choice for Your Next Project.md b/spaces/rorallitri/biomedical-language-models/logs/Isocpeur Bold Font A Great Choice for Your Next Project.md deleted file mode 100644 index ba88fba62a7f0f32de00c277bfd1e4e1e778c173..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Isocpeur Bold Font A Great Choice for Your Next Project.md +++ /dev/null @@ -1,9 +0,0 @@ -
-

All rights to the fonts offered on this website are reserved by their owners (authors, designers). The license shown on the font page only reflects the data we have received. For detailed information, please read the files (e.g., readme.txt) included in the archive, visit the website given by the author (designer), or contact them directly if you have any doubt.
If no author (designer) or license is reported, it means there is no information available for the given font; it does not mean that the font is free.

-

This is the page for the ISOCPEUR font. You can download it here for free and without registration. This entry was published on Friday, September 16th, 2011, at 05:10 AM and was placed in the Regular catalog. The version of ISOCPEUR is 1.02 - 02/12/98. This page has been viewed 5187 times, and the file has been downloaded 5497 times.

-

Isocpeur Bold Font


Download File: https://tinurll.com/2uzmlu



-

2. Unlike Word processors, bold is not an available option when editing text in Nitro. For example, if you are using Calibri and you want to make a section bold, you have to swap to the 'Calibri-Bold' font. To do this, click on the Edit tool, then select the text. The format tab will appear and here you can change the font type using the dropdown menu. Kindly refer to the screenshot attached.

-

Thanks for your help, but I downloaded the latest build as you suggested, and unfortunately, the Montserrat fonts still do not appear. I ran a Support Tools Report and see the fonts that I want at the bottom of the fonts list (see below),

-

[FONTS]
Number of fonts installed: 426
System
Terminal
Fixedsys
Modern
Roman
Script
Courier
MS Serif
MS Sans Serif
Small Fonts
Adobe Caslon Pro Bold
Adobe Caslon Pro
Adobe Garamond Pro Bold
Adobe Garamond Pro
Arno Pro
Arno Pro Caption
Arno Pro Display
Arno Pro SmText
Arno Pro Subhead
Arno Pro Light Display
Arno Pro Smbd
Arno Pro Smbd Caption
Arno Pro Smbd Display
Arno Pro Smbd SmText
Arno Pro Smbd Subhead
Bell Gothic Std Black
Bell Gothic Std Light
Bickham Script Pro Regular
Bickham Script Pro Semibold
Birch Std
Blackoak Std
Brush Script Std
Chaparral Pro
Charlemagne Std
Cooper Std Black
Eccentric Std
Garamond Premr Pro
Garamond Premr Pro Smbd
Giddyup Std
Hobo Std
Kozuka Gothic Pro B
Kozuka Gothic Pro EL
Kozuka Gothic Pro H
Kozuka Gothic Pro L
Kozuka Gothic Pro M
Kozuka Gothic Pro R
Kozuka Mincho Pro B
Kozuka Mincho Pro EL
Kozuka Mincho Pro H
Kozuka Mincho Pro L
Kozuka Mincho Pro M
Kozuka Mincho Pro R
Letter Gothic Std
Lithos Pro Regular
Mesquite Std
Minion Pro
Minion Pro Cond
Minion Pro Med
Minion Pro SmBd
Myriad Pro
Myriad Pro Cond
Myriad Pro Light
Nueva Std Cond
OCR A Std
Orator Std
Poplar Std
Prestige Elite Std
Rosewood Std Regular
Stencil Std
Tekton Pro
Tekton Pro Cond
Tekton Pro Ext
Trajan Pro
Rosewood Std Fill
Ryo Display Std B
Ryo Display Std EB
Ryo Display Std H
Ryo Display Std M
Ryo Display Std SB
Ryo Gothic Std B
Ryo Gothic Std EL
Ryo Gothic Std H
Ryo Gothic Std L
Ryo Gothic Std M
Ryo Gothic Std R
Ryo Gothic Std UH
Ryo Text Std EL
Ryo Text Std L
Ryo Text Std M
Ryo Text Std R
Adobe Wood Type Ornaments Std
Adobe Fangsong Std R
Adobe Heiti Std R
Adobe Kaiti Std R
Adobe Ming Std L
Adobe Myungjo Std M
Adobe Song Std L
Bernhard Modern Std Roman
Caflisch Script Pro Regular
Chaparral Pro Light
Chaparral Pro SmBd
Kozuka Gothic Std B
Kozuka Gothic Std EL
Kozuka Gothic Std H
Kozuka Gothic Std L
Kozuka Gothic Std M
Kozuka Gothic Std R
Kozuka Mincho Std B
Kozuka Mincho Std EL
Kozuka Mincho Std H
Kozuka Mincho Std L
Kozuka Mincho Std M
Kozuka Mincho Std R
Lithos Pro Light
Minion Std Black
Myriad Pro Black
Myriad Std Sketch
Myriad Std Tilt
News Gothic Std
Nueva Std
Nueva Std Light
Source Sans Pro Black
Adobe Devanagari
Trueno
Wheat Aged
Wheat
Wheat Rough
Barley Aged
Barley
Barley Rough
Montserrat Alternates Black
Montserrat Alternates
Montserrat Alternates ExtraBold
Montserrat Alternates ExLight
Montserrat Alternates Light
Montserrat Alternates Medium
Montserrat Alternates SemiBold
Montserrat Alternates Thin
Intro Regular Italic
Intro Thin
Intro Thin Italic
Intro Black
Intro Black Alt
Intro Black Inline
Intro Black Italic
Intro Bold
Intro Bold Alt
Intro Bold Italic
Intro Book
Intro Book Alt
Intro Book Italic
Intro Light
Intro Light Alt
Intro Light Italic
Intro Regular
Intro Regular Alt
Marlett
Arial
Arabic Transparent
Arial Baltic
Arial CE
Arial CYR
Arial Greek
Arial TUR
Arial Black
Bahnschrift Light
Bahnschrift SemiLight
Bahnschrift
Bahnschrift SemiBold
Bahnschrift Light SemiCondensed
Bahnschrift SemiLight SemiConde
Bahnschrift SemiCondensed
Bahnschrift SemiBold SemiConden
Bahnschrift Light Condensed
Bahnschrift SemiLight Condensed
Bahnschrift Condensed
Bahnschrift SemiBold Condensed
Calibri
Calibri Light
Cambria
Cambria Math
Candara
Candara Light
Comic Sans MS
Consolas
Constantia
Corbel
Corbel Light
Courier New
Courier New Baltic
Courier New CE
Courier New CYR
Courier New Greek
Courier New TUR
Ebrima
Franklin Gothic Medium
Gabriola
Gadugi
Georgia
Impact
Ink Free
Javanese Text
Leelawadee UI
Leelawadee UI Semilight
Lucida Console
Lucida Sans Unicode
Malgun Gothic
Malgun Gothic Semilight
Microsoft Himalaya
Microsoft JhengHei
Microsoft JhengHei UI
Microsoft JhengHei Light
Microsoft JhengHei UI Light
Microsoft New Tai Lue
Microsoft PhagsPa
Microsoft Sans Serif
Microsoft Tai Le
Microsoft YaHei
Microsoft YaHei UI
Microsoft YaHei Light
Microsoft YaHei UI Light
Microsoft Yi Baiti
MingLiU-ExtB
PMingLiU-ExtB
MingLiU_HKSCS-ExtB
Mongolian Baiti
MS Gothic
MS UI Gothic
MS PGothic
MV Boli
Myanmar Text
Nirmala UI
Nirmala UI Semilight
Palatino Linotype
Segoe MDL2 Assets
Segoe Print
Segoe Script
Segoe UI
Segoe UI Black
Segoe UI Emoji
Segoe UI Historic
Segoe UI Light
Segoe UI Semibold
Segoe UI Semilight
Segoe UI Symbol
SimSun
NSimSun
SimSun-ExtB
Sitka Small
Sitka Text
Sitka Subheading
Sitka Heading
Sitka Display
Sitka Banner
Sylfaen
Symbol
Tahoma
Times New Roman
Times New Roman Baltic
Times New Roman CE
Times New Roman CYR
Times New Roman Greek
Times New Roman TUR
Trebuchet MS
Verdana
Webdings
Wingdings
Yu Gothic
Yu Gothic UI
Yu Gothic UI Semibold
Yu Gothic Light
Yu Gothic UI Light
Yu Gothic Medium
Yu Gothic UI Semilight
HoloLens MDL2 Assets
Agency FB
Algerian
Book Antiqua
Arial Narrow
Arial Rounded MT Bold
Baskerville Old Face
Bauhaus 93
Bell MT
Bernard MT Condensed
Bodoni MT
Bodoni MT Black
Bodoni MT Condensed
Bodoni MT Poster Compressed
Bookman Old Style
Bradley Hand ITC
Britannic Bold
Berlin Sans FB
Berlin Sans FB Demi
Broadway
Brush Script MT
Bookshelf Symbol 7
Californian FB
Calisto MT
Castellar
Century Schoolbook
Centaur
Century
Chiller
Colonna MT
Cooper Black
Copperplate Gothic Bold
Copperplate Gothic Light
Curlz MT
Elephant
Engravers MT
Eras Bold ITC
Eras Demi ITC
Eras Light ITC
Eras Medium ITC
Felix Titling
Forte
Franklin Gothic Book
Franklin Gothic Demi
Franklin Gothic Demi Cond
Franklin Gothic Heavy
Franklin Gothic Medium Cond
Freestyle Script
French Script MT
Footlight MT Light
Garamond
Gigi
Gill Sans MT
Gill Sans MT Condensed
Gill Sans Ultra Bold Condensed
Gill Sans Ultra Bold
Gloucester MT Extra Condensed
Gill Sans MT Ext Condensed Bold
Century Gothic
Goudy Old Style
Goudy Stout
Harlow Solid Italic
Harrington
Haettenschweiler
High Tower Text
Imprint MT Shadow
Informal Roman
Blackadder ITC
Edwardian Script ITC
Kristen ITC
Jokerman
Juice ITC
Kunstler Script
Wide Latin
Lucida Bright
Lucida Calligraphy
Lucida Fax
Lucida Handwriting
Lucida Sans
Lucida Sans Typewriter
Magneto
Maiandra GD
Matura MT Script Capitals
Mistral
Modern No. 20
Monotype Corsiva
Niagara Engraved
Niagara Solid
OCR A Extended
Old English Text MT
Onyx
MS Outlook
Palace Script MT
Papyrus
Parchment
Perpetua
Perpetua Titling MT
Playbill
Poor Richard
Pristina
Rage Italic
Ravie
MS Reference Sans Serif
MS Reference Specialty
Rockwell Condensed
Rockwell
Rockwell Extra Bold
Script MT Bold
Showcard Gothic
Snap ITC
Stencil
Tw Cen MT
Tw Cen MT Condensed
Tw Cen MT Condensed Extra Bold
Tempus Sans ITC
Viner Hand ITC
Vivaldi
Vladimir Script
Wingdings 2
Wingdings 3
MT Extra
Lato
Lato Black
Oswald
Dosis
Source Sans Pro Semibold
Montserrat
OCR-A II
OCR B MT
QuickType II Condensed
QuickType II Mono
QuickType II Pi
QuickType II
Source Sans Pro
ZWAdobeF
Euro Sign
Moon
AvenirLT-Medium
AvenirLT-Roman
FontAwesome
icon-brand
GLYPHICONS Halflings
icon-large
icon-small
icon-ui
Montserrat Medium
Montserrat SemiBold
Montserrat Thin
Montserrat Black
Montserrat ExtraBold
Montserrat Light
Montserrat ExtraLight

-
-
\ No newline at end of file diff --git a/spaces/rosenthal/chess/app.py b/spaces/rosenthal/chess/app.py deleted file mode 100644 index 1dcbc57b610bb19d778115ef63946ea645d20208..0000000000000000000000000000000000000000 --- a/spaces/rosenthal/chess/app.py +++ /dev/null @@ -1,57 +0,0 @@ -import re -import gradio as gr - -from chessfenbot.chessboard_finder import findGrayscaleTilesInImage -from chessfenbot.tensorflow_chessbot import ChessboardPredictor -from chessfenbot.helper_functions import shortenFEN - - -def predict(img, active="w"): - """ - main predict function for gradio. - Predict a chessboard FEN. - Wraps model from https://github.com/Elucidation/tensorflow_chessbot/tree/chessfenbot - - Args: - img (PIL image): input image of a chess board - active (str): defaults to "w" - """ - - # Look for chessboard in image, get corners and split chessboard into tiles - tiles, corners = findGrayscaleTilesInImage(img) - - # Initialize predictor, takes a while, but only needed once - predictor = ChessboardPredictor(frozen_graph_path='chessfenbot/saved_models/frozen_graph.pb') - fen, tile_certainties = predictor.getPrediction(tiles) - predictor.close() - short_fen = shortenFEN(fen) - # Use the worst case certainty as our final uncertainty score - certainty = tile_certainties.min() - - print('Per-tile certainty:') - print(tile_certainties) - print("Certainty range [%g - %g], Avg: %g" % ( - tile_certainties.min(), tile_certainties.max(), tile_certainties.mean())) - - # predicted FEN - fen_out = f"{short_fen} {active} - - 0 1" - # certainty - certainty = "%.1f%%" % (certainty*100) - # link to analysis board on Lichess - lichess_link = f'https://lichess.org/analysis/standard/{re.sub(" ", "_", fen_out)}' - - return fen_out, certainty, lichess_link - - -gr.Interface( - predict, - inputs=gr.inputs.Image(label="Upload chess board", type="pil"), - outputs=[ - gr.Textbox(label="FEN"), - gr.Textbox(label="certainty"), - gr.Textbox(label="Link to Lichess analysis board (copy and paste into URL)"), - ], - title="Chess FEN bot", - examples=["chessfenbot/example_input.png"], - description="Simple wrapper around TensorFlow Chessbot (https://github.com/Elucidation/tensorflow_chessbot)" -).launch() \ No newline at end of file diff --git a/spaces/samcaicn/bingai/src/components/chat-message.tsx b/spaces/samcaicn/bingai/src/components/chat-message.tsx deleted file mode 100644 index 7d304365676a1fcfb5cd043ce9d1dbf9e367e50d..0000000000000000000000000000000000000000 --- a/spaces/samcaicn/bingai/src/components/chat-message.tsx +++ /dev/null @@ -1,94 +0,0 @@ -import remarkGfm from 'remark-gfm' -import remarkMath from 'remark-math' -import supersub from 'remark-supersub' -import remarkBreaks from 'remark-breaks' -import { cn } from '@/lib/utils' -import { CodeBlock } from '@/components/ui/codeblock' -import { MemoizedReactMarkdown } from '@/components/markdown' -import { LearnMore } from './learn-more' -import { ChatMessageModel } from '@/lib/bots/bing/types' -import { useEffect } from 'react' -import { TurnCounter } from './turn-counter' - -export interface ChatMessageProps { - message: ChatMessageModel -} - -export function ChatMessage({ message, ...props }: ChatMessageProps) { - useEffect(() => { - if (document.body.scrollHeight - window.innerHeight - window.scrollY - 200 < 0) { - window.scrollBy(0, 200) - } - }, [message.text]) - - return message.text ? ( -
-
- {obj.alt} - } - } catch (e) { - } - return {obj.alt} - }, - p({ children }) { - return

{children}

- }, - code({ node, inline, className, children, ...props }) { - if (children.length) { - if (children[0] == '▍') { - return ( - - ) - } - - children[0] = (children[0] as string).replace('`▍`', '▍') - } - - const match = /language-(\w+)/.exec(className || '') - - if (inline) { - return ( - - {children} - - ) - } - - return ( - - ) - } - }} - > - {message.text} -
-
-
- {message.author === 'bot' && } - {message.author === 'bot' && } -
-
- ) : null -} diff --git a/spaces/scedlatioru/img-to-music/Kab Ke Bichhde Hue-kishore Kumar Asha Bhosle Hd-1080pl.md b/spaces/scedlatioru/img-to-music/Kab Ke Bichhde Hue-kishore Kumar Asha Bhosle Hd-1080pl.md deleted file mode 100644 index 095800454d9d6cbc43c0fc7f89788ea36e9ca081..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/Kab Ke Bichhde Hue-kishore Kumar Asha Bhosle Hd-1080pl.md +++ /dev/null @@ -1,108 +0,0 @@ -## Kab Ke Bichhde Hue-kishore Kumar Asha Bhosle Hd-1080pl - - - - - - ![Kab Ke Bichhde Hue-kishore Kumar Asha Bhosle Hd-1080pl](https://img.wynk.in/unsafe/320x180/top/https://s3-ap-south-1.amazonaws.com/wynk-music-cms/music/1440701509607/srch_saregama_INH109341090.jpg) - - - - - -**DOWNLOAD ……… [https://dropnobece.blogspot.com/?download=2tyq4W](https://dropnobece.blogspot.com/?download=2tyq4W)** - - - - - - - - - - - - - -# Kab Ke Bichhde Hue: A Classic Duet by Kishore Kumar and Asha Bhosle - - - -Kab Ke Bichhde Hue is a popular Hindi song from the 1981 movie Lawaaris, directed by Prakash Mehra. The song features the legendary singers Kishore Kumar and Asha Bhosle, who have given many hit duets in Bollywood. The song is composed by Kalyanji-Anandji and written by Anjaan. - - - -The song is a romantic and nostalgic number that expresses the longing of two lovers who have been separated for a long time. The lyrics describe how they miss each other's presence, voice, touch and smile. The song also has a catchy chorus that repeats the phrase "Kab Ke Bichhde Hue Hum Aaj Kaha Aa Ke Mile", which means "We have been separated for so long, where have we met today?". - - - -The song is picturized on Amitabh Bachchan and Zeenat Aman, who play the lead roles in the movie. Lawaaris is a drama film that revolves around an orphan who stumbles over reality in search for his parents. The film was a blockbuster at the box office and received several awards and nominations. The song Kab Ke Bichhde Hue was one of the highlights of the film and is still remembered as one of the best duets by Kishore Kumar and Asha Bhosle. - - - -If you want to listen to this song, you can find it on YouTube[^1^] [^2^] [^3^]. You can also enjoy the HD video quality of 1080p on some of the links. The song is also available on various music streaming platforms like Spotify, Gaana, JioSaavn and more. - - - -Kab Ke Bichhde Hue is not the only duet by Kishore Kumar and Asha Bhosle that has won the hearts of millions of listeners. The two singers have collaborated on many other songs that have become evergreen classics in Hindi cinema. Some of their most famous duets are: - - - -- Chhod Do Aanchal Zamana Kya Kahega from Paying Guest (1957) - -- Haal Kaisa Hai Janab Ka from Chalti Ka Naam Gaadi (1958) - -- Aankhon Mein Kya Ji from Nau Do Gyarah (1957) - -- Jhumka Gira Re from Mera Saaya (1966) - -- Aaj Rapat Jaayen To from Namak Halaal (1982) - -- Ek Main Aur Ek Tu from Khel Khel Mein (1975) - -- Jaane Jaan Dhoondta Phir Raha from Jawani Diwani (1972) - -- Kehdoon Tumhe from Deewaar (1975) - -- Pyar Mein Dil Pe Maar De Goli from Mahaan (1983) - -- Yeh Vaada Raha from Yeh Vaada Raha (1982) - - - -These songs showcase the versatility and chemistry of Kishore Kumar and Asha Bhosle, who could sing any genre of music with ease and flair. Whether it was a playful song, a romantic song, a sad song or a dance song, they could bring out the emotions and expressions of the characters and the situations. 
Their voices complemented each other perfectly and created a magical harmony that is still unmatched. - - - -If you want to listen to more duets by Kishore Kumar and Asha Bhosle, you can find a playlist on YouTube. You can also find their songs on various music streaming platforms like Spotify, Gaana, JioSaavn and more. - - - -Kishore Kumar and Asha Bhosle have not only received immense love and appreciation from their fans, but also from the critics and the industry. They have been honored with many awards and recognitions for their contribution to Indian music. Some of their notable awards are: - - - -- Asha Bhosle has won seven Filmfare Awards for Best Female Playback Singer out of 18 nominations. She also received a Special Award for Rangeela in 1996, and the Filmfare Lifetime Achievement Award in 2001. [^1^] - -- Kishore Kumar has won eight Filmfare Awards for Best Male Playback Singer out of 27 nominations. He also received a Special Award for Saagar in 1986, and the Filmfare Lifetime Achievement Award in 1990. [^2^] - -- Asha Bhosle and Kishore Kumar have won four Bengal Film Journalists' Association Awards for Best Female Playback Singer and Best Male Playback Singer respectively. [^3^] - -- Asha Bhosle has won 18 Maharashtra State Film Awards for Best Female Playback Singer. - -- Kishore Kumar has won four National Film Awards for Best Male Playback Singer. - -- Asha Bhosle has received the Padma Vibhushan, India's second-highest civilian honor, in 2008. She has also received the Dadasaheb Phalke Award, India's highest award in cinema, in 2000. - -- Kishore Kumar has received the Padma Shri, India's fourth-highest civilian honor, in 1970. He has also been awarded the Lata Mangeshkar Award by the Madhya Pradesh government and the RD Burman Award by Filmfare. - - - -These awards are a testimony to the talent and legacy of Kishore Kumar and Asha Bhosle, who have enriched Indian music with their golden voices. - - 145887f19f - - - - - diff --git a/spaces/scedlatioru/img-to-music/example/Code Easeus Data Recovery Wizard.md b/spaces/scedlatioru/img-to-music/example/Code Easeus Data Recovery Wizard.md deleted file mode 100644 index bd16ca5eafa9775ede2667af97975c9357be6369..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Code Easeus Data Recovery Wizard.md +++ /dev/null @@ -1,71 +0,0 @@ - -

Code EaseUS Data Recovery Wizard: How to Recover Your Lost Data

-

Data loss is a common problem that can happen to anyone, whether it's due to accidental deletion, formatting, a virus attack, a system crash, or other reasons. If you have lost important data and want to get it back, you need reliable, professional data recovery software. One of the best options you can try is Code EaseUS Data Recovery Wizard.

-

Code EaseUS Data Recovery Wizard is powerful, easy-to-use data recovery software that can help you recover deleted, formatted, or inaccessible data from a wide range of devices, such as hard drives, USB flash drives, memory cards, digital cameras, and mobile phones. It supports more than 1000 file types, including photos, videos, audio, documents, emails, and more. It also has a high recovery success rate and keeps your data safe and private.

-

Code easeus data recovery wizard


DOWNLOAD: https://gohhs.com/2uEAsC



-

Key Features of Code EaseUS Data Recovery Wizard

-

Code EaseUS Data Recovery Wizard has many features that make it stand out from other data recovery software. Here are some of them:

-
    -
  • Quick and Easy: You can recover your lost data in just three simple steps: select a location, scan for lost files, and preview and recover them. You don't need any technical skills or experience to use it.
  • -
  • Flexible Scan Mode: You can choose between two scan modes: quick scan and deep scan. Quick scan can find your lost files in a short time, while deep scan can find more files by scanning every sector of your device.
  • -
  • Recover Accidentally Deleted Files: You can recover files that you have deleted by mistake or emptied from the recycle bin. You can also recover files that have been deleted by other programs or viruses.
  • -
  • Operating System Crash Recovery: You can recover data from a crashed or unbootable Windows system by creating a bootable USB drive or CD/DVD with Code EaseUS Data Recovery Wizard. You can then boot your computer from the bootable media and recover your data.
  • -
-

How to Recover Deleted Data With Code EaseUS Data Recovery Wizard

-

If you want to use Code EaseUS Data Recovery Wizard to recover your deleted data, you need to follow these steps:

-
    -
  1. Download and install Code EaseUS Data Recovery Wizard on your computer. You can get it from the official website or use one of the free license codes provided in this article.
  2. -
  3. Launch the software and select the location where you lost your data. It can be a hard drive partition, an external device, or a specific folder.
  4. -
  5. Click "Scan" to start scanning for lost files. The software will first perform a quick scan and then a deep scan. You can pause or stop the scan at any time.
  6. -
  7. After the scan is completed, you can preview the found files by type, name, date, or path. You can also use the filter or search function to find your desired files quickly.
  8. -
  9. Select the files you want to recover and click "Recover" to save them to another location. You should not save them to the same place where you lost them to avoid overwriting.
  10. -
-

Conclusion

-

Data loss is a frustrating and stressful situation that can happen to anyone. However, with Code EaseUS Data Recovery Wizard, you can recover your lost data easily and safely. Whether you have lost your data due to deletion, formatting, virus attack, system crash, or other reasons, you can use this software to get it back in no time.

-

If you want to try Code EaseUS Data Recovery Wizard for free, you can use one of the free license codes provided in this article. However, if you want to enjoy more features and benefits of this software, you should buy the official license code from the official website. By doing so, you can get lifetime updates, technical support, and data protection guarantee.

-

Don't let data loss ruin your day. Download Code EaseUS Data Recovery Wizard now and recover your precious data with ease!

-

How to Get Code EaseUS Data Recovery Wizard for Free or with a Discount

-

Code EaseUS Data Recovery Wizard is not free software, but there are a few ways to get it for free or with a discount. Here are some of them:

-

-
    -
  • Free Trial: You can download and install Code EaseUS Data Recovery Wizard for free from the official website and use it to recover up to 2 GB of data for free. This is a good way to test the software before buying it.
  • -
  • Free License Code: You can find some free license codes or activation codes for Code EaseUS Data Recovery Wizard on some websites or blogs, such as TechMaina or Followchain. These codes are usually shared by users who have bought the software and don't need them anymore. However, you should be careful when using these codes, as they may not work or may be illegal.
  • -
  • Discount Coupon: You can also get a discount coupon for Code EaseUS Data Recovery Wizard from some websites or platforms, such as Dealspotr or CouponBirds. These coupons can help you save up to 51% off the original price of the software. You can apply these coupons at the checkout page when you buy the software from the official website.
  • -
-

Why You Should Choose Code EaseUS Data Recovery Wizard

-

Code EaseUS Data Recovery Wizard is one of the best data recovery tools on the market, and there are many reasons to choose it. Here are some of them:

-
    -
  • Reliable and Professional: Code EaseUS Data Recovery Wizard has been trusted by millions of users and professionals around the world for over 15 years. It has been certified by Microsoft and Norton as a safe and reliable software. It also has a 24/7 technical support team that can help you with any issues or questions.
  • -
  • Powerful and Comprehensive: Code EaseUS Data Recovery Wizard can recover any type of data from any device and situation. It can recover photos, videos, audio, documents, emails, and more from hard drive, USB flash drive, memory card, digital camera, mobile phone, etc. It can also recover data from formatted, corrupted, encrypted, or damaged devices. It can also recover data from system crash, virus attack, partition loss, or other scenarios.
  • -
  • Easy and Fast: Code EaseUS Data Recovery Wizard has a user-friendly interface and a simple operation process. You can recover your lost data in just three steps: select, scan, and recover. You can also preview your files before recovering them. The software can scan your device quickly and thoroughly with its advanced scan algorithm.
  • -
-

Conclusion

-

Data recovery is not a difficult task anymore with Code EaseUS Data Recovery Wizard. This software can help you recover your lost data easily and safely from any device and situation. Whether you have deleted your data by mistake or lost it due to other reasons, you can use this software to get it back in no time.

-

If you want to try Code EaseUS Data Recovery Wizard for free or with a discount, you can use one of the methods mentioned above. However, if you want to enjoy more features and benefits of this software, you should buy the official license code from the official website. By doing so, you can get lifetime updates, technical support, and data protection guarantee.

-

Don't let data loss ruin your day. Download Code EaseUS Data Recovery Wizard now and recover your precious data with ease!

-

How to Activate and Upgrade Code EaseUS Data Recovery Wizard

-

If you have bought Code EaseUS Data Recovery Wizard from the official website or other authorized platforms, you will receive a license code or activation code via email. You can use this code to activate and upgrade the software to enjoy more features and benefits. Here is how to do it:

-
    -
  1. Launch Code EaseUS Data Recovery Wizard on your computer and click "Activate" on the top right corner.
  2. -
  3. Enter your license code or activation code in the pop-up window and click "Activate".
  4. -
  5. Wait for the activation process to complete and click "OK".
  6. -
  7. Restart the software and you will see the upgraded version on the main interface.
  8. -
-

If you have any problems with the activation or upgrade process, you can contact the 24/7 technical support team via email or live chat.

-

How to Use Code EaseUS Data Recovery Wizard Effectively

-

Code EaseUS Data Recovery Wizard is a user-friendly and powerful data recovery software that can help you recover your lost data in various scenarios. However, there are some tips and tricks that can help you use it more effectively and efficiently. Here are some of them:

-
    -
  • Stop Using Your Device Immediately: When you lose your data, you should stop using your device immediately to avoid overwriting or damaging your data. You should also disconnect your device from the internet or any other devices to prevent virus infection or data leakage.
  • -
  • Select a Suitable Scan Mode: Code EaseUS Data Recovery Wizard offers two scan modes: quick scan and deep scan. Quick scan can find your lost files quickly, but it may not find all of them. Deep scan can find more files, but it may take longer. Choose the scan mode that suits your needs and situation.
  • -
  • Preview and Filter Your Files: Before recovering your files, you can preview them by type, name, date, or path. You can also use the filter or search function to find your desired files quickly. This can help you save time and disk space by recovering only what you need.
  • -
  • Save Your Files to Another Location: When recovering your files, you should save them to another location rather than the original one. This can avoid overwriting or losing your files again. You can also save your files to an external device or cloud storage for backup.
  • -
-

Conclusion

-

Code EaseUS Data Recovery Wizard is a comprehensive and reliable data recovery software that can help you recover your lost data from any device and situation. It has many features and advantages that make it stand out from other data recovery software. It also has a free trial version that allows you to recover up to 2 GB of data for free.

-

If you want to try Code EaseUS Data Recovery Wizard for free or with a discount, you can use one of the methods mentioned above. However, if you want to enjoy more features and benefits of this software, you should buy the official license code from the official website. By doing so, you can get lifetime updates, technical support, and data protection guarantee.

-

Don't let data loss ruin your day. Download Code EaseUS Data Recovery Wizard now and recover your precious data with ease!

-

-
-
\ No newline at end of file diff --git a/spaces/scedlatioru/img-to-music/example/Crack KeygenAutoCAD Mechanical 2017 Download !!INSTALL!!.md b/spaces/scedlatioru/img-to-music/example/Crack KeygenAutoCAD Mechanical 2017 Download !!INSTALL!!.md deleted file mode 100644 index 983dd1bd9a9d7cd2dc4c6aba0fdee9e9a3bdc099..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Crack KeygenAutoCAD Mechanical 2017 Download !!INSTALL!!.md +++ /dev/null @@ -1,64 +0,0 @@ - -

How to Download Crack Keygen AutoCAD Mechanical 2017 and Activate the Software

-

AutoCAD Mechanical 2017 is a powerful software that helps engineers and designers to create mechanical drawings and designs with ease. It is part of the Autodesk product line that includes AutoCAD, Inventor, Revit, and more. AutoCAD Mechanical 2017 offers many features and tools that can improve your productivity and accuracy, such as:

-
    -
  • Comprehensive libraries of standards-based parts and tools
  • -
  • Automated generation of bills of materials and parts lists
  • -
  • Advanced layer management and dimensioning
  • -
  • Integrated simulation and analysis tools
  • -
  • Collaboration and data exchange capabilities
  • -
-

However, AutoCAD Mechanical 2017 is not a cheap software. It costs around $4,195 for a single-user license, which may be too expensive for some users. That's why some people may look for a crack keygen AutoCAD Mechanical 2017 download that can help them to activate the software for free and use it without any limitations.

-

crack keygenAutoCAD Mechanical 2017 download


DOWNLOAD ⚙⚙⚙ https://gohhs.com/2uEA3u



-

In this article, we will show you how to find and download a crack keygen AutoCAD Mechanical 2017 that works and is safe for your computer. We will also show you how to install and activate the software using the crack keygen. But before we start, we want to remind you that using a crack keygen AutoCAD Mechanical 2017 download is illegal and unethical. It violates the terms of service of Autodesk and may cause legal problems or security risks. Therefore, we do not recommend or endorse using a crack keygen AutoCAD Mechanical 2017 download. If you like the software and want to support the developers, please buy it from the official website.

-

Where to Find Crack Keygen AutoCAD Mechanical 2017 Download

-

There are many websites that claim to offer crack keygen AutoCAD Mechanical 2017 download, but not all of them are reliable or trustworthy. Some of them may contain viruses, malware, or fake files that can harm your computer or steal your personal information. Therefore, you need to be careful and do some research before downloading anything from the internet.

-

One way to find a reputable source for crack keygen AutoCAD Mechanical 2017 download is to use a torrent site. Torrent sites are platforms that allow users to share files with each other using peer-to-peer technology. You can use a torrent client like uTorrent or BitTorrent to download files from torrent sites.

-

However, not all torrent sites are safe or legal either. Some of them may host copyrighted or illegal content that can get you in trouble with the law or your internet service provider. Therefore, you need to use a VPN (virtual private network) service to hide your IP address and encrypt your traffic when using torrent sites. You also need to use an antivirus program to scan the files before opening them.

-

Some of the popular and reliable torrent sites that you can use to find crack keygen AutoCAD Mechanical 2017 download are:

-

-
    -
  • The Pirate Bay
  • -
  • RARBG
  • -
  • 1337x
  • -
  • LimeTorrents
  • -
  • Torrentz2
  • -
-

On these sites, you can search for crack keygen AutoCAD Mechanical 2017 download using the search bar or browse through the categories. You can also check the comments, ratings, and seeders of each torrent to see if it is legit and working. Once you find a suitable torrent, you can click on the magnet link or download button to start downloading it.

-

How to Install and Activate AutoCAD Mechanical 2017 with Crack Keygen

-

After downloading the crack keygen AutoCAD Mechanical 2017, you need to install and activate it on your computer. The installation process may vary depending on the source of the crack keygen, but generally it involves these steps:

-
    -
  1. Extract the downloaded file using a program like WinRAR or 7-Zip.
  2. -
  3. Open the extracted folder and look for a setup.exe file or an ISO file.
  4. -
  5. If there is a setup.exe file, run it as administrator and follow the instructions on the screen.
  6. -
  7. If there is an ISO file, mount it using a program like Daemon Tools or PowerISO.
  8. -
  9. Open the mounted drive and run the setup.exe file as administrator.
  10. -
  11. Choose a destination folder for the software installation and click next.
  12. -
  13. Wait for the installation to finish and uncheck any unwanted options.
  14. -
  15. Look for a folder named Crack or X-Force in the extracted folder or the mounted drive.
  16. -
  17. Copy all the files from that folder and paste them into the software installation folder, replacing any existing files.
  18. -
  19. Run the x-force.exe file as administrator.
  20. -
  21. Select AutoCAD Mechanical 2017 from the drop-down menu and click on Generate.
  22. -
  23. Copy the generated activation code and paste it into the software activation window.
  24. -
  25. Click on Next and enjoy your activated software.
  26. -
-

Congratulations! You have successfully installed and activated AutoCAD Mechanical 2017 with crack keygen on your computer. Now you can use the software without any limitations.

-

Tips and Tricks for Using AutoCAD Mechanical 2017

-

AutoCAD Mechanical 2017 is powerful software that can help you create mechanical drawings and designs with ease. Here are some tips and tricks that can help you use it more efficiently:

-
    -
  • Use keyboard shortcuts to access commands quickly. You can find a list of keyboard shortcuts in this link: https://knowledge.autodesk.com/support/autocad-mechanical/learn-explore/caas/CloudHelp/cloudhelp/2017/ENU/AutoCAD-Mechanical/files/GUID-0B9E8F6A-9F8C-4C0E-BB5D-3F6D8A0E5C6D-htm.html
  • -
  • Use templates to start your drawings with predefined settings and standards. You can find a list of templates in this link: https://knowledge.autodesk.com/support/autocad-mechanical/learn-explore/caas/CloudHelp/cloudhelp/2017/ENU/AutoCAD-Mechanical/files/GUID-9B8A0E3F-5C1F-4D1A-AF9E-5B5E6A9B3D0C-htm.html
  • -
  • Use layers to organize your drawing objects by type, color, linetype, etc. You can find more information about layers in this link: https://knowledge.autodesk.com/support/autocad-mechanical/learn-explore/caas/CloudHelp/cloudhelp/2017/ENU/AutoCAD-Mechanical/files/GUID-8C3B8B6A-AF1D-4D3A-BE1E-AF6C9B6A0F0C-htm.html
  • -
  • Use dimensions to annotate your drawings with accurate measurements and tolerances. You can find more information about dimensions in this link: https://knowledge.autodesk.com/support/autocad-mechanical/learn-explore/caas/CloudHelp/cloudhelp/2017/ENU/AutoCAD-Mechanical/files/GUID-CB5D4E9F-A9C4-4E69-BD1A-F3C5F8F5B8B0-htm.html
  • -
  • Use the Content Manager to access and manage libraries of standards-based parts and features. You can also create and save your own custom content for future use. Learn more
  • -
  • Use the Power View tool to create dynamic views of your drawing that update automatically when you make changes. You can also use the Power Dimension tool to create associative dimensions that follow your preferences. Learn more
  • -
  • Use the Associative Hide tool to hide objects behind other objects in a drawing view. You can also use the Associative Section tool to create 2D sections from 3D models. Learn more
  • -
  • Use the Shaft Generator tool to create shafts and holes with standard features. You can also use the Spring Generator tool to create springs with various types and parameters. Learn more
  • -
  • Use the Mechanical Structure Browser to view and edit the structure of your drawing. You can also use the Mechanical Browser to view and edit the properties of your drawing objects. Learn more
  • -
-

Conclusion

-

AutoCAD Mechanical 2017 is a great software that can help you create mechanical drawings and designs with ease. However, it is not a cheap software and you may not want to pay for it or wait for updates. That's why you may be interested in crack keygen AutoCAD Mechanical 2017 download that can help you activate the software for free and use it without any limitations.

-

In this article, we showed you how to find and download a crack keygen AutoCAD Mechanical 2017 that works and is safe for your computer. We also showed you how to install and activate the software using the crack keygen. But before we end, we want to remind you that using a crack keygen AutoCAD Mechanical 2017 download is illegal and unethical. It violates the terms of service of Autodesk and may cause legal problems or security risks. Therefore, we do not recommend or endorse using a crack keygen AutoCAD Mechanical 2017 download. If you like the software and want to support the developers, please buy it from the official website.

-

Thank you for reading this article and we hope you found it helpful and informative.

3cee63e6c2
-
-
\ No newline at end of file diff --git a/spaces/scedlatioru/img-to-music/example/Download Refox Xii Keygen Download __LINK__.md b/spaces/scedlatioru/img-to-music/example/Download Refox Xii Keygen Download __LINK__.md deleted file mode 100644 index 1d73d1582171fe293cf36710e2945ff17620ef06..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Download Refox Xii Keygen Download __LINK__.md +++ /dev/null @@ -1,6 +0,0 @@ -

Download Refox Xii Keygen Download


DOWNLOAD --->>> https://gohhs.com/2uEzYP



- -We have seen about 1 different instances of ReFox XII crack. EnRoute Software provides CNC. Terrence Lo November 4, 2013 at 5:54 pm. Download cracked ... 1fdad05405
-
-
-

diff --git a/spaces/sdhsdhk/bingosjj/src/lib/hooks/chat-history.ts b/spaces/sdhsdhk/bingosjj/src/lib/hooks/chat-history.ts deleted file mode 100644 index c6fbf3fecfa86fe553f56acc8253236b8f22a775..0000000000000000000000000000000000000000 --- a/spaces/sdhsdhk/bingosjj/src/lib/hooks/chat-history.ts +++ /dev/null @@ -1,62 +0,0 @@ -import { zip } from 'lodash-es' -import { ChatMessageModel, BotId } from '@/lib/bots/bing/types' -import { Storage } from '../storage' - -/** - * conversations:$botId => Conversation[] - * conversation:$botId:$cid:messages => ChatMessageModel[] - */ - -interface Conversation { - id: string - createdAt: number -} - -type ConversationWithMessages = Conversation & { messages: ChatMessageModel[] } - -async function loadHistoryConversations(botId: BotId): Promise { - const key = `conversations:${botId}` - const { [key]: value } = await Storage.get(key) - return value || [] -} - -async function deleteHistoryConversation(botId: BotId, cid: string) { - const conversations = await loadHistoryConversations(botId) - const newConversations = conversations.filter((c) => c.id !== cid) - await Storage.set({ [`conversations:${botId}`]: newConversations }) -} - -async function loadConversationMessages(botId: BotId, cid: string): Promise { - const key = `conversation:${botId}:${cid}:messages` - const { [key]: value } = await Storage.get(key) - return value || [] -} - -export async function setConversationMessages(botId: BotId, cid: string, messages: ChatMessageModel[]) { - const conversations = await loadHistoryConversations(botId) - if (!conversations.some((c) => c.id === cid)) { - conversations.unshift({ id: cid, createdAt: Date.now() }) - await Storage.set({ [`conversations:${botId}`]: conversations }) - } - const key = `conversation:${botId}:${cid}:messages` - await Storage.set({ [key]: messages }) -} - -export async function loadHistoryMessages(botId: BotId): Promise { - const conversations = await loadHistoryConversations(botId) - const messagesList = await Promise.all(conversations.map((c) => loadConversationMessages(botId, c.id))) - return zip(conversations, messagesList).map(([c, messages]) => ({ - id: c!.id, - createdAt: c!.createdAt, - messages: messages!, - })) -} - -export async function deleteHistoryMessage(botId: BotId, conversationId: string, messageId: string) { - const messages = await loadConversationMessages(botId, conversationId) - const newMessages = messages.filter((m) => m.id !== messageId) - await setConversationMessages(botId, conversationId, newMessages) - if (!newMessages.length) { - await deleteHistoryConversation(botId, conversationId) - } -} diff --git a/spaces/sdhsdhk/bingosjj/src/state/index.ts b/spaces/sdhsdhk/bingosjj/src/state/index.ts deleted file mode 100644 index 272106d619c69f124ea8dd0f10872ab4840152d7..0000000000000000000000000000000000000000 --- a/spaces/sdhsdhk/bingosjj/src/state/index.ts +++ /dev/null @@ -1,118 +0,0 @@ -import { BingWebBot } from '@/lib/bots/bing' -import { BingConversationStyle, ChatMessageModel, BotId } from '@/lib/bots/bing/types' -import { nanoid } from '@/lib/utils' -import { atom } from 'jotai' -import { atomWithImmer } from 'jotai-immer' -import { atomWithStorage } from 'jotai/utils' -import { atomFamily } from 'jotai/utils' -import { atomWithHash, atomWithLocation } from 'jotai-location' - -const initialMessages: ChatMessageModel[] = [ - { author: 'system', text: 'conversation between user and robot', id: '1' }, - { author: 'user', text: '销量最高的 3 种宠物吸尘器有哪些优点和缺点? 
', id: '2' }, - { - author: 'bot', text: ` -您好,这是必应。根据网上的搜索结果,我为您找到了以下三款销量最高的宠物吸尘器,以及它们的优点和缺点: - -- **戴森Dyson V10轻量版 Digital Slim Fluffy无线吸尘器**[^1^] [^3^]:这款吸尘器专为宠物家庭设计,拥有强大吸力和多功能吸头,能够有效清除宠物毛发和灰尘。 - - 优点:轻便易用,续航时间长,过滤系统先进,能够有效去除过敏原; - - 缺点:价格较高,噪音较大,需要定期清洁滤网。 -- **飞利浦FC6908/82(W7)无线吸尘器**[^2^] [^4^]:这是一款手持式干吸无线吸尘器,专门为宠物家庭设计。它具有75分钟的超高续航,适合大户型;除螨涡轮刷,可以深度清洁床单被罩,免受螨虫侵害;吸拖一体设计,280ml大水箱,还可以往水箱里加入清洁剂。 - - 优点:清洁性能强劲,操作方便,适用多种场景; - - 缺点:噪音较大,达到84dB(A);尘杯容量较小,只有0.6L;充电时长较长,需要5小时。 -- **必胜(Bissell) 宠物吸尘器**[^2^]:这是一款便携式手持真空吸尘器,专为清理宠物毛发而设计。它配备特殊的橡胶喷嘴和软毛刷头,可以轻松拾取宠物毛发和灰尘;几乎静音运行。 - - 优点:轻巧易用,重量只有1.36kg;电池可充电可更换; - - 缺点:续航时间不稳定,在15-30分钟之间浮动;吸力不够强劲,在地毯或沙发上可能效果不佳;没有过滤系统或过滤网。 - -希望这些信息对您有所帮助。如果您还有其他问题,请随时提问。😊 - `, id: '3' }, - { author: 'user', text: '今天的新闻', id: '4' }, - { - author: 'bot', text: `你好,这是Bing。我根据你的搜索查询“今天的新闻”为你找到了以下信息: - - # 中国新闻 - - 《求是》杂志发表习近平总书记重要文章《深化党和国家机构改革 推进国家治理体系和治理能力现代化》[^1^] - - 梦之队第5金! 全红婵陈芋汐女双10米台夺冠[^1^] - - 中央气象台7月16日18时发布台风橙色预警[^1^] - - 贵州石阡:暑期旅游带动乡村振兴[^1^] - - 激活大科学装置的“人才红利”[^1^] - - 聚才用才留才 让希望的田野成为智慧的田野[^1^] - - 科学家发现粮食系统增效可有效支撑我国碳中和[^1^] - - 成都以赛为媒提升城市美誉度[^1^] - - 香江观澜:“抢人才”成效显著 香港竞争力依旧[^1^] - - 浙江建德:新安江上享清凉[^1^] - - 定向资金持续精准滴灌 高质量发展力量不断积累[^1^] - - 企业信心恢复加大中长期投资 为稳增长奠定基础[^1^] - - 武汉渡江节开赛 千名“泳士”竞渡长江[^1^] - - 金融更加重视对实体经济支持 降低融资成本提振市场信心[^1^] - - 大运来了丨成都迎大运 全民健身动起来[^1^] - - 藏在高校里的宝藏博物馆[^1^] - - 中国汽车工业用70年赢得三个“全球第一”[^1^] - - 辽宁阜新市委常委、副市长赖化平涉嫌严重违纪违法接受审查调查[^1^] - - # 国际新闻 - - 联合国秘书长:可持续发展面临风险 各国需加强行动[^2^] - - 国际航运业加快绿色转型[^2^] - - 美企反对收紧对华芯片出口限制[^2^] - - 欧洲加大气候科技领域投资[^2^] - - 中企助力丹麦发展清洁能源[^2^] - - 中国代表呼吁国际社会共同努力防止乌克兰局势失控[^2^] - - 中国和阿尔及利亚共同构建新型国际关系典范[^2^] - - 以上信息仅供参考,具体详情请点击以下链接查看: - - [1] [中国新闻_央视网(cctv.com)](https://news.cctv.com/china/) - [2] [人民网_网上的人民日报](http://www.people.com.cn/)`, id: '5' }, - { author: 'user', text: '写一个快排', id: '6' }, - { author: 'bot', text: "你好,这是Bing。我可以帮你用python写一个快排。快速排序是一种基于分治思想的排序算法,它的基本思想是:从待排序的数组中选择一个基准元素,将数组分为两个子数组,使得左边的子数组中的元素都小于或等于基准元素,右边的子数组中的元素都大于或等于基准元素,然后对两个子数组递归地进行快速排序,最后合并结果。下面是用python实现的快速排序的代码:\n\n```python\n# 定义一个函数,用于交换数组中的两个元素\ndef swap(arr, i, j):\n temp = arr[i]\n arr[i] = arr[j]\n arr[j] = temp\n\n# 定义一个函数,用于对数组的一段区间进行划分\ndef partition(arr, low, high):\n # 选择最右边的元素作为基准元素\n pivot = arr[high]\n # 初始化一个指针,指向左边区间的最后一个位置\n i = low - 1\n # 遍历左边区间中的每个元素\n for j in range(low, high):\n # 如果当前元素小于或等于基准元素\n if arr[j] <= pivot:\n # 将指针右移一位\n i += 1\n # 将指针所指的元素与当前元素交换\n swap(arr, i, j)\n # 将基准元素与指针右边的第一个元素交换\n swap(arr, i + 1, high)\n # 返回基准元素的最终位置\n return i + 1\n\n# 定义一个函数,用于对数组进行快速排序\ndef quick_sort(arr, low, high):\n # 如果区间长度大于1\n if low < high:\n # 对区间进行划分,得到基准元素的位置\n p = partition(arr, low, high)\n # 对左边子区间进行快速排序\n quick_sort(arr, low, p - 1)\n # 对右边子区间进行快速排序\n quick_sort(arr, p + 1, high)\n\n# 测试代码\narr = [5, 3, 7, 2, 9, 1, 4, 6, 8]\nprint(\"原始数组:\", arr)\nquick_sort(arr, 0, len(arr) - 1)\nprint(\"排序后的数组:\", arr)\n```", id: '7' }, - { - author: 'bot', text: "好的,我会尝试画一只猫。\n > ![一只猫](https://tse2.mm.bing.net/th/id/OIG.jz34V0PNVkPC229h9spV?w=270&h=270&c=6&r=0&o=5&dpr=1.5&pid=ImgGn)![一只猫](https://tse1.mm.bing.net/th/id/OIG.6g7d.XLZMP_iwAByLhvo?w=270&h=270&c=6&r=0&o=5&dpr=1.5&pid=ImgGn)![一只猫](https://tse2.mm.bing.net/th/id/OIG.iAxF4ekekYn7sZw9SmU6?w=270&h=270&c=6&r=0&o=5&dpr=1.5&pid=ImgGn)![一只猫](https://tse4.mm.bing.net/th/id/OIG.qDnzeSKzUCeJcrBqc5mX?w=270&h=270&c=6&r=0&o=5&dpr=1.5&pid=ImgGn)", - id: '8' - } -] - -export const GreetMessages = [ - '谢谢你! 
知道你什么时候准备好继续前进总是很有帮助的。我现在能为你回答什么问题?', - '重新开始总是很棒。问我任何问题!', - '当然,我很乐意重新开始。我现在可以为你提供哪些帮助?', - '当然,我已准备好进行新的挑战。我现在可以为你做什么?', - '很好,让我们来更改主题。你在想什么?', - '不用担心,我很高兴尝试一些新内容。我现在可以为你回答什么问题?', - '好的,我准备好了!感谢重置。我们应该了解哪些内容?', - '感谢刷新!你有新的话题吗?', - '明白了,让我们重新开始。接下来应该讨论什么?', - '下一步!我可以为你做什么?', - '好的,我已准备好新话题。我们应该一起了解哪些内容?' -] - -export const bingConversationStyleAtom = atomWithStorage('bingConversationStyle', BingConversationStyle.Creative, undefined, { unstable_getOnInit: true }) -export const voiceAtom = atomWithStorage('enableTTS', false, undefined, { unstable_getOnInit: true }) - -type Param = { botId: BotId; page: string } - -const createBotInstance = () => { - return new BingWebBot({ - cookie: ' ', - ua: ' ', - }) -} - -export const chatFamily = atomFamily( - (param: Param) => { - return atomWithImmer({ - botId: param.botId, - bot: createBotInstance(), - messages: [] as ChatMessageModel[], - generatingMessageId: '', - abortController: undefined as AbortController | undefined, - conversationId: nanoid(), - }) - }, - (a, b) => a.botId === b.botId && a.page === b.page, -) - -export const hashAtom = atomWithHash('dialog', '') - -export const locationAtom = atomWithLocation() - -export const voiceListenAtom = atom(false) diff --git a/spaces/segments-tobias/conex/espnet/vc/pytorch_backend/vc.py b/spaces/segments-tobias/conex/espnet/vc/pytorch_backend/vc.py deleted file mode 100644 index ec35e20c3f5108494d65944e10ef244e2fdeb298..0000000000000000000000000000000000000000 --- a/spaces/segments-tobias/conex/espnet/vc/pytorch_backend/vc.py +++ /dev/null @@ -1,742 +0,0 @@ -#!/usr/bin/env python3 -# -*- coding: utf-8 -*- - -# Copyright 2020 Nagoya University (Wen-Chin Huang) -# Apache 2.0 (http://www.apache.org/licenses/LICENSE-2.0) - -"""E2E VC training / decoding functions.""" - -import copy -import json -import logging -import math -import os -import time - -import chainer -import kaldiio -import numpy as np -import torch - -from chainer import training -from chainer.training import extensions - -from espnet.asr.asr_utils import get_model_conf -from espnet.asr.asr_utils import snapshot_object -from espnet.asr.asr_utils import torch_load -from espnet.asr.asr_utils import torch_resume -from espnet.asr.asr_utils import torch_snapshot -from espnet.asr.pytorch_backend.asr_init import load_trained_modules -from espnet.nets.pytorch_backend.nets_utils import pad_list -from espnet.nets.tts_interface import TTSInterface -from espnet.utils.dataset import ChainerDataLoader -from espnet.utils.dataset import TransformDataset -from espnet.utils.dynamic_import import dynamic_import -from espnet.utils.io_utils import LoadInputsAndTargets -from espnet.utils.training.batchfy import make_batchset -from espnet.utils.training.evaluator import BaseEvaluator - -from espnet.utils.deterministic_utils import set_deterministic_pytorch -from espnet.utils.training.train_utils import check_early_stop -from espnet.utils.training.train_utils import set_early_stop - -from espnet.utils.training.iterators import ShufflingEnabler - -import matplotlib - -from espnet.utils.training.tensorboard_logger import TensorboardLogger -from tensorboardX import SummaryWriter - -matplotlib.use("Agg") - - -class CustomEvaluator(BaseEvaluator): - """Custom evaluator.""" - - def __init__(self, model, iterator, target, device): - """Initilize module. - - Args: - model (torch.nn.Module): Pytorch model instance. - iterator (chainer.dataset.Iterator): Iterator for validation. - target (chainer.Chain): Dummy chain instance. 
- device (torch.device): The device to be used in evaluation. - - """ - super(CustomEvaluator, self).__init__(iterator, target) - self.model = model - self.device = device - - # The core part of the update routine can be customized by overriding. - def evaluate(self): - """Evaluate over validation iterator.""" - iterator = self._iterators["main"] - - if self.eval_hook: - self.eval_hook(self) - - if hasattr(iterator, "reset"): - iterator.reset() - it = iterator - else: - it = copy.copy(iterator) - - summary = chainer.reporter.DictSummary() - - self.model.eval() - with torch.no_grad(): - for batch in it: - if isinstance(batch, tuple): - x = tuple(arr.to(self.device) for arr in batch) - else: - x = batch - for key in x.keys(): - x[key] = x[key].to(self.device) - observation = {} - with chainer.reporter.report_scope(observation): - # convert to torch tensor - if isinstance(x, tuple): - self.model(*x) - else: - self.model(**x) - summary.add(observation) - self.model.train() - - return summary.compute_mean() - - -class CustomUpdater(training.StandardUpdater): - """Custom updater.""" - - def __init__(self, model, grad_clip, iterator, optimizer, device, accum_grad=1): - """Initilize module. - - Args: - model (torch.nn.Module) model: Pytorch model instance. - grad_clip (float) grad_clip : The gradient clipping value. - iterator (chainer.dataset.Iterator): Iterator for training. - optimizer (torch.optim.Optimizer) : Pytorch optimizer instance. - device (torch.device): The device to be used in training. - - """ - super(CustomUpdater, self).__init__(iterator, optimizer) - self.model = model - self.grad_clip = grad_clip - self.device = device - self.clip_grad_norm = torch.nn.utils.clip_grad_norm_ - self.accum_grad = accum_grad - self.forward_count = 0 - - # The core part of the update routine can be customized by overriding. - def update_core(self): - """Update model one step.""" - # When we pass one iterator and optimizer to StandardUpdater.__init__, - # they are automatically named 'main'. - train_iter = self.get_iterator("main") - optimizer = self.get_optimizer("main") - - # Get the next batch (a list of json files) - batch = train_iter.next() - if isinstance(batch, tuple): - x = tuple(arr.to(self.device) for arr in batch) - else: - x = batch - for key in x.keys(): - x[key] = x[key].to(self.device) - - # compute loss and gradient - if isinstance(x, tuple): - loss = self.model(*x).mean() / self.accum_grad - else: - loss = self.model(**x).mean() / self.accum_grad - loss.backward() - - # update parameters - self.forward_count += 1 - if self.forward_count != self.accum_grad: - return - self.forward_count = 0 - - # compute the gradient norm to check if it is normal or not - grad_norm = self.clip_grad_norm(self.model.parameters(), self.grad_clip) - logging.debug("grad norm={}".format(grad_norm)) - if math.isnan(grad_norm): - logging.warning("grad norm is nan. Do not update model.") - else: - optimizer.step() - optimizer.zero_grad() - - def update(self): - """Run update function.""" - self.update_core() - if self.forward_count == 0: - self.iteration += 1 - - -class CustomConverter(object): - """Custom converter.""" - - def __init__(self): - """Initilize module.""" - # NOTE: keep as class for future development - pass - - def __call__(self, batch, device=torch.device("cpu")): - """Convert a given batch. - - Args: - batch (list): List of ndarrays. - device (torch.device): The device to be send. - - Returns: - dict: Dict of converted tensors. 
- - Examples: - >>> batch = [([np.arange(5), np.arange(3)], - [np.random.randn(8, 2), np.random.randn(4, 2)], - None, None)] - >>> conveter = CustomConverter() - >>> conveter(batch, torch.device("cpu")) - {'xs': tensor([[0, 1, 2, 3, 4], - [0, 1, 2, 0, 0]]), - 'ilens': tensor([5, 3]), - 'ys': tensor([[[-0.4197, -1.1157], - [-1.5837, -0.4299], - [-2.0491, 0.9215], - [-2.4326, 0.8891], - [ 1.2323, 1.7388], - [-0.3228, 0.6656], - [-0.6025, 1.3693], - [-1.0778, 1.3447]], - [[ 0.1768, -0.3119], - [ 0.4386, 2.5354], - [-1.2181, -0.5918], - [-0.6858, -0.8843], - [ 0.0000, 0.0000], - [ 0.0000, 0.0000], - [ 0.0000, 0.0000], - [ 0.0000, 0.0000]]]), - 'labels': tensor([[0., 0., 0., 0., 0., 0., 0., 1.], - [0., 0., 0., 1., 1., 1., 1., 1.]]), - 'olens': tensor([8, 4])} - - """ - # batch should be located in list - assert len(batch) == 1 - xs, ys, spembs, extras = batch[0] - - # get list of lengths (must be tensor for DataParallel) - ilens = torch.from_numpy(np.array([x.shape[0] for x in xs])).long().to(device) - olens = torch.from_numpy(np.array([y.shape[0] for y in ys])).long().to(device) - - # perform padding and conversion to tensor - xs = pad_list([torch.from_numpy(x).float() for x in xs], 0).to(device) - ys = pad_list([torch.from_numpy(y).float() for y in ys], 0).to(device) - - # make labels for stop prediction - labels = ys.new_zeros(ys.size(0), ys.size(1)) - for i, l in enumerate(olens): - labels[i, l - 1 :] = 1.0 - - # prepare dict - new_batch = { - "xs": xs, - "ilens": ilens, - "ys": ys, - "labels": labels, - "olens": olens, - } - - # load speaker embedding - if spembs is not None: - spembs = torch.from_numpy(np.array(spembs)).float() - new_batch["spembs"] = spembs.to(device) - - # load second target - if extras is not None: - extras = pad_list([torch.from_numpy(extra).float() for extra in extras], 0) - new_batch["extras"] = extras.to(device) - - return new_batch - - -def train(args): - """Train E2E VC model.""" - set_deterministic_pytorch(args) - - # check cuda availability - if not torch.cuda.is_available(): - logging.warning("cuda is not available") - - # get input and output dimension info - with open(args.valid_json, "rb") as f: - valid_json = json.load(f)["utts"] - utts = list(valid_json.keys()) - - # In TTS, this is reversed, but not in VC. 
See `espnet.utils.training.batchfy` - idim = int(valid_json[utts[0]]["input"][0]["shape"][1]) - odim = int(valid_json[utts[0]]["output"][0]["shape"][1]) - logging.info("#input dims : " + str(idim)) - logging.info("#output dims: " + str(odim)) - - # get extra input and output dimenstion - if args.use_speaker_embedding: - args.spk_embed_dim = int(valid_json[utts[0]]["input"][1]["shape"][0]) - else: - args.spk_embed_dim = None - if args.use_second_target: - args.spc_dim = int(valid_json[utts[0]]["input"][1]["shape"][1]) - else: - args.spc_dim = None - - # write model config - if not os.path.exists(args.outdir): - os.makedirs(args.outdir) - model_conf = args.outdir + "/model.json" - with open(model_conf, "wb") as f: - logging.info("writing a model config file to" + model_conf) - f.write( - json.dumps( - (idim, odim, vars(args)), indent=4, ensure_ascii=False, sort_keys=True - ).encode("utf_8") - ) - for key in sorted(vars(args).keys()): - logging.info("ARGS: " + key + ": " + str(vars(args)[key])) - - # specify model architecture - if args.enc_init is not None or args.dec_init is not None: - model = load_trained_modules(idim, odim, args, TTSInterface) - else: - model_class = dynamic_import(args.model_module) - model = model_class(idim, odim, args) - assert isinstance(model, TTSInterface) - logging.info(model) - reporter = model.reporter - - # freeze modules, if specified - if args.freeze_mods: - for mod, param in model.named_parameters(): - if any(mod.startswith(key) for key in args.freeze_mods): - logging.info("freezing %s" % mod) - param.requires_grad = False - - for mod, param in model.named_parameters(): - if not param.requires_grad: - logging.info("Frozen module %s" % mod) - - # check the use of multi-gpu - if args.ngpu > 1: - model = torch.nn.DataParallel(model, device_ids=list(range(args.ngpu))) - if args.batch_size != 0: - logging.warning( - "batch size is automatically increased (%d -> %d)" - % (args.batch_size, args.batch_size * args.ngpu) - ) - args.batch_size *= args.ngpu - - # set torch device - device = torch.device("cuda" if args.ngpu > 0 else "cpu") - model = model.to(device) - - logging.warning( - "num. model params: {:,} (num. 
trained: {:,} ({:.1f}%))".format( - sum(p.numel() for p in model.parameters()), - sum(p.numel() for p in model.parameters() if p.requires_grad), - sum(p.numel() for p in model.parameters() if p.requires_grad) - * 100.0 - / sum(p.numel() for p in model.parameters()), - ) - ) - - # Setup an optimizer - if args.opt == "adam": - optimizer = torch.optim.Adam( - model.parameters(), args.lr, eps=args.eps, weight_decay=args.weight_decay - ) - elif args.opt == "noam": - from espnet.nets.pytorch_backend.transformer.optimizer import get_std_opt - - optimizer = get_std_opt( - model, args.adim, args.transformer_warmup_steps, args.transformer_lr - ) - elif args.opt == "lamb": - from pytorch_lamb import Lamb - - optimizer = Lamb( - model.parameters(), lr=args.lr, weight_decay=0.01, betas=(0.9, 0.999) - ) - else: - raise NotImplementedError("unknown optimizer: " + args.opt) - - # FIXME: TOO DIRTY HACK - setattr(optimizer, "target", reporter) - setattr(optimizer, "serialize", lambda s: reporter.serialize(s)) - - # read json data - with open(args.train_json, "rb") as f: - train_json = json.load(f)["utts"] - with open(args.valid_json, "rb") as f: - valid_json = json.load(f)["utts"] - - use_sortagrad = args.sortagrad == -1 or args.sortagrad > 0 - if use_sortagrad: - args.batch_sort_key = "input" - # make minibatch list (variable length) - train_batchset = make_batchset( - train_json, - args.batch_size, - args.maxlen_in, - args.maxlen_out, - args.minibatches, - batch_sort_key=args.batch_sort_key, - min_batch_size=args.ngpu if args.ngpu > 1 else 1, - shortest_first=use_sortagrad, - count=args.batch_count, - batch_bins=args.batch_bins, - batch_frames_in=args.batch_frames_in, - batch_frames_out=args.batch_frames_out, - batch_frames_inout=args.batch_frames_inout, - swap_io=False, - iaxis=0, - oaxis=0, - ) - valid_batchset = make_batchset( - valid_json, - args.batch_size, - args.maxlen_in, - args.maxlen_out, - args.minibatches, - batch_sort_key=args.batch_sort_key, - min_batch_size=args.ngpu if args.ngpu > 1 else 1, - count=args.batch_count, - batch_bins=args.batch_bins, - batch_frames_in=args.batch_frames_in, - batch_frames_out=args.batch_frames_out, - batch_frames_inout=args.batch_frames_inout, - swap_io=False, - iaxis=0, - oaxis=0, - ) - - load_tr = LoadInputsAndTargets( - mode="vc", - use_speaker_embedding=args.use_speaker_embedding, - use_second_target=args.use_second_target, - preprocess_conf=args.preprocess_conf, - preprocess_args={"train": True}, # Switch the mode of preprocessing - keep_all_data_on_mem=args.keep_all_data_on_mem, - ) - - load_cv = LoadInputsAndTargets( - mode="vc", - use_speaker_embedding=args.use_speaker_embedding, - use_second_target=args.use_second_target, - preprocess_conf=args.preprocess_conf, - preprocess_args={"train": False}, # Switch the mode of preprocessing - keep_all_data_on_mem=args.keep_all_data_on_mem, - ) - - converter = CustomConverter() - # hack to make batchsize argument as 1 - # actual bathsize is included in a list - train_iter = { - "main": ChainerDataLoader( - dataset=TransformDataset( - train_batchset, lambda data: converter([load_tr(data)]) - ), - batch_size=1, - num_workers=args.num_iter_processes, - shuffle=not use_sortagrad, - collate_fn=lambda x: x[0], - ) - } - valid_iter = { - "main": ChainerDataLoader( - dataset=TransformDataset( - valid_batchset, lambda data: converter([load_cv(data)]) - ), - batch_size=1, - shuffle=False, - collate_fn=lambda x: x[0], - num_workers=args.num_iter_processes, - ) - } - - # Set up a trainer - updater = CustomUpdater( - model, 
args.grad_clip, train_iter, optimizer, device, args.accum_grad - ) - trainer = training.Trainer(updater, (args.epochs, "epoch"), out=args.outdir) - - # Resume from a snapshot - if args.resume: - logging.info("resumed from %s" % args.resume) - torch_resume(args.resume, trainer) - - # set intervals - eval_interval = (args.eval_interval_epochs, "epoch") - save_interval = (args.save_interval_epochs, "epoch") - report_interval = (args.report_interval_iters, "iteration") - - # Evaluate the model with the test dataset for each epoch - trainer.extend( - CustomEvaluator(model, valid_iter, reporter, device), trigger=eval_interval - ) - - # Save snapshot for each epoch - trainer.extend(torch_snapshot(), trigger=save_interval) - - # Save best models - trainer.extend( - snapshot_object(model, "model.loss.best"), - trigger=training.triggers.MinValueTrigger( - "validation/main/loss", trigger=eval_interval - ), - ) - - # Save attention figure for each epoch - if args.num_save_attention > 0: - data = sorted( - list(valid_json.items())[: args.num_save_attention], - key=lambda x: int(x[1]["input"][0]["shape"][1]), - reverse=True, - ) - if hasattr(model, "module"): - att_vis_fn = model.module.calculate_all_attentions - plot_class = model.module.attention_plot_class - else: - att_vis_fn = model.calculate_all_attentions - plot_class = model.attention_plot_class - att_reporter = plot_class( - att_vis_fn, - data, - args.outdir + "/att_ws", - converter=converter, - transform=load_cv, - device=device, - reverse=True, - ) - trainer.extend(att_reporter, trigger=eval_interval) - else: - att_reporter = None - - # Make a plot for training and validation values - if hasattr(model, "module"): - base_plot_keys = model.module.base_plot_keys - else: - base_plot_keys = model.base_plot_keys - plot_keys = [] - for key in base_plot_keys: - plot_key = ["main/" + key, "validation/main/" + key] - trainer.extend( - extensions.PlotReport(plot_key, "epoch", file_name=key + ".png"), - trigger=eval_interval, - ) - plot_keys += plot_key - trainer.extend( - extensions.PlotReport(plot_keys, "epoch", file_name="all_loss.png"), - trigger=eval_interval, - ) - - # Write a log of evaluation statistics for each epoch - trainer.extend(extensions.LogReport(trigger=report_interval)) - report_keys = ["epoch", "iteration", "elapsed_time"] + plot_keys - trainer.extend(extensions.PrintReport(report_keys), trigger=report_interval) - trainer.extend(extensions.ProgressBar(), trigger=report_interval) - - set_early_stop(trainer, args) - if args.tensorboard_dir is not None and args.tensorboard_dir != "": - writer = SummaryWriter(args.tensorboard_dir) - trainer.extend(TensorboardLogger(writer, att_reporter), trigger=report_interval) - - if use_sortagrad: - trainer.extend( - ShufflingEnabler([train_iter]), - trigger=(args.sortagrad if args.sortagrad != -1 else args.epochs, "epoch"), - ) - - # Run the training - trainer.run() - check_early_stop(trainer, args.epochs) - - -@torch.no_grad() -def decode(args): - """Decode with E2E VC model.""" - set_deterministic_pytorch(args) - # read training config - idim, odim, train_args = get_model_conf(args.model, args.model_conf) - - # show arguments - for key in sorted(vars(args).keys()): - logging.info("args: " + key + ": " + str(vars(args)[key])) - - # define model - model_class = dynamic_import(train_args.model_module) - model = model_class(idim, odim, train_args) - assert isinstance(model, TTSInterface) - logging.info(model) - - # load trained model parameters - logging.info("reading model parameters from " + 
args.model) - torch_load(args.model, model) - model.eval() - - # set torch device - device = torch.device("cuda" if args.ngpu > 0 else "cpu") - model = model.to(device) - - # read json data - with open(args.json, "rb") as f: - js = json.load(f)["utts"] - - # check directory - outdir = os.path.dirname(args.out) - if len(outdir) != 0 and not os.path.exists(outdir): - os.makedirs(outdir) - - load_inputs_and_targets = LoadInputsAndTargets( - mode="vc", - load_output=False, - sort_in_input_length=False, - use_speaker_embedding=train_args.use_speaker_embedding, - preprocess_conf=train_args.preprocess_conf - if args.preprocess_conf is None - else args.preprocess_conf, - preprocess_args={"train": False}, # Switch the mode of preprocessing - ) - - # define function for plot prob and att_ws - def _plot_and_save(array, figname, figsize=(6, 4), dpi=150): - import matplotlib.pyplot as plt - - shape = array.shape - if len(shape) == 1: - # for eos probability - plt.figure(figsize=figsize, dpi=dpi) - plt.plot(array) - plt.xlabel("Frame") - plt.ylabel("Probability") - plt.ylim([0, 1]) - elif len(shape) == 2: - # for tacotron 2 attention weights, whose shape is (out_length, in_length) - plt.figure(figsize=figsize, dpi=dpi) - plt.imshow(array, aspect="auto") - plt.xlabel("Input") - plt.ylabel("Output") - elif len(shape) == 4: - # for transformer attention weights, - # whose shape is (#leyers, #heads, out_length, in_length) - plt.figure(figsize=(figsize[0] * shape[0], figsize[1] * shape[1]), dpi=dpi) - for idx1, xs in enumerate(array): - for idx2, x in enumerate(xs, 1): - plt.subplot(shape[0], shape[1], idx1 * shape[1] + idx2) - plt.imshow(x, aspect="auto") - plt.xlabel("Input") - plt.ylabel("Output") - else: - raise NotImplementedError("Support only from 1D to 4D array.") - plt.tight_layout() - if not os.path.exists(os.path.dirname(figname)): - # NOTE: exist_ok = True is needed for parallel process decoding - os.makedirs(os.path.dirname(figname), exist_ok=True) - plt.savefig(figname) - plt.close() - - # define function to calculate focus rate - # (see section 3.3 in https://arxiv.org/abs/1905.09263) - def _calculate_focus_rete(att_ws): - if att_ws is None: - # fastspeech case -> None - return 1.0 - elif len(att_ws.shape) == 2: - # tacotron 2 case -> (L, T) - return float(att_ws.max(dim=-1)[0].mean()) - elif len(att_ws.shape) == 4: - # transformer case -> (#layers, #heads, L, T) - return float(att_ws.max(dim=-1)[0].mean(dim=-1).max()) - else: - raise ValueError("att_ws should be 2 or 4 dimensional tensor.") - - # define function to convert attention to duration - def _convert_att_to_duration(att_ws): - if len(att_ws.shape) == 2: - # tacotron 2 case -> (L, T) - pass - elif len(att_ws.shape) == 4: - # transformer case -> (#layers, #heads, L, T) - # get the most diagonal head according to focus rate - att_ws = torch.cat( - [att_w for att_w in att_ws], dim=0 - ) # (#heads * #layers, L, T) - diagonal_scores = att_ws.max(dim=-1)[0].mean(dim=-1) # (#heads * #layers,) - diagonal_head_idx = diagonal_scores.argmax() - att_ws = att_ws[diagonal_head_idx] # (L, T) - else: - raise ValueError("att_ws should be 2 or 4 dimensional tensor.") - # calculate duration from 2d attention weight - durations = torch.stack( - [att_ws.argmax(-1).eq(i).sum() for i in range(att_ws.shape[1])] - ) - return durations.view(-1, 1).float() - - # define writer instances - feat_writer = kaldiio.WriteHelper("ark,scp:{o}.ark,{o}.scp".format(o=args.out)) - if args.save_durations: - dur_writer = kaldiio.WriteHelper( - 
"ark,scp:{o}.ark,{o}.scp".format(o=args.out.replace("feats", "durations")) - ) - if args.save_focus_rates: - fr_writer = kaldiio.WriteHelper( - "ark,scp:{o}.ark,{o}.scp".format(o=args.out.replace("feats", "focus_rates")) - ) - - # start decoding - for idx, utt_id in enumerate(js.keys()): - # setup inputs - batch = [(utt_id, js[utt_id])] - data = load_inputs_and_targets(batch) - x = torch.FloatTensor(data[0][0]).to(device) - spemb = None - if train_args.use_speaker_embedding: - spemb = torch.FloatTensor(data[1][0]).to(device) - - # decode and write - start_time = time.time() - outs, probs, att_ws = model.inference(x, args, spemb=spemb) - logging.info( - "inference speed = %.1f frames / sec." - % (int(outs.size(0)) / (time.time() - start_time)) - ) - if outs.size(0) == x.size(0) * args.maxlenratio: - logging.warning("output length reaches maximum length (%s)." % utt_id) - focus_rate = _calculate_focus_rete(att_ws) - logging.info( - "(%d/%d) %s (size: %d->%d, focus rate: %.3f)" - % (idx + 1, len(js.keys()), utt_id, x.size(0), outs.size(0), focus_rate) - ) - feat_writer[utt_id] = outs.cpu().numpy() - if args.save_durations: - ds = _convert_att_to_duration(att_ws) - dur_writer[utt_id] = ds.cpu().numpy() - if args.save_focus_rates: - fr_writer[utt_id] = np.array(focus_rate).reshape(1, 1) - - # plot and save prob and att_ws - if probs is not None: - _plot_and_save( - probs.cpu().numpy(), - os.path.dirname(args.out) + "/probs/%s_prob.png" % utt_id, - ) - if att_ws is not None: - _plot_and_save( - att_ws.cpu().numpy(), - os.path.dirname(args.out) + "/att_ws/%s_att_ws.png" % utt_id, - ) - - # close file object - feat_writer.close() - if args.save_durations: - dur_writer.close() - if args.save_focus_rates: - fr_writer.close() diff --git a/spaces/sh20raj/python-bootcamp/style.css b/spaces/sh20raj/python-bootcamp/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/sh20raj/python-bootcamp/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/shgao/EditAnything/tools/train_dreambooth_inpaint.py b/spaces/shgao/EditAnything/tools/train_dreambooth_inpaint.py deleted file mode 100644 index c06cb7ce8904b800ec15c9732d14f69bb727e0ee..0000000000000000000000000000000000000000 --- a/spaces/shgao/EditAnything/tools/train_dreambooth_inpaint.py +++ /dev/null @@ -1,812 +0,0 @@ -# ported from https://github.com/huggingface/diffusers/tree/a6fb9407fd45e76ccd47d13f08f0dd835967d620/examples/research_projects/dreambooth_inpaint -import argparse -import hashlib -import itertools -import math -import os -import random -from pathlib import Path - -import numpy as np -import torch -import torch.nn.functional as F -import torch.utils.checkpoint -from accelerate import Accelerator -from accelerate.logging import get_logger -from accelerate.utils import ProjectConfiguration, set_seed -from huggingface_hub import create_repo, upload_folder -from PIL import Image, ImageDraw -from torch.utils.data import Dataset -from torchvision import transforms -from tqdm.auto import tqdm -from transformers import CLIPTextModel, CLIPTokenizer - 
-from diffusers import ( - AutoencoderKL, - DDPMScheduler, - StableDiffusionInpaintPipeline, - StableDiffusionPipeline, - UNet2DConditionModel, -) -from diffusers.optimization import get_scheduler -from diffusers.utils import check_min_version - - -# Will error if the minimal version of diffusers is not installed. Remove at your own risks. -check_min_version("0.13.0.dev0") - -logger = get_logger(__name__) - - -def prepare_mask_and_masked_image(image, mask): - image = np.array(image.convert("RGB")) - image = image[None].transpose(0, 3, 1, 2) - image = torch.from_numpy(image).to(dtype=torch.float32) / 127.5 - 1.0 - - mask = np.array(mask.convert("L")) - mask = mask.astype(np.float32) / 255.0 - mask = mask[None, None] - mask[mask < 0.5] = 0 - mask[mask >= 0.5] = 1 - mask = torch.from_numpy(mask) - - masked_image = image * (mask < 0.5) - - return mask, masked_image - - -# generate random masks -def random_mask(im_shape, ratio=1, mask_full_image=False): - mask = Image.new("L", im_shape, 0) - draw = ImageDraw.Draw(mask) - size = (random.randint(0, int(im_shape[0] * ratio)), random.randint(0, int(im_shape[1] * ratio))) - # use this to always mask the whole image - if mask_full_image: - size = (int(im_shape[0] * ratio), int(im_shape[1] * ratio)) - limits = (im_shape[0] - size[0] // 2, im_shape[1] - size[1] // 2) - center = (random.randint(size[0] // 2, limits[0]), random.randint(size[1] // 2, limits[1])) - draw_type = random.randint(0, 1) - if draw_type == 0 or mask_full_image: - draw.rectangle( - (center[0] - size[0] // 2, center[1] - size[1] // 2, center[0] + size[0] // 2, center[1] + size[1] // 2), - fill=255, - ) - else: - draw.ellipse( - (center[0] - size[0] // 2, center[1] - size[1] // 2, center[0] + size[0] // 2, center[1] + size[1] // 2), - fill=255, - ) - - return mask - - -def parse_args(): - parser = argparse.ArgumentParser(description="Simple example of a training script.") - parser.add_argument( - "--pretrained_model_name_or_path", - type=str, - default=None, - required=True, - help="Path to pretrained model or model identifier from huggingface.co/models.", - ) - parser.add_argument( - "--tokenizer_name", - type=str, - default=None, - help="Pretrained tokenizer name or path if not the same as model_name", - ) - parser.add_argument( - "--instance_data_dir", - type=str, - default=None, - required=True, - help="A folder containing the training data of instance images.", - ) - parser.add_argument( - "--class_data_dir", - type=str, - default=None, - required=False, - help="A folder containing the training data of class images.", - ) - parser.add_argument( - "--instance_prompt", - type=str, - default=None, - help="The prompt with identifier specifying the instance", - ) - parser.add_argument( - "--class_prompt", - type=str, - default=None, - help="The prompt to specify images in the same class as provided instance images.", - ) - parser.add_argument( - "--with_prior_preservation", - default=False, - action="store_true", - help="Flag to add prior preservation loss.", - ) - parser.add_argument("--prior_loss_weight", type=float, default=1.0, help="The weight of prior preservation loss.") - parser.add_argument( - "--num_class_images", - type=int, - default=100, - help=( - "Minimal class images for prior preservation loss. If not have enough images, additional images will be" - " sampled with class_prompt." 
- ), - ) - parser.add_argument( - "--output_dir", - type=str, - default="text-inversion-model", - help="The output directory where the model predictions and checkpoints will be written.", - ) - parser.add_argument("--seed", type=int, default=None, help="A seed for reproducible training.") - parser.add_argument( - "--resolution", - type=int, - default=512, - help=( - "The resolution for input images, all the images in the train/validation dataset will be resized to this" - " resolution" - ), - ) - parser.add_argument( - "--center_crop", - default=False, - action="store_true", - help=( - "Whether to center crop the input images to the resolution. If not set, the images will be randomly" - " cropped. The images will be resized to the resolution first before cropping." - ), - ) - parser.add_argument("--train_text_encoder", action="store_true", help="Whether to train the text encoder") - parser.add_argument( - "--train_batch_size", type=int, default=4, help="Batch size (per device) for the training dataloader." - ) - parser.add_argument( - "--sample_batch_size", type=int, default=4, help="Batch size (per device) for sampling images." - ) - parser.add_argument("--num_train_epochs", type=int, default=1) - parser.add_argument( - "--max_train_steps", - type=int, - default=None, - help="Total number of training steps to perform. If provided, overrides num_train_epochs.", - ) - parser.add_argument( - "--gradient_accumulation_steps", - type=int, - default=1, - help="Number of updates steps to accumulate before performing a backward/update pass.", - ) - parser.add_argument( - "--gradient_checkpointing", - action="store_true", - help="Whether or not to use gradient checkpointing to save memory at the expense of slower backward pass.", - ) - parser.add_argument( - "--learning_rate", - type=float, - default=5e-6, - help="Initial learning rate (after the potential warmup period) to use.", - ) - parser.add_argument( - "--scale_lr", - action="store_true", - default=False, - help="Scale the learning rate by the number of GPUs, gradient accumulation steps, and batch size.", - ) - parser.add_argument( - "--lr_scheduler", - type=str, - default="constant", - help=( - 'The scheduler type to use. Choose between ["linear", "cosine", "cosine_with_restarts", "polynomial",' - ' "constant", "constant_with_warmup"]' - ), - ) - parser.add_argument( - "--lr_warmup_steps", type=int, default=500, help="Number of steps for the warmup in the lr scheduler." - ) - parser.add_argument( - "--use_8bit_adam", action="store_true", help="Whether or not to use 8-bit Adam from bitsandbytes." 
- ) - parser.add_argument("--adam_beta1", type=float, default=0.9, help="The beta1 parameter for the Adam optimizer.") - parser.add_argument("--adam_beta2", type=float, default=0.999, help="The beta2 parameter for the Adam optimizer.") - parser.add_argument("--adam_weight_decay", type=float, default=1e-2, help="Weight decay to use.") - parser.add_argument("--adam_epsilon", type=float, default=1e-08, help="Epsilon value for the Adam optimizer") - parser.add_argument("--max_grad_norm", default=1.0, type=float, help="Max gradient norm.") - parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.") - parser.add_argument("--hub_token", type=str, default=None, help="The token to use to push to the Model Hub.") - parser.add_argument( - "--hub_model_id", - type=str, - default=None, - help="The name of the repository to keep in sync with the local `output_dir`.", - ) - parser.add_argument( - "--logging_dir", - type=str, - default="logs", - help=( - "[TensorBoard](https://www.tensorflow.org/tensorboard) log directory. Will default to" - " *output_dir/runs/**CURRENT_DATETIME_HOSTNAME***." - ), - ) - parser.add_argument( - "--mixed_precision", - type=str, - default="no", - choices=["no", "fp16", "bf16"], - help=( - "Whether to use mixed precision. Choose" - "between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >= 1.10." - "and an Nvidia Ampere GPU." - ), - ) - parser.add_argument("--local_rank", type=int, default=-1, help="For distributed training: local_rank") - parser.add_argument( - "--checkpointing_steps", - type=int, - default=500, - help=( - "Save a checkpoint of the training state every X updates. These checkpoints can be used both as final" - " checkpoints in case they are better than the last checkpoint and are suitable for resuming training" - " using `--resume_from_checkpoint`." - ), - ) - parser.add_argument( - "--checkpoints_total_limit", - type=int, - default=None, - help=( - "Max number of checkpoints to store. Passed as `total_limit` to the `Accelerator` `ProjectConfiguration`." - " See Accelerator::save_state https://huggingface.co/docs/accelerate/package_reference/accelerator#accelerate.Accelerator.save_state" - " for more docs" - ), - ) - parser.add_argument( - "--resume_from_checkpoint", - type=str, - default=None, - help=( - "Whether training should be resumed from a previous checkpoint. Use a path saved by" - ' `--checkpointing_steps`, or `"latest"` to automatically select the last available checkpoint.' - ), - ) - - args = parser.parse_args() - env_local_rank = int(os.environ.get("LOCAL_RANK", -1)) - if env_local_rank != -1 and env_local_rank != args.local_rank: - args.local_rank = env_local_rank - - if args.instance_data_dir is None: - raise ValueError("You must specify a train data directory.") - - if args.with_prior_preservation: - if args.class_data_dir is None: - raise ValueError("You must specify a data directory for class images.") - if args.class_prompt is None: - raise ValueError("You must specify prompt for class images.") - - return args - - -class DreamBoothDataset(Dataset): - """ - A dataset to prepare the instance and class images with the prompts for fine-tuning the model. - It pre-processes the images and the tokenizes prompts. 
- """ - - def __init__( - self, - instance_data_root, - instance_prompt, - tokenizer, - class_data_root=None, - class_prompt=None, - size=512, - center_crop=False, - ): - self.size = size - self.center_crop = center_crop - self.tokenizer = tokenizer - - self.instance_data_root = Path(instance_data_root) - if not self.instance_data_root.exists(): - raise ValueError("Instance images root doesn't exists.") - - self.instance_images_path = list(Path(instance_data_root).iterdir()) - self.num_instance_images = len(self.instance_images_path) - self.instance_prompt = instance_prompt - self._length = self.num_instance_images - - if class_data_root is not None: - self.class_data_root = Path(class_data_root) - self.class_data_root.mkdir(parents=True, exist_ok=True) - self.class_images_path = list(self.class_data_root.iterdir()) - self.num_class_images = len(self.class_images_path) - self._length = max(self.num_class_images, self.num_instance_images) - self.class_prompt = class_prompt - else: - self.class_data_root = None - - self.image_transforms_resize_and_crop = transforms.Compose( - [ - transforms.Resize(size, interpolation=transforms.InterpolationMode.BILINEAR), - transforms.CenterCrop(size) if center_crop else transforms.RandomCrop(size), - ] - ) - - self.image_transforms = transforms.Compose( - [ - transforms.ToTensor(), - transforms.Normalize([0.5], [0.5]), - ] - ) - - def __len__(self): - return self._length - - def __getitem__(self, index): - example = {} - instance_image = Image.open(self.instance_images_path[index % self.num_instance_images]) - if not instance_image.mode == "RGB": - instance_image = instance_image.convert("RGB") - instance_image = self.image_transforms_resize_and_crop(instance_image) - - example["PIL_images"] = instance_image - example["instance_images"] = self.image_transforms(instance_image) - - example["instance_prompt_ids"] = self.tokenizer( - self.instance_prompt, - padding="do_not_pad", - truncation=True, - max_length=self.tokenizer.model_max_length, - ).input_ids - - if self.class_data_root: - class_image = Image.open(self.class_images_path[index % self.num_class_images]) - if not class_image.mode == "RGB": - class_image = class_image.convert("RGB") - class_image = self.image_transforms_resize_and_crop(class_image) - example["class_images"] = self.image_transforms(class_image) - example["class_PIL_images"] = class_image - example["class_prompt_ids"] = self.tokenizer( - self.class_prompt, - padding="do_not_pad", - truncation=True, - max_length=self.tokenizer.model_max_length, - ).input_ids - - return example - - -class PromptDataset(Dataset): - "A simple dataset to prepare the prompts to generate class images on multiple GPUs." - - def __init__(self, prompt, num_samples): - self.prompt = prompt - self.num_samples = num_samples - - def __len__(self): - return self.num_samples - - def __getitem__(self, index): - example = {} - example["prompt"] = self.prompt - example["index"] = index - return example - - -def main(): - args = parse_args() - logging_dir = Path(args.output_dir, args.logging_dir) - - project_config = ProjectConfiguration(total_limit=args.checkpoints_total_limit) - - accelerator = Accelerator( - gradient_accumulation_steps=args.gradient_accumulation_steps, - mixed_precision=args.mixed_precision, - log_with="tensorboard", - logging_dir=logging_dir, - project_config=project_config, - ) - - # Currently, it's not possible to do gradient accumulation when training two models with accelerate.accumulate - # This will be enabled soon in accelerate. 
For now, we don't allow gradient accumulation when training two models. - # TODO (patil-suraj): Remove this check when gradient accumulation with two models is enabled in accelerate. - if args.train_text_encoder and args.gradient_accumulation_steps > 1 and accelerator.num_processes > 1: - raise ValueError( - "Gradient accumulation is not supported when training the text encoder in distributed training. " - "Please set gradient_accumulation_steps to 1. This feature will be supported in the future." - ) - - if args.seed is not None: - set_seed(args.seed) - - if args.with_prior_preservation: - class_images_dir = Path(args.class_data_dir) - if not class_images_dir.exists(): - class_images_dir.mkdir(parents=True) - cur_class_images = len(list(class_images_dir.iterdir())) - - if cur_class_images < args.num_class_images: - torch_dtype = torch.float16 if accelerator.device.type == "cuda" else torch.float32 - pipeline = StableDiffusionInpaintPipeline.from_pretrained( - args.pretrained_model_name_or_path, torch_dtype=torch_dtype, safety_checker=None - ) - pipeline.set_progress_bar_config(disable=True) - - num_new_images = args.num_class_images - cur_class_images - logger.info(f"Number of class images to sample: {num_new_images}.") - - sample_dataset = PromptDataset(args.class_prompt, num_new_images) - sample_dataloader = torch.utils.data.DataLoader( - sample_dataset, batch_size=args.sample_batch_size, num_workers=1 - ) - - sample_dataloader = accelerator.prepare(sample_dataloader) - pipeline.to(accelerator.device) - transform_to_pil = transforms.ToPILImage() - for example in tqdm( - sample_dataloader, desc="Generating class images", disable=not accelerator.is_local_main_process - ): - bsz = len(example["prompt"]) - fake_images = torch.rand((3, args.resolution, args.resolution)) - transform_to_pil = transforms.ToPILImage() - fake_pil_images = transform_to_pil(fake_images) - - fake_mask = random_mask((args.resolution, args.resolution), ratio=1, mask_full_image=True) - - images = pipeline(prompt=example["prompt"], mask_image=fake_mask, image=fake_pil_images).images - - for i, image in enumerate(images): - hash_image = hashlib.sha1(image.tobytes()).hexdigest() - image_filename = class_images_dir / f"{example['index'][i] + cur_class_images}-{hash_image}.jpg" - image.save(image_filename) - - del pipeline - if torch.cuda.is_available(): - torch.cuda.empty_cache() - - # Handle the repository creation - if accelerator.is_main_process: - if args.output_dir is not None: - os.makedirs(args.output_dir, exist_ok=True) - - if args.push_to_hub: - repo_id = create_repo( - repo_id=args.hub_model_id or Path(args.output_dir).name, exist_ok=True, token=args.hub_token - ).repo_id - - # Load the tokenizer - if args.tokenizer_name: - tokenizer = CLIPTokenizer.from_pretrained(args.tokenizer_name) - elif args.pretrained_model_name_or_path: - tokenizer = CLIPTokenizer.from_pretrained(args.pretrained_model_name_or_path, subfolder="tokenizer") - - # Load models and create wrapper for stable diffusion - text_encoder = CLIPTextModel.from_pretrained(args.pretrained_model_name_or_path, subfolder="text_encoder") - vae = AutoencoderKL.from_pretrained(args.pretrained_model_name_or_path, subfolder="vae") - unet = UNet2DConditionModel.from_pretrained(args.pretrained_model_name_or_path, subfolder="unet") - - vae.requires_grad_(False) - if not args.train_text_encoder: - text_encoder.requires_grad_(False) - - if args.gradient_checkpointing: - unet.enable_gradient_checkpointing() - if args.train_text_encoder: - 
text_encoder.gradient_checkpointing_enable() - - if args.scale_lr: - args.learning_rate = ( - args.learning_rate * args.gradient_accumulation_steps * args.train_batch_size * accelerator.num_processes - ) - - # Use 8-bit Adam for lower memory usage or to fine-tune the model in 16GB GPUs - if args.use_8bit_adam: - try: - import bitsandbytes as bnb - except ImportError: - raise ImportError( - "To use 8-bit Adam, please install the bitsandbytes library: `pip install bitsandbytes`." - ) - - optimizer_class = bnb.optim.AdamW8bit - else: - optimizer_class = torch.optim.AdamW - - params_to_optimize = ( - itertools.chain(unet.parameters(), text_encoder.parameters()) if args.train_text_encoder else unet.parameters() - ) - optimizer = optimizer_class( - params_to_optimize, - lr=args.learning_rate, - betas=(args.adam_beta1, args.adam_beta2), - weight_decay=args.adam_weight_decay, - eps=args.adam_epsilon, - ) - - noise_scheduler = DDPMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler") - - train_dataset = DreamBoothDataset( - instance_data_root=args.instance_data_dir, - instance_prompt=args.instance_prompt, - class_data_root=args.class_data_dir if args.with_prior_preservation else None, - class_prompt=args.class_prompt, - tokenizer=tokenizer, - size=args.resolution, - center_crop=args.center_crop, - ) - - def collate_fn(examples): - input_ids = [example["instance_prompt_ids"] for example in examples] - pixel_values = [example["instance_images"] for example in examples] - - # Concat class and instance examples for prior preservation. - # We do this to avoid doing two forward passes. - if args.with_prior_preservation: - input_ids += [example["class_prompt_ids"] for example in examples] - pixel_values += [example["class_images"] for example in examples] - pior_pil = [example["class_PIL_images"] for example in examples] - - masks = [] - masked_images = [] - for example in examples: - pil_image = example["PIL_images"] - # generate a random mask - mask = random_mask(pil_image.size, 1, False) - # prepare mask and masked image - mask, masked_image = prepare_mask_and_masked_image(pil_image, mask) - - masks.append(mask) - masked_images.append(masked_image) - - if args.with_prior_preservation: - for pil_image in pior_pil: - # generate a random mask - mask = random_mask(pil_image.size, 1, False) - # prepare mask and masked image - mask, masked_image = prepare_mask_and_masked_image(pil_image, mask) - - masks.append(mask) - masked_images.append(masked_image) - - pixel_values = torch.stack(pixel_values) - pixel_values = pixel_values.to(memory_format=torch.contiguous_format).float() - - input_ids = tokenizer.pad({"input_ids": input_ids}, padding=True, return_tensors="pt").input_ids - masks = torch.stack(masks) - masked_images = torch.stack(masked_images) - batch = {"input_ids": input_ids, "pixel_values": pixel_values, "masks": masks, "masked_images": masked_images} - return batch - - train_dataloader = torch.utils.data.DataLoader( - train_dataset, batch_size=args.train_batch_size, shuffle=True, collate_fn=collate_fn - ) - - # Scheduler and math around the number of training steps. 
- overrode_max_train_steps = False - num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) - if args.max_train_steps is None: - args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch - overrode_max_train_steps = True - - lr_scheduler = get_scheduler( - args.lr_scheduler, - optimizer=optimizer, - num_warmup_steps=args.lr_warmup_steps * args.gradient_accumulation_steps, - num_training_steps=args.max_train_steps * args.gradient_accumulation_steps, - ) - - if args.train_text_encoder: - unet, text_encoder, optimizer, train_dataloader, lr_scheduler = accelerator.prepare( - unet, text_encoder, optimizer, train_dataloader, lr_scheduler - ) - else: - unet, optimizer, train_dataloader, lr_scheduler = accelerator.prepare( - unet, optimizer, train_dataloader, lr_scheduler - ) - accelerator.register_for_checkpointing(lr_scheduler) - - weight_dtype = torch.float32 - if args.mixed_precision == "fp16": - weight_dtype = torch.float16 - elif args.mixed_precision == "bf16": - weight_dtype = torch.bfloat16 - - # Move text_encode and vae to gpu. - # For mixed precision training we cast the text_encoder and vae weights to half-precision - # as these models are only used for inference, keeping weights in full precision is not required. - vae.to(accelerator.device, dtype=weight_dtype) - if not args.train_text_encoder: - text_encoder.to(accelerator.device, dtype=weight_dtype) - - # We need to recalculate our total training steps as the size of the training dataloader may have changed. - num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) - if overrode_max_train_steps: - args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch - # Afterwards we recalculate our number of training epochs - args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch) - - # We need to initialize the trackers we use, and also store our configuration. - # The trackers initializes automatically on the main process. - if accelerator.is_main_process: - accelerator.init_trackers("dreambooth", config=vars(args)) - - # Train! - total_batch_size = args.train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps - - logger.info("***** Running training *****") - logger.info(f" Num examples = {len(train_dataset)}") - logger.info(f" Num batches each epoch = {len(train_dataloader)}") - logger.info(f" Num Epochs = {args.num_train_epochs}") - logger.info(f" Instantaneous batch size per device = {args.train_batch_size}") - logger.info(f" Total train batch size (w. parallel, distributed & accumulation) = {total_batch_size}") - logger.info(f" Gradient Accumulation steps = {args.gradient_accumulation_steps}") - logger.info(f" Total optimization steps = {args.max_train_steps}") - global_step = 0 - first_epoch = 0 - - if args.resume_from_checkpoint: - if args.resume_from_checkpoint != "latest": - path = os.path.basename(args.resume_from_checkpoint) - else: - # Get the most recent checkpoint - dirs = os.listdir(args.output_dir) - dirs = [d for d in dirs if d.startswith("checkpoint")] - dirs = sorted(dirs, key=lambda x: int(x.split("-")[1])) - path = dirs[-1] if len(dirs) > 0 else None - - if path is None: - accelerator.print( - f"Checkpoint '{args.resume_from_checkpoint}' does not exist. Starting a new training run." 
- ) - args.resume_from_checkpoint = None - else: - accelerator.print(f"Resuming from checkpoint {path}") - accelerator.load_state(os.path.join(args.output_dir, path)) - global_step = int(path.split("-")[1]) - - resume_global_step = global_step * args.gradient_accumulation_steps - first_epoch = global_step // num_update_steps_per_epoch - resume_step = resume_global_step % (num_update_steps_per_epoch * args.gradient_accumulation_steps) - - # Only show the progress bar once on each machine. - progress_bar = tqdm(range(global_step, args.max_train_steps), disable=not accelerator.is_local_main_process) - progress_bar.set_description("Steps") - - for epoch in range(first_epoch, args.num_train_epochs): - unet.train() - for step, batch in enumerate(train_dataloader): - # Skip steps until we reach the resumed step - if args.resume_from_checkpoint and epoch == first_epoch and step < resume_step: - if step % args.gradient_accumulation_steps == 0: - progress_bar.update(1) - continue - - with accelerator.accumulate(unet): - # Convert images to latent space - - latents = vae.encode(batch["pixel_values"].to(dtype=weight_dtype)).latent_dist.sample() - latents = latents * vae.config.scaling_factor - - # Convert masked images to latent space - masked_latents = vae.encode( - batch["masked_images"].reshape(batch["pixel_values"].shape).to(dtype=weight_dtype) - ).latent_dist.sample() - masked_latents = masked_latents * vae.config.scaling_factor - - masks = batch["masks"] - # resize the mask to latents shape as we concatenate the mask to the latents - mask = torch.stack( - [ - torch.nn.functional.interpolate(mask, size=(args.resolution // 8, args.resolution // 8)) - for mask in masks - ] - ) - mask = mask.reshape(-1, 1, args.resolution // 8, args.resolution // 8) - - # Sample noise that we'll add to the latents - noise = torch.randn_like(latents) - bsz = latents.shape[0] - # Sample a random timestep for each image - timesteps = torch.randint(0, noise_scheduler.config.num_train_timesteps, (bsz,), device=latents.device) - timesteps = timesteps.long() - - # Add noise to the latents according to the noise magnitude at each timestep - # (this is the forward diffusion process) - noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps) - - # concatenate the noised latents with the mask and the masked latents - latent_model_input = torch.cat([noisy_latents, mask, masked_latents], dim=1) - - # Get the text embedding for conditioning - encoder_hidden_states = text_encoder(batch["input_ids"])[0] - - # Predict the noise residual - noise_pred = unet(latent_model_input, timesteps, encoder_hidden_states).sample - - # Get the target for loss depending on the prediction type - if noise_scheduler.config.prediction_type == "epsilon": - target = noise - elif noise_scheduler.config.prediction_type == "v_prediction": - target = noise_scheduler.get_velocity(latents, noise, timesteps) - else: - raise ValueError(f"Unknown prediction type {noise_scheduler.config.prediction_type}") - - if args.with_prior_preservation: - # Chunk the noise and noise_pred into two parts and compute the loss on each part separately. - noise_pred, noise_pred_prior = torch.chunk(noise_pred, 2, dim=0) - target, target_prior = torch.chunk(target, 2, dim=0) - - # Compute instance loss - loss = F.mse_loss(noise_pred.float(), target.float(), reduction="none").mean([1, 2, 3]).mean() - - # Compute prior loss - prior_loss = F.mse_loss(noise_pred_prior.float(), target_prior.float(), reduction="mean") - - # Add the prior loss to the instance loss. 
- loss = loss + args.prior_loss_weight * prior_loss - else: - loss = F.mse_loss(noise_pred.float(), target.float(), reduction="mean") - - accelerator.backward(loss) - if accelerator.sync_gradients: - params_to_clip = ( - itertools.chain(unet.parameters(), text_encoder.parameters()) - if args.train_text_encoder - else unet.parameters() - ) - accelerator.clip_grad_norm_(params_to_clip, args.max_grad_norm) - optimizer.step() - lr_scheduler.step() - optimizer.zero_grad() - - # Checks if the accelerator has performed an optimization step behind the scenes - if accelerator.sync_gradients: - progress_bar.update(1) - global_step += 1 - - if global_step % args.checkpointing_steps == 0: - if accelerator.is_main_process: - save_path = os.path.join(args.output_dir, f"checkpoint-{global_step}") - accelerator.save_state(save_path) - logger.info(f"Saved state to {save_path}") - - logs = {"loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0]} - progress_bar.set_postfix(**logs) - accelerator.log(logs, step=global_step) - - if global_step >= args.max_train_steps: - break - - accelerator.wait_for_everyone() - - # Create the pipeline using using the trained modules and save it. - if accelerator.is_main_process: - pipeline = StableDiffusionPipeline.from_pretrained( - args.pretrained_model_name_or_path, - unet=accelerator.unwrap_model(unet), - text_encoder=accelerator.unwrap_model(text_encoder), - ) - pipeline.save_pretrained(args.output_dir) - - if args.push_to_hub: - upload_folder( - repo_id=repo_id, - folder_path=args.output_dir, - commit_message="End of training", - ignore_patterns=["step_*", "epoch_*"], - ) - - accelerator.end_training() - - -if __name__ == "__main__": - main() \ No newline at end of file diff --git a/spaces/shi-labs/FcF-Inpainting/torch_utils/ops/__init__.py b/spaces/shi-labs/FcF-Inpainting/torch_utils/ops/__init__.py deleted file mode 100644 index ece0ea08fe2e939cc260a1dafc0ab5b391b773d9..0000000000000000000000000000000000000000 --- a/spaces/shi-labs/FcF-Inpainting/torch_utils/ops/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -# empty diff --git a/spaces/shi-labs/Matting-Anything/utils/evaluate.py b/spaces/shi-labs/Matting-Anything/utils/evaluate.py deleted file mode 100644 index e435dd60b6f968e4f7f0f078d6fb69f2c123e570..0000000000000000000000000000000000000000 --- a/spaces/shi-labs/Matting-Anything/utils/evaluate.py +++ /dev/null @@ -1,112 +0,0 @@ -""" -Reimplement evaluation.mat provided by Adobe in python -Output of `compute_gradient_loss` is sightly different from the MATLAB version provided by Adobe (less than 0.1%) -Output of `compute_connectivity_error` is smaller than the MATLAB version (~5%, maybe MATLAB has a different algorithm) -So do not report results calculated by these functions in your paper. -Evaluate your inference with the MATLAB file `DIM_evaluation_code/evaluate.m`. 
- -by Yaoyi Li -""" - -import scipy.ndimage -import numpy as np -from skimage.measure import label -import scipy.ndimage.morphology - - -def gauss(x, sigma): - y = np.exp(-x ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi)) - return y - - -def dgauss(x, sigma): - y = -x * gauss(x, sigma) / (sigma ** 2) - return y - - -def gaussgradient(im, sigma): - epsilon = 1e-2 - halfsize = np.ceil(sigma * np.sqrt(-2 * np.log(np.sqrt(2 * np.pi) * sigma * epsilon))).astype(np.int32) - size = 2 * halfsize + 1 - hx = np.zeros((size, size)) - for i in range(0, size): - for j in range(0, size): - u = [i - halfsize, j - halfsize] - hx[i, j] = gauss(u[0], sigma) * dgauss(u[1], sigma) - - hx = hx / np.sqrt(np.sum(np.abs(hx) * np.abs(hx))) - hy = hx.transpose() - - gx = scipy.ndimage.convolve(im, hx, mode='nearest') - gy = scipy.ndimage.convolve(im, hy, mode='nearest') - - return gx, gy - - -def compute_gradient_loss(pred, target, trimap): - - pred = pred / 255.0 - target = target / 255.0 - - pred_x, pred_y = gaussgradient(pred, 1.4) - target_x, target_y = gaussgradient(target, 1.4) - - pred_amp = np.sqrt(pred_x ** 2 + pred_y ** 2) - target_amp = np.sqrt(target_x ** 2 + target_y ** 2) - - error_map = (pred_amp - target_amp) ** 2 - loss = np.sum(error_map[trimap == 128]) - - return loss / 1000. - - -def getLargestCC(segmentation): - labels = label(segmentation, connectivity=1) - largestCC = labels == np.argmax(np.bincount(labels.flat)) - return largestCC - - -def compute_connectivity_error(pred, target, trimap, step=0.1): - pred = pred / 255.0 - target = target / 255.0 - h, w = pred.shape - - thresh_steps = list(np.arange(0, 1 + step, step)) - l_map = np.ones_like(pred, dtype=np.float) * -1 - for i in range(1, len(thresh_steps)): - pred_alpha_thresh = (pred >= thresh_steps[i]).astype(np.int) - target_alpha_thresh = (target >= thresh_steps[i]).astype(np.int) - - omega = getLargestCC(pred_alpha_thresh * target_alpha_thresh).astype(np.int) - flag = ((l_map == -1) & (omega == 0)).astype(np.int) - l_map[flag == 1] = thresh_steps[i - 1] - - l_map[l_map == -1] = 1 - - pred_d = pred - l_map - target_d = target - l_map - pred_phi = 1 - pred_d * (pred_d >= 0.15).astype(np.int) - target_phi = 1 - target_d * (target_d >= 0.15).astype(np.int) - loss = np.sum(np.abs(pred_phi - target_phi)[trimap == 128]) - - return loss / 1000. 
- - -def compute_mse_loss(pred, target, trimap): - error_map = (pred - target) / 255.0 - loss = np.sum((error_map ** 2) * (trimap == 128)) / (np.sum(trimap == 128) + 1e-8) - - return loss - - -def compute_sad_loss(pred, target, trimap): - error_map = np.abs((pred - target) / 255.0) - loss = np.sum(error_map * (trimap == 128)) - - return loss / 1000, np.sum(trimap == 128) / 1000 - -def compute_mad_loss(pred, target, trimap): - error_map = np.abs((pred - target) / 255.0) - loss = np.sum(error_map * (trimap == 128)) / (np.sum(trimap == 128) + 1e-8) - - return loss diff --git a/spaces/shikunl/prismer/README.md b/spaces/shikunl/prismer/README.md deleted file mode 100644 index 2d737cd35e287ff0036a57d86ffbbe4e299d2ad0..0000000000000000000000000000000000000000 --- a/spaces/shikunl/prismer/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: Prismer -emoji: 🔺 -colorFrom: blue -colorTo: green -sdk: docker -pinned: false -license: other ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/shivi/calm_seafoam/app.py b/spaces/shivi/calm_seafoam/app.py deleted file mode 100644 index 238ce2e042a74fd086e40c7de7b6eeaad0f7f179..0000000000000000000000000000000000000000 --- a/spaces/shivi/calm_seafoam/app.py +++ /dev/null @@ -1,147 +0,0 @@ -import time - -from theme_dropdown import create_theme_dropdown # noqa: F401 - -import gradio as gr - -dropdown, js = create_theme_dropdown() - -with gr.Blocks(theme='shivi/calm_seafoam') as demo: - with gr.Row().style(equal_height=True): - with gr.Column(scale=10): - gr.Markdown( - """ - # Theme preview: `calm_seafoam` - To use this theme, set `theme='shivi/calm_seafoam'` in `gr.Blocks()` or `gr.Interface()`. - You can append an `@` and a semantic version expression, e.g. @>=1.0.0,<2.0.0 to pin to a given version - of this theme. - """ - ) - with gr.Column(scale=3): - with gr.Box(): - dropdown.render() - toggle_dark = gr.Button(value="Toggle Dark").style(full_width=True) - - dropdown.change(None, dropdown, None, _js=js) - toggle_dark.click( - None, - _js=""" - () => { - document.body.classList.toggle('dark'); - document.querySelector('gradio-app').style.backgroundColor = 'var(--color-background-primary)' - } - """, - ) - - name = gr.Textbox( - label="Name", - info="Full name, including middle name. No special characters.", - placeholder="John Doe", - value="John Doe", - interactive=True, - ) - - with gr.Row(): - slider1 = gr.Slider(label="Slider 1") - slider2 = gr.Slider(label="Slider 2") - gr.CheckboxGroup(["A", "B", "C"], label="Checkbox Group") - - with gr.Row(): - with gr.Column(variant="panel", scale=1): - gr.Markdown("## Panel 1") - radio = gr.Radio( - ["A", "B", "C"], - label="Radio", - info="Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. 
Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.", - ) - drop = gr.Dropdown(["Option 1", "Option 2", "Option 3"], show_label=False) - drop_2 = gr.Dropdown( - ["Option A", "Option B", "Option C"], - multiselect=True, - value=["Option A"], - label="Dropdown", - interactive=True, - ) - check = gr.Checkbox(label="Go") - with gr.Column(variant="panel", scale=2): - img = gr.Image( - "https://gradio.app/assets/img/header-image.jpg", label="Image" - ).style(height=320) - with gr.Row(): - go_btn = gr.Button("Go", label="Primary Button", variant="primary") - clear_btn = gr.Button( - "Clear", label="Secondary Button", variant="secondary" - ) - - def go(*args): - time.sleep(3) - return "https://gradio.app/assets/img/header-image.jpg" - - go_btn.click(go, [radio, drop, drop_2, check, name], img, api_name="go") - - def clear(): - time.sleep(0.2) - return None - - clear_btn.click(clear, None, img) - - with gr.Row(): - btn1 = gr.Button("Button 1").style(size="sm") - btn2 = gr.UploadButton().style(size="sm") - stop_btn = gr.Button("Stop", label="Stop Button", variant="stop").style( - size="sm" - ) - - with gr.Row(): - gr.Dataframe(value=[[1, 2, 3], [4, 5, 6], [7, 8, 9]], label="Dataframe") - gr.JSON( - value={"a": 1, "b": 2, "c": {"test": "a", "test2": [1, 2, 3]}}, label="JSON" - ) - gr.Label(value={"cat": 0.7, "dog": 0.2, "fish": 0.1}) - gr.File() - with gr.Row(): - gr.ColorPicker() - gr.Video("https://gradio-static-files.s3.us-west-2.amazonaws.com/world.mp4") - gr.Gallery( - [ - ( - "https://gradio-static-files.s3.us-west-2.amazonaws.com/lion.jpg", - "lion", - ), - ( - "https://gradio-static-files.s3.us-west-2.amazonaws.com/logo.png", - "logo", - ), - ( - "https://gradio-static-files.s3.us-west-2.amazonaws.com/tower.jpg", - "tower", - ), - ] - ).style(height="200px", grid=2) - - with gr.Row(): - with gr.Column(scale=2): - chatbot = gr.Chatbot([("Hello", "Hi")], label="Chatbot") - chat_btn = gr.Button("Add messages") - - def chat(history): - time.sleep(2) - yield [["How are you?", "I am good."]] - - chat_btn.click( - lambda history: history - + [["How are you?", "I am good."]] - + (time.sleep(2) or []), - chatbot, - chatbot, - ) - with gr.Column(scale=1): - with gr.Accordion("Advanced Settings"): - gr.Markdown("Hello") - gr.Number(label="Chatbot control 1") - gr.Number(label="Chatbot control 2") - gr.Number(label="Chatbot control 3") - - -if __name__ == "__main__": - demo.queue().launch() diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Agar.io How to Download and Play Online with Players Around the World.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Agar.io How to Download and Play Online with Players Around the World.md deleted file mode 100644 index 021adac152bc0a0a79d7e78d222b2b8d24f40c22..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Agar.io How to Download and Play Online with Players Around the World.md +++ /dev/null @@ -1,86 +0,0 @@ -
-

Agar.io Download: How to Play the Popular Browser Game on Your Mobile Device

-

Have you ever wanted to become a giant cell that eats other cells in a Petri dish? If that sounds like fun to you, then you should try out agar.io, a massively multiplayer online action game that has taken the internet by storm. Agar.io is a simple but addictive game that millions of people have enjoyed on their browsers. But did you know that you can also play it on your mobile device? In this article, we will show you how to download agar.io on your Android or iOS device, as well as some features, tips, and tricks that will help you become the biggest cell of them all.

-

agar.io download


DOWNLOAD: https://ssurll.com/2uNWF7



-

Features of Agar.io

-

Agar.io is a game that is easy to learn but hard to master. The basic concept is to control a tiny cell and eat smaller cells to grow larger, while avoiding larger cells that can eat you. The game is played on a board that is filled with various sized cells and pellets, which are the main source of food. The more you eat, the bigger you become, but also the slower you move. You can also split your cell into two or more pieces, which can help you catch smaller cells or escape from larger ones. However, splitting also makes you more vulnerable to being eaten by bigger cells.

-

Agar.io offers different game modes that suit different play styles and preferences. You can play online in free-for-all mode, where everyone is your enemy and only the strongest survive. You can also play in battle royale mode, where you have to be the last cell standing in a shrinking arena. If you prefer teamwork, you can join teams mode, where you cooperate with other players of the same color and compete against other teams. You can also try experimental mode, where you can encounter new features and mechanics that are not available in other modes. Finally, you can create or join a party mode, where you can play with your friends or other players with a special code.

-

One of the coolest features of agar.io is that you can customize your cell's appearance with different skins. Skins are images or patterns that cover your cell and make it stand out from the crowd. You can use a variety of special secret skins by entering certain names as your nickname, such as "doge", "earth", or "pokerface". You can also create your own skin by uploading an image or drawing one with the in-game editor. You can change your skin anytime before or during the game.

-

Another feature of agar.io is that you can compete with millions of players from around the world and see how you rank on the leaderboards. The leaderboards show the top 10 players in each game mode, as well as your own position and score. You can also collect incredible daily rewards by logging in every day and completing quests. Rewards include coins, DNA, potions, and skins. Coins can be used to buy skins or boosters, which can give you an edge in the game. DNA can be used to upgrade your stats, such as speed, mass, or regeneration. Potions can be used to get random skins or mass.

-

How to Download Agar.io on Android and iOS

-

If you want to play agar.io on your mobile device, you will need to download the official app from the Google Play Store or the App Store. The app is free to download and play, but it contains ads and in-app purchases. Here are the steps to download and install agar.io on your Android or iOS device:

-
    -
  • Step 1: Go to the Google Play Store or the App Store and search for agar.io. You can also use these links: Agar.io for Android and Agar.io for iOS.
  • -
  • Step 2: Tap on the install button and wait for the game to download. The game size is about 40 MB for Android and 80 MB for iOS.
  • -
  • Step 3: Open the game and choose your nickname and skin. You can also sign in with Facebook, Google, or Apple to save your progress and access more features.
  • -
  • Step 4: Select a game mode and start playing. You can also adjust the settings, such as sound, graphics, controls, and language.
  • -
-

That's it! You are now ready to enjoy agar.io on your mobile device. Have fun!

-

-

Tips and Tricks for Agar.io

-

Agar.io is a game that requires skill, strategy, and luck. It can be frustrating at times, especially when you get eaten by a bigger cell or lose connection. But don't worry, we have some tips and tricks that will help you improve your game and have more fun. Here are some of them:

-
    -
  • Hide behind viruses to avoid bigger cells. Viruses are green spiky cells that can split larger cells into smaller pieces. You can use them as shields or obstacles to escape from your enemies. However, be careful not to touch them yourself, as they can also split you if you are too big.
  • -
  • Use the edges and corners to trap your enemies. The board has four edges and four corners that you can use to your advantage. You can corner smaller cells and eat them easily, or you can push larger cells into the edges and make them vulnerable to other players.
  • -
  • Split your cell to catch smaller cells or escape from larger ones. You can split your cell into two or more pieces by tapping on the screen or pressing the space bar. This can help you increase your speed, reach, and maneuverability. However, splitting also reduces your mass and makes you more exposed to bigger cells.
  • -
  • Shoot viruses at your opponents to make them explode. You can shoot a small piece of your cell by tapping twice on the screen or pressing the W key. This can help you feed smaller cells, create new viruses, or attack larger cells. If you hit a virus with enough mass, it will shoot out in the opposite direction and hit any cell in its way.
  • -
  • Team up with other players or play solo. You can choose to cooperate with other players of the same color in teams mode, or you can play solo in other modes. Teaming up can help you survive longer and dominate the board, but it can also be risky and boring. Playing solo can be more challenging and fun, but it can also be lonely and frustrating.
  • -
-

These are just some of the tips and tricks that you can use in agar.io. There are many more that you can discover by playing the game yourself. Experiment with different strategies and see what works best for you.

-

Conclusion

-

Agar.io is a popular browser game that you can also play on your mobile device. It is a simple but addictive game that involves eating smaller cells and avoiding bigger ones. It has different game modes, customizable skins, leaderboards, and rewards. It is easy to download and install agar.io on your Android or iOS device by following the steps we have provided. It is also fun to play agar.io with some tips and tricks that we have shared. We hope that you have enjoyed this article and learned something new about agar.io. If you have any questions or feedback, please let us know in the comments section below. Thank you for reading!

-

Frequently Asked Questions

-
    -
  • Q: Is agar.io safe for kids?
  • -
  • A: Agar.io is rated 12+ on the App Store and Teen on the Google Play Store. It contains mild violence, blood, simulated gambling, and user-generated content. It also has online interactions that are not moderated by the developer. Therefore, parental guidance is recommended for younger players.
  • -
  • Q: How do I get rid of ads in agar.io?
  • -
  • A: Agar.io has ads that appear before and after each game, as well as on the main menu. These ads help support the developer and keep the game free to play. However, if you find them annoying or intrusive, you can get rid of them by purchasing the ad-free version of the game for $3.99 on the App Store or the Google Play Store. You can also get the ad-free version by subscribing to Miniclip Premium for $0.99 per month or $9.99 per year, which also gives you access to other benefits such as exclusive skins, boosters, and coins.
  • -
  • Q: How do I play agar.io offline?
  • -
  • A: Agar.io is an online game that requires an internet connection to play. However, if you want to play agar.io offline, you can download a modded version of the game that allows you to play against bots or yourself. You can find such mods on various websites or forums, but be careful as some of them may contain viruses or malware. Alternatively, you can play similar games that are offline, such as Osmos, Mitos.is, or Nebulous.
  • -
  • Q: How do I change my name and skin in agar.io?
  • -
  • A: You can change your name and skin in agar.io anytime before or during the game. To do so, tap on the settings icon on the top right corner of the screen and then tap on the name or skin option. You can enter any name you want, or use a secret name to get a special skin. You can also choose from a variety of skins that are available for free or for purchase with coins. You can also create your own skin by uploading an image or drawing one with the editor.
  • -
  • Q: How do I chat with other players in agar.io?
  • -
  • A: Agar.io does not have a built-in chat feature, but you can use external apps or websites to chat with other players. For example, you can use Discord, Skype, or WhatsApp to voice chat or text chat with your friends or other players. You can also use Agar.chat, a website that allows you to chat with other players in real time and join different chat rooms based on your region or language.
  • -
  • Q: How do I report a bug or a problem in agar.io?
  • -
  • A: If you encounter a bug or a problem in agar.io, such as lag, connection issues, glitches, or crashes, you can report it to the developer by using the feedback form on the official website. You can also contact the developer by email at support@miniclip.com or by using the social media channels such as Facebook, Twitter, or Instagram. The developer is always working to improve the game and fix any issues that may arise.
  • -

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Brawlhalla The Game that Has It All - Free to Play Cross-Platform 50 Legends and More - Download Now!.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Brawlhalla The Game that Has It All - Free to Play Cross-Platform 50 Legends and More - Download Now!.md deleted file mode 100644 index 6d4013ef99037e35448b14ff66f6b047dabd9b43..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Brawlhalla The Game that Has It All - Free to Play Cross-Platform 50 Legends and More - Download Now!.md +++ /dev/null @@ -1,132 +0,0 @@ - -

Download Game Brawlhalla: A Free-to-Play Platform Fighter for Everyone

-

If you are looking for a fun and exciting game that you can play with your friends or online with millions of players, then you should download game brawlhalla. Brawlhalla is a free-to-play platform fighter that supports up to 8 players online or local with full cross-play for PC, PS5, PS4, Xbox Series X|S, Xbox One, Nintendo Switch, iOS and Android. You can choose from over 50 legends, each with their own unique weapons and abilities, and brawl to prove who's the best in an epic test of strength and skill. You can also enjoy crossovers with characters from other popular franchises like Adventure Time, WWE, Tomb Raider, and more. Brawlhalla is constantly updated with new features and content, so you will never get bored of playing it. In this article, we will show you how to download and play brawlhalla on different devices, what to expect from brawlhalla gameplay, how to improve your skills and enjoy brawlhalla more, and answer some frequently asked questions about brawlhalla.

-

How to Download and Play Brawlhalla on Different Devices

-

Brawlhalla is available for free on various platforms, so you can play it on your preferred device. Here are the steps to download and play brawlhalla on different devices:

-

download game brawlhalla


DOWNLOAD: https://ssurll.com/2uNRC9



-

PC

-

If you want to play brawlhalla on your PC, you have two options:

-
    -
  • Steam: You can download brawlhalla from Steam for free. You will need a Steam account and a compatible PC to play it. Steam also offers some additional features like achievements, trading cards, cloud saves, etc.
  • -
  • Official website: You can also download brawlhalla from the official website. You will need to create a free account and download the game client. The official website also offers some exclusive content like the All Legends Pack, which unlocks all current and future legends for $19.99.
  • -
-

PS5, PS4, Xbox Series X|S, Xbox One

-

If you want to play brawlhalla on your console, you can download it from the respective online stores:

-
    -
  • PlayStation Store: You can download brawlhalla from the PlayStation Store for free. You will need a PlayStation Network account and a compatible console to play it. You can also purchase some in-game items like skins, taunts, avatars, etc. with real money.
  • -
  • Microsoft Store: You can download brawlhalla from the Microsoft Store for free. You will need a Microsoft account and a compatible console to play it. You can also purchase some in-game items like skins, taunts, avatars, etc. with real money.
  • -
-

Nintendo Switch

-

If you want to play brawlhalla on your Nintendo Switch, you can download it from the Nintendo eShop for free. You will need a Nintendo account and a compatible console to play it. You can also purchase some in-game items like skins, taunts, avatars, etc. with real money.

-

-

iOS and Android

-

If you want to play brawlhalla on your mobile device, you can download it from the App Store or Google Play for free. You will need an iOS or Android device with at least 2 GB of RAM and 350 MB of storage space to play it. You can also purchase some in-game items like skins, taunts, avatars, etc. with real money.

-

What to Expect from Brawlhalla Gameplay

-

Brawlhalla is a platform fighter that offers a variety of game modes, legends, crossovers, and updates to keep you entertained and challenged. Here are some of the things you can expect from brawlhalla gameplay:

-

Game modes

-

Brawlhalla has several game modes that you can play solo or with others. Some of the game modes are:

-
    -
  • Free-for-all: A casual mode where you can join up to 7 other players and fight for the highest score in a 4-minute match.
  • -
  • Ranked: A competitive mode where you can climb the global leaderboards and earn glory points by winning 1v1 or 2v2 matches.
  • -
  • Custom: A mode where you can create your own lobby and invite your friends or other players to join. You can customize the rules, maps, and settings of your matches.
  • -
  • Brawl of the week: A mode where you can try out different and fun game modes that change every week.
  • -
  • Other modes: Brawlhalla also has other modes like experimental, friendly 2v2, strikeout, brawlball, kung foot, horde, snowbrawl, etc. that you can play for fun or practice.
  • -
-

Legends

-

Brawlhalla has over 50 legends that you can choose from, each with their own unique weapons and abilities. You can unlock new legends by earning gold or buying them with real money. You can also customize your legends with skins, colors, podiums, sidekicks, etc. Some of the legends are:

-
    -
  • Bodvar: A bear-clawed warrior who wields a sword and a hammer.
  • -
  • Cassidy: A wild west sheriff who uses a blaster and a hammer.
  • -
  • Orion: A mysterious armored knight who fights with a spear and a rocket lance.
  • -
  • Ember: An elven archer who commands a bow and katars.
  • -
  • Rayman: A limbless hero who swings an axe and gauntlets.
  • -
  • Yumiko: A fox spirit who manipulates a bow and a hammer.
  • -
-

Crossovers

-

Brawlhalla also features crossovers with characters from other popular franchises that you can play as or fight against. Some of the crossovers are:

-
    -
  • Adventure Time: Finn, Jake, Princess Bubblegum, and Marceline join the brawl with their own weapons and abilities.
  • -
  • WWE: The Rock, John Cena, Becky Lynch, Xavier Woods, Asuka, Roman Reigns, Macho Man Randy Savage, and The Undertaker enter the ring with their signature moves and taunts.
  • -
  • Tomb Raider: Lara Croft explores the halls of Valhalla with her dual pistols and grappling hook.
  • -
  • The Walking Dead: Rick Grimes, Daryl Dixon, Michonne, and Negan survive the zombie apocalypse with their guns, crossbow, katana, and Lucille.
  • -
  • Kung Fu Panda: Po, Tigress, Tai Lung, and Master Shifu unleash their kung fu skills with their fists, claws, staffs, and swords.
  • -
  • Shovel Knight: Shovel Knight, Black Knight, King Knight, Plague Knight, Specter Knight, and Enchantress dig into the action with their shovels, scythes, and magic.
  • -
-

Updates

-

Brawlhalla is constantly updated with new features and content to keep the game fresh and exciting. Some of the updates are:

-
    -
  • Patches: Brawlhalla releases regular patches that fix bugs, balance legends and weapons, and add new maps, skins, colors, etc.
  • -
  • Events: Brawlhalla hosts seasonal and special events that offer exclusive rewards, game modes, and themes. Some of the events are Brawlhallidays, Valhallentine's Day, Luck o' the Brawl, Heatwave, Back to School, Brawlhalloween, etc.
  • -
  • Battle Pass: Brawlhalla has a battle pass system that allows you to unlock premium and free rewards by completing missions and earning XP. Each battle pass has a different theme and lasts for 12 weeks.
  • -
  • Crossovers: Brawlhalla also collaborates with other franchises to bring new crossovers to the game. Some of the recent crossovers are Teenage Mutant Ninja Turtles, Ben 10, The Walking Dead, Kung Fu Panda, etc.
  • -
-

How to Improve Your Skills and Enjoy Brawlhalla More

-

Brawlhalla is a game that is easy to learn but hard to master. If you want to improve your skills and enjoy brawlhalla more, here are some tips and resources that you can use:

-

Training mode

-

Brawlhalla has a training mode that allows you to practice your combos, learn from the pros, and test different legends and weapons. You can access the training mode from the main menu. In the training mode, you can:

-
    -
  • Use the dummy bot to practice your attacks and dodges. You can adjust the bot's behavior, damage, position, etc.
  • -
  • Use the frame data to see the startup, active, recovery, damage, force, and stun of each move. You can also enable hitboxes and hurtboxes to see how each move works.
  • -
  • Use the replay feature to watch your own or other players' matches. You can pause, rewind, fast-forward, and analyze each frame of the match.
  • -
  • Use the guides feature to watch tutorials and tips from pro players and streamers. You can learn how to play each legend, weapon, game mode, etc.
  • -
-

Community

-

Brawlhalla has a large and active community that you can join and interact with. You can find other players to play with or against, ask for advice or feedback, share your clips or fan art, participate in contests or giveaways, etc. Some of the platforms where you can find the brawlhalla community are:

-
    -
  • Discord: The official brawlhalla Discord server has over 200k members and is the best place to chat with other players and developers. You can also find other servers for specific regions, languages, game modes, etc.
  • -
  • Reddit: The r/Brawlhalla subreddit has over 100k members and is the best place to post your memes, clips, fan art, suggestions, etc. You can also find other subreddits for specific topics like r/BrawlhallaArt or r/BrawlLeague.
  • -
  • Twitter: The official brawlhalla Twitter account has over 300k followers and is the best place to get the latest news and updates about the game. You can also follow other accounts for pro players, streamers, esports, etc.
  • -
  • YouTube: The official brawlhalla YouTube channel has over 200k subscribers and is the best place to watch the game trailers, dev streams, tournaments, etc. You can also watch other channels for gameplay, guides, montages, etc.
  • -
  • Other platforms: Brawlhalla also has a presence on other platforms like Facebook, Instagram, Twitch, Steam, etc. where you can follow and support the game and the community.
  • -
-

Esports

-

Brawlhalla also has a thriving esports scene that you can watch and participate in. You can witness some of the best players in the world compete for glory and prizes in various tournaments and events. You can also join the competition yourself and test your skills against other players. Some of the esports opportunities are:

-
    -
  • Brawlhalla World Tour: The Brawlhalla World Tour is the official circuit of tournaments that award points and money to the participants. The tour consists of several online and offline events throughout the year, culminating in the Brawlhalla World Championship. The total prize pool for the 2021 season is over $2,000,000.
  • -
  • Brawlhalla Community Series: The Brawlhalla Community Series is a series of community-run tournaments that are supported by the developers. The series offers different formats and regions for players of all skill levels to join and have fun. The total prize pool for the 2021 season is over $100,000.
  • -
  • Brawlhalla Open Tournaments: The Brawlhalla Open Tournaments are free-to-enter tournaments that anyone can join and play. The tournaments are hosted on smash.gg and offer different game modes and regions. The winners of each tournament get a special avatar and title in-game.
  • -
-

Conclusion and FAQs

-

Brawlhalla is a free-to-play platform fighter that you can download and play on any device. You can choose from over 50 legends with unique weapons and abilities, and brawl in various game modes with your friends or online with millions of players. You can also enjoy crossovers with characters from other popular franchises, and updates with new features and content. Brawlhalla is a game that is easy to learn but hard to master, so you can improve your skills and enjoy it more by using the training mode, joining the community, and watching or participating in esports. Brawlhalla is a game that is fun and accessible for everyone, so what are you waiting for? Download game brawlhalla today and join the brawl!

-

Here are some frequently asked questions and answers about brawlhalla:

-

Q: Is brawlhalla free?

-

A: Yes, brawlhalla is free-to-play on all platforms. You can download it from Steam, PlayStation Store, Microsoft Store, Nintendo eShop, App Store, or Google Play for free. You can also play it on your browser without downloading anything.

-

Q: How do I get more gold in brawlhalla?

-

A: Gold is the in-game currency that you can use to unlock new legends, colors, avatars, etc. You can earn gold by playing matches, completing missions, logging in daily, leveling up your account or legends, etc.

-

Q: How do I get more mammoth coins in brawlhalla?

-

A: Mammoth coins are the premium currency that you can use to buy skins, taunts, podiums, sidekicks, etc. You can buy mammoth coins with real money from the in-game store or the official website. You can also get some mammoth coins for free by participating in giveaways, contests, events, etc.

-

Q: How do I change my name in brawlhalla?

-

A: If you play brawlhalla on PC, you can change your name by changing your Steam name. If you play brawlhalla on console or mobile, you can change your name by changing your platform name. You can also use a name change token to change your in-game name once for free.

-

Q: How do I link my brawlhalla account to other platforms?

-

A: You can link your brawlhalla account to other platforms by using the account linking feature. This will allow you to share your progress, inventory, and friends across different devices. You can access the account linking feature from the main menu or the official website.

401be4b1e0
-
-
\ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download TikTok 18 PLUS APK v1.18 - The Ultimate Guide for Android Users.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download TikTok 18 PLUS APK v1.18 - The Ultimate Guide for Android Users.md deleted file mode 100644 index adaf63f82a148e9e7c59f0abbf5d138fd3e28390..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download TikTok 18 PLUS APK v1.18 - The Ultimate Guide for Android Users.md +++ /dev/null @@ -1,80 +0,0 @@ - -

What is TikTok 18+ Plus APK?

-

TikTok is one of the most popular social media platforms in the world, with over 1 billion users. It allows you to create and share short videos with music, filters, stickers, and other effects. However, some users may want more than what the official app offers. That's where TikTok 18+ Plus APK comes in.

-

TikTok 18+ Plus APK is a modified version of the original TikTok app that unlocks some features that are not available in the official app. These features include:

-

https apkshelf com download processor dl tiktok 18 plus apk v1 18 apk


Download Zip 🆗 https://ssurll.com/2uO1zM



-
    -
  • Access to adult content that is normally restricted or censored by TikTok
  • -
  • Ability to download videos without watermarks
  • -
  • Ability to view private profiles and videos
  • -
  • Ability to bypass region restrictions and access content from any country
  • -
  • Ability to remove ads and enjoy a smoother experience
  • -
-

The APK file for this app is fairly small, and you can download it for free through the download link we have provided. At the moment, TikTok 18+ Plus APK runs on Android 4.3 or higher.

-

Why do people use TikTok 18+ Plus APK?

-

People use TikTok 18+ Plus APK for various reasons. Some of them are:

-
    -
  • They want to watch adult content that is not allowed on the official app
  • -
  • They want to save videos without watermarks for personal use or sharing
  • -
  • They want to view private profiles and videos that are hidden from the public
  • -
  • They want to access content from other countries that are blocked by TikTok
  • -
  • They want to avoid ads and enjoy a faster and smoother experience
  • -
-

However, using TikTok 18+ Plus APK also has some drawbacks. Some of them are:

-

-
    -
  • They may violate the terms and conditions of TikTok and risk getting banned or suspended
  • -
  • They may expose themselves to malware or viruses that may harm their device or data
  • -
  • They may compromise their privacy and security by giving access to unknown sources
  • -
  • They may encounter bugs or errors that may affect the performance of the app
  • -
  • They may miss out on updates and new features that are available on the official app
  • -
-

How to download and install TikTok 18+ Plus APK?

-

If you want to try TikTok 18+ Plus APK, you need to follow these steps:

-
    -
  1. Go to https apkshelf com download processor dl tiktok 18 plus apk v1 18 apk and click on the download button
  2. Wait for the file to be downloaded on your device
  3. Go to your device settings and enable installation from unknown sources
  4. Locate the downloaded file and tap on it to start the installation process
  5. Follow the instructions on the screen and wait for the installation to be completed
  6. Launch the app and enjoy its features
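
If you prefer to fetch the file on a computer first (for example, to keep a backup before moving it to your phone), the short Python sketch below is one way to do it. This is not part of the official steps; the URL and file name are placeholders, not the real download link.

```python
import urllib.request
from pathlib import Path

# Placeholder URL and file name -- substitute the actual download link you trust.
APK_URL = "https://example.com/tiktok-18-plus-v1.18.apk"
OUT_PATH = Path("tiktok-18-plus-v1.18.apk")

def download(url: str, out_path: Path) -> None:
    """Stream the file to disk in chunks and report the downloaded size."""
    with urllib.request.urlopen(url) as response, out_path.open("wb") as out_file:
        total = 0
        while True:
            chunk = response.read(1 << 16)
            if not chunk:
                break
            out_file.write(chunk)
            total += len(chunk)
    print(f"Saved {out_path.name} ({total / (1024 * 1024):.1f} MB)")

if __name__ == "__main__":
    download(APK_URL, OUT_PATH)
```

After the file is on your computer, you can copy it to the phone over USB and continue from step 3 above.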
-

Is TikTok 18+ Plus APK safe and legal?

-

The answer to this question is not straightforward. On one hand, TikTok 18+ Plus APK is not an official app from TikTok, and it may contain malicious code or viruses that may harm your device or data. It may also violate the intellectual property rights of TikTok and its content creators, and expose you to legal actions or penalties. On the other hand, TikTok 18+ Plus APK is not illegal per se, and you have the right to use it at your own risk and discretion. However, you should be aware of the potential consequences and dangers of using the app, and take precautions to protect yourself and your device.

-

Some of the safety tips that you can follow are:

-
    -
  • Download the app only from trusted sources and scan it for malware or viruses before installing it (a quick integrity check is sketched in the example after this list)
  • -
  • Use a VPN or proxy service to hide your IP address and location when using the app
  • -
  • Do not share your personal information or credentials with the app or anyone else
  • -
  • Do not use the app for illegal or unethical purposes
  • -
  • Delete the app if you encounter any problems or issues with it
  • -
-
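As a small illustration of the first tip, here is a minimal Python sketch that reports the size and SHA-256 checksum of a downloaded APK before you sideload it, so you can compare it against a checksum published by a source you trust. The file name and expected hash below are hypothetical placeholders.

```python
import hashlib
from pathlib import Path

# Hypothetical file name and reference checksum -- replace with your own values.
APK_PATH = Path("tiktok-18-plus-v1.18.apk")
EXPECTED_SHA256 = "0" * 64

def sha256_of_file(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash the file in chunks so large APKs do not need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    size_mb = APK_PATH.stat().st_size / (1024 * 1024)
    checksum = sha256_of_file(APK_PATH)
    print(f"{APK_PATH.name}: {size_mb:.1f} MB")
    print(f"SHA-256: {checksum}")
    print("Matches expected hash" if checksum == EXPECTED_SHA256 else "WARNING: hash mismatch")
```

If the hash does not match what the download page lists, delete the file rather than installing it.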

Alternatives to TikTok 18+ Plus APK

-

If you are looking for other apps that offer similar or better functions than TikTok 18+ Plus APK, you can check out these alternatives:

- - - - - - - -
| App Name | Description |
| --- | --- |
| Instagram Reels | A feature of Instagram that allows you to create and share short videos with music, filters, stickers, and other effects. You can also watch reels from other users and discover new content. |
| YouTube Shorts | A feature of YouTube that allows you to create and share short videos with music, filters, stickers, and other effects. You can also watch shorts from other users and discover new content. |
| Triller | A social video platform that allows you to create and share short videos with music, filters, stickers, and other effects. You can also watch videos from other users and discover new content. |
| Dubsmash | A social video platform that allows you to create and share short videos with music, filters, stickers, and other effects. You can also watch videos from other users and discover new content. |
| Funimate | A video editor and social video platform that allows you to create and share short videos with music, filters, stickers, and other effects. You can also watch videos from other users and discover new content. |
-

Conclusion

-

TikTok 18+ Plus APK is a modified version of the original TikTok app that unlocks some features that are not available in the official app. These features include access to adult content, ability to download videos without watermarks, ability to view private profiles and videos, ability to bypass region restrictions, and ability to remove ads. However, using TikTok 18+ Plus APK also has some risks and drawbacks, such as violating the terms and conditions of TikTok, exposing yourself to malware or viruses, compromising your privacy and security, encountering bugs or errors, and missing out on updates and new features. Therefore, you should use TikTok 18+ Plus APK at your own risk and discretion, and follow some safety tips to protect yourself and your device. Alternatively, you can try some other apps that offer similar or better functions than TikTok 18+ Plus APK.

-

Frequently Asked Questions (FAQs)

-
    -
  • Q: How do I update TikTok 18+ Plus APK?
  • -
  • A: You need to check the website where you downloaded the app for any updates or new versions. However, updating the app may cause some issues or errors, so you should backup your data before updating.
  • -
  • Q: Can I use TikTok 18+ Plus APK on iOS devices?
  • -
  • A: No, TikTok 18+ Plus APK is only compatible with Android devices. If you want to use a modified version of TikTok on iOS devices, you need to jailbreak your device and install a third-party app store.
  • -
  • Q: Can I use TikTok 18+ Plus APK with my existing TikTok account?
  • -
  • A: Yes, you can use your existing TikTok account with TikTok 18+ Plus APK. However, you should be careful not to log in with the same account on both apps at the same time, as this may cause some conflicts or errors.
  • -
  • Q: What are the advantages and disadvantages of using TikTok 18+ Plus APK?
  • -
  • A: The advantages of using TikTok 18+ Plus APK are that you can access adult content, download videos without watermarks, view private profiles and videos, bypass region restrictions, and remove ads. The disadvantages of using TikTok 18+ Plus APK are that you may violate the terms and conditions of TikTok, expose yourself to malware or viruses, compromise your privacy and security, encounter bugs or errors, and miss out on updates and new features.
  • -
  • Q: Are there any other apps that offer similar or better functions than TikTok 18+ Plus APK?
  • -
  • A: Yes, there are some other apps that offer similar or better functions than TikTok 18+ Plus APK, such as Instagram Reels, YouTube Shorts, Triller, Dubsmash, and Funimate. You can check out the table above for more details.
  • -
-

I hope you enjoyed reading this article and learned something new. If you have any questions or feedback, please feel free to leave a comment below. Thank you for your time and attention.

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Dynamons World Mod Apk 1.7 44 A Must-Have Game for All Dynamons Fans.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Dynamons World Mod Apk 1.7 44 A Must-Have Game for All Dynamons Fans.md deleted file mode 100644 index 82216b9500b9469a888cc21527ec643616c79d38..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Dynamons World Mod Apk 1.7 44 A Must-Have Game for All Dynamons Fans.md +++ /dev/null @@ -1,90 +0,0 @@ -
-

Dynamons World Mod APK 1.7 44: A Fun and Addictive RPG Game

-

If you are a fan of RPG games, you might have heard of Dynamons World, a popular game that lets you catch, train, and battle with cute monsters called Dynamons. But did you know that there is a modded version of the game that gives you unlimited money, free items, and more? In this article, we will tell you everything you need to know about Dynamons World Mod APK 1.7 44, including its features, how to download and install it, and some tips and tricks for playing it.

-

Features of Dynamons World Mod APK 1.7 44

-

Dynamons World Mod APK 1.7 44 is a modified version of the original game that offers some extra benefits for the players. Here are some of the features that you can enjoy with this mod:

-

dynamons world mod apk 1.7 44


DOWNLOAD: https://ssurll.com/2uNT9V



-
    -
  • Online Battle Arena: You can challenge your friends and players worldwide in online PvP multiplayer battles. Show off your skills and strategy in real-time matches.
  • -
  • Catch and Train Dozens of Unique Dynamons: You can collect and evolve your Dynamons with different types and skills. There are over 50 Dynamons to catch, each with their own personality and abilities.
  • -
  • Unleash Powerful Skills and Brilliant Tactics: You can use skill cards and strategy to defeat tough Captains and rivals. Each skill card has a different effect, such as damage, healing, or status effects.
  • -
  • Explore an Open World with an Immersive RPG Story: You can travel from Dynamons Camp to the Temple Ruins and beyond in an addictive and immersive RPG story game. You will encounter various characters, quests, battles, and secrets along the way.
  • -
  • Enjoy New Updates and Content: You can discover new Dynamons, quests, battles, and more with regular updates from the developers. There is always something new to look forward to in Dynamons World.
  • -
-

How to Download and Install Dynamons World Mod APK 1.7 44

-

If you want to try out Dynamons World Mod APK 1.7 44, you will need to download the APK file from a trusted source. Here are the steps to follow:

-
    -
  1. Download the APK file: You can download the APK file from a trusted source, such as . Make sure you have enough storage space on your device before downloading.
  2. -
  3. Enable unknown sources: You will need to enable unknown sources in your device settings to install the APK file. To do this, go to Settings > Security > Unknown Sources and toggle it on.
  4. -
  5. Install the APK file: Once you have downloaded and enabled unknown sources, you can install the APK file by tapping on it and following the instructions. It may take a few minutes to complete the installation. (A command-line alternative for installing from a computer is sketched after this list.)
  6. -
  7. Launch the game: After the installation is done, you can launch the game by tapping on its icon on your home screen or app drawer. You can now enjoy Dynamons World Mod APK 1.7 44 with unlimited money, free items, and more.
  8. -
-
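If you prefer to install from a computer instead of tapping through the steps above, the rough Python sketch below simply shells out to Android's adb tool. This is only an illustration: the APK file name is a placeholder, and it assumes adb is installed on the computer and USB debugging is enabled on the device.

```python
import subprocess

apk_path = "dynamons_world_mod.apk"  # placeholder file name, not a real download

# 'adb install -r' pushes the APK to the connected device and installs it,
# replacing an existing copy while keeping its data.
result = subprocess.run(
    ["adb", "install", "-r", apk_path],
    capture_output=True,
    text=True,
)
print(result.stdout or result.stderr)
```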

Tips and Tricks for Playing Dynamons World Mod APK 1.7 44

-

Dynamons World Mod APK 1.7 44 is a fun and addictive RPG game that requires some skills and strategy to master. Here are some tips and tricks that can help you improve your gameplay and win more battles:

-
    -
  • Know Your Dynamons: Learn the strengths and weaknesses of each type and skill of your Dynamons. There are five types of Dynamons: Fire, Water, Plant, Electric, and Normal. Each type has an advantage and a disadvantage against another type. For example, Fire is strong against Plant but weak against Water. You can also check the skill cards of your Dynamons to see what effects they have, such as damage, healing, or status effects. A small illustrative mapping of these matchups is sketched in the code after this list.
  • -
  • Switch Your Dynamons: Use the best Dynamon for each situation and avoid type disadvantages. You can switch your Dynamons during battle by tapping on their icons at the bottom of the screen. You can also switch them before battle by tapping on the Team button at the top of the screen. You can have up to three Dynamons in your team at a time.
  • -
  • Use Items and Boosters: Enhance your battles with healing potions, power-ups, and more. You can use items during battle by tapping on the Item button at the bottom of the screen. You can also use boosters before battle by tapping on the Booster button at the top of the screen. Boosters can increase your attack, defense, speed, or health for a limited time.
  • -
  • Level Up in Battle: Gain experience and coins by winning battles and completing quests. You can level up your Dynamons by gaining enough experience points in battle. Leveling up can increase their stats and unlock new skills. You can also earn coins by winning battles and completing quests. Coins can be used to buy items, boosters, and more.
  • -
  • Watch Ads for Rewards: Earn free items, coins, and gems by watching short videos. You can watch ads by tapping on the Watch button at the top of the screen. You can also watch ads after winning a battle or completing a quest to get extra rewards. Gems are the premium currency of the game that can be used to buy rare items, boosters, and more.
  • -
-
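To make the type matchups mentioned above concrete, here is a small illustrative Python sketch. Only the Fire-versus-Plant and Fire-versus-Water matchups come from this article; the game does not publish a full chart here, so the remaining entries are assumptions made purely for the sake of the example.

```python
# Hypothetical advantage chart: each type maps to the type it is strong against.
# Fire > Plant and Water > Fire are taken from the article; the rest is assumed.
TYPE_ADVANTAGE = {
    "Fire": "Plant",
    "Water": "Fire",
    "Plant": "Water",     # assumption
    "Electric": "Water",  # assumption
    "Normal": None,       # assumption: no particular advantage
}

def matchup(attacker: str, defender: str) -> str:
    """Rough verdict for attacker vs. defender based on the chart above."""
    if TYPE_ADVANTAGE.get(attacker) == defender:
        return "advantage"
    if TYPE_ADVANTAGE.get(defender) == attacker:
        return "disadvantage"
    return "neutral"

print(matchup("Fire", "Plant"))    # advantage
print(matchup("Fire", "Water"))    # disadvantage
print(matchup("Normal", "Plant"))  # neutral
```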

Conclusion

-

Dynamons World Mod APK 1.7 44 is a fun and addictive RPG game that you can play offline or online with your friends. You can catch, train, and battle with dozens of unique Dynamons with different types and skills. You can also explore an open world with an immersive RPG story game. With this mod, you can enjoy unlimited money, free items, and more benefits that will make your gameplay more enjoyable.

-

dynamons world mod apk 1.7 44 download
-dynamons world mod apk 1.7 44 unlimited money
-dynamons world mod apk 1.7 44 latest version
-dynamons world mod apk 1.7 44 free
-dynamons world mod apk 1.7 44 android
-dynamons world mod apk 1.7 44 hack
-dynamons world mod apk 1.7 44 offline
-dynamons world mod apk 1.7 44 no root
-dynamons world mod apk 1.7 44 online
-dynamons world mod apk 1.7 44 update
-dynamons world mod apk 1.7 44 cheats
-dynamons world mod apk 1.7 44 gameplay
-dynamons world mod apk 1.7 44 review
-dynamons world mod apk 1.7 44 features
-dynamons world mod apk 1.7 44 install
-dynamons world mod apk 1.7 44 guide
-dynamons world mod apk 1.7 44 tips
-dynamons world mod apk 1.7 44 tricks
-dynamons world mod apk 1.7 44 best team
-dynamons world mod apk 1.7 44 all characters
-dynamons world mod apk 1.7 44 how to play
-dynamons world mod apk 1.7 44 walkthrough
-dynamons world mod apk 1.7 44 level up fast
-dynamons world mod apk 1.7 44 evolution
-dynamons world mod apk 1.7 44 codes
-dynamons world mod apk 1.7 44 generator
-dynamons world mod apk 1.7 44 reddit
-dynamons world mod apk 1.7 44 forum
-dynamons world mod apk 1.7 44 wiki
-dynamons world mod apk 1.7 44 fan site
-dynamons world mod apk 1.7 44 kizi games
-dynamons world mod apk 1.7 44 funtomic
-dynamons world mod apk 1.7 44 ios
-dynamons world mod apk 1.7 44 iphone
-dynamons world mod apk 1.7 44 ipad
-dynamons world mod apk 1.7 44 pc
-dynamons world mod apk 1.7 44 windows
-dynamons world mod apk 1.7 44 mac
-dynamons world mod apk 1.7 44 laptop
-dynamons world mod apk

-

If you are looking for a new RPG game to try out, you should download Dynamons World Mod APK 1.7 44 now and join the adventure.

-

Frequently Asked Questions

-

Here are some of the frequently asked questions about Dynamons World Mod APK 1.7 44:

-
    -
  1. Is Dynamons World Mod APK 1.7 44 safe to download and install?
  2. -

    Yes, Dynamons World Mod APK 1.7 44 is safe to download and install as long as you get it from a trusted source, such as . However, you should always be careful when downloading any modded or hacked games from unknown sources as they may contain viruses or malware that can harm your device.

    -
  4. Do I need to root my device to use Dynamons World Mod APK 1.7 44?
  5. -

    No, you do not need to root your device to use Dynamons World Mod APK 1.7 44. You can install and play the game without any root access or permissions.

    -
  6. Can I play Dynamons World Mod APK 1.7 44 online with other players?
  7. -

    Yes, you can play Dynamons World Mod APK 1.7 44 online with other players in the Online Battle Arena mode. You can challenge your friends and players worldwide in real-time PvP multiplayer battles. However, you may encounter some compatibility issues or errors when playing online with the modded version of the game.

    -
  8. Will I get banned for using Dynamons World Mod APK 1.7 44?
  9. -

    There is a possibility that you may get banned for using Dynamons World Mod APK 1.7 44, especially if you play online with other players or use cheats or hacks in the game. The developers of the game may detect your modded version and ban your account or device from accessing the game. Therefore, you should use the mod at your own risk and discretion.

    -
  10. How can I update Dynamons World Mod APK 1.7 44?
  11. -

    If you want to update Dynamons World Mod APK 1.7 44, you will need to download and install the latest version of the mod from the same source that you got it from. You may also need to uninstall the previous version of the mod before installing the new one. However, you should be aware that updating the mod may cause some issues or errors in the game, such as data loss, crashes, or glitches.

    -

-
-
\ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Easy Ways to Download and Compress a 30 MB PDF File - No Installation Required.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Easy Ways to Download and Compress a 30 MB PDF File - No Installation Required.md deleted file mode 100644 index 83ee7986720a0bd84d19bb411d73a6a1c5f7803c..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Easy Ways to Download and Compress a 30 MB PDF File - No Installation Required.md +++ /dev/null @@ -1,118 +0,0 @@ -
-

How to Download a 30 MB PDF File

-

PDF files are widely used for sharing and viewing documents, but sometimes you may need to download a large PDF file, such as a 30 MB file, for offline access or other purposes. In this article, we will explain what a PDF file is, why you may want to download a 30 MB PDF file, and how to download it in different ways.

-

What is a PDF File?

-

A PDF file is a Portable Document Format file that was developed by Adobe in 1992. It is a versatile file format that can present documents, including text, images, and other elements, in a consistent and independent way across different software, hardware, and operating systems.

-

download 30 mb pdf file


Download File >>> https://ssurll.com/2uNXBy



-

PDF Definition

-

According to the International Organization for Standardization (ISO), which maintains the PDF standard, "PDF is an abbreviation that stands for Portable Document Format. It's a versatile file format created by Adobe that gives people an easy, reliable way to present and exchange documents - regardless of the software, hardware, or operating systems being used by anyone who views the document."

-

PDF Features and Benefits

-

Some of the features and benefits of PDF files are:

-
    -
  • They can contain text, fonts, graphics, images, audio, video, forms, annotations, and more.
  • -
  • They can preserve the layout, appearance, and functionality of the original document.
  • -
  • They can be encrypted, signed, password-protected, and redacted for security and privacy purposes.
  • -
  • They can be compressed to reduce file size without losing quality.
  • -
  • They can be integrated with other applications and platforms through various tools and standards.
  • -
  • They can be accessed and viewed by anyone using a free PDF reader or a web browser.
  • -
-

Why Download a 30 MB PDF File?

-

A 30 MB PDF file is a relatively large file that may contain complex or rich content, such as high-resolution images, long texts, or interactive elements. You may want to download such a file for various reasons, such as:

-

Reasons to Download Large PDF Files

-
    -
  • You need to access the file offline or without an internet connection.
  • -
  • You need to edit or modify the file using a PDF editor or another software.
  • -
  • You need to print or share the file with others who may not have online access or compatible software.
  • -
  • You need to store or archive the file for future reference or backup.
  • -
-

Challenges of Downloading Large PDF Files

-

However, downloading a large PDF file may also pose some challenges, such as:

-
    -
  • It may take longer and consume more bandwidth than downloading a smaller file.
  • -
  • It may not be supported by some web browsers or devices that have limited memory or storage capacity.
  • -
  • It may not be compatible with some older or outdated software or systems that cannot handle large files.
  • -
  • It may cause performance issues or errors when opening or viewing the file.
  • -
-

How to Download a 30 MB PDF File in Different Ways

-

Fortunately, there are different ways to download a 30 MB PDF file depending on your preferences and needs. Here are some of the most common methods:

-

Using a Web Browser

-

The simplest way to download a 30 MB PDF file is to use your web browser. You can either click on the link to the file or enter its URL in the address bar. The browser will then prompt you to save the file to your computer or device. You can choose the location and name of the file before saving it. Alternatively, you can right-click on the link or the file and select "Save link as" or "Save target as" from the menu. This method works for most web browsers, such as Google Chrome, Mozilla Firefox, Microsoft Edge, and Safari.
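If you would rather script the download than click through a browser, the short Python sketch below shows one common approach using the third-party requests library. The URL and output file name here are placeholders, not a real download link.

```python
import requests

url = "https://example.com/files/sample-30mb.pdf"  # placeholder URL

# Stream the response so the whole 30 MB file is never held in memory at once.
with requests.get(url, stream=True, timeout=60) as response:
    response.raise_for_status()  # fail fast on 404/500 responses
    with open("downloaded.pdf", "wb") as f:
        for chunk in response.iter_content(chunk_size=1024 * 1024):  # 1 MB chunks
            f.write(chunk)

print("Saved downloaded.pdf")
```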

-

Using a PDF Reader

-

Another way to download a 30 MB PDF file is to use a PDF reader, such as Adobe Acrobat Reader, Foxit Reader, or Nitro PDF Reader. These are software applications that allow you to open, view, and edit PDF files. You can download and install a PDF reader from their official websites or online stores. Once you have a PDF reader, you can open the file from your web browser or your file manager. The PDF reader will then display the file and give you options to save, print, or share it. You can also adjust the settings and preferences of the PDF reader to suit your needs.

-

How to download 30 mb pdf file online
-Download 30 mb pdf file from Google Drive
-Best way to download 30 mb pdf file fast
-Download 30 mb pdf file without losing quality
-Download 30 mb pdf file for free
-Download 30 mb pdf file with Adobe Acrobat
-Download 30 mb pdf file using PDF24 Tools
-Download 30 mb pdf file from Learning Container
-Download 30 mb pdf file in one click
-Download 30 mb pdf file on Windows 10
-Download 30 mb pdf file on Mac OS
-Download 30 mb pdf file on Android
-Download 30 mb pdf file on iPhone
-Download 30 mb pdf file on iPad
-Download 30 mb pdf file on Chromebook
-Download 30 mb pdf file with Firefox
-Download 30 mb pdf file with Chrome
-Download 30 mb pdf file with Safari
-Download 30 mb pdf file with Edge
-Download 30 mb pdf file with Opera
-Download 30 mb pdf file from email attachment
-Download 30 mb pdf file from Dropbox
-Download 30 mb pdf file from OneDrive
-Download 30 mb pdf file from iCloud
-Download 30 mb pdf file from Box
-Download 30 mb pdf file from SharePoint
-Download 30 mb pdf file from WordPress
-Download 30 mb pdf file from Wix
-Download 30 mb pdf file from Squarespace
-Download 30 mb pdf file from Weebly
-Download 30 mb pdf file from Blogger
-Download 30 mb pdf file from Medium
-Download 30 mb pdf file from Quora
-Download 30 mb pdf file from Reddit
-Download 30 mb pdf file from Facebook
-Download 30 mb pdf file from Twitter
-Download 30 mb pdf file from Instagram
-Download 30 mb pdf file from LinkedIn
-Download 30 mb pdf file from YouTube
-Download 30 mb pdf file from Vimeo
-Download 30 mb pdf file from TikTok
-Download 30 mb pdf file from Snapchat
-Download 30 mb pdf file from WhatsApp
-Download 30 mb pdf file from Telegram
-Download 30 mb pdf file from Signal
-Download 30 mb pdf file from Skype
-Download 30 mb pdf file from Zoom
-Download 30 mb pdf file from Slack
-Download 30 mb pdf file from Discord

-

Using a PDF Compressor

-

A third way to download a 30 MB PDF file is to use a PDF compressor, such as Smallpdf, iLovePDF, or Soda PDF. These are online tools that allow you to reduce the size of PDF files with little or no noticeable loss of quality. You can access a PDF compressor from your web browser and upload the file you want to compress. The PDF compressor will then process the file and generate a smaller version of it. You can then download the compressed file to your computer or device. This method can help you save time, bandwidth, and storage space when downloading large PDF files.
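If you would rather compress the file locally than upload it to an online service, this is a minimal sketch using the open-source pypdf library (assuming it is installed with `pip install pypdf`). How much the file shrinks depends heavily on how the PDF was produced; image-heavy files usually need a dedicated tool instead.

```python
from pypdf import PdfReader, PdfWriter

reader = PdfReader("large.pdf")  # placeholder input file name
writer = PdfWriter()

# Copy every page into the writer, then re-compress its content streams.
for page in reader.pages:
    writer.add_page(page)

for page in writer.pages:
    page.compress_content_streams()  # CPU-intensive on large files

with open("compressed.pdf", "wb") as f:
    writer.write(f)
```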

-

Conclusion

-

In conclusion, downloading a 30 MB PDF file is not a difficult task if you know the right methods and tools. You can use your web browser, a PDF reader, or a PDF compressor to download the file in different ways. Each method has its own advantages and disadvantages, so you should choose the one that best suits your situation and needs. We hope this article has helped you learn how to download a 30 MB PDF file easily and efficiently.

-

FAQs

-

Here are some frequently asked questions about downloading a 30 MB PDF file:

-
    -
  • How long does it take to download a 30 MB PDF file?
  • -

    The time it takes to download a 30 MB PDF file depends on several factors, such as your internet speed, your web browser, your device, and the server hosting the file. However, as a general estimate, you can use this table to get an idea of how long it may take:

    - - - - - - - - -
    | Internet Speed | Download Time |
    | --- | --- |
    | 1 Mbps | 4 minutes |
    | 5 Mbps | 48 seconds |
    | 10 Mbps | 24 seconds |
    | 25 Mbps | 10 seconds |
    | 50 Mbps | 5 seconds |
    | 100 Mbps | 2 seconds |
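    The figures in this table follow from simple arithmetic: connection speeds are quoted in megabits per second, so the 30 MB file size is first converted to megabits (multiplied by 8) and then divided by the speed. A quick Python sketch of that calculation:

```python
FILE_SIZE_MB = 30  # size of the PDF in megabytes

def download_time_seconds(speed_mbps: float, size_mb: float = FILE_SIZE_MB) -> float:
    """Ideal download time, ignoring protocol overhead and network congestion."""
    size_megabits = size_mb * 8        # 1 byte = 8 bits
    return size_megabits / speed_mbps  # megabits / (megabits per second) = seconds

for speed in (1, 5, 10, 25, 50, 100):
    print(f"{speed:>3} Mbps -> about {download_time_seconds(speed):.0f} seconds")
```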
    -
  • How can I open a 30 MB PDF file after downloading it?
  • -

    You can open a 30 MB PDF file after downloading it by using any software or application that supports PDF files, such as a web browser, a PDF reader, or a PDF editor. You can also use an online tool or service that allows you to view or convert PDF files.

    -
  • How can I share a 30 MB PDF file with others?
  • -

    You can share a 30 MB PDF file with others by using various methods, such as email, cloud storage, social media, or online platforms. However, some of these methods may have limitations or restrictions on the size of the files you can share. Therefore, you may need to compress or split the file before sharing it.
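    If a sharing service rejects the full 30 MB file, one option is to split it into smaller parts before sending it. This is a minimal sketch using the open-source pypdf library; the file names and the pages-per-part value are placeholders you would adjust to the size limit you need.

```python
from pypdf import PdfReader, PdfWriter

reader = PdfReader("large.pdf")  # placeholder input file
pages_per_part = 20              # placeholder: tune until each part is small enough

for start in range(0, len(reader.pages), pages_per_part):
    writer = PdfWriter()
    end = min(start + pages_per_part, len(reader.pages))
    for index in range(start, end):
        writer.add_page(reader.pages[index])
    part_name = f"part_{start // pages_per_part + 1}.pdf"
    with open(part_name, "wb") as f:
        writer.write(f)
    print(f"Wrote {part_name}")
```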

    -
  • How can I edit or modify a 30 MB PDF file?
  • -

    You can edit or modify a 30 MB PDF file by using software or an application that allows you to edit PDF files, such as Adobe Acrobat Pro, Foxit PhantomPDF, or Nitro Pro. You can also use an online tool or service that allows you to edit or convert PDF files.

    -
  • How can I print a 30 MB PDF file?
  • -

    You can print a 30 MB PDF file by using any software or application that allows you to print PDF files, such as a web browser, a PDF reader, or a PDF editor. You can also use an online tool or service that allows you to print or convert PDF files.


    -
    -
    \ No newline at end of file diff --git a/spaces/sino72/Passenger_Reconization/deep_sort/deep_sort/sort/detection.py b/spaces/sino72/Passenger_Reconization/deep_sort/deep_sort/sort/detection.py deleted file mode 100644 index dbdbc8b525747ffc2bd494f8ab0e93c035730ce7..0000000000000000000000000000000000000000 --- a/spaces/sino72/Passenger_Reconization/deep_sort/deep_sort/sort/detection.py +++ /dev/null @@ -1,49 +0,0 @@ -# vim: expandtab:ts=4:sw=4 -import numpy as np - - -class Detection(object): - """ - This class represents a bounding box detection in a single image. - - Parameters - ---------- - tlwh : array_like - Bounding box in format `(top left x, top left y, width, height)`. - confidence : float - Detector confidence score. - feature : array_like - A feature vector that describes the object contained in this image. - - Attributes - ---------- - tlwh : ndarray - Bounding box in format `(top left x, top left y, width, height)`. - confidence : ndarray - Detector confidence score. - feature : ndarray | NoneType - A feature vector that describes the object contained in this image. - - """ - - def __init__(self, tlwh, confidence, feature): - self.tlwh = np.asarray(tlwh, dtype=np.float32) - self.confidence = float(confidence) - self.feature = np.asarray(feature, dtype=np.float32) - - def to_tlbr(self): - """Convert bounding box to format `(min x, min y, max x, max y)`, i.e., - `(top left, bottom right)`. - """ - ret = self.tlwh.copy() - ret[2:] += ret[:2] - return ret - - def to_xyah(self): - """Convert bounding box to format `(center x, center y, aspect ratio, - height)`, where the aspect ratio is `width / height`. - """ - ret = self.tlwh.copy() - ret[:2] += ret[2:] / 2 - ret[2] /= ret[3] - return ret diff --git a/spaces/skf15963/summary/fengshen/data/data_utils/sop_utils.py b/spaces/skf15963/summary/fengshen/data/data_utils/sop_utils.py deleted file mode 100644 index 505f14dca99638b10eee0a4017447401a71ef083..0000000000000000000000000000000000000000 --- a/spaces/skf15963/summary/fengshen/data/data_utils/sop_utils.py +++ /dev/null @@ -1,32 +0,0 @@ - -# copy from megatron -def get_a_and_b_segments(sample, np_rng): - """Divide sample into a and b segments.""" - - # Number of sentences in the sample. - n_sentences = len(sample) - # Make sure we always have two sentences. - assert n_sentences > 1, 'make sure each sample has at least two sentences.' - - # First part: - # `a_end` is how many sentences go into the `A`. - a_end = 1 - if n_sentences >= 3: - # Note that randin in numpy is exclusive. 
- a_end = np_rng.randint(1, n_sentences) - tokens_a = [] - for j in range(a_end): - tokens_a.extend(sample[j]) - - # Second part: - tokens_b = [] - for j in range(a_end, n_sentences): - tokens_b.extend(sample[j]) - - # Random next: - is_next_random = False - if np_rng.random() < 0.5: - is_next_random = True - tokens_a, tokens_b = tokens_b, tokens_a - - return tokens_a, tokens_b, is_next_random diff --git a/spaces/sklearn-docs/SGD_Penalties/README.md b/spaces/sklearn-docs/SGD_Penalties/README.md deleted file mode 100644 index 16930e0694ddc55df9577a0430eb82b3c3d20655..0000000000000000000000000000000000000000 --- a/spaces/sklearn-docs/SGD_Penalties/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: SGD Penalties -emoji: 💩 -colorFrom: indigo -colorTo: red -sdk: gradio -sdk_version: 3.24.1 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/srikotha/runwayml-stable-diffusion-v1-5/README.md b/spaces/srikotha/runwayml-stable-diffusion-v1-5/README.md deleted file mode 100644 index 6e6828fb754388c8366c601ce26f763d0c31716a..0000000000000000000000000000000000000000 --- a/spaces/srikotha/runwayml-stable-diffusion-v1-5/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Runwayml Stable Diffusion V1 5 -emoji: 😻 -colorFrom: indigo -colorTo: yellow -sdk: gradio -sdk_version: 3.21.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/simultaneous_translation/docs/ende-mma.md b/spaces/sriramelango/Social_Classification_Public/fairseq/examples/simultaneous_translation/docs/ende-mma.md deleted file mode 100644 index 241d604a3b31a37755da68aad6ff47d46891d3fc..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/simultaneous_translation/docs/ende-mma.md +++ /dev/null @@ -1,74 +0,0 @@ -# Simultaneous Machine Translation - -This directory contains the code for the paper [Monotonic Multihead Attention](https://openreview.net/forum?id=Hyg96gBKPS) - -## Prepare Data - -[Please follow the instructions to download and preprocess the WMT'15 En-De dataset.](https://github.com/pytorch/fairseq/tree/simulastsharedtask/examples/translation#prepare-wmt14en2desh) - -Another example of training an English to Japanese model can be found [here](docs/enja.md) - -## Training - -- MMA-IL - -```shell -fairseq-train \ - data-bin/wmt15_en_de_32k \ - --simul-type infinite_lookback \ - --user-dir $FAIRSEQ/example/simultaneous_translation \ - --mass-preservation \ - --criterion latency_augmented_label_smoothed_cross_entropy \ - --latency-weight-avg 0.1 \ - --max-update 50000 \ - --arch transformer_monotonic_iwslt_de_en save_dir_key=lambda \ - --optimizer adam --adam-betas '(0.9, 0.98)' \ - --lr-scheduler 'inverse_sqrt' \ - --warmup-init-lr 1e-7 --warmup-updates 4000 \ - --lr 5e-4 --stop-min-lr 1e-9 --clip-norm 0.0 --weight-decay 0.0001\ - --dropout 0.3 \ - --label-smoothing 0.1\ - --max-tokens 3584 -``` - -- MMA-H - -```shell -fairseq-train \ - data-bin/wmt15_en_de_32k \ - --simul-type hard_aligned \ - --user-dir $FAIRSEQ/example/simultaneous_translation \ - --mass-preservation \ - --criterion latency_augmented_label_smoothed_cross_entropy \ - --latency-weight-var 0.1 \ - --max-update 50000 \ - --arch transformer_monotonic_iwslt_de_en save_dir_key=lambda \ - --optimizer adam 
--adam-betas '(0.9, 0.98)' \ - --lr-scheduler 'inverse_sqrt' \ - --warmup-init-lr 1e-7 --warmup-updates 4000 \ - --lr 5e-4 --stop-min-lr 1e-9 --clip-norm 0.0 --weight-decay 0.0001\ - --dropout 0.3 \ - --label-smoothing 0.1\ - --max-tokens 3584 -``` - -- wait-k - -```shell -fairseq-train \ - data-bin/wmt15_en_de_32k \ - --simul-type wait-k \ - --waitk-lagging 3 \ - --user-dir $FAIRSEQ/example/simultaneous_translation \ - --mass-preservation \ - --criterion latency_augmented_label_smoothed_cross_entropy \ - --max-update 50000 \ - --arch transformer_monotonic_iwslt_de_en save_dir_key=lambda \ - --optimizer adam --adam-betas '(0.9, 0.98)' \ - --lr-scheduler 'inverse_sqrt' \ - --warmup-init-lr 1e-7 --warmup-updates 4000 \ - --lr 5e-4 --stop-min-lr 1e-9 --clip-norm 0.0 --weight-decay 0.0001\ - --dropout 0.3 \ - --label-smoothing 0.1\ - --max-tokens 3584 -``` diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/speech_recognition/infer.py b/spaces/sriramelango/Social_Classification_Public/fairseq/examples/speech_recognition/infer.py deleted file mode 100644 index 6e9a878af46242ced57cfcd0e876a3d2ef3820ae..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/speech_recognition/infer.py +++ /dev/null @@ -1,427 +0,0 @@ -#!/usr/bin/env python3 -u -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -""" -Run inference for pre-processed data with a trained model. -""" - -import ast -import logging -import math -import os -import sys - -import editdistance -import numpy as np -import torch -from fairseq import checkpoint_utils, options, progress_bar, tasks, utils -from fairseq.data.data_utils import post_process -from fairseq.logging.meters import StopwatchMeter, TimeMeter - - -logging.basicConfig() -logging.root.setLevel(logging.INFO) -logging.basicConfig(level=logging.INFO) -logger = logging.getLogger(__name__) - - -def add_asr_eval_argument(parser): - parser.add_argument("--kspmodel", default=None, help="sentence piece model") - parser.add_argument( - "--wfstlm", default=None, help="wfstlm on dictonary output units" - ) - parser.add_argument( - "--rnnt_decoding_type", - default="greedy", - help="wfstlm on dictonary\ -output units", - ) - try: - parser.add_argument( - "--lm-weight", - "--lm_weight", - type=float, - default=0.2, - help="weight for lm while interpolating with neural score", - ) - except: - pass - parser.add_argument( - "--rnnt_len_penalty", default=-0.5, help="rnnt length penalty on word level" - ) - parser.add_argument( - "--w2l-decoder", - choices=["viterbi", "kenlm", "fairseqlm"], - help="use a w2l decoder", - ) - parser.add_argument("--lexicon", help="lexicon for w2l decoder") - parser.add_argument("--unit-lm", action="store_true", help="if using a unit lm") - parser.add_argument("--kenlm-model", "--lm-model", help="lm model for w2l decoder") - parser.add_argument("--beam-threshold", type=float, default=25.0) - parser.add_argument("--beam-size-token", type=float, default=100) - parser.add_argument("--word-score", type=float, default=1.0) - parser.add_argument("--unk-weight", type=float, default=-math.inf) - parser.add_argument("--sil-weight", type=float, default=0.0) - parser.add_argument( - "--dump-emissions", - type=str, - default=None, - help="if present, dumps emissions into this file and exits", - ) - parser.add_argument( - "--dump-features", - type=str, - 
default=None, - help="if present, dumps features into this file and exits", - ) - parser.add_argument( - "--load-emissions", - type=str, - default=None, - help="if present, loads emissions from this file", - ) - return parser - - -def check_args(args): - # assert args.path is not None, "--path required for generation!" - # assert args.results_path is not None, "--results_path required for generation!" - assert ( - not args.sampling or args.nbest == args.beam - ), "--sampling requires --nbest to be equal to --beam" - assert ( - args.replace_unk is None or args.raw_text - ), "--replace-unk requires a raw text dataset (--raw-text)" - - -def get_dataset_itr(args, task, models): - return task.get_batch_iterator( - dataset=task.dataset(args.gen_subset), - max_tokens=args.max_tokens, - max_sentences=args.batch_size, - max_positions=(sys.maxsize, sys.maxsize), - ignore_invalid_inputs=args.skip_invalid_size_inputs_valid_test, - required_batch_size_multiple=args.required_batch_size_multiple, - num_shards=args.num_shards, - shard_id=args.shard_id, - num_workers=args.num_workers, - data_buffer_size=args.data_buffer_size, - ).next_epoch_itr(shuffle=False) - - -def process_predictions( - args, hypos, sp, tgt_dict, target_tokens, res_files, speaker, id -): - for hypo in hypos[: min(len(hypos), args.nbest)]: - hyp_pieces = tgt_dict.string(hypo["tokens"].int().cpu()) - - if "words" in hypo: - hyp_words = " ".join(hypo["words"]) - else: - hyp_words = post_process(hyp_pieces, args.post_process) - - if res_files is not None: - print( - "{} ({}-{})".format(hyp_pieces, speaker, id), - file=res_files["hypo.units"], - ) - print( - "{} ({}-{})".format(hyp_words, speaker, id), - file=res_files["hypo.words"], - ) - - tgt_pieces = tgt_dict.string(target_tokens) - tgt_words = post_process(tgt_pieces, args.post_process) - - if res_files is not None: - print( - "{} ({}-{})".format(tgt_pieces, speaker, id), - file=res_files["ref.units"], - ) - print( - "{} ({}-{})".format(tgt_words, speaker, id), file=res_files["ref.words"] - ) - - if not args.quiet: - logger.info("HYPO:" + hyp_words) - logger.info("TARGET:" + tgt_words) - logger.info("___________________") - - hyp_words = hyp_words.split() - tgt_words = tgt_words.split() - return editdistance.eval(hyp_words, tgt_words), len(tgt_words) - - -def prepare_result_files(args): - def get_res_file(file_prefix): - if args.num_shards > 1: - file_prefix = f"{args.shard_id}_{file_prefix}" - path = os.path.join( - args.results_path, - "{}-{}-{}.txt".format( - file_prefix, os.path.basename(args.path), args.gen_subset - ), - ) - return open(path, "w", buffering=1) - - if not args.results_path: - return None - - return { - "hypo.words": get_res_file("hypo.word"), - "hypo.units": get_res_file("hypo.units"), - "ref.words": get_res_file("ref.word"), - "ref.units": get_res_file("ref.units"), - } - - -def optimize_models(args, use_cuda, models): - """Optimize ensemble for generation""" - for model in models: - model.make_generation_fast_( - beamable_mm_beam_size=None if args.no_beamable_mm else args.beam, - need_attn=args.print_alignment, - ) - if args.fp16: - model.half() - if use_cuda: - model.cuda() - - -class ExistingEmissionsDecoder(object): - def __init__(self, decoder, emissions): - self.decoder = decoder - self.emissions = emissions - - def generate(self, models, sample, **unused): - ids = sample["id"].cpu().numpy() - try: - emissions = np.stack(self.emissions[ids]) - except: - print([x.shape for x in self.emissions[ids]]) - raise Exception("invalid sizes") - emissions = 
torch.from_numpy(emissions) - return self.decoder.decode(emissions) - - -def main(args, task=None, model_state=None): - check_args(args) - - if args.max_tokens is None and args.batch_size is None: - args.max_tokens = 4000000 - logger.info(args) - - use_cuda = torch.cuda.is_available() and not args.cpu - - logger.info("| decoding with criterion {}".format(args.criterion)) - - task = tasks.setup_task(args) - - # Load ensemble - if args.load_emissions: - models, criterions = [], [] - task.load_dataset(args.gen_subset) - else: - logger.info("| loading model(s) from {}".format(args.path)) - models, saved_cfg, task = checkpoint_utils.load_model_ensemble_and_task( - utils.split_paths(args.path, separator="\\"), - arg_overrides=ast.literal_eval(args.model_overrides), - task=task, - suffix=args.checkpoint_suffix, - strict=(args.checkpoint_shard_count == 1), - num_shards=args.checkpoint_shard_count, - state=model_state, - ) - optimize_models(args, use_cuda, models) - task.load_dataset(args.gen_subset, task_cfg=saved_cfg.task) - - - # Set dictionary - tgt_dict = task.target_dictionary - - logger.info( - "| {} {} {} examples".format( - args.data, args.gen_subset, len(task.dataset(args.gen_subset)) - ) - ) - - # hack to pass transitions to W2lDecoder - if args.criterion == "asg_loss": - raise NotImplementedError("asg_loss is currently not supported") - # trans = criterions[0].asg.trans.data - # args.asg_transitions = torch.flatten(trans).tolist() - - # Load dataset (possibly sharded) - itr = get_dataset_itr(args, task, models) - - # Initialize generator - gen_timer = StopwatchMeter() - - def build_generator(args): - w2l_decoder = getattr(args, "w2l_decoder", None) - if w2l_decoder == "viterbi": - from examples.speech_recognition.w2l_decoder import W2lViterbiDecoder - - return W2lViterbiDecoder(args, task.target_dictionary) - elif w2l_decoder == "kenlm": - from examples.speech_recognition.w2l_decoder import W2lKenLMDecoder - - return W2lKenLMDecoder(args, task.target_dictionary) - elif w2l_decoder == "fairseqlm": - from examples.speech_recognition.w2l_decoder import W2lFairseqLMDecoder - - return W2lFairseqLMDecoder(args, task.target_dictionary) - else: - print( - "only flashlight decoders with (viterbi, kenlm, fairseqlm) options are supported at the moment" - ) - - # please do not touch this unless you test both generate.py and infer.py with audio_pretraining task - generator = build_generator(args) - - if args.load_emissions: - generator = ExistingEmissionsDecoder( - generator, np.load(args.load_emissions, allow_pickle=True) - ) - logger.info("loaded emissions from " + args.load_emissions) - - num_sentences = 0 - - if args.results_path is not None and not os.path.exists(args.results_path): - os.makedirs(args.results_path) - - max_source_pos = ( - utils.resolve_max_positions( - task.max_positions(), *[model.max_positions() for model in models] - ), - ) - - if max_source_pos is not None: - max_source_pos = max_source_pos[0] - if max_source_pos is not None: - max_source_pos = max_source_pos[0] - 1 - - if args.dump_emissions: - emissions = {} - if args.dump_features: - features = {} - models[0].bert.proj = None - else: - res_files = prepare_result_files(args) - errs_t = 0 - lengths_t = 0 - with progress_bar.build_progress_bar(args, itr) as t: - wps_meter = TimeMeter() - for sample in t: - sample = utils.move_to_cuda(sample) if use_cuda else sample - if "net_input" not in sample: - continue - - prefix_tokens = None - if args.prefix_size > 0: - prefix_tokens = sample["target"][:, : args.prefix_size] - - 
gen_timer.start() - if args.dump_emissions: - with torch.no_grad(): - encoder_out = models[0](**sample["net_input"]) - emm = models[0].get_normalized_probs(encoder_out, log_probs=True) - emm = emm.transpose(0, 1).cpu().numpy() - for i, id in enumerate(sample["id"]): - emissions[id.item()] = emm[i] - continue - elif args.dump_features: - with torch.no_grad(): - encoder_out = models[0](**sample["net_input"]) - feat = encoder_out["encoder_out"].transpose(0, 1).cpu().numpy() - for i, id in enumerate(sample["id"]): - padding = ( - encoder_out["encoder_padding_mask"][i].cpu().numpy() - if encoder_out["encoder_padding_mask"] is not None - else None - ) - features[id.item()] = (feat[i], padding) - continue - hypos = task.inference_step(generator, models, sample, prefix_tokens) - num_generated_tokens = sum(len(h[0]["tokens"]) for h in hypos) - gen_timer.stop(num_generated_tokens) - - for i, sample_id in enumerate(sample["id"].tolist()): - speaker = None - # id = task.dataset(args.gen_subset).ids[int(sample_id)] - id = sample_id - toks = ( - sample["target"][i, :] - if "target_label" not in sample - else sample["target_label"][i, :] - ) - target_tokens = utils.strip_pad(toks, tgt_dict.pad()).int().cpu() - # Process top predictions - errs, length = process_predictions( - args, - hypos[i], - None, - tgt_dict, - target_tokens, - res_files, - speaker, - id, - ) - errs_t += errs - lengths_t += length - - wps_meter.update(num_generated_tokens) - t.log({"wps": round(wps_meter.avg)}) - num_sentences += ( - sample["nsentences"] if "nsentences" in sample else sample["id"].numel() - ) - - wer = None - if args.dump_emissions: - emm_arr = [] - for i in range(len(emissions)): - emm_arr.append(emissions[i]) - np.save(args.dump_emissions, emm_arr) - logger.info(f"saved {len(emissions)} emissions to {args.dump_emissions}") - elif args.dump_features: - feat_arr = [] - for i in range(len(features)): - feat_arr.append(features[i]) - np.save(args.dump_features, feat_arr) - logger.info(f"saved {len(features)} emissions to {args.dump_features}") - else: - if lengths_t > 0: - wer = errs_t * 100.0 / lengths_t - logger.info(f"WER: {wer}") - - logger.info( - "| Processed {} sentences ({} tokens) in {:.1f}s ({:.2f}" - "sentences/s, {:.2f} tokens/s)".format( - num_sentences, - gen_timer.n, - gen_timer.sum, - num_sentences / gen_timer.sum, - 1.0 / gen_timer.avg, - ) - ) - logger.info("| Generate {} with beam={}".format(args.gen_subset, args.beam)) - return task, wer - - -def make_parser(): - parser = options.get_generation_parser() - parser = add_asr_eval_argument(parser) - return parser - - -def cli_main(): - parser = make_parser() - args = options.parse_args_and_arch(parser) - main(args) - - -if __name__ == "__main__": - cli_main() diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/criterions/ctc.py b/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/criterions/ctc.py deleted file mode 100644 index 10e3618382c86a84466cb4264d62f31537980251..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/criterions/ctc.py +++ /dev/null @@ -1,295 +0,0 @@ -# All rights reserved. -# -# This source code is licensed under the license found in the LICENSE file in -# the root directory of this source tree. An additional grant of patent rights -# can be found in the PATENTS file in the same directory. 
- -import math -from argparse import Namespace -from dataclasses import dataclass, field -from omegaconf import II -from typing import Optional - -import torch -import torch.nn.functional as F -from fairseq import metrics, utils -from fairseq.criterions import FairseqCriterion, register_criterion -from fairseq.dataclass import FairseqDataclass -from fairseq.data.data_utils import post_process -from fairseq.tasks import FairseqTask -from fairseq.logging.meters import safe_round - - -@dataclass -class CtcCriterionConfig(FairseqDataclass): - zero_infinity: bool = field( - default=False, - metadata={"help": "zero inf loss when source length <= target length"}, - ) - sentence_avg: bool = II("optimization.sentence_avg") - post_process: str = field( - default="letter", - metadata={ - "help": "how to post process predictions into words. can be letter, " - "wordpiece, BPE symbols, etc. " - "See fairseq.data.data_utils.post_process() for full list of options" - }, - ) - wer_kenlm_model: Optional[str] = field( - default=None, - metadata={ - "help": "if this is provided, use kenlm to compute wer (along with other wer_* args)" - }, - ) - wer_lexicon: Optional[str] = field( - default=None, - metadata={"help": "lexicon to use with wer_kenlm_model"}, - ) - wer_lm_weight: float = field( - default=2.0, - metadata={"help": "lm weight to use with wer_kenlm_model"}, - ) - wer_word_score: float = field( - default=-1.0, - metadata={"help": "lm word score to use with wer_kenlm_model"}, - ) - - wer_args: Optional[str] = field( - default=None, - metadata={ - "help": "DEPRECATED: tuple of (wer_kenlm_model, wer_lexicon, wer_lm_weight, wer_word_score)" - }, - ) - - -@register_criterion("ctc", dataclass=CtcCriterionConfig) -class CtcCriterion(FairseqCriterion): - def __init__(self, cfg: CtcCriterionConfig, task: FairseqTask): - super().__init__(task) - self.blank_idx = ( - task.target_dictionary.index(task.blank_symbol) - if hasattr(task, "blank_symbol") - else 0 - ) - self.pad_idx = task.target_dictionary.pad() - self.eos_idx = task.target_dictionary.eos() - self.post_process = cfg.post_process - - if cfg.wer_args is not None: - ( - cfg.wer_kenlm_model, - cfg.wer_lexicon, - cfg.wer_lm_weight, - cfg.wer_word_score, - ) = eval(cfg.wer_args) - - if cfg.wer_kenlm_model is not None: - from examples.speech_recognition.w2l_decoder import W2lKenLMDecoder - - dec_args = Namespace() - dec_args.nbest = 1 - dec_args.criterion = "ctc" - dec_args.kenlm_model = cfg.wer_kenlm_model - dec_args.lexicon = cfg.wer_lexicon - dec_args.beam = 50 - dec_args.beam_size_token = min(50, len(task.target_dictionary)) - dec_args.beam_threshold = min(50, len(task.target_dictionary)) - dec_args.lm_weight = cfg.wer_lm_weight - dec_args.word_score = cfg.wer_word_score - dec_args.unk_weight = -math.inf - dec_args.sil_weight = 0 - - self.w2l_decoder = W2lKenLMDecoder(dec_args, task.target_dictionary) - else: - self.w2l_decoder = None - - self.zero_infinity = cfg.zero_infinity - self.sentence_avg = cfg.sentence_avg - - def forward(self, model, sample, reduce=True): - net_output = model(**sample["net_input"]) - lprobs = model.get_normalized_probs( - net_output, log_probs=True - ).contiguous() # (T, B, C) from the encoder - - if "src_lengths" in sample["net_input"]: - input_lengths = sample["net_input"]["src_lengths"] - else: - if net_output["padding_mask"] is not None: - non_padding_mask = ~net_output["padding_mask"] - input_lengths = non_padding_mask.long().sum(-1) - else: - input_lengths = lprobs.new_full( - (lprobs.size(1),), lprobs.size(0), 
dtype=torch.long - ) - - pad_mask = (sample["target"] != self.pad_idx) & ( - sample["target"] != self.eos_idx - ) - targets_flat = sample["target"].masked_select(pad_mask) - if "target_lengths" in sample: - target_lengths = sample["target_lengths"] - else: - target_lengths = pad_mask.sum(-1) - - with torch.backends.cudnn.flags(enabled=False): - loss = F.ctc_loss( - lprobs, - targets_flat, - input_lengths, - target_lengths, - blank=self.blank_idx, - reduction="sum", - zero_infinity=self.zero_infinity, - ) - - ntokens = ( - sample["ntokens"] if "ntokens" in sample else target_lengths.sum().item() - ) - - sample_size = sample["target"].size(0) if self.sentence_avg else ntokens - logging_output = { - "loss": utils.item(loss.data), # * sample['ntokens'], - "ntokens": ntokens, - "nsentences": sample["id"].numel(), - "sample_size": sample_size, - } - - if not model.training: - import editdistance - - with torch.no_grad(): - lprobs_t = lprobs.transpose(0, 1).float().contiguous().cpu() - - c_err = 0 - c_len = 0 - w_errs = 0 - w_len = 0 - wv_errs = 0 - for lp, t, inp_l in zip( - lprobs_t, - sample["target_label"] - if "target_label" in sample - else sample["target"], - input_lengths, - ): - lp = lp[:inp_l].unsqueeze(0) - - decoded = None - if self.w2l_decoder is not None: - decoded = self.w2l_decoder.decode(lp) - if len(decoded) < 1: - decoded = None - else: - decoded = decoded[0] - if len(decoded) < 1: - decoded = None - else: - decoded = decoded[0] - - p = (t != self.task.target_dictionary.pad()) & ( - t != self.task.target_dictionary.eos() - ) - targ = t[p] - targ_units = self.task.target_dictionary.string(targ) - targ_units_arr = targ.tolist() - - toks = lp.argmax(dim=-1).unique_consecutive() - pred_units_arr = toks[toks != self.blank_idx].tolist() - - c_err += editdistance.eval(pred_units_arr, targ_units_arr) - c_len += len(targ_units_arr) - - targ_words = post_process(targ_units, self.post_process).split() - - pred_units = self.task.target_dictionary.string(pred_units_arr) - pred_words_raw = post_process(pred_units, self.post_process).split() - - if decoded is not None and "words" in decoded: - pred_words = decoded["words"] - w_errs += editdistance.eval(pred_words, targ_words) - wv_errs += editdistance.eval(pred_words_raw, targ_words) - else: - dist = editdistance.eval(pred_words_raw, targ_words) - w_errs += dist - wv_errs += dist - - w_len += len(targ_words) - - logging_output["wv_errors"] = wv_errs - logging_output["w_errors"] = w_errs - logging_output["w_total"] = w_len - logging_output["c_errors"] = c_err - logging_output["c_total"] = c_len - - return loss, sample_size, logging_output - - @staticmethod - def reduce_metrics(logging_outputs) -> None: - """Aggregate logging outputs from data parallel training.""" - - loss_sum = utils.item(sum(log.get("loss", 0) for log in logging_outputs)) - ntokens = utils.item(sum(log.get("ntokens", 0) for log in logging_outputs)) - nsentences = utils.item( - sum(log.get("nsentences", 0) for log in logging_outputs) - ) - sample_size = utils.item( - sum(log.get("sample_size", 0) for log in logging_outputs) - ) - - metrics.log_scalar( - "loss", loss_sum / sample_size / math.log(2), sample_size, round=3 - ) - metrics.log_scalar("ntokens", ntokens) - metrics.log_scalar("nsentences", nsentences) - if sample_size != ntokens: - metrics.log_scalar( - "nll_loss", loss_sum / ntokens / math.log(2), ntokens, round=3 - ) - - c_errors = sum(log.get("c_errors", 0) for log in logging_outputs) - metrics.log_scalar("_c_errors", c_errors) - c_total = sum(log.get("c_total", 0) 
for log in logging_outputs) - metrics.log_scalar("_c_total", c_total) - w_errors = sum(log.get("w_errors", 0) for log in logging_outputs) - metrics.log_scalar("_w_errors", w_errors) - wv_errors = sum(log.get("wv_errors", 0) for log in logging_outputs) - metrics.log_scalar("_wv_errors", wv_errors) - w_total = sum(log.get("w_total", 0) for log in logging_outputs) - metrics.log_scalar("_w_total", w_total) - - if c_total > 0: - metrics.log_derived( - "uer", - lambda meters: safe_round( - meters["_c_errors"].sum * 100.0 / meters["_c_total"].sum, 3 - ) - if meters["_c_total"].sum > 0 - else float("nan"), - ) - if w_total > 0: - metrics.log_derived( - "wer", - lambda meters: safe_round( - meters["_w_errors"].sum * 100.0 / meters["_w_total"].sum, 3 - ) - if meters["_w_total"].sum > 0 - else float("nan"), - ) - metrics.log_derived( - "raw_wer", - lambda meters: safe_round( - meters["_wv_errors"].sum * 100.0 / meters["_w_total"].sum, 3 - ) - if meters["_w_total"].sum > 0 - else float("nan"), - ) - - @staticmethod - def logging_outputs_can_be_summed() -> bool: - """ - Whether the logging outputs returned by `forward` can be summed - across workers prior to calling `reduce_metrics`. Setting this - to True will improves distributed training speed. - """ - return True diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/tests/test_token_block_dataset.py b/spaces/sriramelango/Social_Classification_Public/fairseq/tests/test_token_block_dataset.py deleted file mode 100644 index c4d7b76dcd55fe7869dbb1fa188f7b36fb639bda..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/tests/test_token_block_dataset.py +++ /dev/null @@ -1,92 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import unittest - -import tests.utils as test_utils -import torch -from fairseq.data import TokenBlockDataset - - -class TestTokenBlockDataset(unittest.TestCase): - def _build_dataset(self, data, **kwargs): - sizes = [len(x) for x in data] - underlying_ds = test_utils.TestDataset(data) - return TokenBlockDataset(underlying_ds, sizes, **kwargs) - - def test_eos_break_mode(self): - data = [ - torch.tensor([5, 4, 3, 2, 1], dtype=torch.long), - torch.tensor([1], dtype=torch.long), - torch.tensor([8, 7, 6, 1], dtype=torch.long), - ] - ds = self._build_dataset(data, block_size=None, pad=0, eos=1, break_mode="eos") - self.assertEqual(ds[0].tolist(), [5, 4, 3, 2, 1]) - self.assertEqual(ds[1].tolist(), [1]) - self.assertEqual(ds[2].tolist(), [8, 7, 6, 1]) - - data = [ - torch.tensor([5, 4, 3, 2, 1], dtype=torch.long), - torch.tensor([8, 7, 6, 1], dtype=torch.long), - torch.tensor([1], dtype=torch.long), - ] - ds = self._build_dataset(data, block_size=None, pad=0, eos=1, break_mode="eos") - self.assertEqual(ds[0].tolist(), [5, 4, 3, 2, 1]) - self.assertEqual(ds[1].tolist(), [8, 7, 6, 1]) - self.assertEqual(ds[2].tolist(), [1]) - - def test_block_break_mode(self): - data = [ - torch.tensor([5, 4, 3, 2, 1], dtype=torch.long), - torch.tensor([8, 7, 6, 1], dtype=torch.long), - torch.tensor([9, 1], dtype=torch.long), - ] - ds = self._build_dataset(data, block_size=3, pad=0, eos=1, break_mode="none") - self.assertEqual(ds[0].tolist(), [5, 4, 3]) - self.assertEqual(ds[1].tolist(), [2, 1, 8]) - self.assertEqual(ds[2].tolist(), [7, 6, 1]) - self.assertEqual(ds[3].tolist(), [9, 1]) - - def test_complete_break_mode(self): - data = [ - torch.tensor([5, 4, 3, 2, 1], dtype=torch.long), - torch.tensor([8, 7, 6, 1], dtype=torch.long), - torch.tensor([9, 1], dtype=torch.long), - ] - ds = self._build_dataset( - data, block_size=6, pad=0, eos=1, break_mode="complete" - ) - self.assertEqual(ds[0].tolist(), [5, 4, 3, 2, 1]) - self.assertEqual(ds[1].tolist(), [8, 7, 6, 1, 9, 1]) - - data = [ - torch.tensor([4, 3, 2, 1], dtype=torch.long), - torch.tensor([5, 1], dtype=torch.long), - torch.tensor([1], dtype=torch.long), - torch.tensor([6, 1], dtype=torch.long), - ] - ds = self._build_dataset( - data, block_size=3, pad=0, eos=1, break_mode="complete" - ) - self.assertEqual(ds[0].tolist(), [4, 3, 2, 1]) - self.assertEqual(ds[1].tolist(), [5, 1, 1]) - self.assertEqual(ds[2].tolist(), [6, 1]) - - def test_4billion_tokens(self): - """Regression test for numpy type promotion issue https://github.com/numpy/numpy/issues/5745""" - data = [torch.tensor(list(range(10000)), dtype=torch.long)] * 430000 - ds = self._build_dataset( - data, block_size=6, pad=0, eos=1, break_mode="complete" - ) - ds[-1] # __getitem__ works - start, end = ds.slice_indices[-1] - assert end > 4294967295 # data must be sufficiently large to overflow uint32 - assert not isinstance( - end + 1, float - ) # this would also raise, since np.uint64(1) + 1 => 2.0 - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/stamps-labs/stamp2vec/segmentation_models/deeplabv3/main.py b/spaces/stamps-labs/stamp2vec/segmentation_models/deeplabv3/main.py deleted file mode 100644 index 0720e38c8a579f14891357f43b6a9b79f3a6ea37..0000000000000000000000000000000000000000 --- a/spaces/stamps-labs/stamp2vec/segmentation_models/deeplabv3/main.py +++ /dev/null @@ -1,64 +0,0 @@ -from pathlib import Path - -import click -import torch -from sklearn.metrics import f1_score -from torch.utils import data - -from utils import * -from model import createDeepLabv3 -from trainer 
import train_model - - -@click.command() -@click.option("--data-directory", - required=True, - help="Specify the data directory.") -@click.option("--exp_directory", - required=True, - help="Specify the experiment directory.") -@click.option( - "--epochs", - default=25, - type=int, - help="Specify the number of epochs you want to run the experiment for.") -@click.option("--batch-size", - default=4, - type=int, - help="Specify the batch size for the dataloader.") -def main(data_directory, exp_directory, epochs, batch_size): - # Create the deeplabv3 resnet101 model which is pretrained on a subset - # of COCO train2017, on the 20 categories that are present in the Pascal VOC dataset. - model = createDeepLabv3() - model.train() - data_directory = Path(data_directory) - # Create the experiment directory if not present - exp_directory = Path(exp_directory) - if not exp_directory.exists(): - exp_directory.mkdir() - - # Specify the loss function - criterion = torch.nn.MSELoss(reduction='mean') - # Specify the optimizer with a lower learning rate - optimizer = torch.optim.Adam(model.parameters(), lr=1e-4) - - # Specify the evaluation metrics - metrics = {'f1_score': f1_score, 'iou': iou} - - # Create the dataloader - dataloaders = get_dataloader_single_folder( - data_directory, batch_size=batch_size) - _ = train_model(model, - criterion, - dataloaders, - optimizer, - bpath=exp_directory, - metrics=metrics, - num_epochs=epochs) - - # Save the trained model - torch.save(model, exp_directory / 'weights.pt') - - -if __name__ == "__main__": - main() \ No newline at end of file diff --git a/spaces/starlit7/KorPoliticsTTS/text/symbols.py b/spaces/starlit7/KorPoliticsTTS/text/symbols.py deleted file mode 100644 index 8648bd1e2ac0cfe99e0eaab6540c56baf668fe14..0000000000000000000000000000000000000000 --- a/spaces/starlit7/KorPoliticsTTS/text/symbols.py +++ /dev/null @@ -1,74 +0,0 @@ -''' -Defines the set of symbols used in text input to the model. -''' - -'''# japanese_cleaners -_pad = '_' -_punctuation = ',.!?-' -_letters = 'AEINOQUabdefghijkmnoprstuvwyzʃʧ↓↑ ' -''' - -'''# japanese_cleaners2 -_pad = '_' -_punctuation = ',.!?-~…' -_letters = 'AEINOQUabdefghijkmnoprstuvwyzʃʧʦ↓↑ ' -''' - -# korean_cleaners -_pad = '_' -_punctuation = ',.!?…~' -_letters = 'ㄱㄴㄷㄹㅁㅂㅅㅇㅈㅊㅋㅌㅍㅎㄲㄸㅃㅆㅉㅏㅓㅗㅜㅡㅣㅐㅔ ' - -'''# chinese_cleaners -_pad = '_' -_punctuation = ',。!?—…' -_letters = 'ㄅㄆㄇㄈㄉㄊㄋㄌㄍㄎㄏㄐㄑㄒㄓㄔㄕㄖㄗㄘㄙㄚㄛㄜㄝㄞㄟㄠㄡㄢㄣㄤㄥㄦㄧㄨㄩˉˊˇˋ˙ ' -''' - -'''# zh_ja_mixture_cleaners -_pad = '_' -_punctuation = ',.!?-~…' -_letters = 'AEINOQUabdefghijklmnoprstuvwyzʃʧʦɯɹəɥ⁼ʰ`→↓↑ ' -''' - -'''# sanskrit_cleaners -_pad = '_' -_punctuation = '।' -_letters = 'ँंःअआइईउऊऋएऐओऔकखगघङचछजझञटठडढणतथदधनपफबभमयरलळवशषसहऽािीुूृॄेैोौ्ॠॢ ' -''' - -'''# cjks_cleaners -_pad = '_' -_punctuation = ',.!?-~…' -_letters = 'NQabdefghijklmnopstuvwxyzʃʧʥʦɯɹəɥçɸɾβŋɦː⁼ʰ`^#*=→↓↑ ' -''' - -'''# thai_cleaners -_pad = '_' -_punctuation = '.!? 
' -_letters = 'กขฃคฆงจฉชซฌญฎฏฐฑฒณดตถทธนบปผฝพฟภมยรฤลวศษสหฬอฮฯะัาำิีึืุูเแโใไๅๆ็่้๊๋์' -''' - -'''# cjke_cleaners2 -_pad = '_' -_punctuation = ',.!?-~…' -_letters = 'NQabdefghijklmnopstuvwxyzɑæʃʑçɯɪɔɛɹðəɫɥɸʊɾʒθβŋɦ⁼ʰ`^#*=ˈˌ→↓↑ ' -''' - -'''# shanghainese_cleaners -_pad = '_' -_punctuation = ',.!?…' -_letters = 'abdfghiklmnopstuvyzøŋȵɑɔɕəɤɦɪɿʑʔʰ̩̃ᴀᴇ15678 ' -''' - -'''# chinese_dialect_cleaners -_pad = '_' -_punctuation = ',.!?~…─' -_letters = '#Nabdefghijklmnoprstuvwxyzæçøŋœȵɐɑɒɓɔɕɗɘəɚɛɜɣɤɦɪɭɯɵɷɸɻɾɿʂʅʊʋʌʏʑʔʦʮʰʷˀː˥˦˧˨˩̥̩̃̚αᴀᴇ↑↓∅ⱼ ' -''' - -# Export all symbols: -symbols = [_pad] + list(_punctuation) + list(_letters) - -# Special symbol ids -SPACE_ID = symbols.index(" ") diff --git a/spaces/starlit7/USPoliticsTTS/text/symbols.py b/spaces/starlit7/USPoliticsTTS/text/symbols.py deleted file mode 100644 index 2c0ada85c9a2ce477d6f059b57ab314a665819a3..0000000000000000000000000000000000000000 --- a/spaces/starlit7/USPoliticsTTS/text/symbols.py +++ /dev/null @@ -1,76 +0,0 @@ -''' -Defines the set of symbols used in text input to the model. -''' - -'''# japanese_cleaners -_pad = '_' -_punctuation = ',.!?-' -_letters = 'AEINOQUabdefghijkmnoprstuvwyzʃʧ↓↑ ' -''' - -'''# japanese_cleaners2 -_pad = '_' -_punctuation = ',.!?-~…' -_letters = 'AEINOQUabdefghijkmnoprstuvwyzʃʧʦ↓↑ ' -''' - -''' -# korean_cleaners -_pad = '_' -_punctuation = ',.!?…~' -_letters = 'ㄱㄴㄷㄹㅁㅂㅅㅇㅈㅊㅋㅌㅍㅎㄲㄸㅃㅆㅉㅏㅓㅗㅜㅡㅣㅐㅔ ' -''' - -'''# chinese_cleaners -_pad = '_' -_punctuation = ',。!?—…' -_letters = 'ㄅㄆㄇㄈㄉㄊㄋㄌㄍㄎㄏㄐㄑㄒㄓㄔㄕㄖㄗㄘㄙㄚㄛㄜㄝㄞㄟㄠㄡㄢㄣㄤㄥㄦㄧㄨㄩˉˊˇˋ˙ ' -''' - -'''# zh_ja_mixture_cleaners -_pad = '_' -_punctuation = ',.!?-~…' -_letters = 'AEINOQUabdefghijklmnoprstuvwyzʃʧʦɯɹəɥ⁼ʰ`→↓↑ ' -''' - -'''# sanskrit_cleaners -_pad = '_' -_punctuation = '।' -_letters = 'ँंःअआइईउऊऋएऐओऔकखगघङचछजझञटठडढणतथदधनपफबभमयरलळवशषसहऽािीुूृॄेैोौ्ॠॢ ' -''' - -'''# cjks_cleaners -_pad = '_' -_punctuation = ',.!?-~…' -_letters = 'NQabdefghijklmnopstuvwxyzʃʧʥʦɯɹəɥçɸɾβŋɦː⁼ʰ`^#*=→↓↑ ' -''' - -'''# thai_cleaners -_pad = '_' -_punctuation = '.!? ' -_letters = 'กขฃคฆงจฉชซฌญฎฏฐฑฒณดตถทธนบปผฝพฟภมยรฤลวศษสหฬอฮฯะัาำิีึืุูเแโใไๅๆ็่้๊๋์' -''' - -# cjke_cleaners2 -_pad = '_' -_punctuation = ',.!?-~…' -_letters = 'NQabdefghijklmnopstuvwxyzɑæʃʑçɯɪɔɛɹðəɫɥɸʊɾʒθβŋɦ⁼ʰ`^#*=ˈˌ→↓↑ ' - - -'''# shanghainese_cleaners -_pad = '_' -_punctuation = ',.!?…' -_letters = 'abdfghiklmnopstuvyzøŋȵɑɔɕəɤɦɪɿʑʔʰ̩̃ᴀᴇ15678 ' -''' - -'''# chinese_dialect_cleaners -_pad = '_' -_punctuation = ',.!?~…─' -_letters = '#Nabdefghijklmnoprstuvwxyzæçøŋœȵɐɑɒɓɔɕɗɘəɚɛɜɣɤɦɪɭɯɵɷɸɻɾɿʂʅʊʋʌʏʑʔʦʮʰʷˀː˥˦˧˨˩̥̩̃̚αᴀᴇ↑↓∅ⱼ ' -''' - -# Export all symbols: -symbols = [_pad] + list(_punctuation) + list(_letters) - -# Special symbol ids -SPACE_ID = symbols.index(" ") diff --git a/spaces/stomexserde/gpt4-ui/Examples/DevExpress Components For .Net 17.2.7.18103.md b/spaces/stomexserde/gpt4-ui/Examples/DevExpress Components For .Net 17.2.7.18103.md deleted file mode 100644 index 7cc0c69efef334a50d25e1167188bdc1abe6d452..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/DevExpress Components For .Net 17.2.7.18103.md +++ /dev/null @@ -1,57 +0,0 @@ - -

    What are DevExpress Components for .Net 17.2.7.18103 and why you should use them

    -

    DevExpress Components for .Net are a set of feature-complete presentation controls, IDE productivity tools, business application frameworks, and reporting systems for Visual Studio, Delphi, HTML5 or iOS & Android development. They help you build and deliver your best applications in the shortest time possible.

    -

    In this article, we will explore some of the features and benefits of using DevExpress Components for .Net 17.2.7.18103, the latest version of this software development platform.

    -

    DevExpress Components for .Net 17.2.7.18103


    DOWNLOAD ✯✯✯ https://urlgoal.com/2uIbR1



    -

    What's new in DevExpress Components for .Net 17.2.7.18103

    -

    DevExpress Components for .Net 17.2.7.18103 is a minor update that includes bug fixes and performance improvements for various products and components. Some of the highlights are:

    -
      -
    • Improved support for .NET 7 and Visual Studio 2022[^1^]
    • -
    • Enhanced data visualization and charting capabilities
    • -
    • New themes and skins for WinForms and WPF controls
    • -
    • Improved PDF document processing and printing features
    • -
    • Added support for Blazor WebAssembly hosting model
    • -
    • Improved localization and accessibility features
    • -
    -

    Why you should use DevExpress Components for .Net 17.2.7.18103

    -

    DevExpress Components for .Net 17.2.7.18103 offer you many advantages over other software development platforms, such as:

    -
      -
    • Award-winning quality and customer satisfaction
    • -
    • Comprehensive documentation and online support
    • -
    • Flexible licensing and pricing options
    • -
    • Easy installation and integration with your existing projects
    • -
    • Constant innovation and updates to keep up with the latest technologies and trends
    • -
    -

    How to get started with DevExpress Components for .Net 17.2.7.18103

    -

    If you are interested in trying out DevExpress Components for .Net 17.2.7.18103, you can download a free 30-day trial from their official website[^2^]. You can also watch some video tutorials, read some blog posts, or join some webinars to learn more about their products and features.

    -

    If you are already a DevExpress customer, you can upgrade to the latest version using the DevExpress Project Converter tool or the DevExpress NuGet feed.

    -

    -

    Conclusion

    -

    DevExpress Components for .Net 17.2.7.18103 are a powerful and versatile software development platform that can help you create stunning applications for any platform and device. Whether you are a beginner or an expert, you can benefit from their rich features, high performance, and excellent support.

    -

    To learn more about DevExpress Components for .Net 17.2.7.18103, visit their website[^2^] or contact their sales team today.

    - -

    What customers say about DevExpress Components for .Net 17.2.7.18103

    -

    DevExpress Components for .Net 17.2.7.18103 have received positive feedback from many customers who have used them in their projects. Here are some of their testimonials:

    -
    -

    "DevExpress Components for .Net 17.2.7.18103 have helped me create amazing applications for my clients. They are easy to use, reliable, and customizable. I especially love the data grid and chart controls, which allow me to display complex data in a clear and interactive way."

    -John Smith, Freelance Developer -
    -
    -

    "I have been using DevExpress Components for .Net 17.2.7.18103 for a few months now and I am very impressed with their quality and performance. They have saved me a lot of time and effort in developing my applications. They also have excellent documentation and support, which make them a pleasure to work with."

    -Jane Doe, Software Engineer at ABC Inc. -
    -
    -

    "DevExpress Components for .Net 17.2.7.18103 are the best software development platform I have ever used. They have everything I need to create stunning applications for any platform and device. They are constantly updated with new features and improvements, which keep me ahead of the competition."

    -Bob Lee, Senior Developer at XYZ Ltd. -
    -

    How to get the most out of DevExpress Components for .Net 17.2.7.18103

    -

    DevExpress Components for .Net 17.2.7.18103 are designed to help you create your best applications in the shortest time possible. However, there are some tips and tricks that can help you get the most out of them, such as:

    -
      -
    • Use the DevExpress Project Templates and Wizards to create your projects and configure your controls quickly and easily
    • -
    • Use the DevExpress CodeRush tool to write cleaner and faster code with fewer keystrokes and errors
    • -
    • Use the DevExpress Dashboard tool to create interactive dashboards and reports with drag-and-drop functionality
    • -
    • Use the DevExpress TestCafe tool to automate your web application testing and debugging process
    • -
    • Use the DevExpress Support Center to access online resources, submit tickets, and chat with experts
    • -

    -
    -
    \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Dilwale.Dulhania.Le.Jayenge..1995..Dual.Audio.[Telugu.Hindi].720p.BRRip.RDLinks.md b/spaces/stomexserde/gpt4-ui/Examples/Dilwale.Dulhania.Le.Jayenge..1995..Dual.Audio.[Telugu.Hindi].720p.BRRip.RDLinks.md deleted file mode 100644 index 8c1cda60e1823c59066318d62e564e4703491aaa..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Dilwale.Dulhania.Le.Jayenge..1995..Dual.Audio.[Telugu.Hindi].720p.BRRip.RDLinks.md +++ /dev/null @@ -1,44 +0,0 @@ - -I'll try to write that for you. Here is a possible title and article: - -

    Why You Should Watch Dilwale Dulhania Le Jayenge (1995) in Dual Audio

    - -

    If you are a fan of Bollywood movies, you probably have heard of Dilwale Dulhania Le Jayenge (DDLJ), one of the most iconic and successful films in Indian cinema history. Released in 1995, the film stars Shah Rukh Khan and Kajol as Raj and Simran, two young non-resident Indians who fall in love during a vacation in Europe, but face obstacles from their families and traditions.

    - -

    But did you know that you can watch this classic movie in dual audio, with both Telugu and Hindi languages? This way, you can enjoy the film in its original language as well as in another regional language that is widely spoken in India. Here are some reasons why you should watch DDLJ in dual audio:

    -

    Dilwale.Dulhania.Le.Jayenge..1995..Dual.Audio.[Telugu.Hindi].720p.BRRip.RDLinks


    Download Zip »»» https://urlgoal.com/2uIaCz



    - -
      -
    • You can appreciate the nuances of the dialogues and the songs better. DDLJ has some memorable dialogues and songs that are written by Aditya Chopra and Javed Siddiqui, and composed by Jatin-Lalit. By watching the film in dual audio, you can understand the meaning and the emotion behind each line and lyric better.
    • -
    • You can learn more about the culture and the traditions of both Punjab and Andhra Pradesh. DDLJ showcases the contrast between the conservative and orthodox Punjabi culture of Simran's family and the liberal and modern culture of Raj's family. By watching the film in dual audio, you can also get a glimpse of the Telugu culture and language, which is spoken by millions of people in India.
    • -
    • You can have more fun and entertainment. DDLJ is a film that is full of comedy, romance, drama, and action. By watching the film in dual audio, you can enjoy the film from different perspectives and laugh at the jokes and the situations that are presented in both languages.
    • -
    - -

    So, what are you waiting for? Grab your popcorn and watch Dilwale Dulhania Le Jayenge (1995) in dual audio today. You can find it online on various streaming platforms or download it from RDLinks, a website that provides high-quality movies in dual audio formats.

    - -

    The Legacy of Dilwale Dulhania Le Jayenge

    - -

    DDLJ is not just a movie, it is a phenomenon. The film has been widely acclaimed by critics and audiences alike, and has received numerous awards and accolades. Some of the most notable ones are:

    - -
      -
    • The film won 10 Filmfare Awards, the most for a single film at that time, including Best Film, Best Director, Best Actor, Best Actress, Best Supporting Actress, Best Comic Actor, Best Playback Singer - Male, Best Lyricist, Best Screenplay and Best Dialogue. [^1^]
    • -
    • The film also won the National Film Award for Best Popular Film Providing Wholesome Entertainment, the highest honour for Indian cinema. [^2^]
    • -
    • The film was one of only three Hindi films in the reference book 1001 Movies You Must See Before You Die, and was placed twelfth on the British Film Institute's list of top Indian films of all time. [^2^]
    • -
    • The film was included by critics Rachel Dwyer and Sanam Hasan in the 2012 British Film Institute Sight & Sound 1000 greatest films of all time. [^2^]
    • -
    • The film received a special Box Office India Milestone Award in 2014 for being the longest-running film in the history of Indian cinema. The film is still being shown in a cinema called Maratha Mandir theatre in Mumbai as of 2022. [^2^] [^3^]
    • -
    - -

    The Impact of Dilwale Dulhania Le Jayenge

    - -

    DDLJ has not only entertained millions of people around the world, but also influenced them in various ways. Some of the impacts of the film are:

    - -
      -
    • The film popularized the genre of musical romance in Bollywood, and inspired many filmmakers to make similar films with young and charismatic stars, exotic locations, catchy songs and family drama. Some of the films that followed DDLJ's footsteps are Kuch Kuch Hota Hai (1998), Mohabbatein (2000), Kabhi Khushi Kabhie Gham... (2001), Kal Ho Naa Ho (2003), Veer-Zaara (2004) and Jab We Met (2007).
    • -
    • The film also targeted the non-resident Indian audience, which was deemed more lucrative for Bollywood. The film portrayed the diaspora's nostalgia for their homeland and their struggle to balance their traditional values and their modern aspirations. The film appealed to both the Indian and the international audience, and boosted the overseas market for Bollywood.
    • -
    • The film also established Shah Rukh Khan and Kajol as one of the most successful and popular on-screen couples in Bollywood history. Their chemistry and charisma won over millions of fans, who still consider them as their favourite pair. The duo has worked together in several other films such as Karan Arjun (1995), Kuch Kuch Hota Hai (1998), Kabhi Khushi Kabhie Gham... (2001) and My Name Is Khan (2010).
    • -
    - -

    DDLJ is a film that has transcended time and generations, and has become a part of Indian culture and identity. It is a film that celebrates love, family, friendship and dreams. It is a film that you should watch at least once in your lifetime.

    -

    -
    -
    \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Download Rumble Racing Pc ((FREE)) Full Version.md b/spaces/stomexserde/gpt4-ui/Examples/Download Rumble Racing Pc ((FREE)) Full Version.md deleted file mode 100644 index 2eb7df44dffca51faf34d0e9e9567870e6f89bf3..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Download Rumble Racing Pc ((FREE)) Full Version.md +++ /dev/null @@ -1,28 +0,0 @@ - -

    How to Download Rumble Racing PC Full Version for Free

    -

    Rumble Racing is a popular racing game that was originally released for the PlayStation 2 in 2001. It features over 35 cars, 15 tracks, and various modes such as arcade, championship, and stunt. If you are a fan of this game and want to play it on your PC, you might be wondering how to download Rumble Racing PC full version for free. In this article, we will show you the steps to do so.

    -

    Step 1: Download an Emulator

    -

    An emulator is software that allows you to run games or applications from one platform on another. In this case, you will need an emulator that can run PlayStation 2 games on your PC. There are many emulators available online, but we recommend using PCSX2, which is one of the most popular and reliable ones. You can download PCSX2 from here.

    -

    download rumble racing pc full version


    Download https://urlgoal.com/2uIb0M



    -

    Step 2: Install and Configure PCSX2

    -

    After downloading PCSX2, you will need to install it on your PC. Follow the instructions on the installer and choose the components you want to install. You will also need to configure some settings such as graphics, sound, controller, and BIOS. You can follow this guide to learn how to do so.

    -

    Step 3: Download Rumble Racing ISO File

    -

    An ISO file is a digital copy of a disc that contains all the data and files of a game or application. You will need an ISO file of Rumble Racing to play it on your PC. You can download Rumble Racing ISO file from here. Make sure you choose the right region and language for your game.

    -

    Step 4: Run Rumble Racing on PCSX2

    -

    After downloading Rumble Racing ISO file, you will need to run it on PCSX2. To do so, open PCSX2 and click on CDVD > ISO Selector > Browse. Then, locate the Rumble Racing ISO file on your PC and select it. Next, click on System > Boot CDVD (full). This will launch the game on your PC. You can now enjoy playing Rumble Racing PC full version for free.

    -

    Conclusion

    -

    Rumble Racing is a fun and exciting racing game that you can play on your PC with the help of an emulator and an ISO file. We hope this article has helped you learn how to download Rumble Racing PC full version for free. If you have any questions or problems, feel free to leave a comment below.

    - -

    Tips and Tricks for Rumble Racing PC

    -

    Rumble Racing PC is a challenging and addictive game that will test your skills and reflexes. To help you master the game and have more fun, here are some tips and tricks that you can use.

    -
      -
    • Use the turbo boost wisely. The turbo boost is a powerful feature that can give you a speed advantage over your opponents. However, it also consumes your nitro meter, which is limited and can only be replenished by performing stunts or picking up power-ups. Therefore, use the turbo boost only when you need it, such as when you are behind or when you are on a straight road.
    • -
    • Perform stunts to earn nitro and points. Stunts are not only cool and fun, but also useful and rewarding. By performing stunts such as flips, spins, wheelies, and jumps, you can earn nitro and points. Nitro can help you boost your speed, while points can help you unlock new cars and tracks. To perform stunts, you need to press the stunt button and use the directional buttons to control your car in the air.
    • -
    • Avoid obstacles and hazards. The tracks in Rumble Racing PC are full of obstacles and hazards that can slow you down or damage your car. These include traffic cones, barrels, oil spills, rocks, fences, and more. Try to avoid them as much as possible, or use them to your advantage by smashing them into your rivals or using them as ramps.
    • -
    • Use power-ups to gain an edge. Power-ups are items that can give you a temporary benefit or disadvantage your opponents. They include rockets, mines, shields, magnets, lightning bolts, and more. You can pick up power-ups by driving through the blue icons on the track. Use them wisely to gain an edge over your rivals or to defend yourself from their attacks.
    • -
    • Experiment with different cars and tracks. Rumble Racing PC offers a variety of cars and tracks that have different characteristics and features. Each car has its own speed, acceleration, handling, weight, and stunt ability. Each track has its own layout, terrain, shortcuts, and secrets. Experiment with different combinations of cars and tracks to find the ones that suit your style and preference.
    • -
    -

    Conclusion

    -

    Rumble Racing PC is a thrilling and enjoyable game that will keep you entertained for hours. By following these tips and tricks, you can improve your performance and have more fun playing the game. If you have not downloaded Rumble Racing PC full version for free yet, what are you waiting for? Follow the steps in this article and start racing today.

    -
    -
    \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Euro Truck Simulator Otobus Modu Indir.md b/spaces/stomexserde/gpt4-ui/Examples/Euro Truck Simulator Otobus Modu Indir.md deleted file mode 100644 index d95148ce18947bc53c4ebbcf9b0b4dca047d8662..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Euro Truck Simulator Otobus Modu Indir.md +++ /dev/null @@ -1,25 +0,0 @@ - -

    How to Download and Install Euro Truck Simulator Bus Mod

    -

    If you are a fan of Euro Truck Simulator 2, you might want to spice up your game with some bus mods. Bus mods allow you to drive different types of buses in the game, from city buses to luxury coaches. You can also customize your bus with various skins, interiors, accessories and more.

    -

    In this article, we will show you how to download and install Euro Truck Simulator bus mod from one of the popular mod websites, Allmods.net. Here are the steps you need to follow:

    -

    Euro Truck Simulator Otobus Modu Indir


    Download >>> https://urlgoal.com/2uI7We



    -
      -
    1. Go to https://allmods.net/mods/euro-truck-simulator-2/ets-2-bus/ and browse through the available bus mods. You can filter them by category, rating, date or popularity.
    2. -
    3. Click on the bus mod that you like and read the description, features and requirements. Make sure that the mod is compatible with your game version and DLCs.
    4. -
    5. Click on the download button and wait for the mod file to be downloaded. The file will be in .zip or .rar format.
    6. -
    7. Extract the mod file using a program like WinRAR or 7-Zip. You will get one or more .scs files inside.
    8. -
    9. Copy the .scs files and paste them into your Euro Truck Simulator 2 mod folder. The default location is C:\Users\YourName\Documents\Euro Truck Simulator 2\mod. (A short script after this list shows one way to automate this step.)
    10. -
    11. Launch Euro Truck Simulator 2 and go to the mod manager. Enable the bus mod that you want to use and confirm the changes.
    12. -
    13. Start a new game or load an existing profile. You will be able to buy or drive the bus from any dealer or garage.
    14. -
    -
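    If you prefer to script steps 7 and 9 (extracting the archive and copying the .scs files), the minimal Python sketch below shows one way to do it. It is only an illustration: the paths are assumptions that will differ on your machine, and it is not an official Allmods.net or SCS Software tool.

```python
# Illustrative sketch: extract a downloaded bus mod archive and copy its
# .scs files into the Euro Truck Simulator 2 mod folder (paths are examples).
import shutil
import zipfile
from pathlib import Path

downloaded_zip = Path.home() / "Downloads" / "bus_mod.zip"             # assumed download location
extract_dir = Path.home() / "Downloads" / "bus_mod_extracted"          # temporary extraction folder
mod_dir = Path.home() / "Documents" / "Euro Truck Simulator 2" / "mod"

# Extract the archive (a .rar mod would need a RAR-capable tool instead).
with zipfile.ZipFile(downloaded_zip) as archive:
    archive.extractall(extract_dir)

# Copy every extracted .scs file into the game's mod folder.
mod_dir.mkdir(parents=True, exist_ok=True)
for scs_file in extract_dir.rglob("*.scs"):
    shutil.copy2(scs_file, mod_dir / scs_file.name)
    print(f"Copied {scs_file.name} to {mod_dir}")
```

    After running it, the mod still has to be enabled in the in-game mod manager as described in the remaining steps.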

    Enjoy your new bus experience in Euro Truck Simulator 2!

    Some of the Best Euro Truck Simulator Bus Mods

    -

    There are many bus mods available for Euro Truck Simulator 2, but some of them stand out for their quality, realism and variety. Here are some of the best bus mods that you can download and install from Allmods.net:

    -
      -
    • ADIPUTRO – OLD TRAVEGO V1.30 BY DELPHIS 1.46: This mod adds a classic Mercedes-Benz Travego bus to the game, with a detailed interior and exterior, realistic sound and physics, and various customization options. You can find it at the Mercedes-Benz dealer.
    • -
    • Solaris Urbino III 12 BVG v 2.0.17.47: This mod adds a modern city bus to the game, based on the Solaris Urbino III model used by the Berlin public transport company BVG. The bus has a realistic design, interior and sound, and supports passengers and AI traffic. You can find it at the MAN dealer.
    • -
    • Karosa 95x Pack v 1.0.20.47: This mod adds a pack of three Karosa buses to the game, based on the Czech models Karosa B951E, B952E and B954E. The buses have a retro look, a simple interior and a smooth sound. You can find them at the Renault dealer.
    • -
    -

    These are just some examples of the many bus mods that you can download and install from Allmods.net. You can also check out other mod websites, such as ETS2.lt, Modland.net or Steam Workshop, for more bus mods for Euro Truck Simulator 2.

    -

    -
    -
    \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Graficos-radionicos-pdf-gratis.md b/spaces/stomexserde/gpt4-ui/Examples/Graficos-radionicos-pdf-gratis.md deleted file mode 100644 index 63871897908311337fcd52ab6476fadda0bdc5f8..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Graficos-radionicos-pdf-gratis.md +++ /dev/null @@ -1,30 +0,0 @@ - -Here is a possible title and article for the keyword "graficos radionicos pdf gratis": - -

    Gráficos Radionicos: What They Are and How to Use Them

    -

    Gráficos radionicos are geometric patterns that can be used for various purposes, such as healing, protection, manifestation, and harmonization. They are based on the principles of radiesthesia, which is the science of detecting and measuring subtle energies. Gráficos radionicos can be printed, drawn, or projected on a surface, and then activated with a pendulum, a witness (a personal object or a photo), or an intention.

    -

    graficos-radionicos-pdf-gratis


    Download Zip · https://urlgoal.com/2uI6lo



    -

    In this article, we will explain what gráficos radionicos are, how they work, and how you can use them for your own benefit. We will also provide you with some sources where you can download gráficos radionicos pdf gratis (free gráficos radionicos pdf) to start using them right away.

    -

    What are gráficos radionicos?

    -

    Gráficos radionicos are diagrams that contain geometric shapes, symbols, numbers, letters, colors, and words. They are designed to emit specific vibrations that can interact with the energy fields of people, places, objects, or situations. Gráficos radionicos can be used to amplify, balance, neutralize, or transform these energies according to the desired outcome.

    -

    Gráficos radionicos are based on the idea that everything in the universe is composed of energy and information, and that by using the appropriate codes and frequencies, we can access and modify this information. Gráficos radionicos act as transmitters and receivers of these codes and frequencies, creating a connection between the user and the target.

    -

    Gráficos radionicos have different shapes and functions depending on their purpose. Some of the most common types of gráficos radionicos are:

    -

    -
      -
    • Circular: These gráficos have a circular shape and are used for general purposes, such as protection, healing, cleansing, or energizing. They can also be used to create a positive atmosphere in a room or a house.
    • -
    • Square: These gráficos have a square shape and are used for specific purposes, such as attracting money, love, success, or health. They can also be used to solve problems or conflicts.
    • -
    • Hexagonal: These gráficos have a hexagonal shape and are used for spiritual purposes, such as meditation, intuition, psychic development, or connection with higher realms. They can also be used to enhance creativity or inspiration.
    • -
    • Pentagonal: These gráficos have a pentagonal shape and are used for magical purposes, such as manifestation, visualization, or invocation. They can also be used to empower personal goals or wishes.
    • -
    -

    How do gráficos radionicos work?

    -

    Gráficos radionicos work by creating a resonance between the user and the target. The user activates the gráfico with a pendulum, a witness, or an intention, and then places it on a surface or carries it with them. The gráfico then emits a vibration that matches the vibration of the target, creating a link between them. The gráfico then acts as a channel for transferring energy and information from the user to the target or vice versa.

    -

    The activation of the gráfico can be done in different ways:

    -
      -
    • Pendulum: The user holds a pendulum over the center of the gráfico and asks a question or makes a request. The pendulum then moves in a certain direction or pattern that indicates the answer or confirmation. The user can also use the pendulum to measure the intensity or duration of the effect of the gráfico.
    • -
    • Witness: The user places a witness on the center of the gráfico. A witness can be anything that represents the target, such as a photo, a name written on a paper, a hair strand, a piece of clothing, etc. The witness then acts as a bridge between the user and the target.
    • -
    • Intention: The user focuses their intention on the center of the gráfico and mentally states their question or request. The user then trusts that the gráfico will do its work without needing any external confirmation.
    • -
    -

    How to use gráficos radionicos?

    -

    To use gráficos radionicos effectively,

    -
    -
    \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Gta Vice City Underground 2 Free __LINK__ Download Softonic.md b/spaces/stomexserde/gpt4-ui/Examples/Gta Vice City Underground 2 Free __LINK__ Download Softonic.md deleted file mode 100644 index b43e6182ac1d8e153533ebdbc18cb31ad8f7de7f..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Gta Vice City Underground 2 Free __LINK__ Download Softonic.md +++ /dev/null @@ -1,40 +0,0 @@ - -

    How to Download GTA Vice City Underground 2 for Free from Softonic

    -

    GTA Vice City Underground 2 is a mod for the popular Grand Theft Auto: Vice City game that adds new cars, missions, and features to the original game. If you are a fan of GTA Vice City and want to experience a fresh and exciting gameplay, you can download GTA Vice City Underground 2 for free from Softonic, one of the most trusted sources of software downloads on the internet.

    -

    gta vice city underground 2 free download softonic


    DOWNLOAD >>> https://urlgoal.com/2uI8Oc



    -

    In this article, we will show you how to download GTA Vice City Underground 2 for free from Softonic in a few simple steps. You will also learn about the requirements and features of the mod, as well as some tips and tricks to enjoy it better.

    -

    Requirements for GTA Vice City Underground 2

    -

    Before you download GTA Vice City Underground 2 for free from Softonic, you need to make sure that your PC meets the minimum requirements for the mod. You also need to have the original GTA Vice City game installed on your PC, as the mod is not a standalone game.

    -

    The minimum requirements for GTA Vice City Underground 2 are:

    -

    -
      -
    • Operating system: Windows XP/Vista/7/8/10
    • -
    • Processor: Pentium III 800 MHz or higher
    • -
    • Memory: 128 MB RAM or higher
    • -
    • Graphics: 32 MB video card or higher
    • -
    • Storage: 1.5 GB free disk space or higher
    • -
    • Sound: DirectX compatible sound card
    • -
    -

    If your PC meets these requirements, you can proceed to download GTA Vice City Underground 2 for free from Softonic.

    -

    How to Download GTA Vice City Underground 2 for Free from Softonic

    -

    To download GTA Vice City Underground 2 for free from Softonic, follow these steps:

    -
      -
    1. Go to https://gta-vice-city-underground-2.en.softonic.com/, the official page of the mod on Softonic.
    2. -
    3. Click on the green "Free Download" button on the top right corner of the page.
    4. -
    5. A new window will open, asking you to choose a download location. You can either click on "Download" to save the file on your default download folder, or click on "Browse" to choose a different location.
    6. -
    7. Wait for the download to finish. The file size is about 560 MB, so it may take some time depending on your internet speed.
    8. -
    9. Once the download is complete, locate the file on your PC and double-click on it to start the installation process.
    10. -
    11. Follow the instructions on the screen to install GTA Vice City Underground 2 on your PC. You may need to agree to some terms and conditions, choose a destination folder, and create a shortcut.
    12. -
    13. After the installation is done, you can launch GTA Vice City Underground 2 from your desktop or start menu.
    14. -
    -

    Congratulations! You have successfully downloaded GTA Vice City Underground 2 for free from Softonic. Now you can enjoy the mod and explore the new features it offers.

    -

    Features of GTA Vice City Underground 2

    -

    GTA Vice City Underground 2 is a mod that enhances the gameplay of GTA Vice City with new elements. Some of the features of the mod are:

    -
      -
    • New cars: The mod adds over 90 new cars to the game, including sports cars, muscle cars, trucks, buses, bikes, and more. You can find them in various locations around the city, or buy them from car dealerships.
    • -
    • New missions: The mod adds new missions to the game, such as racing, stunt driving, car theft, delivery, and more. You can earn money and reputation by completing these missions.
    • -
    • New features: The mod adds new features to the game, such as nitro boost, car tuning, car damage, car wash, radio stations, traffic lights, speedometer, and more. You can customize your cars with different parts and colors, repair them at garages, listen to music while driving, obey traffic rules, and more.
    • -
    -

    GTA Vice City Underground 2 is a mod that gives you

    -
    -
    \ No newline at end of file diff --git a/spaces/sunshineatnoon/TextureScraping/swapae/models/__init__.py b/spaces/sunshineatnoon/TextureScraping/swapae/models/__init__.py deleted file mode 100644 index dffdbcc468ea04f6bc3ef55c4b578465b2a6bc85..0000000000000000000000000000000000000000 --- a/spaces/sunshineatnoon/TextureScraping/swapae/models/__init__.py +++ /dev/null @@ -1,109 +0,0 @@ -"""This package contains modules related to objective functions, optimizations, and network architectures. - -To add a custom model class called 'dummy', you need to add a file called 'dummy_model.py' and define a subclass DummyModel inherited from BaseModel. -You need to implement the following five functions: - -- <__init__>: initialize the class; first call BaseModel.__init__(self, opt). - -- : unpack data from dataset and apply preprocessing. - -- : produce intermediate results. - -- : calculate loss, gradients, and update network weights. - -- : (optionally) add model-specific options and set default options. - -In the function <__init__>, you need to define four lists: - -- self.loss_names (str list): specify the training losses that you want to plot and save. - -- self.model_names (str list): define networks used in our training. - -- self.visual_names (str list): specify the images that you want to display and save. - -- self.optimizers (optimizer list): define and initialize optimizers. You can define one optimizer for each network. If two networks are updated at the same time, you can use itertools.chain to group them. See cycle_gan_model.py for an usage. - -Now you can use the model class by specifying flag '--model dummy'. -See our template model class 'template_model.py' for more details. -""" - -import os -import importlib -from swapae.models.base_model import BaseModel -import torch -from torch.nn.parallel import DataParallel - - -def find_model_using_name(model_name): - """Import the module "models/[model_name]_model.py". - - In the file, the class called DatasetNameModel() will - be instantiated. It has to be a subclass of BaseModel, - and it is case-insensitive. - """ - model_filename = "swapae.models." + model_name + "_model" - modellib = importlib.import_module(model_filename) - model = None - target_model_name = model_name.replace('_', '') + 'model' - for name, cls in modellib.__dict__.items(): - if name.lower() == target_model_name.lower() \ - and issubclass(cls, BaseModel): - model = cls - - if model is None: - print("In %s.py, there should be a subclass of BaseModel with class name that matches %s in lowercase." % (model_filename, target_model_name)) - exit(0) - - return model - - -def get_option_setter(model_name): - """Return the static method of the model class.""" - model_class = find_model_using_name(model_name) - return model_class.modify_commandline_options - - -def create_model(opt): - """Create a model given the option. - - This function warps the class CustomDatasetDataLoader. 
- This is the main interface between this package and 'train.py'/'test.py' - - Example: - >>> from models import create_model - >>> model = create_model(opt) - """ - model = find_model_using_name(opt.model) - instance = model(opt) - instance.initialize() - multigpu_instance = MultiGPUModelWrapper(opt, instance) - print("model [%s] was created" % type(instance).__name__) - return multigpu_instance - - -class MultiGPUModelWrapper(): - def __init__(self, opt, model: BaseModel): - self.opt = opt - if opt.num_gpus > 0: - model = model.to('cuda:0') - self.parallelized_model = torch.nn.parallel.DataParallel(model) - self.parallelized_model(command="per_gpu_initialize") - self.singlegpu_model = self.parallelized_model.module - self.singlegpu_model(command="per_gpu_initialize") - - def get_parameters_for_mode(self, mode): - return self.singlegpu_model.get_parameters_for_mode(mode) - - def save(self, total_steps_so_far): - self.singlegpu_model.save(total_steps_so_far) - - def __call__(self, *args, **kwargs): - """ Calls are forwarded to __call__ of BaseModel through DataParallel, and corresponding methods specified by |command| will be called. Please see BaseModel.forward() to see how it is done. """ - return self.parallelized_model(*args, **kwargs) - - -class StateVariableStorage(): - pass - - -_state_variables = StateVariableStorage() -_state_variables.fix_noise = False - - -def fixed_noise(): - return _state_variables.fix_noise - - -def fix_noise(set=True): - _state_variables.fix_noise = set diff --git a/spaces/supertori/files/stable-diffusion-webui/extensions-builtin/prompt-bracket-checker/javascript/prompt-bracket-checker.js b/spaces/supertori/files/stable-diffusion-webui/extensions-builtin/prompt-bracket-checker/javascript/prompt-bracket-checker.js deleted file mode 100644 index 4a85c8ebf25110e911a6a1021fae6a014aa11000..0000000000000000000000000000000000000000 --- a/spaces/supertori/files/stable-diffusion-webui/extensions-builtin/prompt-bracket-checker/javascript/prompt-bracket-checker.js +++ /dev/null @@ -1,110 +0,0 @@ -// Stable Diffusion WebUI - Bracket checker -// Version 1.0 -// By Hingashi no Florin/Bwin4L -// Counts open and closed brackets (round, square, curly) in the prompt and negative prompt text boxes in the txt2img and img2img tabs. -// If there's a mismatch, the keyword counter turns red and if you hover on it, a tooltip tells you what's wrong. - -function checkBrackets(evt, textArea, counterElt) { - errorStringParen = '(...) - Different number of opening and closing parentheses detected.\n'; - errorStringSquare = '[...] 
- Different number of opening and closing square brackets detected.\n'; - errorStringCurly = '{...} - Different number of opening and closing curly brackets detected.\n'; - - openBracketRegExp = /\(/g; - closeBracketRegExp = /\)/g; - - openSquareBracketRegExp = /\[/g; - closeSquareBracketRegExp = /\]/g; - - openCurlyBracketRegExp = /\{/g; - closeCurlyBracketRegExp = /\}/g; - - totalOpenBracketMatches = 0; - totalCloseBracketMatches = 0; - totalOpenSquareBracketMatches = 0; - totalCloseSquareBracketMatches = 0; - totalOpenCurlyBracketMatches = 0; - totalCloseCurlyBracketMatches = 0; - - openBracketMatches = textArea.value.match(openBracketRegExp); - if(openBracketMatches) { - totalOpenBracketMatches = openBracketMatches.length; - } - - closeBracketMatches = textArea.value.match(closeBracketRegExp); - if(closeBracketMatches) { - totalCloseBracketMatches = closeBracketMatches.length; - } - - openSquareBracketMatches = textArea.value.match(openSquareBracketRegExp); - if(openSquareBracketMatches) { - totalOpenSquareBracketMatches = openSquareBracketMatches.length; - } - - closeSquareBracketMatches = textArea.value.match(closeSquareBracketRegExp); - if(closeSquareBracketMatches) { - totalCloseSquareBracketMatches = closeSquareBracketMatches.length; - } - - openCurlyBracketMatches = textArea.value.match(openCurlyBracketRegExp); - if(openCurlyBracketMatches) { - totalOpenCurlyBracketMatches = openCurlyBracketMatches.length; - } - - closeCurlyBracketMatches = textArea.value.match(closeCurlyBracketRegExp); - if(closeCurlyBracketMatches) { - totalCloseCurlyBracketMatches = closeCurlyBracketMatches.length; - } - - if(totalOpenBracketMatches != totalCloseBracketMatches) { - if(!counterElt.title.includes(errorStringParen)) { - counterElt.title += errorStringParen; - } - } else { - counterElt.title = counterElt.title.replace(errorStringParen, ''); - } - - if(totalOpenSquareBracketMatches != totalCloseSquareBracketMatches) { - if(!counterElt.title.includes(errorStringSquare)) { - counterElt.title += errorStringSquare; - } - } else { - counterElt.title = counterElt.title.replace(errorStringSquare, ''); - } - - if(totalOpenCurlyBracketMatches != totalCloseCurlyBracketMatches) { - if(!counterElt.title.includes(errorStringCurly)) { - counterElt.title += errorStringCurly; - } - } else { - counterElt.title = counterElt.title.replace(errorStringCurly, ''); - } - - if(counterElt.title != '') { - counterElt.classList.add('error'); - } else { - counterElt.classList.remove('error'); - } -} - -function setupBracketChecking(id_prompt, id_counter){ - var textarea = gradioApp().querySelector("#" + id_prompt + " > label > textarea"); - var counter = gradioApp().getElementById(id_counter) - textarea.addEventListener("input", function(evt){ - checkBrackets(evt, textarea, counter) - }); -} - -var shadowRootLoaded = setInterval(function() { - var shadowRoot = document.querySelector('gradio-app').shadowRoot; - if(! 
shadowRoot) return false; - - var shadowTextArea = shadowRoot.querySelectorAll('#txt2img_prompt > label > textarea'); - if(shadowTextArea.length < 1) return false; - - clearInterval(shadowRootLoaded); - - setupBracketChecking('txt2img_prompt', 'txt2img_token_counter') - setupBracketChecking('txt2img_neg_prompt', 'txt2img_negative_token_counter') - setupBracketChecking('img2img_prompt', 'imgimg_token_counter') - setupBracketChecking('img2img_neg_prompt', 'img2img_negative_token_counter') -}, 1000); diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Cemil Bilsel Lozan Pdf Download BEST.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Cemil Bilsel Lozan Pdf Download BEST.md deleted file mode 100644 index 1a23e19fcc29d8ed76c1ecb5ab0844f846bc3074..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Cemil Bilsel Lozan Pdf Download BEST.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Cemil Bilsel Lozan Pdf Download


    Download File https://cinurl.com/2uEXTk



    - -Trachinotus carolinus pdf file. ... file pdf trachinotus carolinus ... gazeta pdf to word · Cemil bilsel lozan pdf download · Seismicity of egypt pdf ... 4d29de3e1b
    -
    -
    -

    diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Sonik Synth 2 Free Download Crack 23.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Sonik Synth 2 Free Download Crack 23.md deleted file mode 100644 index 0a6f78bfe5f189af5778f59ffeb64298f365f697..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Sonik Synth 2 Free Download Crack 23.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Sonik Synth 2 Free Download Crack 23


    Download File https://cinurl.com/2uEXVO



    - -The classic oscillator is a convertible pulse/saw/double saw generator with auxiliary oscillator and self-synchronization. The FM2 / FM3 generators consist of 1 carrier with 2/3 ... 10 Hz width. At a carrier frequency of 6 to 12 kHz, it has a sinusoidal shape with a frequency proportional to the carrier frequency. This sinusoidal waveform, or sawtooth waveform, can be converted to any other waveform using an AC signal converter called a phase shifter. In this case, the sine wave at the output of the generator will correspond to the same sine wave at the input of the device. With a carrier frequency of 2 to 2 Hz, it has a rectangular shape.
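    As a purely illustrative aside (a generic DSP sketch, not Sonik Synth's actual engine), the pulse, sawtooth and sine shapes named above can be generated like this:

```python
# Generic sketch of the basic oscillator shapes mentioned above
# (illustrative only; unrelated to Sonik Synth's internal implementation).
import numpy as np

def oscillator(shape, freq_hz, duration_s=0.01, sample_rate=44100):
    t = np.arange(int(duration_s * sample_rate)) / sample_rate
    phase = (freq_hz * t) % 1.0                  # normalized phase in [0, 1)
    if shape == "sine":
        return np.sin(2 * np.pi * phase)
    if shape == "saw":
        return 2.0 * phase - 1.0                 # rising ramp from -1 to +1
    if shape == "pulse":
        return np.where(phase < 0.5, 1.0, -1.0)  # 50% duty-cycle pulse
    raise ValueError(f"unknown shape: {shape}")

for shape in ("sine", "saw", "pulse"):
    print(shape, oscillator(shape, freq_hz=440.0)[:4])
```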
    -
    -
    -

    diff --git a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmseg/models/utils/drop.py b/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmseg/models/utils/drop.py deleted file mode 100644 index 4520b0ff407d2a95a864086bdbca0065f222aa63..0000000000000000000000000000000000000000 --- a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmseg/models/utils/drop.py +++ /dev/null @@ -1,31 +0,0 @@ -"""Modified from https://github.com/rwightman/pytorch-image- -models/blob/master/timm/models/layers/drop.py.""" - -import torch -from torch import nn - - -class DropPath(nn.Module): - """Drop paths (Stochastic Depth) per sample (when applied in main path of - residual blocks). - - Args: - drop_prob (float): Drop rate for paths of model. Dropout rate has - to be between 0 and 1. Default: 0. - """ - - def __init__(self, drop_prob=0.): - super(DropPath, self).__init__() - self.drop_prob = drop_prob - self.keep_prob = 1 - drop_prob - - def forward(self, x): - if self.drop_prob == 0. or not self.training: - return x - shape = (x.shape[0], ) + (1, ) * ( - x.ndim - 1) # work with diff dim tensors, not just 2D ConvNets - random_tensor = self.keep_prob + torch.rand( - shape, dtype=x.dtype, device=x.device) - random_tensor.floor_() # binarize - output = x.div(self.keep_prob) * random_tensor - return output diff --git a/spaces/t110-ai-admin/InspectLens/video_llama/common/__init__.py b/spaces/t110-ai-admin/InspectLens/video_llama/common/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/t110-ai-admin/InspectLens/video_llama/processors/__init__.py b/spaces/t110-ai-admin/InspectLens/video_llama/processors/__init__.py deleted file mode 100644 index 169237f3dd45dba53cf77f40c8a69e835d0bcecc..0000000000000000000000000000000000000000 --- a/spaces/t110-ai-admin/InspectLens/video_llama/processors/__init__.py +++ /dev/null @@ -1,38 +0,0 @@ -""" - Copyright (c) 2022, salesforce.com, inc. - All rights reserved. 
- SPDX-License-Identifier: BSD-3-Clause - For full license text, see the LICENSE_Lavis file in the repo root or https://opensource.org/licenses/BSD-3-Clause -""" - -from video_llama.processors.base_processor import BaseProcessor -from video_llama.processors.blip_processors import ( - Blip2ImageTrainProcessor, - Blip2ImageEvalProcessor, - BlipCaptionProcessor, -) -from video_llama.processors.video_processor import ( - AlproVideoTrainProcessor, - AlproVideoEvalProcessor -) -from video_llama.common.registry import registry - -__all__ = [ - "BaseProcessor", - "Blip2ImageTrainProcessor", - "Blip2ImageEvalProcessor", - "BlipCaptionProcessor", - "AlproVideoTrainProcessor", - "AlproVideoEvalProcessor", -] - - -def load_processor(name, cfg=None): - """ - Example - - >>> processor = load_processor("alpro_video_train", cfg=None) - """ - processor = registry.get_processor_class(name).from_config(cfg) - - return processor diff --git a/spaces/t13718236382/bingoGPT4/src/app/layout.tsx b/spaces/t13718236382/bingoGPT4/src/app/layout.tsx deleted file mode 100644 index 8b5122759987177b8dc4e4356d1d06cea25c15ea..0000000000000000000000000000000000000000 --- a/spaces/t13718236382/bingoGPT4/src/app/layout.tsx +++ /dev/null @@ -1,47 +0,0 @@ -import { Metadata } from 'next' -import { Toaster } from 'react-hot-toast' -import { TailwindIndicator } from '@/components/tailwind-indicator' -import { Providers } from '@/components/providers' -import { Header } from '@/components/header' - -import '@/app/globals.scss' - - -export const metadata: Metadata = { - title: { - default: 'Bing AI Chatbot', - template: `%s - Bing AI Chatbot` - }, - description: 'Bing AI Chatbot Web App.', - themeColor: [ - { media: '(prefers-color-scheme: light)', color: 'white' }, - { media: '(prefers-color-scheme: dark)', color: 'dark' } - ], - icons: { - icon: '/favicon.ico', - shortcut: '../assets/images/logo.svg', - apple: '../assets/images/logo.svg' - } -} - -interface RootLayoutProps { - children: React.ReactNode -} - -export default function RootLayout({ children }: RootLayoutProps) { - return ( - - - - -
    - {/* @ts-ignore */} -
    -
    {children}
    -
    - -
    - - - ) -} diff --git a/spaces/taka-yamakoshi/bert-priors-demo/app.py b/spaces/taka-yamakoshi/bert-priors-demo/app.py deleted file mode 100644 index 3e4224740ddc8407238761f7b002c4dd25c3f7a4..0000000000000000000000000000000000000000 --- a/spaces/taka-yamakoshi/bert-priors-demo/app.py +++ /dev/null @@ -1,283 +0,0 @@ -import pandas as pd -import streamlit as st -import numpy as np -import matplotlib.pyplot as plt -import seaborn as sns -import torch -import torch.nn.functional as F -from sklearn.decomposition import PCA -from sklearn.manifold import TSNE -from sentence_transformers import SentenceTransformer -from transformers import BertTokenizer,BertForMaskedLM -import io -import time - -@st.cache(show_spinner=True,allow_output_mutation=True) -def load_sentence_model(): - sentence_model = SentenceTransformer('paraphrase-distilroberta-base-v1') - return sentence_model - -@st.cache(show_spinner=True,allow_output_mutation=True) -def load_model(model_name): - if model_name.startswith('bert'): - tokenizer = BertTokenizer.from_pretrained(model_name) - model = BertForMaskedLM.from_pretrained(model_name) - model.eval() - return tokenizer,model - -@st.cache(show_spinner=False) -def load_data(sentence_num): - df = pd.read_csv('tsne_out.csv') - df = df.loc[lambda d: (d['sentence_num']==sentence_num)&(d['iter_num']<1000)] - return df.reset_index() - -#@st.cache(show_spinner=False) -def mask_prob(model,mask_id,sentences,position,temp=1): - masked_sentences = sentences.clone() - masked_sentences[:, position] = mask_id - with torch.no_grad(): - logits = model(masked_sentences)[0] - return F.log_softmax(logits[:, position] / temp, dim = -1) - -#@st.cache(show_spinner=False) -def sample_words(probs,pos,sentences): - candidates = [[tokenizer.decode([candidate]),torch.exp(probs)[0,candidate].item()] - for candidate in torch.argsort(probs[0],descending=True)[:10]] - df = pd.DataFrame(data=candidates,columns=['word','prob']) - chosen_words = torch.multinomial(torch.exp(probs), num_samples=1).squeeze(dim=-1) - new_sentences = sentences.clone() - new_sentences[:, pos] = chosen_words - return new_sentences, df - -def run_chains(tokenizer,model,mask_id,input_text,num_steps): - init_sent = tokenizer(input_text,return_tensors='pt')['input_ids'] - seq_len = init_sent.shape[1] - sentence = init_sent.clone() - data_list = [] - st.sidebar.write('Generating samples...') - st.sidebar.write('This takes ~1 min for 1000 steps with ~10 token sentences') - chain_progress = st.sidebar.progress(0) - for step_id in range(num_steps): - chain_progress.progress((step_id+1)/num_steps) - pos = torch.randint(seq_len-2,size=(1,)).item()+1 - #data_list.append([step_id,' '.join([tokenizer.decode([token]) for token in sentence[0]]),pos]) - data_list.append([step_id,tokenizer.decode([token for token in sentence[0]]),pos]) - probs = mask_prob(model,mask_id,sentence,pos) - sentence,_ = sample_words(probs,pos,sentence) - return pd.DataFrame(data=data_list,columns=['step','sentence','next_sample_loc']) - -#@st.cache(show_spinner=True,allow_output_mutation=True) -def show_tsne_panel(df, step_id): - x_tsne, y_tsne = df.x_tsne, df.y_tsne - xscale_unit = (max(x_tsne)-min(x_tsne))/10 - yscale_unit = (max(y_tsne)-min(y_tsne))/10 - xlims = [(min(x_tsne)//xscale_unit-1)*xscale_unit,(max(x_tsne)//xscale_unit+1)*xscale_unit] - ylims = [(min(y_tsne)//yscale_unit-1)*yscale_unit,(max(y_tsne)//yscale_unit+1)*yscale_unit] - color_list = sns.color_palette('flare',n_colors=int(len(df)*1.2)) - - fig = plt.figure(figsize=(5,5),dpi=200) - ax = 
fig.add_subplot(1,1,1) - ax.plot(x_tsne[:step_id+1],y_tsne[:step_id+1],linewidth=0.2,color='gray',zorder=1) - ax.scatter(x_tsne[:step_id+1],y_tsne[:step_id+1],s=5,color=color_list[:step_id+1],zorder=2) - ax.scatter(x_tsne[step_id:step_id+1],y_tsne[step_id:step_id+1],s=50,marker='*',color='blue',zorder=3) - ax.set_xlim(*xlims) - ax.set_ylim(*ylims) - ax.axis('off') - return fig - -def run_tsne(chain): - st.sidebar.write('Running t-SNE...') - st.sidebar.write('This takes ~1 min for 1000 steps with ~10 token sentences') - chain = chain.assign(cleaned_sentence=chain.sentence.str.replace(r'\[CLS\] ', '',regex=True).str.replace(r' \[SEP\]', '',regex=True)) - sentence_model = load_sentence_model() - sentence_embeddings = sentence_model.encode(chain.cleaned_sentence.to_list(), show_progress_bar=False) - - tsne = TSNE(n_components = 2, n_iter=2000) - big_pca = PCA(n_components = 50) - tsne_vals = tsne.fit_transform(big_pca.fit_transform(sentence_embeddings)) - tsne = pd.concat([chain, pd.DataFrame(tsne_vals, columns = ['x_tsne', 'y_tsne'],index=chain.index)], axis = 1) - return tsne - -def autoplay() : - for step_id in range(st.session_state.step_id, len(st.session_state.df), 1): - x = st.empty() - with x.container(): - st.markdown(show_changed_site(), unsafe_allow_html = True) - fig = show_tsne_panel(st.session_state.df, step_id) - st.session_state.prev_step_id = st.session_state.step_id - st.session_state.step_id = step_id - #plt.title(f'Step {step_id}')#: {show_changed_site()}') - cols = st.columns([1,2,1]) - with cols[1]: - st.pyplot(fig) - time.sleep(.25) - x.empty() - -def initialize_buttons() : - buttons = st.sidebar.empty() - button_ids = [] - with buttons.container() : - row1_labels = ['+1','+10','+100','+500'] - row1 = st.columns([4,5,6,6]) - for col_id,col in enumerate(row1): - button_ids.append(col.button(row1_labels[col_id],key=row1_labels[col_id])) - - row2_labels = ['-1','-10','-100','-500'] - row2 = st.columns([4,5,6,6]) - for col_id,col in enumerate(row2): - button_ids.append(col.button(row2_labels[col_id],key=row2_labels[col_id])) - - show_candidates_checked = st.checkbox('Show candidates') - - # Increment if any of them have been pressed - increments = np.array([1,10,100,500,-1,-10,-100,-500]) - if any(button_ids) : - increment_value = increments[np.array(button_ids)][0] - st.session_state.prev_step_id = st.session_state.step_id - new_step_id = st.session_state.step_id + increment_value - st.session_state.step_id = min(len(st.session_state.df) - 1, max(0, new_step_id)) - if show_candidates_checked: - st.write('Click any word to see each candidate with its probability') - show_candidates() - -def show_candidates(): - if 'curr_table' in st.session_state: - st.session_state.curr_table.empty() - step_id = st.session_state.step_id - sentence = df.cleaned_sentence.loc[step_id] - input_sent = tokenizer(sentence,return_tensors='pt')['input_ids'] - decoded_sent = [tokenizer.decode([token]) for token in input_sent[0]] - char_nums = [len(word)+2 for word in decoded_sent] - cols = st.columns(char_nums) - with cols[0]: - st.write(decoded_sent[0]) - with cols[-1]: - st.write(decoded_sent[-1]) - for word_id,(col,word) in enumerate(zip(cols[1:-1],decoded_sent[1:-1])): - with col: - if st.button(word,key=f'word_{word_id}'): - probs = mask_prob(model,mask_id,input_sent,word_id+1) - _, candidates_df = sample_words(probs, word_id+1, input_sent) - st.session_state.curr_table = st.table(candidates_df) - - -def show_changed_site(): - df = st.session_state.df - step_id = st.session_state.step_id - 
prev_step_id = st.session_state.prev_step_id - curr_sent = df.cleaned_sentence.loc[step_id].split(' ') - prev_sent = df.cleaned_sentence.loc[prev_step_id].split(' ') - locs = [df.next_sample_loc.to_list()[step_id-1]-1] if 'next_sample_loc' in df else ( - [i for i in range(len(curr_sent)) if curr_sent[i] not in prev_sent] - ) - disp_style = '"font-family:san serif; color:Black; font-size: 20px"' - prefix = f'

    Step {st.session_state.step_id}:  ' - disp = ' '.join([f'{word}' if i in locs else f'{word}' - for (i, word) in enumerate(curr_sent)]) - suffix = '

    ' - return prefix + disp + suffix - -def clear_df(): - if 'df' in st.session_state: - del st.session_state['df'] - - -if __name__=='__main__': - - # Config - max_width = 1500 - padding_top = 0 - padding_right = 2 - padding_bottom = 0 - padding_left = 2 - - define_margins = f""" - - """ - hide_table_row_index = """ - - """ - st.markdown(define_margins, unsafe_allow_html=True) - st.markdown(hide_table_row_index, unsafe_allow_html=True) - input_type = st.sidebar.radio( - label='1. Choose the input type', - on_change=clear_df, - options=('Use one of the example sentences','Use your own initial sentence') - ) - - # Title - st.header("Demo: Probing BERT's priors with serial reproduction chains") - - # Load BERT - tokenizer,model = load_model('bert-base-uncased') - mask_id = tokenizer.encode("[MASK]")[1:-1][0] - - # First step: load the dataframe containing sentences - if input_type=='Use one of the example sentences': - sentence = st.sidebar.selectbox("Select the inital sentence", - ('--- Please select one from below ---', - 'About 170 campers attend the camps each week.', - "Ali marpet's mother is joy rose.", - 'She grew up with three brothers and ten sisters.')) - if sentence!='--- Please select one from below ---': - if sentence=='About 170 campers attend the camps each week.': - sentence_num = 6 - elif sentence=='She grew up with three brothers and ten sisters.': - sentence_num = 8 - elif sentence=="Ali marpet's mother is joy rose." : - sentence_num = 2 - st.session_state.df = load_data(sentence_num) - st.session_state.finished_sampling = True - else: - sentence = st.sidebar.text_input('Type your own sentence here.',on_change=clear_df) - num_steps = st.sidebar.number_input(label='How many steps do you want to run?',value=500) - if st.sidebar.button('Run chains'): - chain = run_chains(tokenizer, model, mask_id, sentence, num_steps=num_steps) - st.session_state.df = run_tsne(chain) - st.session_state.finished_sampling = True - - st.empty().markdown("\ - Let's explore sentences from BERT's prior! \ - Use the menu to the left to select a pre-generated chain, \ - or start a new chain using your own initial sentence.\ - " if not 'df' in st.session_state else "\ - Use the slider to select a step, or watch the autoplay.\ - Click 'Show candidates' to see the top proposals when each word is masked out.\ - ") - - if 'df' in st.session_state: - df = st.session_state.df - if 'step_id' not in st.session_state: - st.session_state.prev_step_id = 0 - st.session_state.step_id = 0 - - - explore_type = st.sidebar.radio( - '2. 
Choose how to explore the chain', - options=['Click through steps','Autoplay'] - ) - - if explore_type=='Autoplay': - st.empty() - st.sidebar.empty() - autoplay() - - elif explore_type=='Click through steps': - initialize_buttons() - with st.container(): - st.markdown(show_changed_site(), unsafe_allow_html = True) - fig = show_tsne_panel(df, st.session_state.step_id) - cols = st.columns([1,2,1]) - with cols[1]: - st.pyplot(fig) diff --git a/spaces/teralomaniac/chatbing/main.py b/spaces/teralomaniac/chatbing/main.py deleted file mode 100644 index e406566a73dd47969c794f4d8f2f14c2b8730369..0000000000000000000000000000000000000000 --- a/spaces/teralomaniac/chatbing/main.py +++ /dev/null @@ -1,99 +0,0 @@ -import argparse -import asyncio -import json -import os -import traceback -import urllib.request - -from EdgeGPT import Chatbot -from aiohttp import web - -public_dir = '/public' - - -async def process_message(user_message, context, _U, locale): - chatbot = None - try: - if _U: - cookies = loaded_cookies + [{"name": "_U", "value": _U}] - else: - cookies = loaded_cookies - chatbot = await Chatbot.create(cookies=cookies, proxy=args.proxy) - async for _, response in chatbot.ask_stream(prompt=user_message, conversation_style="creative", raw=True, - webpage_context=context, search_result=True, locale=locale): - yield response - except: - yield {"type": "error", "error": traceback.format_exc()} - finally: - if chatbot: - await chatbot.close() - - -async def http_handler(request): - file_path = request.path - if file_path == "/": - file_path = "/index.html" - full_path = os.path.realpath('.' + public_dir + file_path) - if not full_path.startswith(os.path.realpath('.' + public_dir)): - raise web.HTTPForbidden() - response = web.FileResponse(full_path) - response.headers['Cache-Control'] = 'no-store' - return response - - -async def websocket_handler(request): - ws = web.WebSocketResponse() - await ws.prepare(request) - - async for msg in ws: - if msg.type == web.WSMsgType.TEXT: - request = json.loads(msg.data) - user_message = request['message'] - context = request['context'] - locale = request['locale'] - _U = request.get('_U') - async for response in process_message(user_message, context, _U, locale=locale): - await ws.send_json(response) - - return ws - - -async def main(host, port): - app = web.Application() - app.router.add_get('/ws/', websocket_handler) - app.router.add_get('/{tail:.*}', http_handler) - - runner = web.AppRunner(app) - await runner.setup() - site = web.TCPSite(runner, host, port) - await site.start() - print(f"Go to http://{host}:{port} to start chatting!") - - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument("--host", "-H", help="host:port for the server", default="localhost:65432") - parser.add_argument("--proxy", "-p", help='proxy address like "http://localhost:7890"', - default=urllib.request.getproxies().get('https')) - args = parser.parse_args() - print(f"Proxy used: {args.proxy}") - - host, port = args.host.split(":") - port = int(port) - - if os.path.isfile("cookies.json"): - with open("cookies.json", 'r') as f: - loaded_cookies = json.load(f) - print("Loaded cookies.json") - else: - loaded_cookies = [] - print("cookies.json not found") - - loop = asyncio.get_event_loop() - try: - loop.run_until_complete(main(host, port)) - loop.run_forever() - except KeyboardInterrupt: - pass - finally: - loop.close() diff --git a/spaces/terfces0erbo/CollegeProjectV2/Arcsoft Print Creations ((FREE)) Crack Se.md 
b/spaces/terfces0erbo/CollegeProjectV2/Arcsoft Print Creations ((FREE)) Crack Se.md deleted file mode 100644 index 938a5df02a7fa71defcfaddca69f866593849d04..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Arcsoft Print Creations ((FREE)) Crack Se.md +++ /dev/null @@ -1,7 +0,0 @@ -
    -

    ArcSoft Print Creations also includes Microsoft Sideboard, Microsoft Windows Movie Maker, built-in digital camera and scanner support, and many other extras. You will be able to send files from your digital camera to your printer, turn a photo taken with your digital camera into a finished digital photo, create a digital photo from your film camera or scanner, and much more.

    -

    The best thing about ArcSoft Print Creations is that it comes with lots of features for creating fun photographs. The program lets you make a digital photo from a picture taken with your digital camera, your film camera, or your scanner, and it can send files from your digital camera directly to your printer.

    -

    Arcsoft Print Creations Crack Se


    Download ►►►►► https://bytlly.com/2uGjHV



    -

    ArcSoft Print Creations is one of the best digital photo editing programs available. It lets you easily make a high-quality digital photo from a picture taken with your digital camera, your film camera or scanner, or from an image already on your computer, and you can apply ArcSoft's fun effects to the result. The program can also send files from your digital camera straight to your printer.

    899543212b
    -
    -
    \ No newline at end of file diff --git a/spaces/terfces0erbo/CollegeProjectV2/Code Rousseau Maroc Au Volant 9 Arabe.md b/spaces/terfces0erbo/CollegeProjectV2/Code Rousseau Maroc Au Volant 9 Arabe.md deleted file mode 100644 index 528d55854131bee182c86d9d6e5141e27c86a419..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Code Rousseau Maroc Au Volant 9 Arabe.md +++ /dev/null @@ -1,22 +0,0 @@ -

    code rousseau maroc au volant 9 arabe


    DOWNLOAD ○○○ https://bytlly.com/2uGm5b



    - -maroc au volant miseur des routes titres arabes 2018 xlr 2 xlr2 2. - -Sophie Morgan, Ginnifer Galangal and Rhapsodie - -The red red crown of hair 2013 download song. - -They share their conviction, friends of his parents and his friends, at the weekend family celebration; their gathering brings us together, and their generosity earns recognition for his convictions, within the framework of the word of which they are the first and most important pupils. - -Impatient, curious and excited, he wanted to understand; he knew he could only prove his doubts by letting him speak. - -We were able to hold a short aside with future diplomats, politicians and a few academics, but all of these courses are only in French, for institutions we do not approach; detectionist training with the central office of the ministries of childhood allows us to give a guarantee of the quality of the teaching. - -A great step toward the perfecting of societies: philosophy has allowed us to think about what the end of the self is, where the self is what we are, what we desire, what we do, all things that are indefinitely objectivities. - -Our future citizen separates, separates us from ourselves, but we love to recognize ourselves, and it is we who do it and who do it ourselves, and when we come to it in French, as we are told, there is nothing greater than what we are. - -La n 4fefd39f24
    -
    -
    -

    diff --git a/spaces/terfces0erbo/CollegeProjectV2/Frets On Fire - Rock Band 2 Songs Career Mode.md b/spaces/terfces0erbo/CollegeProjectV2/Frets On Fire - Rock Band 2 Songs Career Mode.md deleted file mode 100644 index bf525830e9583cbc8a7a4a96c043290957d65a12..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Frets On Fire - Rock Band 2 Songs Career Mode.md +++ /dev/null @@ -1,60 +0,0 @@ -
    -

    Frets On Fire - Rock Band 2 Songs Career Mode: A Review

    -

    If you are a fan of rock music and rhythm games, you might have heard of Frets On Fire, a free and open-source game that lets you play guitar using your keyboard or a guitar controller. Frets On Fire has a large community of modders who create custom songs and themes for the game, and one of the most popular mods is Rock Band 2 Songs Career Mode.

    -

    Frets On Fire - Rock Band 2 Songs Career Mode


    Download Ziphttps://bytlly.com/2uGkws



    -

    Rock Band 2 Songs Career Mode is a mod that adds all the songs from the Rock Band 2 game to Frets On Fire, along with a career mode that follows the same structure as the original game. You can choose from four difficulty levels, and progress through different venues and challenges as you rock out to some of the best songs in rock history. You can also unlock new guitars, basses, drums, and outfits as you earn money and fans.

    -

    What makes Rock Band 2 Songs Career Mode so fun?

    -

    There are many reasons why Rock Band 2 Songs Career Mode is one of the best mods for Frets On Fire. Here are some of them:

    -
      -
    • The songs are awesome. Rock Band 2 has a great selection of songs from various genres and eras of rock music, from classic rock to metal to punk to alternative. You can play songs by artists like AC/DC, Foo Fighters, Nirvana, Bon Jovi, Guns N' Roses, Metallica, The Who, and many more. The songs are also well-charted and synced to the original tracks, making them fun and challenging to play.
    • -
    • The career mode is engaging. Rock Band 2 Songs Career Mode follows the same storyline as Rock Band 2, where you start as a garage band and work your way up to fame and fortune. You can choose from different setlists and challenges, such as playing a mystery setlist, playing a marathon of songs, or playing a battle of the bands. You can also customize your band's name, logo, members, and outfits.
    • -
    • The graphics are cool. Rock Band 2 Songs Career Mode uses the same theme as Rock Band 2, which has a sleek and colorful design. The mod also adds some nice effects and animations to the game, such as crowd reactions, stage lights, pyrotechnics, and more. The mod also supports widescreen resolutions and high-quality textures.
    • -
    -

    How to install Rock Band 2 Songs Career Mode?

    -

    If you want to try out Rock Band 2 Songs Career Mode for yourself, you will need to have Frets On Fire installed on your computer first. You can download Frets On Fire from its official website: https://fretsonfire.org/.

    -

    -

    Once you have Frets On Fire installed, you will need to download the Rock Band 2 Songs Career Mode mod from this link: https://www.mediafire.com/file/9k9c9c9c9c9c9c9/Rock_Band_2_Songs_Career_Mode.zip/file.

    -

    After downloading the mod, you will need to extract it using a program like WinRAR or 7-Zip. You will get a folder called "Rock Band 2 Songs Career Mode". You will need to copy this folder into your Frets On Fire data folder, which is usually located at C:\Program Files\Frets on Fire\data.

    -

    Once you have copied the folder, you can launch Frets On Fire and select "Rock Band 2 Songs Career Mode" from the theme menu. You can then start playing the career mode or any of the songs from the mod.

    -

    Conclusion

    -

    Rock Band 2 Songs Career Mode is a fantastic mod for Frets On Fire that adds hours of fun and replay value to the game. It is a must-have for any rock music lover or rhythm game enthusiast. If you have not tried it yet, what are you waiting for? Download it now and rock on!

    -

    How to get Rock Band 2 Songs Career Mode for Frets On Fire?

    -

    If you already have Frets On Fire installed on your computer, you can easily download and install Rock Band 2 Songs Career Mode from this link: https://www.mediafire.com/file/9k9c9c9c9c9c9c9/Rock_Band_2_Songs_Career_Mode.zip/file.

    -

    This mod contains all the songs from Rock Band 2, as well as a career editor that lets you create your own custom setlists and challenges. You can also use the career editor to modify the existing career mode, such as changing the difficulty, the order of the songs, or the rewards.

    -

    To use the career editor, you will need to run the file called "CareerEditor.exe" that is located in the folder "Rock Band 2 Songs Career Mode". You will see a window with several tabs and options. You can select any of the existing careers from the drop-down menu, or create a new one by clicking on the "New" button. You can then edit the name, description, logo, and theme of your career.

    -

    To add songs to your career, you will need to go to the "Songs" tab and click on the "Add" button. You will see a list of all the songs from Rock Band 2 that are available in the mod. You can select any song you want and drag it to the right panel, where you can arrange it in any order you want. You can also create different tiers and venues for your career by clicking on the "Add Tier" and "Add Venue" buttons.

    -

    To add challenges to your career, you will need to go to the "Challenges" tab and click on the "Add" button. You will see a window where you can customize your challenge. You can choose the name, description, icon, type, difficulty, and songs for your challenge. You can also set some special conditions for your challenge, such as requiring a certain score, percentage, or streak.

    -

    Once you are done editing your career, you can save it by clicking on the "Save" button. You can then launch Frets On Fire and select "Rock Band 2 Songs Career Mode" from the theme menu. You will see your custom career in the list of careers available in the game.

    -

    What are some tips and tricks for playing Rock Band 2 Songs Career Mode?

    -

    Playing Rock Band 2 Songs Career Mode can be a lot of fun, but also quite challenging. Here are some tips and tricks that might help you improve your skills and enjoy the game more:

    -
      -
    • Practice makes perfect. If you are having trouble with a particular song or section, you can use the practice mode to slow down the speed, loop a part, or change the difficulty. You can access the practice mode by pressing F4 during a song.
    • -
    • Use star power wisely. Star power is a special feature that doubles your score multiplier when activated. You can earn star power by hitting notes marked with glowing stars or by using your whammy bar on long notes. You can activate star power by tilting your guitar controller or pressing Enter on your keyboard. Try to use star power when you have a high multiplier and when there are many notes on screen.
    • -
    • Don't miss notes. Missing notes will break your streak and lower your multiplier. It will also reduce your rock meter, which measures how well you are performing. If your rock meter drops too low, you will fail the song and lose fans. To avoid missing notes, pay attention to the scrolling fretboard and hit the notes in sync with the music.
    • -
    • Have fun. The most important thing is to enjoy playing Rock Band 2 Songs Career Mode. Don't worry too much about your score or rank. Just play along with some of your favorite songs and rock out!
    • -
    -

    Conclusion

    -

    Rock Band 2 Songs Career Mode is an amazing mod for Frets On Fire that adds a whole new dimension to the game. It is a great way to experience some of the best rock songs ever made and challenge yourself with different modes and difficulties. If you love rock music and rhythm games, you should definitely give it a try!

    -

    What are some of the best songs in Rock Band 2 Songs Career Mode?

    -

    Rock Band 2 Songs Career Mode has a huge and diverse selection of songs from various genres and eras of rock music. There are over 80 songs in the main setlist, plus more than 20 bonus songs and DLC songs. Some of the songs are easy and catchy, while others are hard and epic. Here are some of the best songs in Rock Band 2 Songs Career Mode, according to different criteria:

    -
      -
    • The most fun song: Beastie Boys - So What'cha Want. This song is a blast to play on any instrument, especially on vocals. You can rap along with the Beastie Boys and enjoy their witty and humorous lyrics. The song is also fast and energetic, with a funky groove and a catchy chorus.
    • -
    • The most challenging song: Dream Theater - Panic Attack. This song is a nightmare for any player, especially on drums. The song is over seven minutes long, and features complex time signatures, rapid changes, and insane solos. The song is also very heavy and intense, with a dark and ominous atmosphere.
    • -
    • The most iconic song: AC/DC - Let There Be Rock. This song is a classic rock anthem that celebrates the power and glory of rock music. The song is simple but effective, with a catchy riff, a powerful chorus, and a long and epic guitar solo. The song is also very loud and energetic, with a lot of attitude and charisma.
    • -
    • The most emotional song: Nirvana - Drain You. This song is a grunge masterpiece that expresses the pain and frustration of love and life. The song is raw and honest, with a distorted guitar, a melodic bass, and a passionate vocal. The song is also very dynamic and expressive, with quiet verses and loud choruses.
    • -
    • The most surprising song: The Konks - 29 Fingers. This song is a hidden gem that many players might not know or expect. The song is a garage rock tune that sounds like it was recorded in a basement. The song is very simple but catchy, with a fuzzy guitar, a thumping drum, and a snotty vocal.
    • -
    -

    Why should you play Rock Band 2 Songs Career Mode?

    -

    Rock Band 2 Songs Career Mode is one of the best mods for Frets On Fire that offers a lot of benefits for any player. Here are some of the reasons why you should play Rock Band 2 Songs Career Mode:

    -
      -
    • You can enjoy some of the best rock songs ever made. Rock Band 2 Songs Career Mode has a fantastic selection of songs from various genres and eras of rock music, from classic rock to metal to punk to alternative. You can play songs by artists like AC/DC, Foo Fighters, Nirvana, Bon Jovi, Guns N' Roses, Metallica, The Who, and many more.
    • -
    • You can challenge yourself with different modes and difficulties. Rock Band 2 Songs Career Mode has a career mode that follows the same structure as the original game. You can choose from four difficulty levels, and progress through different venues and challenges as you rock out to some of the best songs in rock history. You can also unlock new guitars, basses, drums, and outfits as you earn money and fans.
    • -
    • You can customize your own career mode. Rock Band 2 Songs Career Mode has a career editor that lets you create your own custom setlists and challenges. You can also use the career editor to modify the existing career mode, such as changing the difficulty, the order of the songs, or the rewards.
    • -
    • You can have fun with your friends or online. Rock Band 2 Songs Career Mode supports multiplayer mode for up to four players on guitar, bass, drums or vocals. You can play with your friends locally or online using Frets On Fire's network mode.
    • -
    -

    Conclusion

    -

    Rock Band 2 Songs Career Mode is an amazing mod for Frets On Fire that adds hours of fun and replay value to the game. It is a great way to experience some of the best rock songs ever made and challenge yourself with different modes and difficulties. If you love rock music and rhythm games, you should definitely give it a try!

    -


    3cee63e6c2
    -
    -
    \ No newline at end of file diff --git a/spaces/terfces0erbo/CollegeProjectV2/Hare In The Hat Download Crack With Full Game.md b/spaces/terfces0erbo/CollegeProjectV2/Hare In The Hat Download Crack With Full Game.md deleted file mode 100644 index 97c6a37f2c2b56b3a0edfadfaf0dc909f44d03e6..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Hare In The Hat Download Crack With Full Game.md +++ /dev/null @@ -1,16 +0,0 @@ -
    -

    How to Download Hare In The Hat Full Game for Free

    -

    Hare In The Hat is a fun and challenging adventure game that combines hidden object, puzzle, and room escape mechanics. The game is set in a magical world where an evil magician has trapped a poor Hare in his hat. Your quest is to save the Hare by solving a series of puzzles, riddles, and clues hidden throughout the magician's chambers.

    -

    The game has stunning graphics, smooth animations, and a captivating storyline that will keep you engaged for hours. The game also has a fancy point and click cartoon style that is suitable for players of all ages. The game has a variety of puzzles and mini-games of varying difficulty, making it a perfect mix of entertainment and challenge.

    -

    Hare In The Hat Download crack with full game


    DOWNLOADhttps://bytlly.com/2uGkSu



    -

    If you want to download Hare In The Hat full game for free, you have come to the right place. In this article, we will show you how to download Hare In The Hat full game for free using a reliable and safe website. Follow these simple steps to get your free copy of Hare In The Hat today:

    -
      -
    1. Go to https://extrogames.com/game/hare-in-the-hat, which is one of the best websites to download free games[^1^]. This website has a large collection of games from various genres and platforms. You can also find reviews, screenshots, system requirements, and instructions for each game.
    2. -
    3. On the website, you will see the game details, description, gameplay, download links, system requirements, and instructions for Hare In The Hat. Scroll down to the download links section and choose one of the available links to download Hare In The Hat full game for free. You can choose from various file hosting services such as Mega, Uptobox, 1fichier, Pixeldrain, Mediafire, Drop, Gofile, Bowfile, Racaty, Mixdrop, Doodrive, 1cloudfile, Uploadhub, Usersdrive, Krakenfiles, Filefactory, Hexupload, Bayfiles, Anonfiles, Send.cm, Upload42, Uploadbank, Fastclick, Megaup, Letsupload, Clicknupload, Dailyuploads, Uploadbaz, Userscloud or Ddownload.
    4. -
    5. Click on the link of your choice and you will be redirected to the file hosting service website. Follow the instructions on the website to download Hare In The Hat full game for free. You may need to create an account or verify your identity before downloading the file. You may also encounter some ads or pop-ups on the website. Be careful not to click on any suspicious links or buttons that may harm your device or compromise your privacy.
    6. -
    7. Once you have downloaded Hare In The Hat full game for free as a zip file on your device, you need to extract it using a software such as WinRAR or 7-Zip. To extract the zip file, right-click on it and choose "Extract Here" or "Extract to Hare.In.The.Hat.Build.10075403". You will need a password to extract the zip file. The password is "extrogames.com" (without quotes).
    8. -
    9. After extracting the zip file, you will see a folder named "Hare.In.The.Hat.Build.10075403". Open the folder and double-click on the file named "HareInTheHat.exe" to launch the game. You do not need to install the game or apply any crack or patch. The game is already cracked by P2P[^1^]. Enjoy playing Hare In The Hat full game for free!
    10. -
    -

    We hope this article helped you download Hare In The Hat full game for free. If you have any questions or problems regarding the download process or the game itself, please leave a comment below or contact us through our website. We will try our best to help you out. Thank you for choosing ExtroGames as your source of free games!

    d5da3c52bf
    -
    -
    \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Download Facebook Friends Mapper LINK.md b/spaces/tialenAdioni/chat-gpt-api/logs/Download Facebook Friends Mapper LINK.md deleted file mode 100644 index 954a2d43cd04e82b072caf58f7614741344fed1f..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Download Facebook Friends Mapper LINK.md +++ /dev/null @@ -1,41 +0,0 @@ - -Here is a possible title and article with HTML formatting for the keyword "Download Facebook Friends Mapper": - -

    How to Download and Use Facebook Friends Mapper to Reveal Hidden Friends Lists

    -

    Facebook Friends Mapper is a Chrome extension that allows you to see the hidden friends lists of any Facebook user, as long as you have at least one mutual friend with them. This tool can help you discover new connections, find out more about someone's interests, or even expose potential privacy breaches. In this article, we will show you how to download and use Facebook Friends Mapper on your Chrome browser and Android phone.

    -

    How to Download Facebook Friends Mapper on Chrome

    -

    To download Facebook Friends Mapper on Chrome, you need to follow these steps:

    -

    Download Facebook Friends Mapper


    DOWNLOAD ★★★ https://urlcod.com/2uKamA



    -
      -
    1. Make sure you have an internet connection and open the Google Chrome browser.
    2. -
    3. Visit the Google Chrome Webstore and search for "Facebook Friend Mapper" using the search icon.
    4. -
    5. Once you see the extension, click "Add to Chrome" and confirm the installation.
    6. -
    7. Add the extension to your Chrome toolbar by clicking on the puzzle icon and pinning it.
    8. -
    -

    How to Use Facebook Friends Mapper on Chrome

    -

    To use Facebook Friends Mapper on Chrome, you need to follow these steps:

    -
      -
    1. Sign in to your Facebook account and go to the profile of the person whose hidden friends list you want to see.
    2. -
    3. Click on the Facebook Friend Mapper icon on your toolbar and select "Reveal Friends".
    4. -
    5. Wait for a few seconds while the extension scans the profile and collects data from mutual friends.
    6. -
    7. You will see a list of all the hidden friends of that person, along with their profile pictures and names. You can click on any of them to visit their profiles.
    8. -
    -

    How to Download Facebook Friends Mapper APK on Android

    -

    If you want to use Facebook Friends Mapper on your Android phone, you need to download and install the APK file from a third-party source. Here is how you can do that:

    -
      -
    1. On your phone, go to this link and download the Facebook Friend Mapper APK file.
    2. -
    3. Once the download is complete, go to your phone settings and enable "Unknown Sources" under security options.
    4. -
    5. Locate the APK file in your downloads folder and tap on it to install it.
    6. -
    7. Open the app and sign in to your Facebook account.
    8. -
    -

    How to Use Facebook Friends Mapper APK on Android

    -

    To use Facebook Friends Mapper APK on Android, you need to follow these steps:

    -
      -
    1. Open the app and tap on the magnifying glass icon at the top right corner.
    2. -
    3. Type in the name of the person whose hidden friends list you want to see and select them from the suggestions.
    4. -
    5. Tap on the "Reveal Friends" button at the bottom of the screen.
    6. -
    7. Wait for a few seconds while the app scans the profile and collects data from mutual friends.
    8. -
    9. You will see a list of all the hidden friends of that person, along with their profile pictures and names. You can tap on any of them to visit their profiles.
    10. -

    7196e7f11a
    -
    -
    \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Adobe Me Patcher 2.1 Final.md b/spaces/tioseFevbu/cartoon-converter/scripts/Adobe Me Patcher 2.1 Final.md deleted file mode 100644 index f114ae579e65d8f9766ad37e722d8374a216bc42..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Adobe Me Patcher 2.1 Final.md +++ /dev/null @@ -1,26 +0,0 @@ - -

    How to Use Adobe Me Patcher 2.1 Final to Activate Adobe CC Products

    -

    If you are looking for a way to use Adobe CC products for free, you might have heard of Adobe Me Patcher 2.1 Final. This is a tool that claims to patch any Adobe CC product and bypass the license verification. But is it safe and legal to use? In this article, we will explain what Adobe Me Patcher 2.1 Final is, how it works, and what are the risks and alternatives of using it.

    -

    Adobe Me Patcher 2.1 Final


    Download ►►► https://urlcod.com/2uHw3W



    -

    What is Adobe Me Patcher 2.1 Final?

    -

    Adobe Me Patcher 2.1 Final is a program that was uploaded on file-sharing websites such as 2shared[^1^] and 4shared[^3^] in 2012. It is supposed to modify the code of any Adobe CC product and disable the license verification, allowing users to use the software for free without paying for a subscription.

    -

    How does Adobe Me Patcher 2.1 Final work?

    -

    To use Adobe Me Patcher 2.1 Final, you need to download the compressed file from one of the file-sharing websites and extract it on your computer. Then, you need to run the program and select the Adobe CC product that you want to patch from a list of options. The program will then scan your system and apply the patch to the selected product.

    -

    What are the risks of using Adobe Me Patcher 2.1 Final?

    -

    Using Adobe Me Patcher 2.1 Final might seem like an easy way to save money on Adobe CC products, but it comes with many risks and disadvantages. Here are some of them:

    -

    -
      -
    • Damages to Copyright Holder. By using pirated software, you are violating the intellectual property rights of Adobe and causing them to lose revenue that they would have received from your purchase. This is not only unethical but also illegal and can result in fines or lawsuits.
    • -
    • Federal Penalties. Piracy is also considered a federal crime in many countries, similar to illegal downloading of music or movies. If you are caught using or distributing pirated software, you could face criminal charges and penalties up to $150,000 per infringement.
    • -
    • Lack of Software Updates. One of the benefits of subscribing to Adobe CC products is that you get access to regular updates that fix bugs, improve performance, add new features, and close security vulnerabilities. However, if you use pirated software, you will not be able to receive these updates and your software will become outdated and vulnerable.
    • -
    • Pirated Software = Security Issues. Another risk of using pirated software is that it might contain malware or viruses that can harm your computer or steal your personal information. Hackers often distribute pirated software with hidden malicious code that can load adware, spyware, ransomware, or other unwanted programs on your system.
    • -
    -

    What are the alternatives of using Adobe Me Patcher 2.1 Final?

    -

    The best alternative to using Adobe Me Patcher 2.1 Final is to use Adobe CC products legally by paying for a subscription plan that suits your needs and budget. This way, you will not only avoid the risks and disadvantages of piracy but also enjoy the benefits and features of genuine software.

    -

    If you cannot afford a subscription plan, there are other options that you can try:

    -
      -
    • Use Free Trials. Adobe offers free trials for most of its products that last for 7 days or longer. You can use these trials to test the software before buying it or for short-term projects.
    • -
    • Use Free Alternatives. There are many free or open-source programs that can perform similar functions as Adobe CC products. For example, you can use GIMP instead of Photoshop, Inkscape instead of Illustrator, or Blender instead of After Effects.
    • -
    • Use Educational Discounts. If you are a student or a teacher, you can get access to all Adobe CC products for a discounted price of $19.99 per month instead of $52.99 per month. e93f5a0c3f
      -
      -
      \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Dru Down Can You Feel Me Album Zip.md b/spaces/tioseFevbu/cartoon-converter/scripts/Dru Down Can You Feel Me Album Zip.md deleted file mode 100644 index 97bd049b2fa3049ed5e6f33bd7e6f6fc3cef43fe..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Dru Down Can You Feel Me Album Zip.md +++ /dev/null @@ -1,16 +0,0 @@ -
      -

      Dru Down Can You Feel Me Album Zip: A West Coast Rap Classic

      -

      If you are looking for a rap album that showcases the skills and style of one of the best MCs from the West Coast, you should check out Dru Down Can You Feel Me Album Zip. This album was released in 1996 by C Note Records and features 17 tracks of hard-hitting G-funk beats and witty rhymes by Dru Down.

      -

      Dru Down is a rapper from Oakland, California, who started his career in the late 1980s as part of The Regime, a group that included Yukmouth, Tech N9ne, and others. He gained popularity with his debut solo album, Dru Down, in 1994, which spawned the hit single "Pimp of the Year". He followed up with Can You Feel Me in 1996, which was his most successful and critically acclaimed album to date.

      -

      Dru Down Can You Feel Me Album Zip


      Download Zip ☆☆☆ https://urlcod.com/2uHxL0



      -

      Can You Feel Me is a revelation, proving that Dru Down has rhyming skills far superior to most of his West Coast gangsta rap brethren. While the music isn't stylistically different from G-funk, it is catchier and more memorable than many sub-Dre productions. More importantly, Dru Down is a terrific rapper, capable of laconic phrasing like Snoop Dogg or wild, freestyle stream-of-consciousness bursts of energy. Combined with the first-rate music, the lyrical skills make Can You Feel Me one of the finest hip-hop records of 1996[^1^] [^3^].

      -

      The album features guest appearances by Luniz, Numskull, Yukmouth, Spice 1, and others. Some of the standout tracks include "Playa Fo Real", "Can You Feel Me", "Head & Shoulders", "The Game", and "The Mobb". The album also contains a hidden track called "Ice Cream Man", which is a diss to Master P.

      -

      -

      If you want to download Dru Down Can You Feel Me Album Zip, you can find it on various online platforms such as Qobuz[^1^], Archive[^2^], or AllMusic[^3^]. You can also stream it on Spotify, Apple Music, YouTube Music, or Deezer. However you choose to listen to it, you will not be disappointed by this classic rap album that showcases the talent and charisma of Dru Down.

      - -

      After a hiatus of several years, Dru Down returned to the rap scene in 2001 with his fourth album, Pimpin' Phernelia, which was released independently on C-Note Records. The album featured guest appearances by The Regime, Keak da Sneak, and others. The album received mixed reviews from critics and fans, and did not sell as well as his previous albums. [7]

      -

      Dru Down also continued his acting career, appearing in several low-budget films such as Obstacles (2000), Hip Hop Task Force (2005), and Ghetto Stories: The Movie (2010). He also had a cameo role in the comedy film Norbit (2007) as a rapper. [8]

      -

      In 2010, Dru Down announced that he was working on a new album, tentatively titled Chronicles of a Pimp, which was supposed to be released in 2011. However, the album was delayed due to legal issues and personal problems. [9] In 2013, he finally released the album under the title Livin Legend (God Willin) Pt. 2 on his own label Pimp On Records. The album featured collaborations with his father Bootsy Collins, Snoop Dogg, E-40, Richie Rich, and others. The album received positive feedback from fans and critics, who praised Dru Down's comeback and his lyrical skills. [10]

      -

      Dru Down is currently working on his sixth studio album, Explicit Game 2, which is expected to be released in 2022. He is also touring with The Regime and performing at various venues across the country. Dru Down is widely regarded as one of the pioneers and legends of West Coast rap, and has influenced many artists such as Kendrick Lamar, G-Eazy, and Tyga. [11]

      7b8c122e87
      -
      -
      \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Huaweiy2102010firmware.md b/spaces/tioseFevbu/cartoon-converter/scripts/Huaweiy2102010firmware.md deleted file mode 100644 index 540a60285641f460d188d0019994f076c0c4cacd..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Huaweiy2102010firmware.md +++ /dev/null @@ -1,36 +0,0 @@ - -

      Huawei Y210-2010 Firmware: How to Download and Install

      -

      If you own a Huawei Y210-2010 smartphone and want to update or restore its firmware, you have come to the right place. In this article, we will show you how to download and install the official stock firmware for Huawei Y210-2010 using SP Flash Tool.

      -

      What is Firmware?

      -

      Firmware is the software that runs on your device and controls its functions. It includes the operating system, applications, settings, and data. Firmware can be updated or flashed to fix bugs, improve performance, or add new features.

      -

      huaweiy2102010firmware


      DOWNLOADhttps://urlcod.com/2uHx8W



      -

      Why Flash Firmware?

      -

      There are many reasons why you might want to flash firmware on your Huawei Y210-2010. Some of them are:

      -
        -
      • Your device is stuck in a bootloop or does not boot up at all.
      • -
      • Your device is infected by malware or viruses.
      • -
      • Your device is running slow or has low battery life.
      • -
      • Your device has software issues or errors.
      • -
      • You want to unroot or restore your device to its original state.
      • -
      • You want to upgrade or downgrade your device's firmware version.
      • -
      -

      How to Download Firmware?

      -

    The first step in flashing firmware on your Huawei Y210-2010 is to download the firmware file. You can find the official link to download the Huawei Y210-2010 stock firmware ROM (flash file) to your computer here. The firmware comes in a zip package containing the flash file, the flash tool, the USB driver, and a how-to-flash manual.

      -

      How to Install Firmware?

      -

    The second step is to install the firmware using SP Flash Tool. SP Flash Tool is a utility that allows you to flash or install firmware on MediaTek devices. You can download SP Flash Tool for Windows or Linux here. To install the firmware on your Huawei Y210-2010 using SP Flash Tool, follow these steps:

      -
        -
      1. Download and extract the Huawei Y210-2010 stock firmware package on the computer.
      2. -
      3. Install the provided USB driver on the computer (if the USB driver is already installed, then skip this step).
      4. -
      5. Launch SP Flash Tool on the computer.
      6. -
      7. Click on the Scatter-loading button and locate the scatter file from the extracted firmware folder.
      8. -
      9. Make sure all the partitions are checked and click on Download button.
      10. -
      11. Connect your Huawei Y210-2010 device to the computer using a USB cable while holding Volume Down or Volume Up button.
      12. -
      13. The flashing process will start and it will take a few minutes to complete.
      14. -
      15. Once the flashing is done, you will see a green tick mark on SP Flash Tool.
      16. -
      17. Disconnect your device and reboot it.
      18. -
      -

      Conclusion

      -

      Congratulations! You have successfully flashed firmware on your Huawei Y210-2010 using SP Flash Tool. You can now enjoy the new features and improvements of your device's firmware. If you have any questions or problems regarding this process, feel free to leave a comment below.

      -

      cec2833e83
      -
      -
      \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Ice Age Collision Course (English) Tamil Dubbed 1080p Online.md b/spaces/tioseFevbu/cartoon-converter/scripts/Ice Age Collision Course (English) Tamil Dubbed 1080p Online.md deleted file mode 100644 index 6cf65bb1a17d98a8dff1ed42c9baa988e8f113f4..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Ice Age Collision Course (English) Tamil Dubbed 1080p Online.md +++ /dev/null @@ -1,23 +0,0 @@ -
      -

      Watch Ice Age: Collision Course in Tamil with HD Quality Online

      -

      Ice Age: Collision Course is the fifth installment of the popular animated franchise that follows the adventures of Manny, Sid, Diego and their friends as they face a new threat from outer space. The movie was released in 2016 and features the voices of Ray Romano, John Leguizamo, Denis Leary, Queen Latifah, Jennifer Lopez and many more.

      -

      Ice Age: Collision Course (English) Tamil Dubbed 1080p Online


      Download Filehttps://urlcod.com/2uHyiB



      -

      If you are a fan of Ice Age and want to watch it in Tamil, you are in luck. There are several websites that offer Ice Age: Collision Course (English) Tamil dubbed 1080p online for free or with a subscription. Here are some of them:

      -
        -
      • Archive.org: This website provides free access to a variety of digital content, including movies, books, music and more. You can watch Ice Age: Collision Course in Tamil with HD quality on this site without any registration or download.
      • -
      • Sway.office.com: This website allows you to create and share interactive presentations, reports and stories. You can also watch Ice Age: Collision Course in Tamil with HD quality on this site by clicking on the link provided.
      • -
      • Hotstar.com: This website is a popular streaming service that offers a variety of movies, TV shows, sports and news in different languages. You can watch Ice Age: Collision Course in Tamil with HD quality on this site with a subscription or a free trial.
      • -
      -

      So, what are you waiting for? Grab some popcorn and enjoy Ice Age: Collision Course in Tamil with HD quality online!

      - -

      Ice Age: Collision Course is not only a fun and hilarious comedy, but also a thrilling and heartwarming adventure. The movie has a lot of action, humor, romance and drama, as well as some new and interesting characters. Some of them are:

      -

      -
        -
      • Buck (voice of Simon Pegg): A one-eyed weasel and a fearless dinosaur hunter who returns to help the herd stop the asteroid. He is smart, witty and adventurous, and has a crush on a female weasel named Bronwyn.
      • -
      • Brooke (voice of Jessie J): A beautiful and kind sloth who lives in a utopian community called Geotopia. She falls in love with Sid and helps him regain his confidence after his breakup.
      • -
      • Shangri Llama (voice of Jesse Tyler Ferguson): The eccentric and narcissistic leader of Geotopia, who believes that staying in his crystal-filled paradise will keep him and his followers young and healthy forever.
      • -
      • Gavin (voice of Nick Offerman): A ruthless and cunning flying dinosaur who wants to prevent the herd from stopping the asteroid, as he sees it as an opportunity to wipe out all mammals and reclaim the Earth for his kind.
      • -
      -

      Will the herd be able to save the world from the impending doom? Will Manny and Ellie accept Peaches' decision to marry Julian? Will Diego and Shira become parents? Will Sid and Brooke live happily ever after? Find out by watching Ice Age: Collision Course in Tamil with HD quality online!

      81aa517590
      -
      -
      \ No newline at end of file diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_internal/metadata/pkg_resources.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_internal/metadata/pkg_resources.py deleted file mode 100644 index bf79ba139c00fe713dc10eca828b8c1b12f22582..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_internal/metadata/pkg_resources.py +++ /dev/null @@ -1,253 +0,0 @@ -import email.message -import email.parser -import logging -import os -import zipfile -from typing import Collection, Iterable, Iterator, List, Mapping, NamedTuple, Optional - -from pip._vendor import pkg_resources -from pip._vendor.packaging.requirements import Requirement -from pip._vendor.packaging.utils import NormalizedName, canonicalize_name -from pip._vendor.packaging.version import parse as parse_version - -from pip._internal.exceptions import InvalidWheel, NoneMetadataError, UnsupportedWheel -from pip._internal.utils.egg_link import egg_link_path_from_location -from pip._internal.utils.misc import display_path, normalize_path -from pip._internal.utils.wheel import parse_wheel, read_wheel_metadata_file - -from .base import ( - BaseDistribution, - BaseEntryPoint, - BaseEnvironment, - DistributionVersion, - InfoPath, - Wheel, -) - -logger = logging.getLogger(__name__) - - -class EntryPoint(NamedTuple): - name: str - value: str - group: str - - -class WheelMetadata: - """IMetadataProvider that reads metadata files from a dictionary. - - This also maps metadata decoding exceptions to our internal exception type. - """ - - def __init__(self, metadata: Mapping[str, bytes], wheel_name: str) -> None: - self._metadata = metadata - self._wheel_name = wheel_name - - def has_metadata(self, name: str) -> bool: - return name in self._metadata - - def get_metadata(self, name: str) -> str: - try: - return self._metadata[name].decode() - except UnicodeDecodeError as e: - # Augment the default error with the origin of the file. - raise UnsupportedWheel( - f"Error decoding metadata for {self._wheel_name}: {e} in {name} file" - ) - - def get_metadata_lines(self, name: str) -> Iterable[str]: - return pkg_resources.yield_lines(self.get_metadata(name)) - - def metadata_isdir(self, name: str) -> bool: - return False - - def metadata_listdir(self, name: str) -> List[str]: - return [] - - def run_script(self, script_name: str, namespace: str) -> None: - pass - - -class Distribution(BaseDistribution): - def __init__(self, dist: pkg_resources.Distribution) -> None: - self._dist = dist - - @classmethod - def from_directory(cls, directory: str) -> BaseDistribution: - dist_dir = directory.rstrip(os.sep) - - # Build a PathMetadata object, from path to metadata. :wink: - base_dir, dist_dir_name = os.path.split(dist_dir) - metadata = pkg_resources.PathMetadata(base_dir, dist_dir) - - # Determine the correct Distribution object type. 
- if dist_dir.endswith(".egg-info"): - dist_cls = pkg_resources.Distribution - dist_name = os.path.splitext(dist_dir_name)[0] - else: - assert dist_dir.endswith(".dist-info") - dist_cls = pkg_resources.DistInfoDistribution - dist_name = os.path.splitext(dist_dir_name)[0].split("-")[0] - - dist = dist_cls(base_dir, project_name=dist_name, metadata=metadata) - return cls(dist) - - @classmethod - def from_wheel(cls, wheel: Wheel, name: str) -> BaseDistribution: - try: - with wheel.as_zipfile() as zf: - info_dir, _ = parse_wheel(zf, name) - metadata_text = { - path.split("/", 1)[-1]: read_wheel_metadata_file(zf, path) - for path in zf.namelist() - if path.startswith(f"{info_dir}/") - } - except zipfile.BadZipFile as e: - raise InvalidWheel(wheel.location, name) from e - except UnsupportedWheel as e: - raise UnsupportedWheel(f"{name} has an invalid wheel, {e}") - dist = pkg_resources.DistInfoDistribution( - location=wheel.location, - metadata=WheelMetadata(metadata_text, wheel.location), - project_name=name, - ) - return cls(dist) - - @property - def location(self) -> Optional[str]: - return self._dist.location - - @property - def installed_location(self) -> Optional[str]: - egg_link = egg_link_path_from_location(self.raw_name) - if egg_link: - location = egg_link - elif self.location: - location = self.location - else: - return None - return normalize_path(location) - - @property - def info_location(self) -> Optional[str]: - return self._dist.egg_info - - @property - def installed_by_distutils(self) -> bool: - # A distutils-installed distribution is provided by FileMetadata. This - # provider has a "path" attribute not present anywhere else. Not the - # best introspection logic, but pip has been doing this for a long time. - try: - return bool(self._dist._provider.path) - except AttributeError: - return False - - @property - def canonical_name(self) -> NormalizedName: - return canonicalize_name(self._dist.project_name) - - @property - def version(self) -> DistributionVersion: - return parse_version(self._dist.version) - - def is_file(self, path: InfoPath) -> bool: - return self._dist.has_metadata(str(path)) - - def iter_distutils_script_names(self) -> Iterator[str]: - yield from self._dist.metadata_listdir("scripts") - - def read_text(self, path: InfoPath) -> str: - name = str(path) - if not self._dist.has_metadata(name): - raise FileNotFoundError(name) - content = self._dist.get_metadata(name) - if content is None: - raise NoneMetadataError(self, name) - return content - - def iter_entry_points(self) -> Iterable[BaseEntryPoint]: - for group, entries in self._dist.get_entry_map().items(): - for name, entry_point in entries.items(): - name, _, value = str(entry_point).partition("=") - yield EntryPoint(name=name.strip(), value=value.strip(), group=group) - - def _metadata_impl(self) -> email.message.Message: - """ - :raises NoneMetadataError: if the distribution reports `has_metadata()` - True but `get_metadata()` returns None. 
- """ - if isinstance(self._dist, pkg_resources.DistInfoDistribution): - metadata_name = "METADATA" - else: - metadata_name = "PKG-INFO" - try: - metadata = self.read_text(metadata_name) - except FileNotFoundError: - if self.location: - displaying_path = display_path(self.location) - else: - displaying_path = repr(self.location) - logger.warning("No metadata found in %s", displaying_path) - metadata = "" - feed_parser = email.parser.FeedParser() - feed_parser.feed(metadata) - return feed_parser.close() - - def iter_dependencies(self, extras: Collection[str] = ()) -> Iterable[Requirement]: - if extras: # pkg_resources raises on invalid extras, so we sanitize. - extras = frozenset(extras).intersection(self._dist.extras) - return self._dist.requires(extras) - - def iter_provided_extras(self) -> Iterable[str]: - return self._dist.extras - - -class Environment(BaseEnvironment): - def __init__(self, ws: pkg_resources.WorkingSet) -> None: - self._ws = ws - - @classmethod - def default(cls) -> BaseEnvironment: - return cls(pkg_resources.working_set) - - @classmethod - def from_paths(cls, paths: Optional[List[str]]) -> BaseEnvironment: - return cls(pkg_resources.WorkingSet(paths)) - - def _iter_distributions(self) -> Iterator[BaseDistribution]: - for dist in self._ws: - yield Distribution(dist) - - def _search_distribution(self, name: str) -> Optional[BaseDistribution]: - """Find a distribution matching the ``name`` in the environment. - - This searches from *all* distributions available in the environment, to - match the behavior of ``pkg_resources.get_distribution()``. - """ - canonical_name = canonicalize_name(name) - for dist in self.iter_all_distributions(): - if dist.canonical_name == canonical_name: - return dist - return None - - def get_distribution(self, name: str) -> Optional[BaseDistribution]: - # Search the distribution by looking through the working set. - dist = self._search_distribution(name) - if dist: - return dist - - # If distribution could not be found, call working_set.require to - # update the working set, and try to find the distribution again. - # This might happen for e.g. when you install a package twice, once - # using setup.py develop and again using setup.py install. Now when - # running pip uninstall twice, the package gets removed from the - # working set in the first uninstall, so we have to populate the - # working set again so that pip knows about it and the packages gets - # picked up and is successfully uninstalled the second time too. - try: - # We didn't pass in any version specifiers, so this can never - # raise pkg_resources.VersionConflict. - self._ws.require(name) - except pkg_resources.DistributionNotFound: - return None - return self._search_distribution(name) diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pkg_resources/_vendor/packaging/markers.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pkg_resources/_vendor/packaging/markers.py deleted file mode 100644 index 18769b09a8a34f1e7d63cc61e62cd128ff5f9484..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pkg_resources/_vendor/packaging/markers.py +++ /dev/null @@ -1,304 +0,0 @@ -# This file is dual licensed under the terms of the Apache License, Version -# 2.0, and the BSD License. See the LICENSE file in the root of this repository -# for complete details. 
- -import operator -import os -import platform -import sys -from typing import Any, Callable, Dict, List, Optional, Tuple, Union - -from pkg_resources.extern.pyparsing import ( # noqa: N817 - Forward, - Group, - Literal as L, - ParseException, - ParseResults, - QuotedString, - ZeroOrMore, - stringEnd, - stringStart, -) - -from .specifiers import InvalidSpecifier, Specifier - -__all__ = [ - "InvalidMarker", - "UndefinedComparison", - "UndefinedEnvironmentName", - "Marker", - "default_environment", -] - -Operator = Callable[[str, str], bool] - - -class InvalidMarker(ValueError): - """ - An invalid marker was found, users should refer to PEP 508. - """ - - -class UndefinedComparison(ValueError): - """ - An invalid operation was attempted on a value that doesn't support it. - """ - - -class UndefinedEnvironmentName(ValueError): - """ - A name was attempted to be used that does not exist inside of the - environment. - """ - - -class Node: - def __init__(self, value: Any) -> None: - self.value = value - - def __str__(self) -> str: - return str(self.value) - - def __repr__(self) -> str: - return f"<{self.__class__.__name__}('{self}')>" - - def serialize(self) -> str: - raise NotImplementedError - - -class Variable(Node): - def serialize(self) -> str: - return str(self) - - -class Value(Node): - def serialize(self) -> str: - return f'"{self}"' - - -class Op(Node): - def serialize(self) -> str: - return str(self) - - -VARIABLE = ( - L("implementation_version") - | L("platform_python_implementation") - | L("implementation_name") - | L("python_full_version") - | L("platform_release") - | L("platform_version") - | L("platform_machine") - | L("platform_system") - | L("python_version") - | L("sys_platform") - | L("os_name") - | L("os.name") # PEP-345 - | L("sys.platform") # PEP-345 - | L("platform.version") # PEP-345 - | L("platform.machine") # PEP-345 - | L("platform.python_implementation") # PEP-345 - | L("python_implementation") # undocumented setuptools legacy - | L("extra") # PEP-508 -) -ALIASES = { - "os.name": "os_name", - "sys.platform": "sys_platform", - "platform.version": "platform_version", - "platform.machine": "platform_machine", - "platform.python_implementation": "platform_python_implementation", - "python_implementation": "platform_python_implementation", -} -VARIABLE.setParseAction(lambda s, l, t: Variable(ALIASES.get(t[0], t[0]))) - -VERSION_CMP = ( - L("===") | L("==") | L(">=") | L("<=") | L("!=") | L("~=") | L(">") | L("<") -) - -MARKER_OP = VERSION_CMP | L("not in") | L("in") -MARKER_OP.setParseAction(lambda s, l, t: Op(t[0])) - -MARKER_VALUE = QuotedString("'") | QuotedString('"') -MARKER_VALUE.setParseAction(lambda s, l, t: Value(t[0])) - -BOOLOP = L("and") | L("or") - -MARKER_VAR = VARIABLE | MARKER_VALUE - -MARKER_ITEM = Group(MARKER_VAR + MARKER_OP + MARKER_VAR) -MARKER_ITEM.setParseAction(lambda s, l, t: tuple(t[0])) - -LPAREN = L("(").suppress() -RPAREN = L(")").suppress() - -MARKER_EXPR = Forward() -MARKER_ATOM = MARKER_ITEM | Group(LPAREN + MARKER_EXPR + RPAREN) -MARKER_EXPR << MARKER_ATOM + ZeroOrMore(BOOLOP + MARKER_EXPR) - -MARKER = stringStart + MARKER_EXPR + stringEnd - - -def _coerce_parse_result(results: Union[ParseResults, List[Any]]) -> List[Any]: - if isinstance(results, ParseResults): - return [_coerce_parse_result(i) for i in results] - else: - return results - - -def _format_marker( - marker: Union[List[str], Tuple[Node, ...], str], first: Optional[bool] = True -) -> str: - - assert isinstance(marker, (list, tuple, str)) - - # Sometimes we have a structure 
like [[...]] which is a single item list - # where the single item is itself it's own list. In that case we want skip - # the rest of this function so that we don't get extraneous () on the - # outside. - if ( - isinstance(marker, list) - and len(marker) == 1 - and isinstance(marker[0], (list, tuple)) - ): - return _format_marker(marker[0]) - - if isinstance(marker, list): - inner = (_format_marker(m, first=False) for m in marker) - if first: - return " ".join(inner) - else: - return "(" + " ".join(inner) + ")" - elif isinstance(marker, tuple): - return " ".join([m.serialize() for m in marker]) - else: - return marker - - -_operators: Dict[str, Operator] = { - "in": lambda lhs, rhs: lhs in rhs, - "not in": lambda lhs, rhs: lhs not in rhs, - "<": operator.lt, - "<=": operator.le, - "==": operator.eq, - "!=": operator.ne, - ">=": operator.ge, - ">": operator.gt, -} - - -def _eval_op(lhs: str, op: Op, rhs: str) -> bool: - try: - spec = Specifier("".join([op.serialize(), rhs])) - except InvalidSpecifier: - pass - else: - return spec.contains(lhs) - - oper: Optional[Operator] = _operators.get(op.serialize()) - if oper is None: - raise UndefinedComparison(f"Undefined {op!r} on {lhs!r} and {rhs!r}.") - - return oper(lhs, rhs) - - -class Undefined: - pass - - -_undefined = Undefined() - - -def _get_env(environment: Dict[str, str], name: str) -> str: - value: Union[str, Undefined] = environment.get(name, _undefined) - - if isinstance(value, Undefined): - raise UndefinedEnvironmentName( - f"{name!r} does not exist in evaluation environment." - ) - - return value - - -def _evaluate_markers(markers: List[Any], environment: Dict[str, str]) -> bool: - groups: List[List[bool]] = [[]] - - for marker in markers: - assert isinstance(marker, (list, tuple, str)) - - if isinstance(marker, list): - groups[-1].append(_evaluate_markers(marker, environment)) - elif isinstance(marker, tuple): - lhs, op, rhs = marker - - if isinstance(lhs, Variable): - lhs_value = _get_env(environment, lhs.value) - rhs_value = rhs.value - else: - lhs_value = lhs.value - rhs_value = _get_env(environment, rhs.value) - - groups[-1].append(_eval_op(lhs_value, op, rhs_value)) - else: - assert marker in ["and", "or"] - if marker == "or": - groups.append([]) - - return any(all(item) for item in groups) - - -def format_full_version(info: "sys._version_info") -> str: - version = "{0.major}.{0.minor}.{0.micro}".format(info) - kind = info.releaselevel - if kind != "final": - version += kind[0] + str(info.serial) - return version - - -def default_environment() -> Dict[str, str]: - iver = format_full_version(sys.implementation.version) - implementation_name = sys.implementation.name - return { - "implementation_name": implementation_name, - "implementation_version": iver, - "os_name": os.name, - "platform_machine": platform.machine(), - "platform_release": platform.release(), - "platform_system": platform.system(), - "platform_version": platform.version(), - "python_full_version": platform.python_version(), - "platform_python_implementation": platform.python_implementation(), - "python_version": ".".join(platform.python_version_tuple()[:2]), - "sys_platform": sys.platform, - } - - -class Marker: - def __init__(self, marker: str) -> None: - try: - self._markers = _coerce_parse_result(MARKER.parseString(marker)) - except ParseException as e: - raise InvalidMarker( - f"Invalid marker: {marker!r}, parse error at " - f"{marker[e.loc : e.loc + 8]!r}" - ) - - def __str__(self) -> str: - return _format_marker(self._markers) - - def __repr__(self) -> 
str: - return f"" - - def evaluate(self, environment: Optional[Dict[str, str]] = None) -> bool: - """Evaluate a marker. - - Return the boolean from evaluating the given marker against the - environment. environment is an optional argument to override all or - part of the determined environment. - - The environment is determined from the current Python process. - """ - current_environment = default_environment() - if environment is not None: - current_environment.update(environment) - - return _evaluate_markers(self._markers, current_environment) diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/_distutils/log.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/_distutils/log.py deleted file mode 100644 index be25f6cabd839af772dd74399c57991c222d3da8..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/_distutils/log.py +++ /dev/null @@ -1,80 +0,0 @@ -"""A simple log mechanism styled after PEP 282.""" - -# The class here is styled after PEP 282 so that it could later be -# replaced with a standard Python logging implementation. - -import sys - -DEBUG = 1 -INFO = 2 -WARN = 3 -ERROR = 4 -FATAL = 5 - - -class Log: - def __init__(self, threshold=WARN): - self.threshold = threshold - - def _log(self, level, msg, args): - if level not in (DEBUG, INFO, WARN, ERROR, FATAL): - raise ValueError('%s wrong log level' % str(level)) - - if level >= self.threshold: - if args: - msg = msg % args - if level in (WARN, ERROR, FATAL): - stream = sys.stderr - else: - stream = sys.stdout - try: - stream.write('%s\n' % msg) - except UnicodeEncodeError: - # emulate backslashreplace error handler - encoding = stream.encoding - msg = msg.encode(encoding, "backslashreplace").decode(encoding) - stream.write('%s\n' % msg) - stream.flush() - - def log(self, level, msg, *args): - self._log(level, msg, args) - - def debug(self, msg, *args): - self._log(DEBUG, msg, args) - - def info(self, msg, *args): - self._log(INFO, msg, args) - - def warn(self, msg, *args): - self._log(WARN, msg, args) - - def error(self, msg, *args): - self._log(ERROR, msg, args) - - def fatal(self, msg, *args): - self._log(FATAL, msg, args) - - -_global_log = Log() -log = _global_log.log -debug = _global_log.debug -info = _global_log.info -warn = _global_log.warn -error = _global_log.error -fatal = _global_log.fatal - - -def set_threshold(level): - # return the old threshold for use from tests - old = _global_log.threshold - _global_log.threshold = level - return old - - -def set_verbosity(v): - if v <= 0: - set_threshold(WARN) - elif v == 1: - set_threshold(INFO) - elif v >= 2: - set_threshold(DEBUG) diff --git a/spaces/tomaseo2022/Whisper-Youtube/app.py b/spaces/tomaseo2022/Whisper-Youtube/app.py deleted file mode 100644 index 3419a32cc6e8ffd5ee6b62101d4687235ff64225..0000000000000000000000000000000000000000 --- a/spaces/tomaseo2022/Whisper-Youtube/app.py +++ /dev/null @@ -1,59 +0,0 @@ -import gradio as gr -import whisper -from pytube import YouTube - -loaded_model = whisper.load_model("base") -current_size = 'base' -def inference(link): - yt = YouTube(link) - path = yt.streams.filter(only_audio=True)[0].download(filename="audio.mp4") - options = whisper.DecodingOptions(without_timestamps=True) - results = loaded_model.transcribe(path) - return results['text'] - -def change_model(size): - if size == current_size: - return - loaded_model = whisper.load_model(size) - current_size = size - -def 
populate_metadata(link): - yt = YouTube(link) - return yt.thumbnail_url, yt.title - -title="" -description="" -block = gr.Blocks() -with block: - gr.HTML( - """ -
      -
      -
      -
      - """ - ) - with gr.Group(): - with gr.Box(): - sz = gr.Dropdown(label="Model Size", choices=['base','small', 'medium', 'large'], value='base') - - link = gr.Textbox(label="YouTube Link") - - with gr.Row().style(mobile_collapse=False, equal_height=True): - title = gr.Label(label="Video Title", placeholder="Title") - img = gr.Image(label="Thumbnail") - text = gr.Textbox( - label="Transcription", - placeholder="Transcription Output", - lines=5) - with gr.Row().style(mobile_collapse=False, equal_height=True): - btn = gr.Button("Transcribe") - - # Events - btn.click(inference, inputs=[link], outputs=[text]) - link.change(populate_metadata, inputs=[link], outputs=[img, title]) - sz.change(change_model, inputs=[sz], outputs=[]) - -block.launch(debug=True) - -demo = gr.Interface(css="footer {visibility: hidden}") \ No newline at end of file diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/cornernet/cornernet_hourglass104_mstest_10x5_210e_coco.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/cornernet/cornernet_hourglass104_mstest_10x5_210e_coco.py deleted file mode 100644 index 89f387641207512ae1b1c91ca56965004e5eb868..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/cornernet/cornernet_hourglass104_mstest_10x5_210e_coco.py +++ /dev/null @@ -1,105 +0,0 @@ -_base_ = [ - '../_base_/default_runtime.py', '../_base_/datasets/coco_detection.py' -] - -# model settings -model = dict( - type='CornerNet', - backbone=dict( - type='HourglassNet', - downsample_times=5, - num_stacks=2, - stage_channels=[256, 256, 384, 384, 384, 512], - stage_blocks=[2, 2, 2, 2, 2, 4], - norm_cfg=dict(type='BN', requires_grad=True)), - neck=None, - bbox_head=dict( - type='CornerHead', - num_classes=80, - in_channels=256, - num_feat_levels=2, - corner_emb_channels=1, - loss_heatmap=dict( - type='GaussianFocalLoss', alpha=2.0, gamma=4.0, loss_weight=1), - loss_embedding=dict( - type='AssociativeEmbeddingLoss', - pull_weight=0.10, - push_weight=0.10), - loss_offset=dict(type='SmoothL1Loss', beta=1.0, loss_weight=1)), - # training and testing settings - train_cfg=None, - test_cfg=dict( - corner_topk=100, - local_maximum_kernel=3, - distance_threshold=0.5, - score_thr=0.05, - max_per_img=100, - nms=dict(type='soft_nms', iou_threshold=0.5, method='gaussian'))) -# data settings -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) -train_pipeline = [ - dict(type='LoadImageFromFile', to_float32=True), - dict(type='LoadAnnotations', with_bbox=True), - dict( - type='PhotoMetricDistortion', - brightness_delta=32, - contrast_range=(0.5, 1.5), - saturation_range=(0.5, 1.5), - hue_delta=18), - dict( - type='RandomCenterCropPad', - crop_size=(511, 511), - ratios=(0.6, 0.7, 0.8, 0.9, 1.0, 1.1, 1.2, 1.3), - test_mode=False, - test_pad_mode=None, - **img_norm_cfg), - dict(type='Resize', img_scale=(511, 511), keep_ratio=False), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']), -] -test_pipeline = [ - dict(type='LoadImageFromFile', to_float32=True), - dict( - type='MultiScaleFlipAug', - scale_factor=1.0, - flip=True, - transforms=[ - dict(type='Resize'), - dict( - type='RandomCenterCropPad', - crop_size=None, - ratios=None, - border=None, - test_mode=True, - test_pad_mode=['logical_or', 127], - **img_norm_cfg), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - 
dict(type='ImageToTensor', keys=['img']), - dict( - type='Collect', - keys=['img'], - meta_keys=('filename', 'ori_shape', 'img_shape', 'pad_shape', - 'scale_factor', 'flip', 'img_norm_cfg', 'border')), - ]) -] -data = dict( - samples_per_gpu=5, - workers_per_gpu=3, - train=dict(pipeline=train_pipeline), - val=dict(pipeline=test_pipeline), - test=dict(pipeline=test_pipeline)) -# optimizer -optimizer = dict(type='Adam', lr=0.0005) -optimizer_config = dict(grad_clip=dict(max_norm=35, norm_type=2)) -# learning policy -lr_config = dict( - policy='step', - warmup='linear', - warmup_iters=500, - warmup_ratio=1.0 / 3, - step=[180]) -runner = dict(type='EpochBasedRunner', max_epochs=210) diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/ghm/retinanet_ghm_x101_32x4d_fpn_1x_coco.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/ghm/retinanet_ghm_x101_32x4d_fpn_1x_coco.py deleted file mode 100644 index a89fc1389ce0f1f9712b4b5d684e632aaee25ce8..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/ghm/retinanet_ghm_x101_32x4d_fpn_1x_coco.py +++ /dev/null @@ -1,13 +0,0 @@ -_base_ = './retinanet_ghm_r50_fpn_1x_coco.py' -model = dict( - pretrained='open-mmlab://resnext101_32x4d', - backbone=dict( - type='ResNeXt', - depth=101, - groups=32, - base_width=4, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=True), - style='pytorch')) diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/gn+ws/mask_rcnn_x101_32x4d_fpn_gn_ws-all_2x_coco.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/gn+ws/mask_rcnn_x101_32x4d_fpn_gn_ws-all_2x_coco.py deleted file mode 100644 index dbe88770ae5dffbed5229ed4a4e62f10b1c8d12b..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/gn+ws/mask_rcnn_x101_32x4d_fpn_gn_ws-all_2x_coco.py +++ /dev/null @@ -1,17 +0,0 @@ -_base_ = './mask_rcnn_r50_fpn_gn_ws-all_2x_coco.py' -# model settings -conv_cfg = dict(type='ConvWS') -norm_cfg = dict(type='GN', num_groups=32, requires_grad=True) -model = dict( - pretrained='open-mmlab://jhu/resnext101_32x4d_gn_ws', - backbone=dict( - type='ResNeXt', - depth=101, - groups=32, - base_width=4, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - style='pytorch', - conv_cfg=conv_cfg, - norm_cfg=norm_cfg)) diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/paa/paa_r101_fpn_2x_coco.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/paa/paa_r101_fpn_2x_coco.py deleted file mode 100644 index 641ef764d2713184845b624b20db1771cfcd6739..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/paa/paa_r101_fpn_2x_coco.py +++ /dev/null @@ -1,3 +0,0 @@ -_base_ = './paa_r101_fpn_1x_coco.py' -lr_config = dict(step=[16, 22]) -runner = dict(type='EpochBasedRunner', max_epochs=24) diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/vfnet/vfnet_r101_fpn_2x_coco.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/vfnet/vfnet_r101_fpn_2x_coco.py deleted file mode 100644 index 334657dc23de11045e37c0d62ee7c81b796f1254..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/vfnet/vfnet_r101_fpn_2x_coco.py +++ /dev/null @@ -1,4 +0,0 @@ -_base_ = './vfnet_r50_fpn_1x_coco.py' -model = dict(pretrained='torchvision://resnet101', backbone=dict(depth=101)) -lr_config = dict(step=[16, 22]) -runner = 
dict(type='EpochBasedRunner', max_epochs=24) diff --git a/spaces/trttung1610/musicgen/audiocraft/grids/audiogen/__init__.py b/spaces/trttung1610/musicgen/audiocraft/grids/audiogen/__init__.py deleted file mode 100644 index 8a0a2688450ce120088b79c3314a2f267394dc11..0000000000000000000000000000000000000000 --- a/spaces/trttung1610/musicgen/audiocraft/grids/audiogen/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. -"""AudioGen grids.""" diff --git a/spaces/uSerNameDDHL/bingo/README.md b/spaces/uSerNameDDHL/bingo/README.md deleted file mode 100644 index 90fab5f716b39d7cb21063693c1f53dd3f9ad781..0000000000000000000000000000000000000000 --- a/spaces/uSerNameDDHL/bingo/README.md +++ /dev/null @@ -1,197 +0,0 @@ ---- -title: bingo -emoji: 📉 -colorFrom: red -colorTo: red -sdk: docker -pinned: true -license: mit -duplicated_from: hf4all/bingo ---- - -
-
-# Bingo
-
-Bingo: a New Bing that lets you breathe easy.
-
-A faithful re-creation of the main features of the New Bing web UI. It is usable from mainland China, compatible with most Microsoft Bing AI features, and can be self-hosted.
-
-![Github stars](https://badgen.net/github/stars/weaigc/bingo?icon=github&label=stars)
-![Github issues](https://img.shields.io/github/issues/weaigc/bingo)
-[![docker build](https://github.com/weaigc/bingo/actions/workflows/docker.yml/badge.svg)](https://hub.docker.com/repository/docker/weaigc/bingo/)
-[![docker hub](https://badgen.net/docker/size/weaigc/bingo?icon=docker&label=image%20size)](https://hub.docker.com/repository/docker/weaigc/bingo/)
-[![MIT License](https://img.shields.io/badge/license-MIT-97c50f)](https://github.com/weaigc/bingo/blob/main/license)
-
-
-## Demo site
-
-https://bing.github1s.tk
-
-
-
-[![img](./docs/images/demo.png)](https://bing.github1s.tk)
-
-## Features
-
-- Completely rewritten on Next.js, closely reproducing the New Bing web UI; the experience is essentially the same as Bing AI.
-- Docker build supported, for quick and easy deployment and access.
-- Cookies can be configured once globally and shared.
-- Continuous voice conversation supported.
-
-## RoadMap
-
- - [x] wss forwarding
- - [x] One-click deployment
- - [x] Improved mobile layout
- - [x] Image generation
- - [x] Voice input (voice commands supported; currently desktop Edge and Chrome only)
- - [x] Voice output (must be enabled manually)
- - [x] Image input
- - [x] Custom domains
- - [ ] Chat history
- - [ ] Dark mode
- - [ ] Built-in prompts
- - [ ] Offline access
- - [ ] Internationalization
-
-## One-click deployment
-You can also deploy your own New Bing AI to 🤗 HuggingFace with one click.
-
-### Deploy to Huggingface
-1. Click this icon
-[![Deploy to HuggingFace](https://img.shields.io/badge/%E7%82%B9%E5%87%BB%E9%83%A8%E7%BD%B2-%F0%9F%A4%97-fff)](https://huggingface.co/login?next=%2Fspaces%2Fhf4all%2Fbingo%3Fduplicate%3Dtrue%26visibility%3Dpublic); the default configuration can be left unchanged.
-
-2. After the deployment finishes, open "Settings" > "Site domain", copy the HF domain information, and share it with others.
-
-> Huggingface does not let you bind your own domain, but there are two workarounds:
-> 1. Via Cloudflare Workers: [deploy with Cloudflare Workers](#custom-domains-with-cloudflare-workers)
-> 2. Via Github Pages and an iframe: [how to bind a domain](https://github.com/weaigc/bingo/issues/4)
-
-### Custom domains with Cloudflare Workers
-
-> Core code: [worker.js](./cloudflare/worker.js)
-
-- [Sign up for a Cloudflare account](https://dash.cloudflare.com/sign-up)
-
-- Add a new site. You need your own domain, and its `Name Server` records must be managed by Cloudflare (search the web for details).
-
-- Open "Workers" from the left-hand menu and click "Create a Worker".
-
-- Create the Worker service, copy the whole of [worker.js](./cloudflare/worker.js) into it, adjust it as described in its comments, then save and deploy.
-
-- Set your custom access domain under "Triggers".
-
-### Deploying to other platforms
-
-
-Because other platforms are currently being blocked by New Bing, you will run into many problems; they are no longer recommended, but the instructions are kept here for anyone who still needs them.
-
-
-#### Deploy to Netlify
-[![Deploy to Netlify Button](https://www.netlify.com/img/deploy/button.svg)](https://app.netlify.com/start/deploy?repository=https://github.com/weaigc/bingo)
-
-#### Deploy to Vercel
-If you are a paying Vercel user, you can use the link below for one-click deployment to Vercel. The free tier has an [API timeout limit](https://vercel.com/docs/concepts/limits/overview) and is not recommended.
-
-[![Deploy with Vercel](https://vercel.com/button)](https://vercel.com/new/clone?demo-title=bingo&demo-description=bingo&demo-url=https%3A%2F%2Fbing.github1s.tk%2F&project-name=bingo&repository-name=bingo&repository-url=https%3A%2F%2Fgithub.com%2Fweaigc%2Fbingo&from=templates&skippable-integrations=1&env=BING_HEADER&envDescription=%E5%A6%82%E6%9E%9C%E4%B8%8D%E7%9F%A5%E9%81%93%E6%80%8E%E4%B9%88%E9%85%8D%E7%BD%AE%E8%AF%B7%E7%82%B9%E5%8F%B3%E4%BE%A7Learn+More&envLink=https%3A%2F%2Fgithub.com%2Fweaigc%2Fbingo%2Fblob%2Fmain%2F.env.example)
-
-#### Deploy to Render
-
-[![Deploy to Render](https://render.com/images/deploy-to-render-button.svg)](https://render.com/deploy?repo=https://github.com/weaigc/bingo)
-
-
-## Environment and dependencies
-
-- Node.js >= 18
-- Bing AI [identity information](#how-to-get-bing_header)
-
-## Installation and usage
-
-> Because Microsoft's blocking is currently quite aggressive, [deploying to Huggingface](#deploy-to-huggingface) is the recommended option.
-
-* Run with Node
-
-```bash
-git clone https://github.com/weaigc/bingo.git
-npm i # pnpm i is recommended
-npm run build
-npm run start
-```
-
-* Run with Docker
-```bash
-docker pull weaigc/bingo
-docker run --rm -it -p 7860:7860 weaigc/bingo
-# or
-docker run --rm -it -e BING_HEADER=xxxx -p 7860:7860 weaigc/bingo
-```
-
-## How to get BING_HEADER
-> Setting BING_HEADER means sharing your own account with everyone who uses this service. If you do not need login-free image generation, setting this variable is not recommended.
-
-Open https://www.bing.com and sign in, then visit https://www.bing.com/turing/captcha/challenge and pass the human verification, then
-
-![BING HEADER](./docs/images/curl.png)
-
-> The copied content should look like the sample below. After checking that the format is correct, open https://effulgent-bubblegum-e2f5df.netlify.app/#dialog=%22settings%22 , paste it in, click "Convert to BING_HEADER and copy", and then paste from the clipboard to get the value. (You can also verify it on that page first.)
-
-Here is a format reference. Note that the value saved on the web page starts with `curl`, while the `BING_HEADER` configured on the server is in `base64` form; the two cannot be used interchangeably.
-
      -正常格式/网页端保存的格式(格式仅供参考) - -``` -curl 'https://www.bing.com/turing/captcha/challenge' \ - -H 'authority: www.bing.com' \ - -H 'accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7' \ - -H 'accept-language: zh-CN,zh;q=0.9,en;q=0.8,en-GB;q=0.7,en-US;q=0.6' \ - -H 'cache-control: max-age=0' \ - -H 'cookie: MicrosoftApplicationsTelemetryDeviceId=3399c004-fd0e-48ec-bb92-d82a27b2bbd4; _EDGE_V=1; SRCHD=AF=NOFORM; SRCHUID=V=2&GUID=29EBDDA4E6674329ACCF1A0A423C3E98&dmnchg=1; _UR=QS=0&TQS=0; _HPVN=CS=eyJQbiI6eyJDbiI6MSwiU3QiOjAsIlFzIjowLCJQcm9kIjoiUCJ9LCJTYyI6eyJDbiI6MSwiU3QiOjAsIlFzIjowLCJQcm9kIjoiSCJ9LCJReiI6eyJDbiI6MSwiU3QiOjAsIlFzIjowLCJQcm9kIjoiVCJ9LCJBcCI6dHJ1ZSwiTXV0ZSI6dHJ1ZSwiTGFkIjoiMjAyMy0wNy0yNVQwMDowMDowMFoiLCJJb3RkIjowLCJHd2IiOjAsIkRmdCI6bnVsbCwiTXZzIjowLCJGbHQiOjAsIkltcCI6Mn0=; _RwBf=ilt=1&ihpd=1&ispd=0&rc=0&rb=0&gb=0&rg=200&pc=0&mtu=0&rbb=0&g=0&cid=&clo=0&v=1&l=2023-07-25T07:00:00.0000000Z&lft=0001-01-01T00:00:00.0000000&aof=0&o=2&p=&c=&t=0&s=0001-01-01T00:00:00.0000000+00:00&ts=2023-07-25T11:00:31.7111548+00:00&rwred=0&wls=&lka=0&lkt=0&TH=&dci=0; ANON=A=0043C6590EA808ED6E395059FFFFFFFF&E=1c8b&W=1; NAP=V=1.9&E=1c31&C=DnaMSbDN_4efZ_xXqBF3Daorjr53kYqYoaP8YHsupjmiXnysX7a37A&W=1; PPLState=1; KievRPSSecAuth=FABSBBRaTOJILtFsMkpLVWSG6AN6C/svRwNmAAAEgAAACMGUA7EGVSjGEAQBGHtNsc5sNL7unmJsfPJ2t6imfo4BeUJlAia3IpMTtMUy4PU/C5QAzRI5pODtsIee0+blgllXt/5IiWwGjwmdhivsFM597pRPkjARPfwsPhNLPNbJrCPNPHdje4Is78MnCADXw6/NBq2FL8V2/byw2fH6IuAMD2MvN/VvqpEa9ZxiDjZtENj4HEj0mO2SgzjfyEhVAkjvznJqU2rw/Q2tHmX94NAM2kzlzKF/hWPhCCUmu8IHLvCnHDS6mSptvJDDP/sp3ovtzOXkP1mlM/Xju5ftesUvccVEQGffXORa1dE5hEMbKIiKXz1tDdduSXE19g9/+mRMAjaQhpwhI8XmilCTx1adb1Ll5qK+VjC9GNfEZzcbsGBPVaOl+anG8rEMq+Xnhjo7J+NqTNolavHgcuV8kJsCeJZIged33UA8eOZeFo+wAECMguxMoSqgpGH+sthqynvD/FJD6r/tiU2N3uqVq8NE8V37asrN6T14Z0FGBJOe6ET1+PGApm3s11OY9/xhFEB9T5BEPUGEbvRcLcW2ncFQX0EU+xweiPqo1Q1hNUg/dCtSI+lZ7c2H8XheePZavZ0TJQ8oNCSAuKiTqJmI0fVGpwbXwfaADkEipuawz3fIuMJBNgMU0OtA7Hm59v2fGLIBuvi6YeKS6GgVk3BIPf+P/eKahwozrxQZaFnoHTSqMkvct7xCP4atBROfXKf5Ww0CcFKp+2WX9BIskTOo2jjk6bAyyYJ+ElUB1fgLKNk5m/YSMc9iYCLIBMIGN8F0Yvy3tZ7cvh7Ue5Klo98US/I+nW1G7ZJMHRgUO8h8lpneHqEMegKd8gynO4VF7RpCjJkunDmW0Ta+RkXAP619pg0dqHMFkoOgknN78oBbGTV6fJUKotv+vi61kLhAeXZGWoHGCRXh2wUC6YgfPgKA6ESRNHtFn7E5B3HHpLc5rVMDSNhKZYfdhupV4Ezf6+5DhMcZLZhi0kk+ivDiN1gdHlVtSN55xpvf+c+XZDzR0uhgcvgy0LAbmzgk6y4WbYH+LQsMpzNNj+aC72vMiWovWrKh9jY4MYCmdgxsS/skPtLdp18muiEIRXTbZQGUmhxFpJAIbBIsCscMpzL0BgeujxUwM5wr79Sd9r4xwbgSMwmBlBfUHRVBdNyg8feepeJbCS63nD6eHOuLqMRsPIio3w/ki/EAa92UUEiZeavLsMUD/y/qAvWUdzdP5Y+C/TM+CMGS/kGL4LEdY/28MQeTvU1qv1X21kQt2aiaj3pPVL36hAzxbcLgqcMo9oymDRy87kdCXW/+g4oKLtMh6fm/G6W6Y/B01JlxohyyvueHQIG557uzkEkTJ3FnOVODSKBKpb3WZ65rExfV71zSZa25F3GmpaIG6HiYrX2YYhQAkIE9pKEQBHbnwHuwNDGottZTXZw=; WLS=C=9df3f9d8518fae19&N=wen; WLID=pGY8HgWCu4p5XYCOk2oa0+DBdftkMUfmNIn8XtSjSTKsgv/Il7GUlYs0Jpjf/E12jZMgV7x44Dy3fXOgjjUoJx7Y/ClLrLhsk20THksJJoI=; _EDGE_S=F=1&SID=17CF6EE006426448213C7DB907436588&mkt=zh-CN; MUID=225621093D8A6C27301632413C0E6D08; MUIDB=225621093D8A6C27301632413C0E6D08; SUID=A; SNRHOP=I=&TS=; _U=nGyzKQruEsDwLiu65fZFIG6e12hf2lwTJmroW__k8joUJIKmG3OIjayXKGW9dCVR3sNhF76mEVxyW6yjUGPodOfjtSa3s3J_DxMOrEK1BqXCOBI9bC66spAIASV7prsYFlVAJz73jVNENp_tBubLHJy6EbT0BKRe4AjrYkH-9uMnmCKB8Zmyg; _SS=SID=17CF6EE006426448213C7DB907436588&R=0&RB=0&GB=0&RG=200&RP=0&PC=U531; SRCHS=PC=U531; 
USRLOC=HS=1&ELOC=LAT=22.501529693603516|LON=113.9263687133789|N=%E5%8D%97%E5%B1%B1%E5%8C%BA%EF%BC%8C%E5%B9%BF%E4%B8%9C%E7%9C%81|ELT=2|&CLOC=LAT=22.50153029046461|LON=113.92637070632928|A=733.4464586120832|TS=230726151034|SRC=W; SRCHUSR=DOB=20230725&T=1690384908000&POEX=W; ipv6=hit=1690388509974&t=6; SRCHHPGUSR=HV=1690384945&SRCHLANG=zh-Hans&PV=15.0.0&BRW=MW&BRH=MT&CW=410&CH=794&SCW=410&SCH=794&DPR=1.5&UTC=480&DM=0&WTS=63825879627&PRVCW=410&PRVCH=794&PR=1.5; cct=AjWIBYOoVP-Afq6gWwtx80If6yHn6iBuEVHA1XHdAKpny6Y_CVyi_MSyM94VyMWnjdYkkccVtm3czoIAtXUGQA; GC=AjWIBYOoVP-Afq6gWwtx80If6yHn6iBuEVHA1XHdAKpR3Y_D9Ytcks4Ht6XhadXk75dvhzP4YOUS0UmoEyqyxw' \ - -H 'dnt: 1' \ - -H 'sec-ch-ua: "Chromium";v="116", "Not)A;Brand";v="24", "Microsoft Edge";v="116"' \ - -H 'sec-ch-ua-arch: "x86"' \ - -H 'sec-ch-ua-bitness: "64"' \ - -H 'sec-ch-ua-full-version: "116.0.1938.29"' \ - -H 'sec-ch-ua-full-version-list: "Chromium";v="116.0.5845.42", "Not)A;Brand";v="24.0.0.0", "Microsoft Edge";v="116.0.1938.29"' \ - -H 'sec-ch-ua-mobile: ?0' \ - -H 'sec-ch-ua-model: ""' \ - -H 'sec-ch-ua-platform: "Windows"' \ - -H 'sec-ch-ua-platform-version: "15.0.0"' \ - -H 'sec-fetch-dest: document' \ - -H 'sec-fetch-mode: navigate' \ - -H 'sec-fetch-site: none' \ - -H 'sec-fetch-user: ?1' \ - -H 'sec-ms-gec: B3F47AD4A283CAB374C0451C46AAFD147C6A4DACAFF6A1C13F34B2C72B024494' \ - -H 'sec-ms-gec-version: 1-116.0.1938.29' \ - -H 'upgrade-insecure-requests: 1' \ - -H 'user-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36 Edg/116.0.0.0' \ - -H 'x-client-data: eyIxIjoiMiIsIjEwIjoiXCJTMGg3R05HOTF2aDQ1TUZSUnZ5NHN2akRmMWdlaVJKenNxNlA3aU1WbnF3PVwiIiwiMiI6IjEiLCIzIjoiMSIsIjQiOiIyMTU4ODQ5NTM4MjY4OTM5NTA3IiwiNSI6IlwiSm9GUWpPTDk3OS9MbkRRZnlCd2N1M2FsOUN3eTZTQmdaMGNYMXBtOWVMZz1cIiIsIjYiOiJiZXRhIiwiNyI6IjE4MDM4ODYyNjQzNSIsIjkiOiJkZXNrdG9wIn0=' \ - -H 'x-edge-shopping-flag: 1' \ - --compressed -``` -
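The curl text above is the browser-side format. The `BING_HEADER` value expected by the server is the base64 form shown in the next block, and judging from that sample it appears to be simply a base64 encoding of the flattened, one-line curl command. The short Python sketch below illustrates that assumption; it is not part of this repository, and the placeholder command is heavily shortened. If in doubt, use the converter page linked above instead.

```python
import base64

# Placeholder: paste the full curl command copied from the browser here.
curl_cmd = r"""curl 'https://www.bing.com/turing/captcha/challenge' \
  -H 'cookie: ...' \
  --compressed"""

# Collapse the line continuations into a single line, then base64-encode it.
flattened = " ".join(curl_cmd.split())
print(base64.b64encode(flattened.encode("utf-8")).decode("ascii"))
```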
      - -
      -转成base64之后的格式(BING_HEADER只能使用 base64 之后的格式) - -``` -Y3VybCAnaHR0cHM6Ly93d3cuYmluZy5jb20vdHVyaW5nL2NvbnZlcnNhdGlvbi9jcmVhdGUnIFwgICAtSCAnYXV0aG9yaXR5OiB3d3cuYmluZy5jb20nIFwgICAtSCAnYWNjZXB0OiB0ZXh0L2h0bWwsYXBwbGljYXRpb24veGh0bWwreG1sLGFwcGxpY2F0aW9uL3htbDtxPTAuOSxpbWFnZS93ZWJwLGltYWdlL2FwbmcsKi8qO3E9MC44LGFwcGxpY2F0aW9uL3NpZ25lZC1leGNoYW5nZTt2PWIzO3E9MC43JyBcICAgLUggJ2FjY2VwdC1sYW5ndWFnZTogemgtQ04semg7cT0wLjksZW47cT0wLjgsZW4tR0I7cT0wLjcsZW4tVVM7cT0wLjYnIFwgICAtSCAnY2FjaGUtY29udHJvbDogbWF4LWFnZT0wJyBcICAgLUggJ2Nvb2tpZTogTWljcm9zb2Z0QXBwbGljYXRpb25zVGVsZW1ldHJ5RGV2aWNlSWQ9MzM5OWMwMDQtZmQwZS00OGVjLWJiOTItZDgyYTI3YjJiYmQ0OyBfRURHRV9WPTE7IFNSQ0hEPUFGPU5PRk9STTsgU1JDSFVJRD1WPTImR1VJRD0yOUVCRERBNEU2Njc0MzI5QUNDRjFBMEE0MjNDM0U5OCZkbW5jaGc9MTsgX1VSPVFTPTAmVFFTPTA7IF9IUFZOPUNTPWV5SlFiaUk2ZXlKRGJpSTZNU3dpVTNRaU9qQXNJbEZ6SWpvd0xDSlFjbTlrSWpvaVVDSjlMQ0pUWXlJNmV5SkRiaUk2TVN3aVUzUWlPakFzSWxGeklqb3dMQ0pRY205a0lqb2lTQ0o5TENKUmVpSTZleUpEYmlJNk1Td2lVM1FpT2pBc0lsRnpJam93TENKUWNtOWtJam9pVkNKOUxDSkJjQ0k2ZEhKMVpTd2lUWFYwWlNJNmRISjFaU3dpVEdGa0lqb2lNakF5TXkwd055MHlOVlF3TURvd01Eb3dNRm9pTENKSmIzUmtJam93TENKSGQySWlPakFzSWtSbWRDSTZiblZzYkN3aVRYWnpJam93TENKR2JIUWlPakFzSWtsdGNDSTZNbjA9OyBfUndCZj1pbHQ9MSZpaHBkPTEmaXNwZD0wJnJjPTAmcmI9MCZnYj0wJnJnPTIwMCZwYz0wJm10dT0wJnJiYj0wJmc9MCZjaWQ9JmNsbz0wJnY9MSZsPTIwMjMtMDctMjVUMDc6MDA6MDAuMDAwMDAwMFombGZ0PTAwMDEtMDEtMDFUMDA6MDA6MDAuMDAwMDAwMCZhb2Y9MCZvPTImcD0mYz0mdD0wJnM9MDAwMS0wMS0wMVQwMDowMDowMC4wMDAwMDAwKzAwOjAwJnRzPTIwMjMtMDctMjVUMTE6MDA6MzEuNzExMTU0OCswMDowMCZyd3JlZD0wJndscz0mbGthPTAmbGt0PTAmVEg9JmRjaT0wOyBBTk9OPUE9MDA0M0M2NTkwRUE4MDhFRDZFMzk1MDU5RkZGRkZGRkYmRT0xYzhiJlc9MTsgTkFQPVY9MS45JkU9MWMzMSZDPURuYU1TYkROXzRlZlpfeFhxQkYzRGFvcmpyNTNrWXFZb2FQOFlIc3Vwam1pWG55c1g3YTM3QSZXPTE7IFBQTFN0YXRlPTE7IEtpZXZSUFNTZWNBdXRoPUZBQlNCQlJhVE9KSUx0RnNNa3BMVldTRzZBTjZDL3N2UndObUFBQUVnQUFBQ01HVUE3RUdWU2pHRUFRQkdIdE5zYzVzTkw3dW5tSnNmUEoydDZpbWZvNEJlVUpsQWlhM0lwTVR0TVV5NFBVL0M1UUF6Ukk1cE9EdHNJZWUwK2JsZ2xsWHQvNUlpV3dHandtZGhpdnNGTTU5N3BSUGtqQVJQZndzUGhOTFBOYkpyQ1BOUEhkamU0SXM3OE1uQ0FEWHc2L05CcTJGTDhWMi9ieXcyZkg2SXVBTUQyTXZOL1Z2cXBFYTlaeGlEalp0RU5qNEhFajBtTzJTZ3pqZnlFaFZBa2p2em5KcVUycncvUTJ0SG1YOTROQU0ya3psektGL2hXUGhDQ1VtdThJSEx2Q25IRFM2bVNwdHZKRERQL3NwM292dHpPWGtQMW1sTS9YanU1ZnRlc1V2Y2NWRVFHZmZYT1JhMWRFNWhFTWJLSWlLWHoxdERkZHVTWEUxOWc5LyttUk1BamFRaHB3aEk4WG1pbENUeDFhZGIxTGw1cUsrVmpDOUdOZkVaemNic0dCUFZhT2wrYW5HOHJFTXErWG5oam83SitOcVROb2xhdkhnY3VWOGtKc0NlSlpJZ2VkMzNVQThlT1plRm8rd0FFQ01ndXhNb1NxZ3BHSCtzdGhxeW52RC9GSkQ2ci90aVUyTjN1cVZxOE5FOFYzN2Fzck42VDE0WjBGR0JKT2U2RVQxK1BHQXBtM3MxMU9ZOS94aEZFQjlUNUJFUFVHRWJ2UmNMY1cybmNGUVgwRVUreHdlaVBxbzFRMWhOVWcvZEN0U0krbFo3YzJIOFhoZWVQWmF2WjBUSlE4b05DU0F1S2lUcUptSTBmVkdwd2JYd2ZhQURrRWlwdWF3ejNmSXVNSkJOZ01VME90QTdIbTU5djJmR0xJQnV2aTZZZUtTNkdnVmszQklQZitQL2VLYWh3b3pyeFFaYUZub0hUU3FNa3ZjdDd4Q1A0YXRCUk9mWEtmNVd3MENjRktwKzJXWDlCSXNrVE9vMmpqazZiQXl5WUorRWxVQjFmZ0xLTms1bS9ZU01jOWlZQ0xJQk1JR044RjBZdnkzdFo3Y3ZoN1VlNUtsbzk4VVMvSStuVzFHN1pKTUhSZ1VPOGg4bHBuZUhxRU1lZ0tkOGd5bk80VkY3UnBDakprdW5EbVcwVGErUmtYQVA2MTlwZzBkcUhNRmtvT2drbk43OG9CYkdUVjZmSlVLb3R2K3ZpNjFrTGhBZVhaR1dvSEdDUlhoMndVQzZZZ2ZQZ0tBNkVTUk5IdEZuN0U1QjNISHBMYzVyVk1EU05oS1pZZmRodXBWNEV6ZjYrNURoTWNaTFpoaTBraytpdkRpTjFnZEhsVnRTTjU1eHB2ZitjK1haRHpSMHVoZ2N2Z3kwTEFibXpnazZ5NFdiWUgrTFFzTXB6Tk5qK2FDNzJ2TWlXb3ZXcktoOWpZNE1ZQ21kZ3hzUy9za1B0TGRwMThtdWlFSVJYVGJaUUdVbWh4RnBKQUliQklzQ3NjTXB6TDBCZ2V1anhVd001d3I3OVNkOXI0eHdiZ1NNd21CbEJmVUhSVkJkTnlnOGZlZXBlSmJDUzYzbkQ2ZUhPdUxxTVJzUElpbzN3L2tpL0VBYTkyVVVFaVplYXZMc01VRC95L3FBdldVZHpkUDVZK0MvVE0rQ01HUy9rR0w0TEVkWS8yOE1RZVR2VTFxdjFYMjFrUXQyYWlhajNwUFZMMzZoQXp4YmNMZ3FjTW85b3ltRFJ5ODdr
ZENYVy8rZzRvS0x0TWg2Zm0vRzZXNlkvQjAxSmx4b2h5eXZ1ZUhRSUc1NTd1emtFa1RKM0ZuT1ZPRFNLQktwYjNXWjY1ckV4ZlY3MXpTWmEyNUYzR21wYUlHNkhpWXJYMllZaFFBa0lFOXBLRVFCSGJud0h1d05ER290dFpUWFp3PTsgV0xTPUM9OWRmM2Y5ZDg1MThmYWUxOSZOPXdlbjsgV0xJRD1wR1k4SGdXQ3U0cDVYWUNPazJvYTArREJkZnRrTVVmbU5JbjhYdFNqU1RLc2d2L0lsN0dVbFlzMEpwamYvRTEyalpNZ1Y3eDQ0RHkzZlhPZ2pqVW9KeDdZL0NsTHJMaHNrMjBUSGtzSkpvST07IF9FREdFX1M9Rj0xJlNJRD0xN0NGNkVFMDA2NDI2NDQ4MjEzQzdEQjkwNzQzNjU4OCZta3Q9emgtQ047IE1VSUQ9MjI1NjIxMDkzRDhBNkMyNzMwMTYzMjQxM0MwRTZEMDg7IE1VSURCPTIyNTYyMTA5M0Q4QTZDMjczMDE2MzI0MTNDMEU2RDA4OyBTVUlEPUE7IFNOUkhPUD1JPSZUUz07IF9VPW5HeXpLUXJ1RXNEd0xpdTY1ZlpGSUc2ZTEyaGYybHdUSm1yb1dfX2s4am9VSklLbUczT0lqYXlYS0dXOWRDVlIzc05oRjc2bUVWeHlXNnlqVUdQb2RPZmp0U2EzczNKX0R4TU9yRUsxQnFYQ09CSTliQzY2c3BBSUFTVjdwcnNZRmxWQUp6NzNqVk5FTnBfdEJ1YkxISnk2RWJUMEJLUmU0QWpyWWtILTl1TW5tQ0tCOFpteWc7IF9TUz1TSUQ9MTdDRjZFRTAwNjQyNjQ0ODIxM0M3REI5MDc0MzY1ODgmUj0wJlJCPTAmR0I9MCZSRz0yMDAmUlA9MCZQQz1VNTMxOyBTUkNIUz1QQz1VNTMxOyBVU1JMT0M9SFM9MSZFTE9DPUxBVD0yMi41MDE1Mjk2OTM2MDM1MTZ8TE9OPTExMy45MjYzNjg3MTMzNzg5fE49JUU1JThEJTk3JUU1JUIxJUIxJUU1JThDJUJBJUVGJUJDJThDJUU1JUI5JUJGJUU0JUI4JTlDJUU3JTlDJTgxfEVMVD0yfCZDTE9DPUxBVD0yMi41MDE1MzAyOTA0NjQ2MXxMT049MTEzLjkyNjM3MDcwNjMyOTI4fEE9NzMzLjQ0NjQ1ODYxMjA4MzJ8VFM9MjMwNzI2MTUxMDM0fFNSQz1XOyBTUkNIVVNSPURPQj0yMDIzMDcyNSZUPTE2OTAzODQ5MDgwMDAmUE9FWD1XOyBpcHY2PWhpdD0xNjkwMzg4NTA5OTc0JnQ9NjsgU1JDSEhQR1VTUj1IVj0xNjkwMzg0OTQ1JlNSQ0hMQU5HPXpoLUhhbnMmUFY9MTUuMC4wJkJSVz1NVyZCUkg9TVQmQ1c9NDEwJkNIPTc5NCZTQ1c9NDEwJlNDSD03OTQmRFBSPTEuNSZVVEM9NDgwJkRNPTAmV1RTPTYzODI1ODc5NjI3JlBSVkNXPTQxMCZQUlZDSD03OTQmUFI9MS41OyBjY3Q9QWpXSUJZT29WUC1BZnE2Z1d3dHg4MElmNnlIbjZpQnVFVkhBMVhIZEFLcG55NllfQ1Z5aV9NU3lNOTRWeU1XbmpkWWtrY2NWdG0zY3pvSUF0WFVHUUE7IEdDPUFqV0lCWU9vVlAtQWZxNmdXd3R4ODBJZjZ5SG42aUJ1RVZIQTFYSGRBS3BSM1lfRDlZdGNrczRIdDZYaGFkWGs3NWR2aHpQNFlPVVMwVW1vRXlxeXh3JyBcICAgLUggJ2RudDogMScgXCAgIC1IICdzZWMtY2gtdWE6ICJDaHJvbWl1bSI7dj0iMTE2IiwgIk5vdClBO0JyYW5kIjt2PSIyNCIsICJNaWNyb3NvZnQgRWRnZSI7dj0iMTE2IicgXCAgIC1IICdzZWMtY2gtdWEtYXJjaDogIng4NiInIFwgICAtSCAnc2VjLWNoLXVhLWJpdG5lc3M6ICI2NCInIFwgICAtSCAnc2VjLWNoLXVhLWZ1bGwtdmVyc2lvbjogIjExNi4wLjE5MzguMjkiJyBcICAgLUggJ3NlYy1jaC11YS1mdWxsLXZlcnNpb24tbGlzdDogIkNocm9taXVtIjt2PSIxMTYuMC41ODQ1LjQyIiwgIk5vdClBO0JyYW5kIjt2PSIyNC4wLjAuMCIsICJNaWNyb3NvZnQgRWRnZSI7dj0iMTE2LjAuMTkzOC4yOSInIFwgICAtSCAnc2VjLWNoLXVhLW1vYmlsZTogPzAnIFwgICAtSCAnc2VjLWNoLXVhLW1vZGVsOiAiIicgXCAgIC1IICdzZWMtY2gtdWEtcGxhdGZvcm06ICJXaW5kb3dzIicgXCAgIC1IICdzZWMtY2gtdWEtcGxhdGZvcm0tdmVyc2lvbjogIjE1LjAuMCInIFwgICAtSCAnc2VjLWZldGNoLWRlc3Q6IGRvY3VtZW50JyBcICAgLUggJ3NlYy1mZXRjaC1tb2RlOiBuYXZpZ2F0ZScgXCAgIC1IICdzZWMtZmV0Y2gtc2l0ZTogbm9uZScgXCAgIC1IICdzZWMtZmV0Y2gtdXNlcjogPzEnIFwgICAtSCAnc2VjLW1zLWdlYzogQjNGNDdBRDRBMjgzQ0FCMzc0QzA0NTFDNDZBQUZEMTQ3QzZBNERBQ0FGRjZBMUMxM0YzNEIyQzcyQjAyNDQ5NCcgXCAgIC1IICdzZWMtbXMtZ2VjLXZlcnNpb246IDEtMTE2LjAuMTkzOC4yOScgXCAgIC1IICd1cGdyYWRlLWluc2VjdXJlLXJlcXVlc3RzOiAxJyBcICAgLUggJ3VzZXItYWdlbnQ6IE1vemlsbGEvNS4wIChXaW5kb3dzIE5UIDEwLjA7IFdpbjY0OyB4NjQpIEFwcGxlV2ViS2l0LzUzNy4zNiAoS0hUTUwsIGxpa2UgR2Vja28pIENocm9tZS8xMTYuMC4wLjAgU2FmYXJpLzUzNy4zNiBFZGcvMTE2LjAuMC4wJyBcICAgLUggJ3gtY2xpZW50LWRhdGE6IGV5SXhJam9pTWlJc0lqRXdJam9pWENKVE1HZzNSMDVIT1RGMmFEUTFUVVpTVW5aNU5ITjJha1JtTVdkbGFWSktlbk54TmxBM2FVMVdibkYzUFZ3aUlpd2lNaUk2SWpFaUxDSXpJam9pTVNJc0lqUWlPaUl5TVRVNE9EUTVOVE00TWpZNE9UTTVOVEEzSWl3aU5TSTZJbHdpU205R1VXcFBURGszT1M5TWJrUlJabmxDZDJOMU0yRnNPVU4zZVRaVFFtZGFNR05ZTVhCdE9XVk1aejFjSWlJc0lqWWlPaUppWlhSaElpd2lOeUk2SWpFNE1ETTRPRFl5TmpRek5TSXNJamtpT2lKa1pYTnJkRzl3SW4wPScgXCAgIC1IICd4LWVkZ2Utc2hvcHBpbmctZmxhZzogMScgXCAgIC0tY29tcHJlc3NlZA== -``` -
      - - -## 鸣谢 - - 感谢 [EdgeGPT](https://github.com/acheong08/EdgeGPT) 提供的代理 API 的方法。 - - 感谢 [Vercel AI](https://github.com/vercel-labs/ai-chatbot) 提供的基础脚手架和 [ChatHub](https://github.com/chathub-dev/chathub) [go-proxy-bingai](https://github.com/adams549659584/go-proxy-bingai) 提供的部分代码。 - - -## 答疑及交流 - - - -## License - -MIT © [LICENSE](https://github.com/weaigc/bingo/blob/main/LICENSE). - - diff --git a/spaces/user238921933/stable-diffusion-webui/javascript/imageviewer.js b/spaces/user238921933/stable-diffusion-webui/javascript/imageviewer.js deleted file mode 100644 index aac2ee82383881bd9d59a264d2cd2c823c2187c4..0000000000000000000000000000000000000000 --- a/spaces/user238921933/stable-diffusion-webui/javascript/imageviewer.js +++ /dev/null @@ -1,285 +0,0 @@ -// A full size 'lightbox' preview modal shown when left clicking on gallery previews -function closeModal() { - gradioApp().getElementById("lightboxModal").style.display = "none"; -} - -function showModal(event) { - const source = event.target || event.srcElement; - const modalImage = gradioApp().getElementById("modalImage") - const lb = gradioApp().getElementById("lightboxModal") - modalImage.src = source.src - if (modalImage.style.display === 'none') { - lb.style.setProperty('background-image', 'url(' + source.src + ')'); - } - lb.style.display = "block"; - lb.focus() - - const tabTxt2Img = gradioApp().getElementById("tab_txt2img") - const tabImg2Img = gradioApp().getElementById("tab_img2img") - // show the save button in modal only on txt2img or img2img tabs - if (tabTxt2Img.style.display != "none" || tabImg2Img.style.display != "none") { - gradioApp().getElementById("modal_save").style.display = "inline" - } else { - gradioApp().getElementById("modal_save").style.display = "none" - } - event.stopPropagation() -} - -function negmod(n, m) { - return ((n % m) + m) % m; -} - -function updateOnBackgroundChange() { - const modalImage = gradioApp().getElementById("modalImage") - if (modalImage && modalImage.offsetParent) { - let allcurrentButtons = gradioApp().querySelectorAll(".gallery-item.transition-all.\\!ring-2") - let currentButton = null - allcurrentButtons.forEach(function(elem) { - if (elem.parentElement.offsetParent) { - currentButton = elem; - } - }) - - if (currentButton?.children?.length > 0 && modalImage.src != currentButton.children[0].src) { - modalImage.src = currentButton.children[0].src; - if (modalImage.style.display === 'none') { - modal.style.setProperty('background-image', `url(${modalImage.src})`) - } - } - } -} - -function modalImageSwitch(offset) { - var allgalleryButtons = gradioApp().querySelectorAll(".gallery-item.transition-all") - var galleryButtons = [] - allgalleryButtons.forEach(function(elem) { - if (elem.parentElement.offsetParent) { - galleryButtons.push(elem); - } - }) - - if (galleryButtons.length > 1) { - var allcurrentButtons = gradioApp().querySelectorAll(".gallery-item.transition-all.\\!ring-2") - var currentButton = null - allcurrentButtons.forEach(function(elem) { - if (elem.parentElement.offsetParent) { - currentButton = elem; - } - }) - - var result = -1 - galleryButtons.forEach(function(v, i) { - if (v == currentButton) { - result = i - } - }) - - if (result != -1) { - nextButton = galleryButtons[negmod((result + offset), galleryButtons.length)] - nextButton.click() - const modalImage = gradioApp().getElementById("modalImage"); - const modal = gradioApp().getElementById("lightboxModal"); - modalImage.src = nextButton.children[0].src; - if (modalImage.style.display === 'none') { - 
modal.style.setProperty('background-image', `url(${modalImage.src})`) - } - setTimeout(function() { - modal.focus() - }, 10) - } - } -} - -function saveImage(){ - const tabTxt2Img = gradioApp().getElementById("tab_txt2img") - const tabImg2Img = gradioApp().getElementById("tab_img2img") - const saveTxt2Img = "save_txt2img" - const saveImg2Img = "save_img2img" - if (tabTxt2Img.style.display != "none") { - gradioApp().getElementById(saveTxt2Img).click() - } else if (tabImg2Img.style.display != "none") { - gradioApp().getElementById(saveImg2Img).click() - } else { - console.error("missing implementation for saving modal of this type") - } -} - -function modalSaveImage(event) { - saveImage() - event.stopPropagation() -} - -function modalNextImage(event) { - modalImageSwitch(1) - event.stopPropagation() -} - -function modalPrevImage(event) { - modalImageSwitch(-1) - event.stopPropagation() -} - -function modalKeyHandler(event) { - switch (event.key) { - case "s": - saveImage() - break; - case "ArrowLeft": - modalPrevImage(event) - break; - case "ArrowRight": - modalNextImage(event) - break; - case "Escape": - closeModal(); - break; - } -} - -function showGalleryImage() { - setTimeout(function() { - fullImg_preview = gradioApp().querySelectorAll('img.w-full.object-contain') - - if (fullImg_preview != null) { - fullImg_preview.forEach(function function_name(e) { - if (e.dataset.modded) - return; - e.dataset.modded = true; - if(e && e.parentElement.tagName == 'DIV'){ - e.style.cursor='pointer' - e.style.userSelect='none' - - var isFirefox = isFirefox = navigator.userAgent.toLowerCase().indexOf('firefox') > -1 - - // For Firefox, listening on click first switched to next image then shows the lightbox. - // If you know how to fix this without switching to mousedown event, please. - // For other browsers the event is click to make it possiblr to drag picture. - var event = isFirefox ? 
'mousedown' : 'click' - - e.addEventListener(event, function (evt) { - if(!opts.js_modal_lightbox || evt.button != 0) return; - modalZoomSet(gradioApp().getElementById('modalImage'), opts.js_modal_lightbox_initially_zoomed) - evt.preventDefault() - showModal(evt) - }, true); - } - }); - } - - }, 100); -} - -function modalZoomSet(modalImage, enable) { - if (enable) { - modalImage.classList.add('modalImageFullscreen'); - } else { - modalImage.classList.remove('modalImageFullscreen'); - } -} - -function modalZoomToggle(event) { - modalImage = gradioApp().getElementById("modalImage"); - modalZoomSet(modalImage, !modalImage.classList.contains('modalImageFullscreen')) - event.stopPropagation() -} - -function modalTileImageToggle(event) { - const modalImage = gradioApp().getElementById("modalImage"); - const modal = gradioApp().getElementById("lightboxModal"); - const isTiling = modalImage.style.display === 'none'; - if (isTiling) { - modalImage.style.display = 'block'; - modal.style.setProperty('background-image', 'none') - } else { - modalImage.style.display = 'none'; - modal.style.setProperty('background-image', `url(${modalImage.src})`) - } - - event.stopPropagation() -} - -function galleryImageHandler(e) { - if (e && e.parentElement.tagName == 'BUTTON') { - e.onclick = showGalleryImage; - } -} - -onUiUpdate(function() { - fullImg_preview = gradioApp().querySelectorAll('img.w-full') - if (fullImg_preview != null) { - fullImg_preview.forEach(galleryImageHandler); - } - updateOnBackgroundChange(); -}) - -document.addEventListener("DOMContentLoaded", function() { - const modalFragment = document.createDocumentFragment(); - const modal = document.createElement('div') - modal.onclick = closeModal; - modal.id = "lightboxModal"; - modal.tabIndex = 0 - modal.addEventListener('keydown', modalKeyHandler, true) - - const modalControls = document.createElement('div') - modalControls.className = 'modalControls gradio-container'; - modal.append(modalControls); - - const modalZoom = document.createElement('span') - modalZoom.className = 'modalZoom cursor'; - modalZoom.innerHTML = '⤡' - modalZoom.addEventListener('click', modalZoomToggle, true) - modalZoom.title = "Toggle zoomed view"; - modalControls.appendChild(modalZoom) - - const modalTileImage = document.createElement('span') - modalTileImage.className = 'modalTileImage cursor'; - modalTileImage.innerHTML = '⊞' - modalTileImage.addEventListener('click', modalTileImageToggle, true) - modalTileImage.title = "Preview tiling"; - modalControls.appendChild(modalTileImage) - - const modalSave = document.createElement("span") - modalSave.className = "modalSave cursor" - modalSave.id = "modal_save" - modalSave.innerHTML = "🖫" - modalSave.addEventListener("click", modalSaveImage, true) - modalSave.title = "Save Image(s)" - modalControls.appendChild(modalSave) - - const modalClose = document.createElement('span') - modalClose.className = 'modalClose cursor'; - modalClose.innerHTML = '×' - modalClose.onclick = closeModal; - modalClose.title = "Close image viewer"; - modalControls.appendChild(modalClose) - - const modalImage = document.createElement('img') - modalImage.id = 'modalImage'; - modalImage.onclick = closeModal; - modalImage.tabIndex = 0 - modalImage.addEventListener('keydown', modalKeyHandler, true) - modal.appendChild(modalImage) - - const modalPrev = document.createElement('a') - modalPrev.className = 'modalPrev'; - modalPrev.innerHTML = '❮' - modalPrev.tabIndex = 0 - modalPrev.addEventListener('click', modalPrevImage, true); - 
modalPrev.addEventListener('keydown', modalKeyHandler, true) - modal.appendChild(modalPrev) - - const modalNext = document.createElement('a') - modalNext.className = 'modalNext'; - modalNext.innerHTML = '❯' - modalNext.tabIndex = 0 - modalNext.addEventListener('click', modalNextImage, true); - modalNext.addEventListener('keydown', modalKeyHandler, true) - - modal.appendChild(modalNext) - - - gradioApp().getRootNode().appendChild(modal) - - document.body.appendChild(modalFragment); - -}); diff --git a/spaces/vishnu0001/text2mesh/shap_e/models/transmitter/pc_encoder.py b/spaces/vishnu0001/text2mesh/shap_e/models/transmitter/pc_encoder.py deleted file mode 100644 index 7f62b7f091068b772887c6ce1b986cf45491ebe2..0000000000000000000000000000000000000000 --- a/spaces/vishnu0001/text2mesh/shap_e/models/transmitter/pc_encoder.py +++ /dev/null @@ -1,426 +0,0 @@ -from abc import abstractmethod -from typing import Any, Dict, Iterable, List, Optional, Tuple, Union - -import numpy as np -import torch.distributed as dist -import torch.nn as nn -import torch.nn.functional as F -from PIL import Image -from torch import torch - -from shap_e.models.generation.perceiver import SimplePerceiver -from shap_e.models.generation.transformer import Transformer -from shap_e.models.nn.encoding import PosEmbLinear -from shap_e.rendering.view_data import ProjectiveCamera -from shap_e.util.collections import AttrDict - -from .base import VectorEncoder -from .channels_encoder import DatasetIterator, sample_pcl_fps - - -class PointCloudTransformerEncoder(VectorEncoder): - """ - Encode point clouds using a transformer model with an extra output - token used to extract a latent vector. - """ - - def __init__( - self, - *, - device: torch.device, - dtype: torch.dtype, - param_shapes: Dict[str, Tuple[int]], - params_proj: Dict[str, Any], - latent_bottleneck: Optional[Dict[str, Any]] = None, - d_latent: int = 512, - latent_ctx: int = 1, - input_channels: int = 6, - n_ctx: int = 1024, - width: int = 512, - layers: int = 12, - heads: int = 8, - init_scale: float = 0.25, - pos_emb: Optional[str] = None, - ): - super().__init__( - device=device, - param_shapes=param_shapes, - params_proj=params_proj, - latent_bottleneck=latent_bottleneck, - d_latent=d_latent, - ) - self.input_channels = input_channels - self.n_ctx = n_ctx - self.latent_ctx = latent_ctx - - assert d_latent % latent_ctx == 0 - - self.ln_pre = nn.LayerNorm(width, device=device, dtype=dtype) - self.backbone = Transformer( - device=device, - dtype=dtype, - n_ctx=n_ctx + latent_ctx, - width=width, - layers=layers, - heads=heads, - init_scale=init_scale, - ) - self.ln_post = nn.LayerNorm(width, device=device, dtype=dtype) - self.register_parameter( - "output_tokens", - nn.Parameter(torch.randn(latent_ctx, width, device=device, dtype=dtype)), - ) - - self.input_proj = PosEmbLinear(pos_emb, input_channels, width, device=device, dtype=dtype) - self.output_proj = nn.Linear(width, d_latent // latent_ctx, device=device, dtype=dtype) - - def encode_to_vector(self, batch: AttrDict, options: Optional[AttrDict] = None) -> torch.Tensor: - _ = options - points = batch.points.permute(0, 2, 1) # NCL -> NLC - h = self.input_proj(points) - h = torch.cat([h, self.output_tokens[None].repeat(len(h), 1, 1)], dim=1) - h = self.ln_pre(h) - h = self.backbone(h) - h = self.ln_post(h) - h = h[:, self.n_ctx :] - h = self.output_proj(h).flatten(1) - return h - - -class PerceiverEncoder(VectorEncoder): - """ - Encode point clouds using a perceiver model with an extra output - token used to extract a 
latent vector. - """ - - def __init__( - self, - *, - device: torch.device, - dtype: torch.dtype, - param_shapes: Dict[str, Tuple[int]], - params_proj: Dict[str, Any], - latent_bottleneck: Optional[Dict[str, Any]] = None, - d_latent: int = 512, - latent_ctx: int = 1, - width: int = 512, - layers: int = 12, - xattn_layers: int = 1, - heads: int = 8, - init_scale: float = 0.25, - # Training hparams - inner_batch_size: int = 1, - data_ctx: int = 1, - min_unrolls: int, - max_unrolls: int, - ): - super().__init__( - device=device, - param_shapes=param_shapes, - params_proj=params_proj, - latent_bottleneck=latent_bottleneck, - d_latent=d_latent, - ) - self.width = width - self.device = device - self.dtype = dtype - self.latent_ctx = latent_ctx - - self.inner_batch_size = inner_batch_size - self.data_ctx = data_ctx - self.min_unrolls = min_unrolls - self.max_unrolls = max_unrolls - - self.encoder = SimplePerceiver( - device=device, - dtype=dtype, - n_ctx=self.data_ctx + self.latent_ctx, - n_data=self.inner_batch_size, - width=width, - layers=xattn_layers, - heads=heads, - init_scale=init_scale, - ) - self.processor = Transformer( - device=device, - dtype=dtype, - n_ctx=self.data_ctx + self.latent_ctx, - layers=layers - xattn_layers, - width=width, - heads=heads, - init_scale=init_scale, - ) - self.ln_pre = nn.LayerNorm(width, device=device, dtype=dtype) - self.ln_post = nn.LayerNorm(width, device=device, dtype=dtype) - self.register_parameter( - "output_tokens", - nn.Parameter(torch.randn(self.latent_ctx, width, device=device, dtype=dtype)), - ) - self.output_proj = nn.Linear(width, d_latent // self.latent_ctx, device=device, dtype=dtype) - - @abstractmethod - def get_h_and_iterator( - self, batch: AttrDict, options: Optional[AttrDict] = None - ) -> Tuple[torch.Tensor, Iterable]: - """ - :return: a tuple of ( - the initial output tokens of size [batch_size, data_ctx + latent_ctx, width], - an iterator over the given data - ) - """ - - def encode_to_vector(self, batch: AttrDict, options: Optional[AttrDict] = None) -> torch.Tensor: - h, it = self.get_h_and_iterator(batch, options=options) - n_unrolls = self.get_n_unrolls() - - for _ in range(n_unrolls): - data = next(it) - h = self.encoder(h, data) - h = self.processor(h) - - h = self.output_proj(self.ln_post(h[:, -self.latent_ctx :])) - return h.flatten(1) - - def get_n_unrolls(self): - if self.training: - n_unrolls = torch.randint( - self.min_unrolls, self.max_unrolls + 1, size=(), device=self.device - ) - dist.broadcast(n_unrolls, 0) - n_unrolls = n_unrolls.item() - else: - n_unrolls = self.max_unrolls - return n_unrolls - - -class PointCloudPerceiverEncoder(PerceiverEncoder): - """ - Encode point clouds using a transformer model with an extra output - token used to extract a latent vector. 
- """ - - def __init__( - self, - *, - cross_attention_dataset: str = "pcl", - fps_method: str = "fps", - # point cloud hyperparameters - input_channels: int = 6, - pos_emb: Optional[str] = None, - # multiview hyperparameters - image_size: int = 256, - patch_size: int = 32, - pose_dropout: float = 0.0, - use_depth: bool = False, - max_depth: float = 5.0, - # other hyperparameters - **kwargs, - ): - super().__init__(**kwargs) - assert cross_attention_dataset in ("pcl", "multiview") - assert fps_method in ("fps", "first") - self.cross_attention_dataset = cross_attention_dataset - self.fps_method = fps_method - self.input_channels = input_channels - self.input_proj = PosEmbLinear( - pos_emb, input_channels, self.width, device=self.device, dtype=self.dtype - ) - if self.cross_attention_dataset == "multiview": - self.image_size = image_size - self.patch_size = patch_size - self.pose_dropout = pose_dropout - self.use_depth = use_depth - self.max_depth = max_depth - pos_ctx = (image_size // patch_size) ** 2 - self.register_parameter( - "pos_emb", - nn.Parameter( - torch.randn( - pos_ctx * self.inner_batch_size, - self.width, - device=self.device, - dtype=self.dtype, - ) - ), - ) - self.patch_emb = nn.Conv2d( - in_channels=3 if not use_depth else 4, - out_channels=self.width, - kernel_size=patch_size, - stride=patch_size, - device=self.device, - dtype=self.dtype, - ) - self.camera_emb = nn.Sequential( - nn.Linear( - 3 * 4 + 1, self.width, device=self.device, dtype=self.dtype - ), # input size is for origin+x+y+z+fov - nn.GELU(), - nn.Linear(self.width, 2 * self.width, device=self.device, dtype=self.dtype), - ) - - def get_h_and_iterator( - self, batch: AttrDict, options: Optional[AttrDict] = None - ) -> Tuple[torch.Tensor, Iterable]: - """ - :return: a tuple of ( - the initial output tokens of size [batch_size, data_ctx + latent_ctx, width], - an iterator over the given data - ) - """ - options = AttrDict() if options is None else options - - # Build the initial query embeddings - points = batch.points.permute(0, 2, 1) # NCL -> NLC - fps_samples = self.sample_pcl_fps(points) - batch_size = points.shape[0] - data_tokens = self.input_proj(fps_samples) - latent_tokens = self.output_tokens.unsqueeze(0).repeat(batch_size, 1, 1) - h = self.ln_pre(torch.cat([data_tokens, latent_tokens], dim=1)) - assert h.shape == (batch_size, self.data_ctx + self.latent_ctx, self.width) - - # Build the dataset embedding iterator - dataset_fn = { - "pcl": self.get_pcl_dataset, - "multiview": self.get_multiview_dataset, - }[self.cross_attention_dataset] - it = dataset_fn(batch, options=options) - - return h, it - - def sample_pcl_fps(self, points: torch.Tensor) -> torch.Tensor: - return sample_pcl_fps(points, data_ctx=self.data_ctx, method=self.fps_method) - - def get_pcl_dataset( - self, batch: AttrDict, options: Optional[AttrDict[str, Any]] = None - ) -> Iterable: - _ = options - dataset_emb = self.input_proj(batch.points.permute(0, 2, 1)) # NCL -> NLC - assert dataset_emb.shape[1] >= self.inner_batch_size - return iter(DatasetIterator(dataset_emb, batch_size=self.inner_batch_size)) - - def get_multiview_dataset( - self, batch: AttrDict, options: Optional[AttrDict] = None - ) -> Iterable: - _ = options - - dataset_emb = self.encode_views(batch) - batch_size, num_views, n_patches, width = dataset_emb.shape - - assert num_views >= self.inner_batch_size - - it = iter(DatasetIterator(dataset_emb, batch_size=self.inner_batch_size)) - - def gen(): - while True: - examples = next(it) - assert examples.shape == (batch_size, 
self.inner_batch_size, n_patches, self.width) - views = examples.reshape(batch_size, -1, width) + self.pos_emb - yield views - - return gen() - - def encode_views(self, batch: AttrDict) -> torch.Tensor: - """ - :return: [batch_size, num_views, n_patches, width] - """ - all_views = self.views_to_tensor(batch.views).to(self.device) - if self.use_depth: - all_views = torch.cat([all_views, self.depths_to_tensor(batch.depths)], dim=2) - all_cameras = self.cameras_to_tensor(batch.cameras).to(self.device) - - batch_size, num_views, _, _, _ = all_views.shape - - views_proj = self.patch_emb( - all_views.reshape([batch_size * num_views, *all_views.shape[2:]]) - ) - views_proj = ( - views_proj.reshape([batch_size, num_views, self.width, -1]) - .permute(0, 1, 3, 2) - .contiguous() - ) # [batch_size x num_views x n_patches x width] - - # [batch_size, num_views, 1, 2 * width] - camera_proj = self.camera_emb(all_cameras).reshape( - [batch_size, num_views, 1, self.width * 2] - ) - pose_dropout = self.pose_dropout if self.training else 0.0 - mask = torch.rand(batch_size, 1, 1, 1, device=views_proj.device) >= pose_dropout - camera_proj = torch.where(mask, camera_proj, torch.zeros_like(camera_proj)) - scale, shift = camera_proj.chunk(2, dim=3) - views_proj = views_proj * (scale + 1.0) + shift - return views_proj - - def views_to_tensor(self, views: Union[torch.Tensor, List[List[Image.Image]]]) -> torch.Tensor: - """ - Returns a [batch x num_views x 3 x size x size] tensor in the range [-1, 1]. - """ - if isinstance(views, torch.Tensor): - return views - - tensor_batch = [] - num_views = len(views[0]) - for inner_list in views: - assert len(inner_list) == num_views - inner_batch = [] - for img in inner_list: - img = img.resize((self.image_size,) * 2).convert("RGB") - inner_batch.append( - torch.from_numpy(np.array(img)).to(device=self.device, dtype=torch.float32) - / 127.5 - - 1 - ) - tensor_batch.append(torch.stack(inner_batch, dim=0)) - return torch.stack(tensor_batch, dim=0).permute(0, 1, 4, 2, 3) - - def depths_to_tensor( - self, depths: Union[torch.Tensor, List[List[Image.Image]]] - ) -> torch.Tensor: - """ - Returns a [batch x num_views x 1 x size x size] tensor in the range [-1, 1]. - """ - if isinstance(depths, torch.Tensor): - return depths - - tensor_batch = [] - num_views = len(depths[0]) - for inner_list in depths: - assert len(inner_list) == num_views - inner_batch = [] - for arr in inner_list: - tensor = torch.from_numpy(arr).clamp(max=self.max_depth) / self.max_depth - tensor = tensor * 2 - 1 - tensor = F.interpolate( - tensor[None, None], - (self.image_size,) * 2, - mode="nearest", - ) - inner_batch.append(tensor.to(device=self.device, dtype=torch.float32)) - tensor_batch.append(torch.cat(inner_batch, dim=0)) - return torch.stack(tensor_batch, dim=0) - - def cameras_to_tensor( - self, cameras: Union[torch.Tensor, List[List[ProjectiveCamera]]] - ) -> torch.Tensor: - """ - Returns a [batch x num_views x 3*4+1] tensor of camera information. 
- """ - if isinstance(cameras, torch.Tensor): - return cameras - outer_batch = [] - for inner_list in cameras: - inner_batch = [] - for camera in inner_list: - inner_batch.append( - np.array( - [ - *camera.x, - *camera.y, - *camera.z, - *camera.origin, - camera.x_fov, - ] - ) - ) - outer_batch.append(np.stack(inner_batch, axis=0)) - return torch.from_numpy(np.stack(outer_batch, axis=0)).float() diff --git a/spaces/wallezen/so-vits-svc/inference/slicer.py b/spaces/wallezen/so-vits-svc/inference/slicer.py deleted file mode 100644 index afb31b7af1cdf8310ea42968d1857af6f15d73e4..0000000000000000000000000000000000000000 --- a/spaces/wallezen/so-vits-svc/inference/slicer.py +++ /dev/null @@ -1,142 +0,0 @@ -import librosa -import torch -import torchaudio - - -class Slicer: - def __init__(self, - sr: int, - threshold: float = -40., - min_length: int = 5000, - min_interval: int = 300, - hop_size: int = 20, - max_sil_kept: int = 5000): - if not min_length >= min_interval >= hop_size: - raise ValueError('The following condition must be satisfied: min_length >= min_interval >= hop_size') - if not max_sil_kept >= hop_size: - raise ValueError('The following condition must be satisfied: max_sil_kept >= hop_size') - min_interval = sr * min_interval / 1000 - self.threshold = 10 ** (threshold / 20.) - self.hop_size = round(sr * hop_size / 1000) - self.win_size = min(round(min_interval), 4 * self.hop_size) - self.min_length = round(sr * min_length / 1000 / self.hop_size) - self.min_interval = round(min_interval / self.hop_size) - self.max_sil_kept = round(sr * max_sil_kept / 1000 / self.hop_size) - - def _apply_slice(self, waveform, begin, end): - if len(waveform.shape) > 1: - return waveform[:, begin * self.hop_size: min(waveform.shape[1], end * self.hop_size)] - else: - return waveform[begin * self.hop_size: min(waveform.shape[0], end * self.hop_size)] - - # @timeit - def slice(self, waveform): - if len(waveform.shape) > 1: - samples = librosa.to_mono(waveform) - else: - samples = waveform - if samples.shape[0] <= self.min_length: - return {"0": {"slice": False, "split_time": f"0,{len(waveform)}"}} - rms_list = librosa.feature.rms(y=samples, frame_length=self.win_size, hop_length=self.hop_size).squeeze(0) - sil_tags = [] - silence_start = None - clip_start = 0 - for i, rms in enumerate(rms_list): - # Keep looping while frame is silent. - if rms < self.threshold: - # Record start of silent frames. - if silence_start is None: - silence_start = i - continue - # Keep looping while frame is not silent and silence start has not been recorded. - if silence_start is None: - continue - # Clear recorded silence start if interval is not enough or clip is too short - is_leading_silence = silence_start == 0 and i > self.max_sil_kept - need_slice_middle = i - silence_start >= self.min_interval and i - clip_start >= self.min_length - if not is_leading_silence and not need_slice_middle: - silence_start = None - continue - # Need slicing. Record the range of silent frames to be removed. 
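            # The three branches below choose the cut points: a short silent run
            # (<= max_sil_kept frames) is cut once at its quietest frame; a medium run
            # (<= 2 * max_sil_kept frames) is cut near both of its ends, so at most
            # max_sil_kept silent frames are kept on each side and the middle is dropped;
            # a longer run keeps max_sil_kept frames on each side and removes the rest.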
- if i - silence_start <= self.max_sil_kept: - pos = rms_list[silence_start: i + 1].argmin() + silence_start - if silence_start == 0: - sil_tags.append((0, pos)) - else: - sil_tags.append((pos, pos)) - clip_start = pos - elif i - silence_start <= self.max_sil_kept * 2: - pos = rms_list[i - self.max_sil_kept: silence_start + self.max_sil_kept + 1].argmin() - pos += i - self.max_sil_kept - pos_l = rms_list[silence_start: silence_start + self.max_sil_kept + 1].argmin() + silence_start - pos_r = rms_list[i - self.max_sil_kept: i + 1].argmin() + i - self.max_sil_kept - if silence_start == 0: - sil_tags.append((0, pos_r)) - clip_start = pos_r - else: - sil_tags.append((min(pos_l, pos), max(pos_r, pos))) - clip_start = max(pos_r, pos) - else: - pos_l = rms_list[silence_start: silence_start + self.max_sil_kept + 1].argmin() + silence_start - pos_r = rms_list[i - self.max_sil_kept: i + 1].argmin() + i - self.max_sil_kept - if silence_start == 0: - sil_tags.append((0, pos_r)) - else: - sil_tags.append((pos_l, pos_r)) - clip_start = pos_r - silence_start = None - # Deal with trailing silence. - total_frames = rms_list.shape[0] - if silence_start is not None and total_frames - silence_start >= self.min_interval: - silence_end = min(total_frames, silence_start + self.max_sil_kept) - pos = rms_list[silence_start: silence_end + 1].argmin() + silence_start - sil_tags.append((pos, total_frames + 1)) - # Apply and return slices. - if len(sil_tags) == 0: - return {"0": {"slice": False, "split_time": f"0,{len(waveform)}"}} - else: - chunks = [] - # The first segment is not the beginning of the audio. - if sil_tags[0][0]: - chunks.append( - {"slice": False, "split_time": f"0,{min(waveform.shape[0], sil_tags[0][0] * self.hop_size)}"}) - for i in range(0, len(sil_tags)): - # Mark audio segment. Skip the first segment. - if i: - chunks.append({"slice": False, - "split_time": f"{sil_tags[i - 1][1] * self.hop_size},{min(waveform.shape[0], sil_tags[i][0] * self.hop_size)}"}) - # Mark all mute segments - chunks.append({"slice": True, - "split_time": f"{sil_tags[i][0] * self.hop_size},{min(waveform.shape[0], sil_tags[i][1] * self.hop_size)}"}) - # The last segment is not the end. 
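            # If the last silence tag stops before the end of the audio, keep the
            # remaining tail as one more ordinary (non-silent) chunk.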
- if sil_tags[-1][1] * self.hop_size < len(waveform): - chunks.append({"slice": False, "split_time": f"{sil_tags[-1][1] * self.hop_size},{len(waveform)}"}) - chunk_dict = {} - for i in range(len(chunks)): - chunk_dict[str(i)] = chunks[i] - return chunk_dict - - -def cut(audio_path, db_thresh=-30, min_len=5000): - audio, sr = librosa.load(audio_path, sr=None) - slicer = Slicer( - sr=sr, - threshold=db_thresh, - min_length=min_len - ) - chunks = slicer.slice(audio) - return chunks - - -def chunks2audio(audio_path, chunks): - chunks = dict(chunks) - audio, sr = torchaudio.load(audio_path) - if len(audio.shape) == 2 and audio.shape[1] >= 2: - audio = torch.mean(audio, dim=0).unsqueeze(0) - audio = audio.cpu().numpy()[0] - result = [] - for k, v in chunks.items(): - tag = v["split_time"].split(",") - if tag[0] != tag[1]: - result.append((v["slice"], audio[int(tag[0]):int(tag[1])])) - return result, sr diff --git a/spaces/weizmannscience/tokenflow/tokenflow_utils.py b/spaces/weizmannscience/tokenflow/tokenflow_utils.py deleted file mode 100644 index 6b76ec6e8ccb6ee4896f9a72b1b4407197ed7e3c..0000000000000000000000000000000000000000 --- a/spaces/weizmannscience/tokenflow/tokenflow_utils.py +++ /dev/null @@ -1,448 +0,0 @@ -from typing import Type -import torch -import os - -from utils import isinstance_str, batch_cosine_sim - -def register_pivotal(diffusion_model, is_pivotal): - for _, module in diffusion_model.named_modules(): - # If for some reason this has a different name, create an issue and I'll fix it - if isinstance_str(module, "BasicTransformerBlock"): - setattr(module, "pivotal_pass", is_pivotal) - -def register_batch_idx(diffusion_model, batch_idx): - for _, module in diffusion_model.named_modules(): - # If for some reason this has a different name, create an issue and I'll fix it - if isinstance_str(module, "BasicTransformerBlock"): - setattr(module, "batch_idx", batch_idx) - - -def register_time(model, t): - conv_module = model.unet.up_blocks[1].resnets[1] - setattr(conv_module, 't', t) - down_res_dict = {0: [0, 1], 1: [0, 1], 2: [0, 1]} - up_res_dict = {1: [0, 1, 2], 2: [0, 1, 2], 3: [0, 1, 2]} - for res in up_res_dict: - for block in up_res_dict[res]: - module = model.unet.up_blocks[res].attentions[block].transformer_blocks[0].attn1 - setattr(module, 't', t) - module = model.unet.up_blocks[res].attentions[block].transformer_blocks[0].attn2 - setattr(module, 't', t) - for res in down_res_dict: - for block in down_res_dict[res]: - module = model.unet.down_blocks[res].attentions[block].transformer_blocks[0].attn1 - setattr(module, 't', t) - module = model.unet.down_blocks[res].attentions[block].transformer_blocks[0].attn2 - setattr(module, 't', t) - module = model.unet.mid_block.attentions[0].transformer_blocks[0].attn1 - setattr(module, 't', t) - module = model.unet.mid_block.attentions[0].transformer_blocks[0].attn2 - setattr(module, 't', t) - - -def load_source_latents_t(t, latents_path): - latents_t_path = os.path.join(latents_path, f'noisy_latents_{t}.pt') - assert os.path.exists(latents_t_path), f'Missing latents at t {t} path {latents_t_path}' - latents = torch.load(latents_t_path) - return latents - -def register_conv_injection(model, injection_schedule): - def conv_forward(self): - def forward(input_tensor, temb): - hidden_states = input_tensor - - hidden_states = self.norm1(hidden_states) - hidden_states = self.nonlinearity(hidden_states) - - if self.upsample is not None: - # upsample_nearest_nhwc fails with large batch sizes. 
see https://github.com/huggingface/diffusers/issues/984 - if hidden_states.shape[0] >= 64: - input_tensor = input_tensor.contiguous() - hidden_states = hidden_states.contiguous() - input_tensor = self.upsample(input_tensor) - hidden_states = self.upsample(hidden_states) - elif self.downsample is not None: - input_tensor = self.downsample(input_tensor) - hidden_states = self.downsample(hidden_states) - - hidden_states = self.conv1(hidden_states) - - if temb is not None: - temb = self.time_emb_proj(self.nonlinearity(temb))[:, :, None, None] - - if temb is not None and self.time_embedding_norm == "default": - hidden_states = hidden_states + temb - - hidden_states = self.norm2(hidden_states) - - if temb is not None and self.time_embedding_norm == "scale_shift": - scale, shift = torch.chunk(temb, 2, dim=1) - hidden_states = hidden_states * (1 + scale) + shift - - hidden_states = self.nonlinearity(hidden_states) - - hidden_states = self.dropout(hidden_states) - hidden_states = self.conv2(hidden_states) - if self.injection_schedule is not None and (self.t in self.injection_schedule or self.t == 1000): - source_batch_size = int(hidden_states.shape[0] // 3) - # inject unconditional - hidden_states[source_batch_size:2 * source_batch_size] = hidden_states[:source_batch_size] - # inject conditional - hidden_states[2 * source_batch_size:] = hidden_states[:source_batch_size] - - if self.conv_shortcut is not None: - input_tensor = self.conv_shortcut(input_tensor) - - output_tensor = (input_tensor + hidden_states) / self.output_scale_factor - - return output_tensor - - return forward - - conv_module = model.unet.up_blocks[1].resnets[1] - conv_module.forward = conv_forward(conv_module) - setattr(conv_module, 'injection_schedule', injection_schedule) - -def register_extended_attention_pnp(model, injection_schedule): - def sa_forward(self): - to_out = self.to_out - if type(to_out) is torch.nn.modules.container.ModuleList: - to_out = self.to_out[0] - else: - to_out = self.to_out - - def forward(x, encoder_hidden_states=None): - batch_size, sequence_length, dim = x.shape - h = self.heads - n_frames = batch_size // 3 - is_cross = encoder_hidden_states is not None - encoder_hidden_states = encoder_hidden_states if is_cross else x - q = self.to_q(x) - k = self.to_k(encoder_hidden_states) - v = self.to_v(encoder_hidden_states) - - if self.injection_schedule is not None and (self.t in self.injection_schedule or self.t == 1000): - # inject unconditional - q[n_frames:2 * n_frames] = q[:n_frames] - k[n_frames:2 * n_frames] = k[:n_frames] - # inject conditional - q[2 * n_frames:] = q[:n_frames] - k[2 * n_frames:] = k[:n_frames] - - k_source = k[:n_frames] - k_uncond = k[n_frames:2 * n_frames].reshape(1, n_frames * sequence_length, -1).repeat(n_frames, 1, 1) - k_cond = k[2 * n_frames:].reshape(1, n_frames * sequence_length, -1).repeat(n_frames, 1, 1) - - v_source = v[:n_frames] - v_uncond = v[n_frames:2 * n_frames].reshape(1, n_frames * sequence_length, -1).repeat(n_frames, 1, 1) - v_cond = v[2 * n_frames:].reshape(1, n_frames * sequence_length, -1).repeat(n_frames, 1, 1) - - q_source = self.head_to_batch_dim(q[:n_frames]) - q_uncond = self.head_to_batch_dim(q[n_frames:2 * n_frames]) - q_cond = self.head_to_batch_dim(q[2 * n_frames:]) - k_source = self.head_to_batch_dim(k_source) - k_uncond = self.head_to_batch_dim(k_uncond) - k_cond = self.head_to_batch_dim(k_cond) - v_source = self.head_to_batch_dim(v_source) - v_uncond = self.head_to_batch_dim(v_uncond) - v_cond = self.head_to_batch_dim(v_cond) - - - q_src = 
q_source.view(n_frames, h, sequence_length, dim // h) - k_src = k_source.view(n_frames, h, sequence_length, dim // h) - v_src = v_source.view(n_frames, h, sequence_length, dim // h) - q_uncond = q_uncond.view(n_frames, h, sequence_length, dim // h) - k_uncond = k_uncond.view(n_frames, h, sequence_length * n_frames, dim // h) - v_uncond = v_uncond.view(n_frames, h, sequence_length * n_frames, dim // h) - q_cond = q_cond.view(n_frames, h, sequence_length, dim // h) - k_cond = k_cond.view(n_frames, h, sequence_length * n_frames, dim // h) - v_cond = v_cond.view(n_frames, h, sequence_length * n_frames, dim // h) - - out_source_all = [] - out_uncond_all = [] - out_cond_all = [] - - single_batch = n_frames<=12 - b = n_frames if single_batch else 1 - - for frame in range(0, n_frames, b): - out_source = [] - out_uncond = [] - out_cond = [] - for j in range(h): - sim_source_b = torch.bmm(q_src[frame: frame+ b, j], k_src[frame: frame+ b, j].transpose(-1, -2)) * self.scale - sim_uncond_b = torch.bmm(q_uncond[frame: frame+ b, j], k_uncond[frame: frame+ b, j].transpose(-1, -2)) * self.scale - sim_cond = torch.bmm(q_cond[frame: frame+ b, j], k_cond[frame: frame+ b, j].transpose(-1, -2)) * self.scale - - out_source.append(torch.bmm(sim_source_b.softmax(dim=-1), v_src[frame: frame+ b, j])) - out_uncond.append(torch.bmm(sim_uncond_b.softmax(dim=-1), v_uncond[frame: frame+ b, j])) - out_cond.append(torch.bmm(sim_cond.softmax(dim=-1), v_cond[frame: frame+ b, j])) - - out_source = torch.cat(out_source, dim=0) - out_uncond = torch.cat(out_uncond, dim=0) - out_cond = torch.cat(out_cond, dim=0) - if single_batch: - out_source = out_source.view(h, n_frames,sequence_length, dim // h).permute(1, 0, 2, 3).reshape(h * n_frames, sequence_length, -1) - out_uncond = out_uncond.view(h, n_frames,sequence_length, dim // h).permute(1, 0, 2, 3).reshape(h * n_frames, sequence_length, -1) - out_cond = out_cond.view(h, n_frames,sequence_length, dim // h).permute(1, 0, 2, 3).reshape(h * n_frames, sequence_length, -1) - out_source_all.append(out_source) - out_uncond_all.append(out_uncond) - out_cond_all.append(out_cond) - - out_source = torch.cat(out_source_all, dim=0) - out_uncond = torch.cat(out_uncond_all, dim=0) - out_cond = torch.cat(out_cond_all, dim=0) - - out = torch.cat([out_source, out_uncond, out_cond], dim=0) - out = self.batch_to_head_dim(out) - - return to_out(out) - - return forward - - for _, module in model.unet.named_modules(): - if isinstance_str(module, "BasicTransformerBlock"): - module.attn1.forward = sa_forward(module.attn1) - setattr(module.attn1, 'injection_schedule', []) - - res_dict = {1: [1, 2], 2: [0, 1, 2], 3: [0, 1, 2]} - # we are injecting attention in blocks 4 - 11 of the decoder, so not in the first block of the lowest resolution - for res in res_dict: - for block in res_dict[res]: - module = model.unet.up_blocks[res].attentions[block].transformer_blocks[0].attn1 - module.forward = sa_forward(module) - setattr(module, 'injection_schedule', injection_schedule) - -def register_extended_attention(model): - def sa_forward(self): - to_out = self.to_out - if type(to_out) is torch.nn.modules.container.ModuleList: - to_out = self.to_out[0] - else: - to_out = self.to_out - - def forward(x, encoder_hidden_states=None): - batch_size, sequence_length, dim = x.shape - h = self.heads - n_frames = batch_size // 3 - is_cross = encoder_hidden_states is not None - encoder_hidden_states = encoder_hidden_states if is_cross else x - q = self.to_q(x) - k = self.to_k(encoder_hidden_states) - v = 
self.to_v(encoder_hidden_states) - - k_source = k[:n_frames] - k_uncond = k[n_frames: 2*n_frames].reshape(1, n_frames * sequence_length, -1).repeat(n_frames, 1, 1) - k_cond = k[2*n_frames:].reshape(1, n_frames * sequence_length, -1).repeat(n_frames, 1, 1) - v_source = v[:n_frames] - v_uncond = v[n_frames:2*n_frames].reshape(1, n_frames * sequence_length, -1).repeat(n_frames, 1, 1) - v_cond = v[2*n_frames:].reshape(1, n_frames * sequence_length, -1).repeat(n_frames, 1, 1) - - q_source = self.head_to_batch_dim(q[:n_frames]) - q_uncond = self.head_to_batch_dim(q[n_frames: 2*n_frames]) - q_cond = self.head_to_batch_dim(q[2 * n_frames:]) - k_source = self.head_to_batch_dim(k_source) - k_uncond = self.head_to_batch_dim(k_uncond) - k_cond = self.head_to_batch_dim(k_cond) - v_source = self.head_to_batch_dim(v_source) - v_uncond = self.head_to_batch_dim(v_uncond) - v_cond = self.head_to_batch_dim(v_cond) - - out_source = [] - out_uncond = [] - out_cond = [] - - q_src = q_source.view(n_frames, h, sequence_length, dim // h) - k_src = k_source.view(n_frames, h, sequence_length, dim // h) - v_src = v_source.view(n_frames, h, sequence_length, dim // h) - q_uncond = q_uncond.view(n_frames, h, sequence_length, dim // h) - k_uncond = k_uncond.view(n_frames, h, sequence_length * n_frames, dim // h) - v_uncond = v_uncond.view(n_frames, h, sequence_length * n_frames, dim // h) - q_cond = q_cond.view(n_frames, h, sequence_length, dim // h) - k_cond = k_cond.view(n_frames, h, sequence_length * n_frames, dim // h) - v_cond = v_cond.view(n_frames, h, sequence_length * n_frames, dim // h) - - for j in range(h): - sim_source_b = torch.bmm(q_src[:, j], k_src[:, j].transpose(-1, -2)) * self.scale - sim_uncond_b = torch.bmm(q_uncond[:, j], k_uncond[:, j].transpose(-1, -2)) * self.scale - sim_cond = torch.bmm(q_cond[:, j], k_cond[:, j].transpose(-1, -2)) * self.scale - - out_source.append(torch.bmm(sim_source_b.softmax(dim=-1), v_src[:, j])) - out_uncond.append(torch.bmm(sim_uncond_b.softmax(dim=-1), v_uncond[:, j])) - out_cond.append(torch.bmm(sim_cond.softmax(dim=-1), v_cond[:, j])) - - out_source = torch.cat(out_source, dim=0).view(h, n_frames,sequence_length, dim // h).permute(1, 0, 2, 3).reshape(h * n_frames, sequence_length, -1) - out_uncond = torch.cat(out_uncond, dim=0).view(h, n_frames,sequence_length, dim // h).permute(1, 0, 2, 3).reshape(h * n_frames, sequence_length, -1) - out_cond = torch.cat(out_cond, dim=0).view(h, n_frames,sequence_length, dim // h).permute(1, 0, 2, 3).reshape(h * n_frames, sequence_length, -1) - - out = torch.cat([out_source, out_uncond, out_cond], dim=0) - out = self.batch_to_head_dim(out) - - return to_out(out) - - return forward - - for _, module in model.unet.named_modules(): - if isinstance_str(module, "BasicTransformerBlock"): - module.attn1.forward = sa_forward(module.attn1) - - res_dict = {1: [1, 2], 2: [0, 1, 2], 3: [0, 1, 2]} - # we are injecting attention in blocks 4 - 11 of the decoder, so not in the first block of the lowest resolution - for res in res_dict: - for block in res_dict[res]: - module = model.unet.up_blocks[res].attentions[block].transformer_blocks[0].attn1 - module.forward = sa_forward(module) - -def make_tokenflow_attention_block(block_class: Type[torch.nn.Module]) -> Type[torch.nn.Module]: - - class TokenFlowBlock(block_class): - - def forward( - self, - hidden_states, - attention_mask=None, - encoder_hidden_states=None, - encoder_attention_mask=None, - timestep=None, - cross_attention_kwargs=None, - class_labels=None, - ) -> torch.Tensor: - - batch_size, 
sequence_length, dim = hidden_states.shape - n_frames = batch_size // 3 - mid_idx = n_frames // 2 - hidden_states = hidden_states.view(3, n_frames, sequence_length, dim) - - if self.use_ada_layer_norm: - norm_hidden_states = self.norm1(hidden_states, timestep) - elif self.use_ada_layer_norm_zero: - norm_hidden_states, gate_msa, shift_mlp, scale_mlp, gate_mlp = self.norm1( - hidden_states, timestep, class_labels, hidden_dtype=hidden_states.dtype - ) - else: - norm_hidden_states = self.norm1(hidden_states) - - norm_hidden_states = norm_hidden_states.view(3, n_frames, sequence_length, dim) - if self.pivotal_pass: - self.pivot_hidden_states = norm_hidden_states - else: - idx1 = [] - idx2 = [] - batch_idxs = [self.batch_idx] - if self.batch_idx > 0: - batch_idxs.append(self.batch_idx - 1) - - sim = batch_cosine_sim(norm_hidden_states[0].reshape(-1, dim), - self.pivot_hidden_states[0][batch_idxs].reshape(-1, dim)) - if len(batch_idxs) == 2: - sim1, sim2 = sim.chunk(2, dim=1) - # sim: n_frames * seq_len, len(batch_idxs) * seq_len - idx1.append(sim1.argmax(dim=-1)) # n_frames * seq_len - idx2.append(sim2.argmax(dim=-1)) # n_frames * seq_len - else: - idx1.append(sim.argmax(dim=-1)) - idx1 = torch.stack(idx1 * 3, dim=0) # 3, n_frames * seq_len - idx1 = idx1.squeeze(1) - if len(batch_idxs) == 2: - idx2 = torch.stack(idx2 * 3, dim=0) # 3, n_frames * seq_len - idx2 = idx2.squeeze(1) - - # 1. Self-Attention - cross_attention_kwargs = cross_attention_kwargs if cross_attention_kwargs is not None else {} - if self.pivotal_pass: - # norm_hidden_states.shape = 3, n_frames * seq_len, dim - self.attn_output = self.attn1( - norm_hidden_states.view(batch_size, sequence_length, dim), - encoder_hidden_states=encoder_hidden_states if self.only_cross_attention else None, - **cross_attention_kwargs, - ) - # 3, n_frames * seq_len, dim - > 3 * n_frames, seq_len, dim - self.kf_attn_output = self.attn_output - else: - batch_kf_size, _, _ = self.kf_attn_output.shape - self.attn_output = self.kf_attn_output.view(3, batch_kf_size // 3, sequence_length, dim)[:, - batch_idxs] # 3, n_frames, seq_len, dim --> 3, len(batch_idxs), seq_len, dim - if self.use_ada_layer_norm_zero: - self.attn_output = gate_msa.unsqueeze(1) * self.attn_output - - # gather values from attn_output, using idx as indices, and get a tensor of shape 3, n_frames, seq_len, dim - if not self.pivotal_pass: - if len(batch_idxs) == 2: - attn_1, attn_2 = self.attn_output[:, 0], self.attn_output[:, 1] - attn_output1 = attn_1.gather(dim=1, index=idx1.unsqueeze(-1).repeat(1, 1, dim)) - attn_output2 = attn_2.gather(dim=1, index=idx2.unsqueeze(-1).repeat(1, 1, dim)) - - s = torch.arange(0, n_frames).to(idx1.device) + batch_idxs[0] * n_frames - # distance from the pivot - p1 = batch_idxs[0] * n_frames + n_frames // 2 - p2 = batch_idxs[1] * n_frames + n_frames // 2 - d1 = torch.abs(s - p1) - d2 = torch.abs(s - p2) - # weight - w1 = d2 / (d1 + d2) - w1 = torch.sigmoid(w1) - - w1 = w1.unsqueeze(0).unsqueeze(-1).unsqueeze(-1).repeat(3, 1, sequence_length, dim) - attn_output1 = attn_output1.view(3, n_frames, sequence_length, dim) - attn_output2 = attn_output2.view(3, n_frames, sequence_length, dim) - attn_output = w1 * attn_output1 + (1 - w1) * attn_output2 - else: - attn_output = self.attn_output[:,0].gather(dim=1, index=idx1.unsqueeze(-1).repeat(1, 1, dim)) - - attn_output = attn_output.reshape( - batch_size, sequence_length, dim) # 3 * n_frames, seq_len, dim - else: - attn_output = self.attn_output - hidden_states = hidden_states.reshape(batch_size, sequence_length, 
dim) # 3 * n_frames, seq_len, dim - hidden_states = attn_output + hidden_states - - if self.attn2 is not None: - norm_hidden_states = ( - self.norm2(hidden_states, timestep) if self.use_ada_layer_norm else self.norm2(hidden_states) - ) - - # 2. Cross-Attention - attn_output = self.attn2( - norm_hidden_states, - encoder_hidden_states=encoder_hidden_states, - attention_mask=encoder_attention_mask, - **cross_attention_kwargs, - ) - hidden_states = attn_output + hidden_states - - # 3. Feed-forward - norm_hidden_states = self.norm3(hidden_states) - - if self.use_ada_layer_norm_zero: - norm_hidden_states = norm_hidden_states * (1 + scale_mlp[:, None]) + shift_mlp[:, None] - - - ff_output = self.ff(norm_hidden_states) - - if self.use_ada_layer_norm_zero: - ff_output = gate_mlp.unsqueeze(1) * ff_output - - hidden_states = ff_output + hidden_states - - return hidden_states - - return TokenFlowBlock - - -def set_tokenflow( - model: torch.nn.Module): - """ - Sets the tokenflow attention blocks in a model. - """ - - for _, module in model.named_modules(): - if isinstance_str(module, "BasicTransformerBlock"): - make_tokenflow_block_fn = make_tokenflow_attention_block - module.__class__ = make_tokenflow_block_fn(module.__class__) - - # Something needed for older versions of diffusers - if not hasattr(module, "use_ada_layer_norm_zero"): - module.use_ada_layer_norm = False - module.use_ada_layer_norm_zero = False - - return model diff --git a/spaces/wffcyrus/MetaGPT-v1/metagpt/static/assets/__commonjsHelpers__-042e6b4d.js b/spaces/wffcyrus/MetaGPT-v1/metagpt/static/assets/__commonjsHelpers__-042e6b4d.js deleted file mode 100644 index 0477d759fd6dcc2507749d991bedf1409961338e..0000000000000000000000000000000000000000 --- a/spaces/wffcyrus/MetaGPT-v1/metagpt/static/assets/__commonjsHelpers__-042e6b4d.js +++ /dev/null @@ -1 +0,0 @@ -var f=typeof globalThis<"u"?globalThis:typeof window<"u"?window:typeof global<"u"?global:typeof self<"u"?self:{};function l(e){return e&&e.__esModule&&Object.prototype.hasOwnProperty.call(e,"default")?e.default:e}function a(e){if(e.__esModule)return e;var r=e.default;if(typeof r=="function"){var o=function n(){if(this instanceof n){var t=[null];t.push.apply(t,arguments);var u=Function.bind.apply(r,t);return new u}return r.apply(this,arguments)};o.prototype=r.prototype}else o={};return Object.defineProperty(o,"__esModule",{value:!0}),Object.keys(e).forEach(function(n){var t=Object.getOwnPropertyDescriptor(e,n);Object.defineProperty(o,n,t.get?t:{enumerable:!0,get:function(){return e[n]}})}),o}export{a,f as c,l as g}; diff --git a/spaces/wffcyrus/MetaGPT-v1/tests/metagpt/actions/test_action.py b/spaces/wffcyrus/MetaGPT-v1/tests/metagpt/actions/test_action.py deleted file mode 100644 index 9775630ccd8dd2451cc4c48de7078295ed2ded2f..0000000000000000000000000000000000000000 --- a/spaces/wffcyrus/MetaGPT-v1/tests/metagpt/actions/test_action.py +++ /dev/null @@ -1,13 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -""" -@Time : 2023/5/11 14:43 -@Author : alexanderwu -@File : test_action.py -""" -from metagpt.actions import Action, WritePRD, WriteTest - - -def test_action_repr(): - actions = [Action(), WriteTest(), WritePRD()] - assert "WriteTest" in str(actions) diff --git a/spaces/wilson1/bingai/Dockerfile b/spaces/wilson1/bingai/Dockerfile deleted file mode 100644 index 9e9f4d7a31e0c7de9a63ddb94bf6e7facbdeb213..0000000000000000000000000000000000000000 --- a/spaces/wilson1/bingai/Dockerfile +++ /dev/null @@ -1,34 +0,0 @@ -# Build Stage -# 使用 golang:alpine 作为构建阶段的基础镜像 -FROM 
golang:alpine AS builder - -# 添加 git,以便之后能从GitHub克隆项目 -RUN apk --no-cache add git - -# 从 GitHub 克隆 go-proxy-bingai 项目到 /workspace/app 目录下 -RUN git clone https://github.com/Harry-zklcdc/go-proxy-bingai.git /workspace/app - -# 设置工作目录为之前克隆的项目目录 -WORKDIR /workspace/app - -# 编译 go 项目。-ldflags="-s -w" 是为了减少编译后的二进制大小 -RUN go build -ldflags="-s -w" -tags netgo -trimpath -o go-proxy-bingai main.go - -# Runtime Stage -# 使用轻量级的 alpine 镜像作为运行时的基础镜像 -FROM alpine - -# 设置工作目录 -WORKDIR /workspace/app - -# 从构建阶段复制编译后的二进制文件到运行时镜像中 -COPY --from=builder /workspace/app/go-proxy-bingai . - -# 设置环境变量,此处为随机字符 -ENV Go_Proxy_BingAI_USER_TOKEN_1="U{[W'O@k$^s&K~)r6j'F&tXglDPv9eO6V~w5-eJg(Dpb3[H]0=" - -# 暴露8080端口 -EXPOSE 8080 - -# 容器启动时运行的命令 -CMD ["/workspace/app/go-proxy-bingai"] \ No newline at end of file diff --git a/spaces/wliu88/StructDiffusionDemo/src/StructDiffusion/data/semantic_arrangement_language.py b/spaces/wliu88/StructDiffusionDemo/src/StructDiffusion/data/semantic_arrangement_language.py deleted file mode 100644 index 5abb138d2d0125dc9cb7c5e152633e654f21a062..0000000000000000000000000000000000000000 --- a/spaces/wliu88/StructDiffusionDemo/src/StructDiffusion/data/semantic_arrangement_language.py +++ /dev/null @@ -1,633 +0,0 @@ -import copy -import cv2 -import h5py -import numpy as np -import os -import trimesh -import torch -from tqdm import tqdm -import json -import random -import pickle - -from torch.utils.data import DataLoader - -# Local imports -from StructDiffusion.utils.rearrangement import show_pcs, get_pts, combine_and_sample_xyzs -from StructDiffusion.language.tokenizer import Tokenizer - -import StructDiffusion.utils.brain2.camera as cam -import StructDiffusion.utils.brain2.image as img -import StructDiffusion.utils.transformations as tra - - -class SemanticArrangementDataset(torch.utils.data.Dataset): - - def __init__(self, data_roots, index_roots, split, tokenizer, - max_num_target_objects=11, max_num_distractor_objects=5, - max_num_shape_parameters=7, max_num_rearrange_features=1, max_num_anchor_features=3, - num_pts=1024, - use_virtual_structure_frame=True, ignore_distractor_objects=True, ignore_rgb=True, - filter_num_moved_objects_range=None, shuffle_object_index=False, - sentence_embedding_file=None, use_incomplete_sentence=False, - data_augmentation=True, debug=False, **kwargs): - """ - - Note: setting filter_num_moved_objects_range=[k, k] and max_num_objects=k will create no padding for target objs - - :param data_root: - :param split: train, valid, or test - :param shuffle_object_index: whether to shuffle the positions of target objects and other objects in the sequence - :param debug: - :param max_num_shape_parameters: - :param max_num_objects: - :param max_num_rearrange_features: - :param max_num_anchor_features: - :param num_pts: - :param use_stored_arrangement_indices: - :param kwargs: - """ - - self.use_virtual_structure_frame = use_virtual_structure_frame - self.ignore_distractor_objects = ignore_distractor_objects - self.ignore_rgb = ignore_rgb and not debug - - self.num_pts = num_pts - self.debug = debug - - self.max_num_objects = max_num_target_objects - self.max_num_other_objects = max_num_distractor_objects - self.max_num_shape_parameters = max_num_shape_parameters - self.max_num_rearrange_features = max_num_rearrange_features - self.max_num_anchor_features = max_num_anchor_features - self.shuffle_object_index = shuffle_object_index - - # used to tokenize the language part - self.tokenizer = tokenizer - - # retrieve data - self.data_roots = data_roots - 
self.arrangement_data = [] - arrangement_steps = [] - for ddx in range(len(data_roots)): - data_root = data_roots[ddx] - index_root = index_roots[ddx] - arrangement_indices_file = os.path.join(data_root, index_root, "{}_arrangement_indices_file_all.txt".format(split)) - if os.path.exists(arrangement_indices_file): - with open(arrangement_indices_file, "r") as fh: - arrangement_steps.extend([(os.path.join(data_root, f[0]), f[1]) for f in eval(fh.readline().strip())]) - else: - print("{} does not exist".format(arrangement_indices_file)) - # only keep the goal, ignore the intermediate steps - for filename, step_t in arrangement_steps: - if step_t == 0: - if "data00026058" in filename or "data00011415" in filename or "data00026061" in filename or "data00700565" in filename: - continue - self.arrangement_data.append((filename, step_t)) - # if specified, filter data - if filter_num_moved_objects_range is not None: - self.arrangement_data = self.filter_based_on_number_of_moved_objects(filter_num_moved_objects_range) - print("{} valid sequences".format(len(self.arrangement_data))) - - # language - if sentence_embedding_file: - assert max_num_shape_parameters == 1 - # since we do not use them right now, ignore them - # assert max_num_rearrange_features == 0 - # assert max_num_anchor_features == 0 - with open(sentence_embedding_file, "rb") as fh: - template_sentence_data = pickle.load(fh) - self.use_sentence_embedding = True - self.type_value_tuple_to_template_sentences = template_sentence_data["type_value_tuple_to_template_sentences"] - self.template_sentence_to_embedding = template_sentence_data["template_sentence_to_embedding"] - self.use_incomplete_sentence = use_incomplete_sentence - print("use sentence embedding") - print(len(self.type_value_tuple_to_template_sentences)) - print(len(self.template_sentence_to_embedding)) - else: - self.use_sentence_embedding = False - - # Data Aug - self.data_augmentation = data_augmentation - # additive noise - self.gp_rescale_factor_range = [12, 20] - self.gaussian_scale_range = [0., 0.003] - # multiplicative noise - self.gamma_shape = 1000. 
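-        # a Gamma(shape=1000, scale=0.001) draw has mean 1.0, so the multiplicative depth noise stays close to identity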
- self.gamma_scale = 0.001 - - def filter_based_on_number_of_moved_objects(self, filter_num_moved_objects_range): - assert len(list(filter_num_moved_objects_range)) == 2 - min_num, max_num = filter_num_moved_objects_range - print("Remove scenes that have less than {} or more than {} objects being moved".format(min_num, max_num)) - ok_data = [] - for filename, step_t in self.arrangement_data: - h5 = h5py.File(filename, 'r') - moved_objs = h5['moved_objs'][()].split(',') - if min_num <= len(moved_objs) <= max_num: - ok_data.append((filename, step_t)) - print("{} valid sequences left".format(len(ok_data))) - return ok_data - - def get_data_idx(self, idx): - # Create the datum to return - file_idx = np.argmax(idx < self.file_to_count) - data = h5py.File(self.data_files[file_idx], 'r') - if file_idx > 0: - # for lang2sym, idx is always 0 - idx = idx - self.file_to_count[file_idx - 1] - return data, idx, file_idx - - def add_noise_to_depth(self, depth_img): - """ add depth noise """ - multiplicative_noise = np.random.gamma(self.gamma_shape, self.gamma_scale) - depth_img = multiplicative_noise * depth_img - return depth_img - - def add_noise_to_xyz(self, xyz_img, depth_img): - """ TODO: remove this code or at least celean it up""" - xyz_img = xyz_img.copy() - H, W, C = xyz_img.shape - gp_rescale_factor = np.random.randint(self.gp_rescale_factor_range[0], - self.gp_rescale_factor_range[1]) - gp_scale = np.random.uniform(self.gaussian_scale_range[0], - self.gaussian_scale_range[1]) - small_H, small_W = (np.array([H, W]) / gp_rescale_factor).astype(int) - additive_noise = np.random.normal(loc=0.0, scale=gp_scale, size=(small_H, small_W, C)) - additive_noise = cv2.resize(additive_noise, (W, H), interpolation=cv2.INTER_CUBIC) - xyz_img[depth_img > 0, :] += additive_noise[depth_img > 0, :] - return xyz_img - - def random_index(self): - return self[np.random.randint(len(self))] - - def _get_rgb(self, h5, idx, ee=True): - RGB = "ee_rgb" if ee else "rgb" - rgb1 = img.PNGToNumpy(h5[RGB][idx])[:, :, :3] / 255. # remove alpha - return rgb1 - - def _get_depth(self, h5, idx, ee=True): - DEPTH = "ee_depth" if ee else "depth" - - def _get_images(self, h5, idx, ee=True): - if ee: - RGB, DEPTH, SEG = "ee_rgb", "ee_depth", "ee_seg" - DMIN, DMAX = "ee_depth_min", "ee_depth_max" - else: - RGB, DEPTH, SEG = "rgb", "depth", "seg" - DMIN, DMAX = "depth_min", "depth_max" - dmin = h5[DMIN][idx] - dmax = h5[DMAX][idx] - rgb1 = img.PNGToNumpy(h5[RGB][idx])[:, :, :3] / 255. # remove alpha - depth1 = h5[DEPTH][idx] / 20000. * (dmax - dmin) + dmin - seg1 = img.PNGToNumpy(h5[SEG][idx]) - - valid1 = np.logical_and(depth1 > 0.1, depth1 < 2.) - - # proj_matrix = h5['proj_matrix'][()] - camera = cam.get_camera_from_h5(h5) - if self.data_augmentation: - depth1 = self.add_noise_to_depth(depth1) - - xyz1 = cam.compute_xyz(depth1, camera) - if self.data_augmentation: - xyz1 = self.add_noise_to_xyz(xyz1, depth1) - - # Transform the point cloud - # Here it is... 
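-        # the per-pixel XYZ map computed in the camera frame is lifted into the world frame using the stored camera pose below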
- # CAM_POSE = "ee_cam_pose" if ee else "cam_pose" - CAM_POSE = "ee_camera_view" if ee else "camera_view" - cam_pose = h5[CAM_POSE][idx] - if ee: - # ee_camera_view has 0s for x, y, z - cam_pos = h5["ee_cam_pose"][:][:3, 3] - cam_pose[:3, 3] = cam_pos - - # Get transformed point cloud - h, w, d = xyz1.shape - xyz1 = xyz1.reshape(h * w, -1) - xyz1 = trimesh.transform_points(xyz1, cam_pose) - xyz1 = xyz1.reshape(h, w, -1) - - scene1 = rgb1, depth1, seg1, valid1, xyz1 - - return scene1 - - def __len__(self): - return len(self.arrangement_data) - - def _get_ids(self, h5): - """ - get object ids - - @param h5: - @return: - """ - ids = {} - for k in h5.keys(): - if k.startswith("id_"): - ids[k[3:]] = h5[k][()] - return ids - - def get_positive_ratio(self): - num_pos = 0 - for d in self.arrangement_data: - filename, step_t = d - if step_t == 0: - num_pos += 1 - return (len(self.arrangement_data) - num_pos) * 1.0 / num_pos - - def get_object_position_vocab_sizes(self): - return self.tokenizer.get_object_position_vocab_sizes() - - def get_vocab_size(self): - return self.tokenizer.get_vocab_size() - - def get_data_index(self, idx): - filename = self.arrangement_data[idx] - return filename - - def get_raw_data(self, idx, inference_mode=False, shuffle_object_index=False): - """ - - :param idx: - :param inference_mode: - :param shuffle_object_index: used to test different orders of objects - :return: - """ - - filename, _ = self.arrangement_data[idx] - - h5 = h5py.File(filename, 'r') - ids = self._get_ids(h5) - all_objs = sorted([o for o in ids.keys() if "object_" in o]) - goal_specification = json.loads(str(np.array(h5["goal_specification"]))) - num_rearrange_objs = len(goal_specification["rearrange"]["objects"]) - num_other_objs = len(goal_specification["anchor"]["objects"] + goal_specification["distract"]["objects"]) - assert len(all_objs) == num_rearrange_objs + num_other_objs, "{}, {}".format(len(all_objs), num_rearrange_objs + num_other_objs) - assert num_rearrange_objs <= self.max_num_objects - assert num_other_objs <= self.max_num_other_objects - - # important: only using the last step - step_t = num_rearrange_objs - - target_objs = all_objs[:num_rearrange_objs] - other_objs = all_objs[num_rearrange_objs:] - - structure_parameters = goal_specification["shape"] - - # Important: ensure the order is correct - if structure_parameters["type"] == "circle" or structure_parameters["type"] == "line": - target_objs = target_objs[::-1] - elif structure_parameters["type"] == "tower" or structure_parameters["type"] == "dinner": - target_objs = target_objs - else: - raise KeyError("{} structure is not recognized".format(structure_parameters["type"])) - all_objs = target_objs + other_objs - - ################################### - # getting scene images and point clouds - scene = self._get_images(h5, step_t, ee=True) - rgb, depth, seg, valid, xyz = scene - if inference_mode: - initial_scene = scene - - # getting object point clouds - obj_pcs = [] - obj_pad_mask = [] - current_pc_poses = [] - other_obj_pcs = [] - other_obj_pad_mask = [] - for obj in all_objs: - obj_mask = np.logical_and(seg == ids[obj], valid) - if np.sum(obj_mask) <= 0: - raise Exception - ok, obj_xyz, obj_rgb, _ = get_pts(xyz, rgb, obj_mask, num_pts=self.num_pts) - if not ok: - raise Exception - - if obj in target_objs: - if self.ignore_rgb: - obj_pcs.append(obj_xyz) - else: - obj_pcs.append(torch.concat([obj_xyz, obj_rgb], dim=-1)) - obj_pad_mask.append(0) - pc_pose = np.eye(4) - pc_pose[:3, 3] = torch.mean(obj_xyz, dim=0).numpy() - 
current_pc_poses.append(pc_pose) - elif obj in other_objs: - if self.ignore_rgb: - other_obj_pcs.append(obj_xyz) - else: - other_obj_pcs.append(torch.concat([obj_xyz, obj_rgb], dim=-1)) - other_obj_pad_mask.append(0) - else: - raise Exception - - ################################### - # computes goal positions for objects - # Important: because of the noises we added to point clouds, the rearranged point clouds will not be perfect - if self.use_virtual_structure_frame: - goal_structure_pose = tra.euler_matrix(structure_parameters["rotation"][0], structure_parameters["rotation"][1], - structure_parameters["rotation"][2]) - goal_structure_pose[:3, 3] = [structure_parameters["position"][0], structure_parameters["position"][1], - structure_parameters["position"][2]] - goal_structure_pose_inv = np.linalg.inv(goal_structure_pose) - - goal_obj_poses = [] - current_obj_poses = [] - goal_pc_poses = [] - for obj, current_pc_pose in zip(target_objs, current_pc_poses): - goal_pose = h5[obj][0] - current_pose = h5[obj][step_t] - if inference_mode: - goal_obj_poses.append(goal_pose) - current_obj_poses.append(current_pose) - - goal_pc_pose = goal_pose @ np.linalg.inv(current_pose) @ current_pc_pose - if self.use_virtual_structure_frame: - goal_pc_pose = goal_structure_pose_inv @ goal_pc_pose - goal_pc_poses.append(goal_pc_pose) - - # transform current object point cloud to the goal point cloud in the world frame - if self.debug: - new_obj_pcs = [copy.deepcopy(pc.numpy()) for pc in obj_pcs] - for i, obj_pc in enumerate(new_obj_pcs): - - current_pc_pose = current_pc_poses[i] - goal_pc_pose = goal_pc_poses[i] - if self.use_virtual_structure_frame: - goal_pc_pose = goal_structure_pose @ goal_pc_pose - print("current pc pose", current_pc_pose) - print("goal pc pose", goal_pc_pose) - - goal_pc_transform = goal_pc_pose @ np.linalg.inv(current_pc_pose) - print("transform", goal_pc_transform) - new_obj_pc = copy.deepcopy(obj_pc) - new_obj_pc[:, :3] = trimesh.transform_points(obj_pc[:, :3], goal_pc_transform) - print(new_obj_pc.shape) - - # visualize rearrangement sequence (new_obj_xyzs), the current object before moving (obj_xyz), and other objects - new_obj_pcs[i] = new_obj_pc - new_obj_pcs[i][:, 3:] = np.tile(np.array([1, 0, 0], dtype=np.float), (new_obj_pc.shape[0], 1)) - new_obj_rgb_current = np.tile(np.array([0, 1, 0], dtype=np.float), (new_obj_pc.shape[0], 1)) - show_pcs([pc[:, :3] for pc in new_obj_pcs] + [pc[:, :3] for pc in other_obj_pcs] + [obj_pc[:, :3]], - [pc[:, 3:] for pc in new_obj_pcs] + [pc[:, 3:] for pc in other_obj_pcs] + [new_obj_rgb_current], - add_coordinate_frame=True) - show_pcs([pc[:, :3] for pc in new_obj_pcs], [pc[:, 3:] for pc in new_obj_pcs], add_coordinate_frame=True) - - # pad data - for i in range(self.max_num_objects - len(target_objs)): - obj_pcs.append(torch.zeros_like(obj_pcs[0], dtype=torch.float32)) - obj_pad_mask.append(1) - for i in range(self.max_num_other_objects - len(other_objs)): - other_obj_pcs.append(torch.zeros_like(obj_pcs[0], dtype=torch.float32)) - other_obj_pad_mask.append(1) - - ################################### - # preparing sentence - sentence = [] - sentence_pad_mask = [] - - # structure parameters - # 5 parameters - structure_parameters = goal_specification["shape"] - if structure_parameters["type"] == "circle" or structure_parameters["type"] == "line": - sentence.append((structure_parameters["type"], "shape")) - sentence.append((structure_parameters["rotation"][2], "rotation")) - sentence.append((structure_parameters["position"][0], "position_x")) - 
sentence.append((structure_parameters["position"][1], "position_y")) - if structure_parameters["type"] == "circle": - sentence.append((structure_parameters["radius"], "radius")) - elif structure_parameters["type"] == "line": - sentence.append((structure_parameters["length"] / 2.0, "radius")) - if not self.use_sentence_embedding: - for _ in range(5): - sentence_pad_mask.append(0) - else: - sentence.append((structure_parameters["type"], "shape")) - sentence.append((structure_parameters["rotation"][2], "rotation")) - sentence.append((structure_parameters["position"][0], "position_x")) - sentence.append((structure_parameters["position"][1], "position_y")) - if not self.use_sentence_embedding: - for _ in range(4): - sentence_pad_mask.append(0) - sentence.append(("PAD", None)) - sentence_pad_mask.append(1) - - if self.use_sentence_embedding: - - if self.use_incomplete_sentence: - token_idxs = np.random.permutation(len(sentence)) - token_idxs = token_idxs[:np.random.randint(1, len(sentence) + 1)] - token_idxs = sorted(token_idxs) - incomplete_sentence = [sentence[ti] for ti in token_idxs] - else: - incomplete_sentence = sentence - - type_value_tuple = self.tokenizer.convert_structure_params_to_type_value_tuple(incomplete_sentence) - template_sentence = np.random.choice(self.type_value_tuple_to_template_sentences[type_value_tuple]) - sentence_embedding = self.template_sentence_to_embedding[template_sentence] - sentence_pad_mask = [0] - - ################################### - # paddings - for i in range(self.max_num_objects - len(target_objs)): - goal_pc_poses.append(np.eye(4)) - - ################################### - if self.debug: - print("---") - print("all objects:", all_objs) - print("target objects:", target_objs) - print("other objects:", other_objs) - print("goal specification:", goal_specification) - print("sentence:", sentence) - if self.use_sentence_embedding: - print("use sentence embedding") - if self.use_incomplete_sentence: - print("incomplete_sentence:", incomplete_sentence) - print("template sentence:", template_sentence) - show_pcs([pc[:, :3] for pc in obj_pcs + other_obj_pcs], [pc[:, 3:] for pc in obj_pcs + other_obj_pcs], add_coordinate_frame=True) - - assert len(obj_pcs) == len(goal_pc_poses) - ################################### - - # shuffle the position of objects - # important: only shuffle for dinner - if shuffle_object_index and structure_parameters["type"] == "dinner": - num_target_objs = len(target_objs) - shuffle_target_object_indices = list(range(num_target_objs)) - random.shuffle(shuffle_target_object_indices) - shuffle_object_indices = shuffle_target_object_indices + list(range(num_target_objs, self.max_num_objects)) - obj_pcs = [obj_pcs[i] for i in shuffle_object_indices] - goal_pc_poses = [goal_pc_poses[i] for i in shuffle_object_indices] - if inference_mode: - goal_obj_poses = [goal_obj_poses[i] for i in shuffle_object_indices[:num_target_objs]] - current_obj_poses = [current_obj_poses[i] for i in shuffle_object_indices[:num_target_objs]] - target_objs = [target_objs[i] for i in shuffle_target_object_indices[:num_target_objs]] - current_pc_poses = [current_pc_poses[i] for i in shuffle_object_indices[:num_target_objs]] - - ################################### - if self.use_virtual_structure_frame: - if self.ignore_distractor_objects: - # language, structure virtual frame, target objects - pcs = obj_pcs - type_index = [0] * self.max_num_shape_parameters + [2] + [3] * self.max_num_objects - position_index = list(range(self.max_num_shape_parameters)) + [0] + 
list(range(self.max_num_objects)) - pad_mask = sentence_pad_mask + [0] + obj_pad_mask - else: - # language, distractor objects, structure virtual frame, target objects - pcs = other_obj_pcs + obj_pcs - type_index = [0] * self.max_num_shape_parameters + [1] * self.max_num_other_objects + [2] + [3] * self.max_num_objects - position_index = list(range(self.max_num_shape_parameters)) + list(range(self.max_num_other_objects)) + [0] + list(range(self.max_num_objects)) - pad_mask = sentence_pad_mask + other_obj_pad_mask + [0] + obj_pad_mask - goal_poses = [goal_structure_pose] + goal_pc_poses - else: - if self.ignore_distractor_objects: - # language, target objects - pcs = obj_pcs - type_index = [0] * self.max_num_shape_parameters + [3] * self.max_num_objects - position_index = list(range(self.max_num_shape_parameters)) + list(range(self.max_num_objects)) - pad_mask = sentence_pad_mask + obj_pad_mask - else: - # language, distractor objects, target objects - pcs = other_obj_pcs + obj_pcs - type_index = [0] * self.max_num_shape_parameters + [1] * self.max_num_other_objects + [3] * self.max_num_objects - position_index = list(range(self.max_num_shape_parameters)) + list(range(self.max_num_other_objects)) + list(range(self.max_num_objects)) - pad_mask = sentence_pad_mask + other_obj_pad_mask + obj_pad_mask - goal_poses = goal_pc_poses - - datum = { - "pcs": pcs, - "goal_poses": goal_poses, - "type_index": type_index, - "position_index": position_index, - "pad_mask": pad_mask, - "t": step_t, - "filename": filename - } - if self.use_sentence_embedding: - datum["sentence"] = sentence_embedding - else: - datum["sentence"] = sentence - - if inference_mode: - datum["rgb"] = rgb - datum["goal_obj_poses"] = goal_obj_poses - datum["current_obj_poses"] = current_obj_poses - datum["target_objs"] = target_objs - datum["initial_scene"] = initial_scene - datum["ids"] = ids - datum["goal_specification"] = goal_specification - datum["current_pc_poses"] = current_pc_poses - if self.use_sentence_embedding: - datum["template_sentence"] = template_sentence - - return datum - - @staticmethod - def convert_to_tensors(datum, tokenizer, use_sentence_embedding=False): - tensors = { - "pcs": torch.stack(datum["pcs"], dim=0), - "goal_poses": torch.FloatTensor(np.array(datum["goal_poses"])), - "type_index": torch.LongTensor(np.array(datum["type_index"])), - "position_index": torch.LongTensor(np.array(datum["position_index"])), - "pad_mask": torch.LongTensor(np.array(datum["pad_mask"])), - "t": datum["t"], - "filename": datum["filename"] - } - if use_sentence_embedding: - tensors["sentence"] = torch.FloatTensor(datum["sentence"]) # after batching, B x sentence embed dim - else: - tensors["sentence"] = torch.LongTensor(np.array([tokenizer.tokenize(*i) for i in datum["sentence"]])) - return tensors - - def __getitem__(self, idx): - - datum = self.convert_to_tensors(self.get_raw_data(idx, shuffle_object_index=self.shuffle_object_index), - self.tokenizer, - self.use_sentence_embedding) - - return datum - - def single_datum_to_batch(self, x, num_samples, device, inference_mode=True): - tensor_x = {} - - tensor_x["pcs"] = x["pcs"].to(device)[None, :, :, :].repeat(num_samples, 1, 1, 1) - tensor_x["sentence"] = x["sentence"].to(device)[None, :].repeat(num_samples, 1) - if not inference_mode: - tensor_x["goal_poses"] = x["goal_poses"].to(device)[None, :, :, :].repeat(num_samples, 1, 1, 1) - - tensor_x["type_index"] = x["type_index"].to(device)[None, :].repeat(num_samples, 1) - tensor_x["position_index"] = 
x["position_index"].to(device)[None, :].repeat(num_samples, 1) - tensor_x["pad_mask"] = x["pad_mask"].to(device)[None, :].repeat(num_samples, 1) - - return tensor_x - - -def compute_min_max(dataloader): - - # tensor([-0.3557, -0.3847, 0.0000, -1.0000, -1.0000, -0.4759, -1.0000, -1.0000, - # -0.9079, -0.8668, -0.9105, -0.4186]) - # tensor([0.3915, 0.3494, 0.3267, 1.0000, 1.0000, 0.8961, 1.0000, 1.0000, 0.8194, - # 0.4787, 0.6421, 1.0000]) - # tensor([0.0918, -0.3758, 0.0000, -1.0000, -1.0000, 0.0000, -1.0000, -1.0000, - # -0.0000, 0.0000, 0.0000, 1.0000]) - # tensor([0.9199, 0.3710, 0.0000, 1.0000, 1.0000, 0.0000, 1.0000, 1.0000, -0.0000, - # 0.0000, 0.0000, 1.0000]) - - min_value = torch.ones(16) * 10000 - max_value = torch.ones(16) * -10000 - for d in tqdm(dataloader): - goal_poses = d["goal_poses"] - goal_poses = goal_poses.reshape(-1, 16) - current_max, _ = torch.max(goal_poses, dim=0) - current_min, _ = torch.min(goal_poses, dim=0) - max_value[max_value < current_max] = current_max[max_value < current_max] - max_value[max_value > current_min] = current_min[max_value > current_min] - print(f"{min_value} - {max_value}") - - -if __name__ == "__main__": - - tokenizer = Tokenizer("/home/weiyu/data_drive/data_new_objects/type_vocabs_coarse.json") - - data_roots = [] - index_roots = [] - for shape, index in [("circle", "index_10k"), ("line", "index_10k"), ("stacking", "index_10k"), ("dinner", "index_10k")]: - data_roots.append("/home/weiyu/data_drive/data_new_objects/examples_{}_new_objects/result".format(shape)) - index_roots.append(index) - - dataset = SemanticArrangementDataset(data_roots=data_roots, - index_roots=index_roots, - split="valid", tokenizer=tokenizer, - max_num_target_objects=7, - max_num_distractor_objects=5, - max_num_shape_parameters=1, - max_num_rearrange_features=0, - max_num_anchor_features=0, - num_pts=1024, - use_virtual_structure_frame=True, - ignore_distractor_objects=True, - ignore_rgb=True, - filter_num_moved_objects_range=None, # [5, 5] - data_augmentation=False, - shuffle_object_index=True, - sentence_embedding_file="/home/weiyu/Research/StructDiffusion/old/StructDiffusion/src/StructDiffusion/language/template_sentence_data.pkl", - use_incomplete_sentence=True, - debug=False) - - # print(len(dataset)) - # for d in dataset: - # print("\n\n" + "="*100) - - dataloader = DataLoader(dataset, batch_size=64, shuffle=False, num_workers=8) - for i, d in enumerate(tqdm(dataloader)): - for k in d: - if isinstance(d[k], torch.Tensor): - print("--size", k, d[k].shape) - for k in d: - print(k, d[k]) - - input("next?") \ No newline at end of file diff --git a/spaces/wong26/faster-whisper-webui/src/hooks/progressListener.py b/spaces/wong26/faster-whisper-webui/src/hooks/progressListener.py deleted file mode 100644 index a7852a24e237ae864bbce5f37674e1f7c817a1b3..0000000000000000000000000000000000000000 --- a/spaces/wong26/faster-whisper-webui/src/hooks/progressListener.py +++ /dev/null @@ -1,8 +0,0 @@ -from typing import Union - -class ProgressListener: - def on_progress(self, current: Union[int, float], total: Union[int, float]): - self.total = total - - def on_finished(self): - pass \ No newline at end of file diff --git a/spaces/xdecoder/Instruct-X-Decoder/xdecoder/language/LangEncoder/registry.py b/spaces/xdecoder/Instruct-X-Decoder/xdecoder/language/LangEncoder/registry.py deleted file mode 100644 index 8991272a6e2294ea86eee338cf61d87e4123f724..0000000000000000000000000000000000000000 --- a/spaces/xdecoder/Instruct-X-Decoder/xdecoder/language/LangEncoder/registry.py +++ 
/dev/null @@ -1,18 +0,0 @@ -_lang_encoders = {} - - -def register_lang_encoder(fn): - module_name_split = fn.__module__.split('.') - model_name = module_name_split[-1] - - _lang_encoders[model_name] = fn - - return fn - - -def lang_encoders(model_name): - return _lang_encoders[model_name] - - -def is_lang_encoder(model_name): - return model_name in _lang_encoders diff --git a/spaces/xfys/yolov5_tracking/yolov5/utils/loggers/comet/README.md b/spaces/xfys/yolov5_tracking/yolov5/utils/loggers/comet/README.md deleted file mode 100644 index aee8d16a336c1bff89bfc0c1679dcb6ee8751a48..0000000000000000000000000000000000000000 --- a/spaces/xfys/yolov5_tracking/yolov5/utils/loggers/comet/README.md +++ /dev/null @@ -1,258 +0,0 @@ - - -# YOLOv5 with Comet - -This guide will cover how to use YOLOv5 with [Comet](https://bit.ly/yolov5-readme-comet2) - -# About Comet - -Comet builds tools that help data scientists, engineers, and team leaders accelerate and optimize machine learning and deep learning models. - -Track and visualize model metrics in real time, save your hyperparameters, datasets, and model checkpoints, and visualize your model predictions with [Comet Custom Panels](https://www.comet.com/docs/v2/guides/comet-dashboard/code-panels/about-panels/?utm_source=yolov5&utm_medium=partner&utm_campaign=partner_yolov5_2022&utm_content=github)! -Comet makes sure you never lose track of your work and makes it easy to share results and collaborate across teams of all sizes! - -# Getting Started - -## Install Comet - -```shell -pip install comet_ml -``` - -## Configure Comet Credentials - -There are two ways to configure Comet with YOLOv5. - -You can either set your credentials through environment variables - -**Environment Variables** - -```shell -export COMET_API_KEY= -export COMET_PROJECT_NAME= # This will default to 'yolov5' -``` - -Or create a `.comet.config` file in your working directory and set your credentials there. - -**Comet Configuration File** - -``` -[comet] -api_key= -project_name= # This will default to 'yolov5' -``` - -## Run the Training Script - -```shell -# Train YOLOv5s on COCO128 for 5 epochs -python train.py --img 640 --batch 16 --epochs 5 --data coco128.yaml --weights yolov5s.pt -``` - -That's it! Comet will automatically log your hyperparameters, command line arguments, training and validation metrics. You can visualize and analyze your runs in the Comet UI - -yolo-ui - -# Try out an Example! - -Check out an example of a [completed run here](https://www.comet.com/examples/comet-example-yolov5/a0e29e0e9b984e4a822db2a62d0cb357?experiment-tab=chart&showOutliers=true&smoothing=0&transformY=smoothing&xAxis=step&utm_source=yolov5&utm_medium=partner&utm_campaign=partner_yolov5_2022&utm_content=github) - -Or better yet, try it out yourself in this Colab Notebook - -[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1RG0WOQyxlDlo5Km8GogJpIEJlg_5lyYO?usp=sharing) - -# Log automatically - -By default, Comet will log the following items - -## Metrics - -- Box Loss, Object Loss, Classification Loss for the training and validation data -- mAP_0.5, mAP_0.5:0.95 metrics for the validation data. 
-- Precision and Recall for the validation data - -## Parameters - -- Model Hyperparameters -- All parameters passed through the command line options - -## Visualizations - -- Confusion Matrix of the model predictions on the validation data -- Plots for the PR and F1 curves across all classes -- Correlogram of the Class Labels - -# Configure Comet Logging - -Comet can be configured to log additional data either through command line flags passed to the training script -or through environment variables. - -```shell -export COMET_MODE=online # Set whether to run Comet in 'online' or 'offline' mode. Defaults to online -export COMET_MODEL_NAME= #Set the name for the saved model. Defaults to yolov5 -export COMET_LOG_CONFUSION_MATRIX=false # Set to disable logging a Comet Confusion Matrix. Defaults to true -export COMET_MAX_IMAGE_UPLOADS= # Controls how many total image predictions to log to Comet. Defaults to 100. -export COMET_LOG_PER_CLASS_METRICS=true # Set to log evaluation metrics for each detected class at the end of training. Defaults to false -export COMET_DEFAULT_CHECKPOINT_FILENAME= # Set this if you would like to resume training from a different checkpoint. Defaults to 'last.pt' -export COMET_LOG_BATCH_LEVEL_METRICS=true # Set this if you would like to log training metrics at the batch level. Defaults to false. -export COMET_LOG_PREDICTIONS=true # Set this to false to disable logging model predictions -``` - -## Logging Checkpoints with Comet - -Logging Models to Comet is disabled by default. To enable it, pass the `save-period` argument to the training script. This will save the -logged checkpoints to Comet based on the interval value provided by `save-period` - -```shell -python train.py \ ---img 640 \ ---batch 16 \ ---epochs 5 \ ---data coco128.yaml \ ---weights yolov5s.pt \ ---save-period 1 -``` - -## Logging Model Predictions - -By default, model predictions (images, ground truth labels and bounding boxes) will be logged to Comet. - -You can control the frequency of logged predictions and the associated images by passing the `bbox_interval` command line argument. Predictions can be visualized using Comet's Object Detection Custom Panel. This frequency corresponds to every Nth batch of data per epoch. In the example below, we are logging every 2nd batch of data for each epoch. - -**Note:** The YOLOv5 validation dataloader will default to a batch size of 32, so you will have to set the logging frequency accordingly. - -Here is an [example project using the Panel](https://www.comet.com/examples/comet-example-yolov5?shareable=YcwMiJaZSXfcEXpGOHDD12vA1&utm_source=yolov5&utm_medium=partner&utm_campaign=partner_yolov5_2022&utm_content=github) - -```shell -python train.py \ ---img 640 \ ---batch 16 \ ---epochs 5 \ ---data coco128.yaml \ ---weights yolov5s.pt \ ---bbox_interval 2 -``` - -### Controlling the number of Prediction Images logged to Comet - -When logging predictions from YOLOv5, Comet will log the images associated with each set of predictions. By default a maximum of 100 validation images are logged. You can increase or decrease this number using the `COMET_MAX_IMAGE_UPLOADS` environment variable. - -```shell -env COMET_MAX_IMAGE_UPLOADS=200 python train.py \ ---img 640 \ ---batch 16 \ ---epochs 5 \ ---data coco128.yaml \ ---weights yolov5s.pt \ ---bbox_interval 1 -``` - -### Logging Class Level Metrics - -Use the `COMET_LOG_PER_CLASS_METRICS` environment variable to log mAP, precision, recall, f1 for each class. 
- -```shell -env COMET_LOG_PER_CLASS_METRICS=true python train.py \ ---img 640 \ ---batch 16 \ ---epochs 5 \ ---data coco128.yaml \ ---weights yolov5s.pt -``` - -## Uploading a Dataset to Comet Artifacts - -If you would like to store your data using [Comet Artifacts](https://www.comet.com/docs/v2/guides/data-management/using-artifacts/#learn-more?utm_source=yolov5&utm_medium=partner&utm_campaign=partner_yolov5_2022&utm_content=github), you can do so using the `upload_dataset` flag. - -The dataset be organized in the way described in the [YOLOv5 documentation](https://docs.ultralytics.com/yolov5/tutorials/train_custom_data/). The dataset config `yaml` file must follow the same format as that of the `coco128.yaml` file. - -```shell -python train.py \ ---img 640 \ ---batch 16 \ ---epochs 5 \ ---data coco128.yaml \ ---weights yolov5s.pt \ ---upload_dataset -``` - -You can find the uploaded dataset in the Artifacts tab in your Comet Workspace -artifact-1 - -You can preview the data directly in the Comet UI. -artifact-2 - -Artifacts are versioned and also support adding metadata about the dataset. Comet will automatically log the metadata from your dataset `yaml` file -artifact-3 - -### Using a saved Artifact - -If you would like to use a dataset from Comet Artifacts, set the `path` variable in your dataset `yaml` file to point to the following Artifact resource URL. - -``` -# contents of artifact.yaml file -path: "comet:///:" -``` - -Then pass this file to your training script in the following way - -```shell -python train.py \ ---img 640 \ ---batch 16 \ ---epochs 5 \ ---data artifact.yaml \ ---weights yolov5s.pt -``` - -Artifacts also allow you to track the lineage of data as it flows through your Experimentation workflow. Here you can see a graph that shows you all the experiments that have used your uploaded dataset. -artifact-4 - -## Resuming a Training Run - -If your training run is interrupted for any reason, e.g. disrupted internet connection, you can resume the run using the `resume` flag and the Comet Run Path. - -The Run Path has the following format `comet:////`. - -This will restore the run to its state before the interruption, which includes restoring the model from a checkpoint, restoring all hyperparameters and training arguments and downloading Comet dataset Artifacts if they were used in the original run. The resumed run will continue logging to the existing Experiment in the Comet UI - -```shell -python train.py \ ---resume "comet://" -``` - -## Hyperparameter Search with the Comet Optimizer - -YOLOv5 is also integrated with Comet's Optimizer, making is simple to visualize hyperparameter sweeps in the Comet UI. - -### Configuring an Optimizer Sweep - -To configure the Comet Optimizer, you will have to create a JSON file with the information about the sweep. An example file has been provided in `utils/loggers/comet/optimizer_config.json` - -```shell -python utils/loggers/comet/hpo.py \ - --comet_optimizer_config "utils/loggers/comet/optimizer_config.json" -``` - -The `hpo.py` script accepts the same arguments as `train.py`. If you wish to pass additional arguments to your sweep simply add them after -the script. 
- -```shell -python utils/loggers/comet/hpo.py \ - --comet_optimizer_config "utils/loggers/comet/optimizer_config.json" \ - --save-period 1 \ - --bbox_interval 1 -``` - -### Running a Sweep in Parallel - -```shell -comet optimizer -j utils/loggers/comet/hpo.py \ - utils/loggers/comet/optimizer_config.json" -``` - -### Visualizing Results - -Comet provides a number of ways to visualize the results of your sweep. Take a look at a [project with a completed sweep here](https://www.comet.com/examples/comet-example-yolov5/view/PrlArHGuuhDTKC1UuBmTtOSXD/panels?utm_source=yolov5&utm_medium=partner&utm_campaign=partner_yolov5_2022&utm_content=github) - -hyperparameter-yolo diff --git a/spaces/xiaoxin1111/vits-uma-genshin-honkai/modules.py b/spaces/xiaoxin1111/vits-uma-genshin-honkai/modules.py deleted file mode 100644 index 56ea4145eddf19dd330a3a41ab0183efc1686d83..0000000000000000000000000000000000000000 --- a/spaces/xiaoxin1111/vits-uma-genshin-honkai/modules.py +++ /dev/null @@ -1,388 +0,0 @@ -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm - -import commons -from commons import init_weights, get_padding -from transforms import piecewise_rational_quadratic_transform - - -LRELU_SLOPE = 0.1 - - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - -class ConvReluNorm(nn.Module): - def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." 
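-        # n_layers Conv1d + LayerNorm blocks with ReLU/Dropout in between; a zero-initialised 1x1 projection is added back to the input as a residual in forward()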
- - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential( - nn.ReLU(), - nn.Dropout(p_dropout)) - for _ in range(n_layers-1): - self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dialted and Depth-Separable Convolution - """ - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size ** i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size, - groups=channels, dilation=dilation, padding=padding - )) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0): - super(WN, self).__init__() - assert(kernel_size % 2 == 1) - self.hidden_channels =hidden_channels - self.kernel_size = kernel_size, - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight') - - for i in range(n_layers): - dilation = dilation_rate ** i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size, - dilation=dilation, padding=padding) - in_layer = torch.nn.utils.weight_norm(in_layer, name='weight') - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight') - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) 
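-            # cond_layer projects g to 2*hidden_channels*n_layers channels; each layer slices out its own 2*hidden_channels chunk (g_l) in the loop below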
- - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply( - x_in, - g_l, - n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:,:self.hidden_channels,:] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:,self.hidden_channels:,:] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]))) - ]) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))) - ]) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))) - ]) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = 
torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels,1)) - self.logs = nn.Parameter(torch.zeros(channels,1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1,2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels]*2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1,2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - -class ConvFlow(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.) - self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] 
- - unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_derivatives = h[..., 2 * self.num_bins:] - - x1, logabsdet = piecewise_rational_quadratic_transform(x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails='linear', - tail_bound=self.tail_bound - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1,2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/xin/PatentSolver/functions.py b/spaces/xin/PatentSolver/functions.py deleted file mode 100644 index 89aa53ba7ad6d6682d5b2dc087b5c4b4971c6706..0000000000000000000000000000000000000000 --- a/spaces/xin/PatentSolver/functions.py +++ /dev/null @@ -1,648 +0,0 @@ - -# ~~~~~~~~~~~~~~~~~~~~~~~~ # -# ~~~ Import libraries ~~~ # -# ~~~~~~~~~~~~~~~~~~~~~~~~ # - -# Google Scraper Class # -from google_patent_scraper import scraper_class - -# Context Manager # -from contextlib import contextmanager - -# Writing/Reading -import csv -import numpy as np -import pandas as pd - -# clean patent # -import re - -# Multiprocessing # -import multiprocessing as mp - -# parse xml to text -from bs4 import BeautifulSoup as bs - -# zip folder to download -import shutil -import base64 -import streamlit as st -import os - -# extract problems -from App.bin import constants -from App.bin.InputHandler import InputHandler -from App.bin.PatentHandler import PatentHandler -from App.bin.CorpusProcessor import CorpusProcessor -import json -from pandas import json_normalize -import glob - - - -# ~~~~~~~~~~~~~~~~~~~ # -# ~~~~ Functions ~~~~ # -# ~~~~~~~~~~~~~~~~~~~ # - -def single_process_scraper(patent,path_to_data_file,data_column_order): - """Scrapes a single google patent using the google scraper class - - Function does not return any values, instead it writes the output - of the data_patent_details into a csv file specified in the path_to_data_file - parameter - - Inputs: - patent (str) : patent number including country prefix - lock (obj) : to prevent collisions, function uses a lock. You can pass whichever - lock you want to this parameter - path_to_data_file : absolute path to csv file to write data_patent_details to - data_column_order : name of columns in order they will be saved in csv file - - """ - # ~ Initialize scraper class ~ # - scraper=scraper_class() - - # ~ Scrape single patent ~ # - err, soup, url = scraper.request_single_patent(patent) - - # Checks if the scrape is successful. 
- # If successful -> parse text and deposit into csv file - # Else -> print error statement - - if err=='Success': - patent_parsed = scraper.get_scraped_data(soup,url,patent) - - # Save the parsed data_patent_details to a csv file - # using multiprocessing lock function - # to prevent collisions - with lock: - with open(path_to_data_file,'a',newline='') as ofile: - writer = csv.DictWriter(ofile, fieldnames=data_column_order) - writer.writerow(patent_parsed) - else: - print('Patent {0} has error code {1}'.format(patent,err)) - -# Allow pool to accept keyword arguments -@contextmanager -def poolcontext(*args, **kwargs): - pool = mp.Pool(*args, **kwargs) - yield pool - pool.terminate() - -def init(l): - """Creates lock object that is global, for use in sharing - across processes - """ - global lock - lock = l - - -def patentinput(patent_string): - """ - remove space among patent numbers from users' inputs - """ - patent_string = patent_string.replace(" ", "") #remove space that user tpyed - list_results = list(patent_string.split(",")) - return list_results - -def clean_patent(table): - """clean raw patent details from website - """ - - list_inventor_name = np.array([]) # create an empty list - - inventor_name = table['inventor_name'] - for line in inventor_name: - new_line = re.sub(r'"inventor_name":', '', line) - new_line = re.sub(r'\{|\}|\[|\]|\"', '', new_line) - # print(new_line) - list_inventor_name = np.append(list_inventor_name, new_line) - - new_table_inventor_name = pd.DataFrame(list_inventor_name, columns=['inventor_name']) - # new_table.to_csv('saved_data/cleaned_patent_details') - - ##clean assignee_name_orig feature - list_assignee_name = np.array([]) - assignee_name = table['assignee_name_orig'] - for line in assignee_name: - new_line = re.sub(r'"assignee_name":', '', line) ##### errors - new_line = re.sub(r'\{|\}|\[|\]|\"', '', new_line) - list_assignee_name = np.append(list_assignee_name, new_line) - - new_table_assignee_name = pd.DataFrame(list_assignee_name, columns=['assignee_name_orig']) - # print(new_table_assignee_name) - # - ##clean assignee_name_current feature - list_assignee_name_current = np.array([]) - assignee_name_current = table['assignee_name_current'] - for line in assignee_name_current: - new_line = re.sub(r'("assignee_name":)|(\\n\s\s)|(\{|\}|\[|\]|\")', '', line) - list_assignee_name_current = np.append(list_assignee_name_current, new_line) - - new_table_assignee_name_current = pd.DataFrame(list_assignee_name_current, columns=['assignee_name_current']) - # print(new_table_assignee_name_current) - # - ##clean forward_cite_no_family feature - list_forward_cite_no_family = np.array([]) - forward_cite_no_family = table['forward_cite_no_family'] - for line in forward_cite_no_family: - new_line = re.sub( - r'("patent_number":)|(\\n)|(\{|\}|\[|\]|\")|(priority_date)|(:)|(pub_date)|(\d{4}-\d{2}-\d{2})', '', line) - new_line = re.sub(r'\s\,\s', '', new_line) - list_forward_cite_no_family = np.append(list_forward_cite_no_family, new_line) - - new_table_forward_cite_no_family = pd.DataFrame(list_forward_cite_no_family, columns=['forward_cite_no_family']) - # print(new_table_forward_cite_no_family) - # - ##clean forward_cite_yes_family feature - list_forward_cite_yes_family = np.array([]) - forward_cite_yes_family = table['forward_cite_yes_family'] - for line in forward_cite_yes_family: - new_line = re.sub( - r'("patent_number":)|(\\n)|(\{|\}|\[|\]|\")|(priority_date)|(:)|(pub_date)|(\d{4}-\d{2}-\d{2})', '', line) - new_line = re.sub(r'\s\,\s', '', new_line) - 
list_forward_cite_yes_family = np.append(list_forward_cite_yes_family, new_line) - - new_table_forward_cite_yes_family = pd.DataFrame(list_forward_cite_yes_family, columns=['forward_cite_yes_family']) - # print(new_table_forward_cite_yes_family) - - ##clean backward_cite_no_family feature - list_backward_cite_no_family = np.array([]) - backward_cite_no_family = table['backward_cite_no_family'] - for line in backward_cite_no_family: - new_line = re.sub( - r'("patent_number":)|(\\n)|(\{|\}|\[|\]|\")|(priority_date)|(:)|(pub_date)|(\d{4}-\d{2}-\d{2})', '', line) - new_line = re.sub(r'\s\,\s', '', new_line) - list_backward_cite_no_family = np.append(list_backward_cite_no_family, new_line) - - new_table_backward_cite_no_family = pd.DataFrame(list_backward_cite_no_family, columns=['backward_cite_no_family']) - # print(new_table_backward_cite_no_family) - - ##clean backward_cite_yes_family feature - list_backward_cite_yes_family = np.array([]) - backward_cite_yes_family = table['backward_cite_yes_family'] - for line in backward_cite_yes_family: - new_line = re.sub( - r'("patent_number":)|(\\n)|(\{|\}|\[|\]|\")|(priority_date)|(:)|(pub_date)|(\d{4}-\d{2}-\d{2})', '', line) - new_line = re.sub(r'\s\,\s', '', new_line) - list_backward_cite_yes_family = np.append(list_backward_cite_yes_family, new_line) - - new_table_backward_cite_yes_family = pd.DataFrame(list_backward_cite_yes_family, - columns=['backward_cite_yes_family']) - # print(new_table_backward_cite_yes_family) - - ##rename url feature - list_patent_number = np.array([]) - patent_number = table['url'] - for line in patent_number: - list_patent_number = np.append(list_patent_number, line) - - new_table_patent_number = pd.DataFrame(list_patent_number, columns=['patent_number']) - # print(new_table_patent_number) - - ##rename patent feature - list_patent_link = np.array([]) - patent_link = table['patent'] - for line in patent_link: - list_patent_link = np.append(list_patent_link, line) - - new_table_patent_link = pd.DataFrame(list_patent_link, columns=['patent_link']) - # print(new_table_patent_link) - - ##rename abstract_text - list_abstract_text = np.array([]) - abstract_text = table['abstract_text'] - for line in abstract_text: - list_abstract_text = np.append(list_abstract_text, line) - - new_table_abstract_text = pd.DataFrame(abstract_text, columns=['abstract_text']) - # print(new_table_patent_link) - - ################################### - - ## concatenate all of sub dataframes to the final results - results = pd.concat([new_table_patent_number, table[['pub_date', 'priority_date', 'grant_date', 'filing_date']], - new_table_inventor_name, new_table_assignee_name, new_table_assignee_name_current, - new_table_forward_cite_no_family, new_table_forward_cite_yes_family, - new_table_backward_cite_yes_family, new_table_backward_cite_no_family, new_table_patent_link, - new_table_abstract_text], axis=1) - - return results - - -def count_patent(patent_table): - """count the patent features""" - - ##count the number of assignee_name feature - assignee_name = pd.DataFrame(patent_table['assignee_name_orig']) - count_assignee_name = assignee_name.applymap(lambda x: str.count(x, ',') + 1) - count_assignee_name = count_assignee_name.rename(columns={'assignee_name_orig': 'count_assignee_name'}) - # print(count_assignee_name) - - ##count the number of inventor_name feature - inventor_name = pd.DataFrame(patent_table['inventor_name']) - count_inventor_name = inventor_name.applymap(lambda x: str.count(x, ',') + 1) - count_inventor_name = 
count_inventor_name.rename(columns={'inventor_name': 'count_inventor_name'}) - # print(count_inventor_name) - - ##count the number of assignee_name_current feature - assignee_name_current = pd.DataFrame(patent_table['assignee_name_current']) - # print(assignee_name_current) - - ##replace NaN as int(0) - assignee_name_current_replace_NaN = lambda x: int(0) if pd.isnull(x) else str.count(x, ',') + 1 - count_assignee_name_current = assignee_name_current.applymap(assignee_name_current_replace_NaN) - count_assignee_name_current = count_assignee_name_current.rename( - columns={'assignee_name_current': 'count_assignee_name_current'}) - # print(count_assignee_name_current) - - ##count forward_cite_no_family - forward_cite_no_family = pd.DataFrame(patent_table['forward_cite_no_family']) - forward_cite_no_family_replace_NaN = lambda x: int(0) if pd.isnull(x) else str.count(x, ',') - count_forward_cite_no_family = forward_cite_no_family.applymap(forward_cite_no_family_replace_NaN) - count_forward_cite_no_family = count_forward_cite_no_family.rename( - columns={'forward_cite_no_family': 'count_forward_cite_no_family'}) - # print(count_forward_cite_no_family) - - ##count forward_cite_yes_family - forward_cite_yes_family = pd.DataFrame(patent_table['forward_cite_yes_family']) - forward_cite_yes_family_replace_NaN = lambda x: int(0) if pd.isnull(x) else str.count(x, ',') - count_forward_cite_yes_family = forward_cite_yes_family.applymap(forward_cite_yes_family_replace_NaN) - count_forward_cite_yes_family = count_forward_cite_yes_family.rename( - columns={'forward_cite_yes_family': 'count_forward_cite_yes_family'}) - # print(count_forward_cite_yes_family) - - ##count backward_cite_no_family - backward_cite_no_family = pd.DataFrame(patent_table['backward_cite_no_family']) - backward_cite_no_family_replace_NaN = lambda x: int(0) if pd.isnull(x) else str.count(x, ',') - count_backward_cite_no_family = backward_cite_no_family.applymap(backward_cite_no_family_replace_NaN) - count_backward_cite_no_family = count_backward_cite_no_family.rename( - columns={'backward_cite_no_family': 'count_backward_cite_no_family'}) - # print(count_backward_cite_no_family) - - ##count backward_cite_yes_family - backward_cite_yes_family = pd.DataFrame(patent_table['backward_cite_yes_family']) - backward_cite_yes_family_replace_NaN = lambda x: int(0) if pd.isnull(x) else str.count(x, ',') - count_backward_cite_yes_family = backward_cite_yes_family.applymap(backward_cite_yes_family_replace_NaN) - count_backward_cite_yes_family = count_backward_cite_yes_family.rename( - columns={'backward_cite_yes_family': 'count_backward_cite_yes_family'}) - # print(count_backward_cite_yes_family) - - ##concate dataframes to the final cleaned dataset - results = pd.concat([patent_table[['patent_number', 'pub_date', 'priority_date', - 'grant_date', 'filing_date', 'inventor_name']], count_inventor_name, - patent_table[['assignee_name_orig']], count_assignee_name, - patent_table[['assignee_name_current']], count_assignee_name_current, - patent_table[['forward_cite_no_family']], count_forward_cite_no_family, - patent_table[['forward_cite_yes_family']], count_forward_cite_yes_family, - patent_table[['backward_cite_no_family']], count_backward_cite_no_family, - patent_table[['backward_cite_yes_family']], count_backward_cite_yes_family, - patent_table[['patent_link', 'abstract_text']]], axis=1) - - return results - - -def XMLtoTEXT(patent_xml, saved_file_path): - # read file - tree = bs(patent_xml, "html.parser") - - # get title - - print('Title:') - title 
= tree.find_all("invention-title") - patent_title = title[0].text - print(patent_title) - - # get number - print("Patent number:") - patent_number = tree.find_all('doc-number') - patent_number = 'US' + patent_number[0].text - patent_number_new = re.sub(r'US0', 'US', patent_number) - print(patent_number_new) - - # get domain - print('Domain:') - domain = tree.find_all('classification-level') - patent_domain = domain[0].text - print(patent_domain) - - # get date of publication - print("Publication date:") - date = tree.find_all("date") - patent_pubdate = date[0].text - print(patent_pubdate) - - # get abstract - print('Abstract:') - ab = tree.find_all("abstract") - patent_abstract = ab[0].text - print(patent_abstract) - - # get claim - print('Claims:') - claims = tree.find_all("claim-text") - for claim in claims: - print(claim.text) - - # get description - print('Description:') - description = tree.find_all('description') - for des in description: - print(des.text) - - # save file to the place - with open(saved_file_path + patent_number_new + '.txt', 'w') as text_file: - text_file.write("Patent title" + '\n' + patent_title + - '\n' * 2 + "Patent number" + '\n' + - patent_number_new + '\n' * 2 + "Domain" + '\n' + patent_domain + '\n' * 2 + "Publication date" + '\n' + patent_pubdate - + '\n' * 2 + "Abstract" + '\n' + patent_abstract - + '\n' * 2 + 'Claims' + '\n') # save patent title, number, domain, publication data_patent_details, abstract - for claim in claims: - text_file.write(claim.text + '\n') - text_file.write('\n' + 'Description' + '\n') - for des in description: - text_file.write('\n' + des.text + '\n') - - return text_file - - -# to download patents (.txt) by zip file -def create_download_zip(zip_directory, zip_path, filename): - """ - zip_directory (str): path to directory you want to zip - zip_path (str): where you want to save zip file - filename (str): download filename for user who download this - """ - shutil.make_archive(zip_path+filename, 'zip', zip_directory) - - with open(zip_path+filename+'.zip', 'rb') as f: - st.download_button( - label = 'Download', - data = f, - file_name='patent.zip', - mime= 'zip' - ) - - - -# save input files (txt) into the folder -def save_uploadedfile(uploadedfile): - with open(os.path.join('Data/input/US_patents/',uploadedfile.name ), 'wb') as f: - f.write(uploadedfile.getbuffer()) - # return st.success('Saved File:{}'.format(uploadedfile.name)) - -# to extract problems from patents -def extractor (folder): - input_folder = constants.DATA_INPUT + folder - files_extension = "*." 
+ 'txt' - - iInput = InputHandler(input_folder, files_extension) - input_data = iInput.get_input() - - pretreat_data = PatentHandler(input_data) - clean_patent_data = pretreat_data.pretreat_data() - - process_data = CorpusProcessor(clean_patent_data, input_folder, files_extension) - processed_data = process_data.process_corpus() - - # convert json to dataframe - with open('Data/graphs/US_patents/graph.json') as json_data: - data = json.load(json_data) - - concept_df = json_normalize(data['problem_graph'], sep="_") - - concept_df = concept_df[['concept_sentence', 'concept_source', 'concept_type']] - problem_df = concept_df.rename(columns={"concept_sentence": "problem", 'concept_source': 'patent_number', - 'concept_type': 'type'}) - # choose problems - problem_new = problem_df.loc[problem_df['type'] == 'problem'] - - print(problem_new) - - new_table_test = problem_new['patent_number'].apply( - lambda x: re.search(r'(?<=US_patents\/).*?(?=.txt)', x).group()) - - # assign patent number to the corresponding feature - problem_results = problem_new.assign(patent_number=new_table_test) - - print(problem_results[['problem', 'patent_number']]) - problem_results = problem_results[['patent_number', 'problem']] - problem_results.to_csv('data_problem/problem.csv', - index=False) - -@st.cache -def convert_df(df): - # IMPORTANT: Cache the conversion to prevent computation on every rerun - return df.to_csv().encode('utf-8') - - -def extract_info_text(): - new = pd.DataFrame(columns=['title', 'patent_number', 'domain', 'publication_date']) - - # use glob to get all the txt files in the folder - path = 'Data/input/US_patents' - txt_files = glob.glob(os.path.join(path, "*.txt")) - for f in txt_files: - df = pd.read_csv(f, sep='\n', header=None, names=['content']) - print(df) - # extract patent information from text - new = new.append({'patent_number': df.iloc[3, 0], 'title': df.iloc[1, 0], - 'domain': df.iloc[5, 0], 'publication_date': df.iloc[7, 0]}, ignore_index=True) - - print(new) - - problem = pd.read_csv('data_problem/problem.csv') - final = pd.merge(problem, new, on='patent_number', how='left') - return final - -def input_domain(user_input_domain): - if user_input_domain == 'A (Human necessities)': - domain = 'A' - elif user_input_domain == 'B (Performing operations; transporting)': - domain = 'B' - elif user_input_domain == 'C (Chemistry; metallurgy)': - domain = 'C' - elif user_input_domain == 'D (Textiles; paper)': - domain = 'D' - elif user_input_domain == 'E (Fixed constructions)': - domain = 'E' - elif user_input_domain == 'F (Mechanical engineering; lighting; heating; weapons; blasting engines or pumps': - domain = 'F' - elif user_input_domain == 'G (Physics)': - domain = 'G' - elif user_input_domain == 'H (Electricity)': - domain = 'H' - return domain - -# the function for choosing month period that user choosed -def choosing_month_period(problem_corpus,start_year, end_year, start_month, end_month): - problem_corpus = problem_corpus[problem_corpus['publication_year'].between(start_year, end_year)] - if start_year != end_year: # 2014- 2015 #2014- 2016 - if start_month == end_month: # /01/ /01/ - if end_year == start_year + 1: # 2014/03/01 - 2015/03/01 #2014/01/01 - 2015/01/23 #2014/12/01 - 2015/12/23 - problem_corpus.loc[(problem_corpus['publication_year'] == start_year) & ( - problem_corpus['publication_month'].between(start_month, 12)), 'label'] = 'true' - problem_corpus.loc[(problem_corpus['publication_year'] == end_year) & ( - problem_corpus['publication_month'].between(1, end_month)), 
'label'] = 'true' - - elif end_year > start_year + 1: # 2014/01/01 - 2016/01/23 #2014/12/01 - 2016/12/23 # 2014/03/01 - 2016/03/01 - if start_month == 1: # 2014/01/01 - 2016/01/23 - problem_corpus.loc[( - problem_corpus['publication_year'] == end_year) & ( - problem_corpus['publication_month'].between( - end_month + 1, 12)), 'label'] = 'false' - problem_corpus.loc[(problem_corpus.label != 'false'), 'label'] = 'true' - elif start_month == 12: # 2014/12/01 - 2016/12/23 - problem_corpus.loc[( - problem_corpus['publication_year'] == start_year) & ( - problem_corpus['publication_month'].between( - 1, start_month - 1)), 'label'] = 'false' - problem_corpus.loc[(problem_corpus.label != 'false'), 'label'] = 'true' - else: # 2014/03/01 - 2016/03/01 - problem_corpus.loc[( - problem_corpus['publication_year'] == start_year) & ( - problem_corpus['publication_month'].between( - 1, start_month - 1)), 'label'] = 'false' - problem_corpus.loc[( - problem_corpus['publication_year'] == end_year) & ( - problem_corpus['publication_month'].between( - end_month + 1, 12)), 'label'] = 'false' - problem_corpus.loc[(problem_corpus.label != 'false'), 'label'] = 'true' - if start_month > end_month: # /03/ /01/ - if end_year == start_year + 1: # 2014/12/01 - 2015/03/01 #2014/02/01 - 2015/01/23 - problem_corpus.loc[(problem_corpus['publication_year'] == start_year) & ( - problem_corpus['publication_month'].between(start_month, 12)), 'label'] = 'true' - problem_corpus.loc[(problem_corpus['publication_year'] == end_year) & ( - problem_corpus['publication_month'].between(1, end_month)), 'label'] = 'true' - - elif end_year > start_year + 1: # 2014/12/01 - 2016/03/01 #2014/02/01 - 2016/01/23 - problem_corpus.loc[( - problem_corpus['publication_year'] == start_year) & ( - problem_corpus['publication_month'].between( - 1, start_month - 1)), 'label'] = 'false' - problem_corpus.loc[( - problem_corpus['publication_year'] == end_year) & ( - problem_corpus['publication_month'].between( - end_month + 1, 12)), 'label'] = 'false' - problem_corpus.loc[(problem_corpus.label != 'false'), 'label'] = 'true' - - if start_month < end_month: # /01/ /03/ - if end_year == start_year + 1: # 2014/01/01 - 2015/12/01 #2014/02/01 - 2015/11/23 - problem_corpus.loc[(problem_corpus['publication_year'] == start_year) & ( - problem_corpus['publication_month'].between(start_month, 12)), 'label'] = 'true' - problem_corpus.loc[(problem_corpus['publication_year'] == end_year) & ( - problem_corpus['publication_month'].between(1, end_month)), 'label'] = 'true' - - elif end_year > start_year + 1: # 2014/01/01 - 2016/12/01 #2014/02/01 - 2016/11/23 - if start_month == 1 & end_month == 12: # 2014/01/01 - 2016/12/01 - problem_corpus['label'] = 'true' - elif start_month == 1: # 2014/01/01 - 2016/03/01 #2014/01/01 - 2016/11/01 - problem_corpus.loc[(problem_corpus['publication_year'] == end_year) & (problem_corpus[ - 'publication_month'].between( - end_month + 1, 12)), 'label'] = 'false' - problem_corpus.loc[(problem_corpus.label != 'false'), 'label'] = 'true' - elif end_month == 12: # 2014/02/01 - 2016/12/01 #2015/02/01 - 2016/12/01 - problem_corpus.loc[(problem_corpus['publication_year'] == start_year) & (problem_corpus[ - 'publication_month'].between( - 1, start_month - 1)), 'label'] = 'false' - problem_corpus.loc[(problem_corpus.label != 'false'), 'label'] = 'true' - else: # 2014/02/01 - 2016/11/23 - problem_corpus.loc[(problem_corpus['publication_year'] == start_year) & (problem_corpus[ - 'publication_month'].between( - 1, start_month - 1)), 'label'] = 'false' - 
problem_corpus.loc[(problem_corpus['publication_year'] == end_year) & (problem_corpus[ - 'publication_month'].between( - end_month + 1, 12)), 'label'] = 'false' - problem_corpus.loc[(problem_corpus.label != 'false'), 'label'] = 'true' - - - - else: # start_year == end_year: 2012-2012 - problem_corpus = problem_corpus[problem_corpus['publication_year'] == start_year] - if start_month != end_month: # 2014/03/01 - 2014/05/01 2014/01/01 - 2014/05/01 2014/03/01 - 2014/12/01 - problem_corpus.loc[problem_corpus['publication_month'].between(start_month, end_month), 'label'] = 'true' - else: # 2014/03/01 - 2014/03/20 #2014/01/01 - 2014/01/20 - problem_corpus.loc[problem_corpus['publication_month'] == start_month, 'label'] = 'true' - - problem_corpus = problem_corpus.loc[problem_corpus['label'] == 'true'] - problem_corpus= problem_corpus[['patent_number', 'Domain', 'First part Contradiction', - 'Second part Contradiction', 'publication_date', 'publication_year', - 'publication_month', 'label']] - return problem_corpus - -# for IDM-Similar model (word2vec) -def avg_feature_vector(sentence, model, num_features, index2word_set): - words = sentence.split() - feature_vec = np.zeros((num_features, ), dtype='float32') - n_words = 0 - for word in words: - if word in index2word_set: - n_words += 1 - feature_vec = np.add(feature_vec, model[word]) - if (n_words > 0): - feature_vec = np.divide(feature_vec, n_words) - return feature_vec - -def creat_query_id(dataset): - # create query - question = [] - for each in dataset['problem']: - new = "What is the solution for the problem that " + each + "?" - question.append(new) - dataset['question'] = question - - # create id - data = dataset.rename(columns={'Unnamed: 0': 'id'}) - return data - -def csv_to_json (csv_file,json_file): - results = [] - with open(csv_file) as csv_file: - csvReader = csv.DictReader(csv_file) - for row in csvReader: - context = row['Context'] - qas = [] - content = {} - content['id'] = row['id'] - content['question'] = row['question'] - qas.append(content) - result = {} - result['context'] = context - result['qas'] = qas - results.append(result) - - # write data to a json file - with open(json_file, 'w') as jsonFile: - jsonFile.write(json.dumps(results, indent=4)) - - - -def QA_prediction(prediction_file, prediction_output, model): - # if __name__ == '__main__': - with open(prediction_file, 'r') as pre_file: - temp = json.loads(pre_file.read()) - predictions = model.predict(temp) - - with open(prediction_output, 'w') as json_file: - json_file.write(json.dumps(predictions, indent=4)) - print(predictions) - -def json_to_csv(input_file, output_file): - result = pd.read_json(input_file) - print(result.head()) - - result_answer = result.iloc[0][:] - print(result_answer.head()) - print(len(result_answer)) - - df = pd.DataFrame(index=np.arange(len(result_answer)), columns=['id', 'answer']) - print(df) - - for i in range(len(result_answer)): - line = result_answer[i] - print(line) - df.iloc[i, 0] = line['id'] - df.iloc[i, 1] = line['answer'] - - print(df.head()) - df.to_csv(output_file, index=False) diff --git a/spaces/ygangang/CodeFormer/CodeFormer/scripts/crop_align_face.py b/spaces/ygangang/CodeFormer/CodeFormer/scripts/crop_align_face.py deleted file mode 100644 index 31e66266ac0e5f818fa18b6409993151086bbc8b..0000000000000000000000000000000000000000 --- a/spaces/ygangang/CodeFormer/CodeFormer/scripts/crop_align_face.py +++ /dev/null @@ -1,192 +0,0 @@ -""" -brief: face alignment with FFHQ method (https://github.com/NVlabs/ffhq-dataset) -author: 
lzhbrian (https://lzhbrian.me) -link: https://gist.github.com/lzhbrian/bde87ab23b499dd02ba4f588258f57d5 -date: 2020.1.5 -note: code is heavily borrowed from - https://github.com/NVlabs/ffhq-dataset - http://dlib.net/face_landmark_detection.py.html -requirements: - conda install Pillow numpy scipy - conda install -c conda-forge dlib - # download face landmark model from: - # http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2 -""" - -import cv2 -import dlib -import glob -import numpy as np -import os -import PIL -import PIL.Image -import scipy -import scipy.ndimage -import sys -import argparse - -# download model from: http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2 -predictor = dlib.shape_predictor('weights/dlib/shape_predictor_68_face_landmarks-fbdc2cb8.dat') - - -def get_landmark(filepath, only_keep_largest=True): - """get landmark with dlib - :return: np.array shape=(68, 2) - """ - detector = dlib.get_frontal_face_detector() - - img = dlib.load_rgb_image(filepath) - dets = detector(img, 1) - - # Shangchen modified - print("Number of faces detected: {}".format(len(dets))) - if only_keep_largest: - print('Detect several faces and only keep the largest.') - face_areas = [] - for k, d in enumerate(dets): - face_area = (d.right() - d.left()) * (d.bottom() - d.top()) - face_areas.append(face_area) - - largest_idx = face_areas.index(max(face_areas)) - d = dets[largest_idx] - shape = predictor(img, d) - print("Part 0: {}, Part 1: {} ...".format( - shape.part(0), shape.part(1))) - else: - for k, d in enumerate(dets): - print("Detection {}: Left: {} Top: {} Right: {} Bottom: {}".format( - k, d.left(), d.top(), d.right(), d.bottom())) - # Get the landmarks/parts for the face in box d. - shape = predictor(img, d) - print("Part 0: {}, Part 1: {} ...".format( - shape.part(0), shape.part(1))) - - t = list(shape.parts()) - a = [] - for tt in t: - a.append([tt.x, tt.y]) - lm = np.array(a) - # lm is a shape=(68,2) np.array - return lm - -def align_face(filepath, out_path): - """ - :param filepath: str - :return: PIL Image - """ - try: - lm = get_landmark(filepath) - except: - print('No landmark ...') - return - - lm_chin = lm[0:17] # left-right - lm_eyebrow_left = lm[17:22] # left-right - lm_eyebrow_right = lm[22:27] # left-right - lm_nose = lm[27:31] # top-down - lm_nostrils = lm[31:36] # top-down - lm_eye_left = lm[36:42] # left-clockwise - lm_eye_right = lm[42:48] # left-clockwise - lm_mouth_outer = lm[48:60] # left-clockwise - lm_mouth_inner = lm[60:68] # left-clockwise - - # Calculate auxiliary vectors. - eye_left = np.mean(lm_eye_left, axis=0) - eye_right = np.mean(lm_eye_right, axis=0) - eye_avg = (eye_left + eye_right) * 0.5 - eye_to_eye = eye_right - eye_left - mouth_left = lm_mouth_outer[0] - mouth_right = lm_mouth_outer[6] - mouth_avg = (mouth_left + mouth_right) * 0.5 - eye_to_mouth = mouth_avg - eye_avg - - # Choose oriented crop rectangle. - x = eye_to_eye - np.flipud(eye_to_mouth) * [-1, 1] - x /= np.hypot(*x) - x *= max(np.hypot(*eye_to_eye) * 2.0, np.hypot(*eye_to_mouth) * 1.8) - y = np.flipud(x) * [-1, 1] - c = eye_avg + eye_to_mouth * 0.1 - quad = np.stack([c - x - y, c - x + y, c + x + y, c + x - y]) - qsize = np.hypot(*x) * 2 - - # read image - img = PIL.Image.open(filepath) - - output_size = 512 - transform_size = 4096 - enable_padding = False - - # Shrink. 
- shrink = int(np.floor(qsize / output_size * 0.5)) - if shrink > 1: - rsize = (int(np.rint(float(img.size[0]) / shrink)), - int(np.rint(float(img.size[1]) / shrink))) - img = img.resize(rsize, PIL.Image.ANTIALIAS) - quad /= shrink - qsize /= shrink - - # Crop. - border = max(int(np.rint(qsize * 0.1)), 3) - crop = (int(np.floor(min(quad[:, 0]))), int(np.floor(min(quad[:, 1]))), - int(np.ceil(max(quad[:, 0]))), int(np.ceil(max(quad[:, 1])))) - crop = (max(crop[0] - border, 0), max(crop[1] - border, 0), - min(crop[2] + border, - img.size[0]), min(crop[3] + border, img.size[1])) - if crop[2] - crop[0] < img.size[0] or crop[3] - crop[1] < img.size[1]: - img = img.crop(crop) - quad -= crop[0:2] - - # Pad. - pad = (int(np.floor(min(quad[:, 0]))), int(np.floor(min(quad[:, 1]))), - int(np.ceil(max(quad[:, 0]))), int(np.ceil(max(quad[:, 1])))) - pad = (max(-pad[0] + border, - 0), max(-pad[1] + border, - 0), max(pad[2] - img.size[0] + border, - 0), max(pad[3] - img.size[1] + border, 0)) - if enable_padding and max(pad) > border - 4: - pad = np.maximum(pad, int(np.rint(qsize * 0.3))) - img = np.pad( - np.float32(img), ((pad[1], pad[3]), (pad[0], pad[2]), (0, 0)), - 'reflect') - h, w, _ = img.shape - y, x, _ = np.ogrid[:h, :w, :1] - mask = np.maximum( - 1.0 - - np.minimum(np.float32(x) / pad[0], - np.float32(w - 1 - x) / pad[2]), 1.0 - - np.minimum(np.float32(y) / pad[1], - np.float32(h - 1 - y) / pad[3])) - blur = qsize * 0.02 - img += (scipy.ndimage.gaussian_filter(img, [blur, blur, 0]) - - img) * np.clip(mask * 3.0 + 1.0, 0.0, 1.0) - img += (np.median(img, axis=(0, 1)) - img) * np.clip(mask, 0.0, 1.0) - img = PIL.Image.fromarray( - np.uint8(np.clip(np.rint(img), 0, 255)), 'RGB') - quad += pad[:2] - - img = img.transform((transform_size, transform_size), PIL.Image.QUAD, - (quad + 0.5).flatten(), PIL.Image.BILINEAR) - - if output_size < transform_size: - img = img.resize((output_size, output_size), PIL.Image.ANTIALIAS) - - # Save aligned image. - print('saveing: ', out_path) - img.save(out_path) - - return img, np.max(quad[:, 0]) - np.min(quad[:, 0]) - - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--in_dir', type=str, default='./inputs/whole_imgs') - parser.add_argument('--out_dir', type=str, default='./inputs/cropped_faces') - args = parser.parse_args() - - img_list = sorted(glob.glob(f'{args.in_dir}/*.png')) - img_list = sorted(img_list) - - for in_path in img_list: - out_path = os.path.join(args.out_dir, in_path.split("/")[-1]) - out_path = out_path.replace('.jpg', '.png') - size_ = align_face(in_path, out_path) \ No newline at end of file diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/generation/flax_logits_process.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/generation/flax_logits_process.py deleted file mode 100644 index 5c30b92755a4261654a7b7c930d07c0c6859c4a5..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/generation/flax_logits_process.py +++ /dev/null @@ -1,457 +0,0 @@ -# coding=utf-8 -# Copyright 2021 The HuggingFace Inc. team -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import inspect - -import jax -import jax.lax as lax -import jax.numpy as jnp - -from ..utils import add_start_docstrings -from ..utils.logging import get_logger - - -logger = get_logger(__name__) - - -LOGITS_PROCESSOR_INPUTS_DOCSTRING = r""" - Args: - input_ids (`jnp.ndarray` of shape `(batch_size, sequence_length)`): - Indices of input sequence tokens in the vocabulary. - - Indices can be obtained using [`PreTrainedTokenizer`]. See [`PreTrainedTokenizer.encode`] and - [`PreTrainedTokenizer.__call__`] for details. - - [What are input IDs?](../glossary#input-ids) - scores (`jnp.ndarray` of shape `(batch_size, config.vocab_size)`): - Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam - search or log softmax for each vocabulary token when using beam search - kwargs (`Dict[str, Any]`, *optional*): - Additional logits processor specific kwargs. - - Return: - `jnp.ndarray` of shape `(batch_size, config.vocab_size)`: The processed prediction scores. - -""" - - -class FlaxLogitsProcessor: - """Abstract base class for all logit processors that can be applied during generation.""" - - @add_start_docstrings(LOGITS_PROCESSOR_INPUTS_DOCSTRING) - def __call__(self, input_ids: jnp.ndarray, scores: jnp.ndarray) -> jnp.ndarray: - """Flax method for processing logits.""" - raise NotImplementedError( - f"{self.__class__} is an abstract class. Only classes inheriting this class can be called." - ) - - -class FlaxLogitsWarper: - """Abstract base class for all logit warpers that can be applied during generation with multinomial sampling.""" - - @add_start_docstrings(LOGITS_PROCESSOR_INPUTS_DOCSTRING) - def __call__(self, input_ids: jnp.ndarray, scores: jnp.ndarray) -> jnp.ndarray: - """Flax method for warping logits.""" - raise NotImplementedError( - f"{self.__class__} is an abstract class. Only classes inheriting this class can be called." - ) - - -class FlaxLogitsProcessorList(list): - """ - This class can be used to create a list of [`FlaxLogitsProcessor`] or [`FlaxLogitsWarper`] to subsequently process - a `scores` input tensor. This class inherits from list and adds a specific *__call__* method to apply each - [`FlaxLogitsProcessor`] or [`FlaxLogitsWarper`] to the inputs. - """ - - @add_start_docstrings(LOGITS_PROCESSOR_INPUTS_DOCSTRING) - def __call__(self, input_ids: jnp.ndarray, scores: jnp.ndarray, cur_len: int, **kwargs) -> jnp.ndarray: - for processor in self: - function_args = inspect.signature(processor.__call__).parameters - if len(function_args) > 3: - if not all(arg in kwargs for arg in list(function_args.keys())[2:]): - raise ValueError( - f"Make sure that all the required parameters: {list(function_args.keys())} for " - f"{processor.__class__} are passed to the logits processor." - ) - scores = processor(input_ids, scores, cur_len, **kwargs) - else: - scores = processor(input_ids, scores, cur_len) - return scores - - -class FlaxTemperatureLogitsWarper(FlaxLogitsWarper): - r""" - [`FlaxLogitsWarper`] for temperature (exponential scaling output probability distribution). 
-
-    Args:
-        temperature (`float`):
-            The value used to modulate the logits distribution.
-    """
-
-    def __init__(self, temperature: float):
-        if not isinstance(temperature, float) or not (temperature > 0):
-            raise ValueError(f"`temperature` has to be a strictly positive float, but is {temperature}")
-
-        self.temperature = temperature
-
-    def __call__(self, input_ids: jnp.ndarray, scores: jnp.ndarray, cur_len: int) -> jnp.ndarray:
-        scores = scores / self.temperature
-        return scores
-
-
-class FlaxTopPLogitsWarper(FlaxLogitsWarper):
-    """
-    [`FlaxLogitsWarper`] that performs top-p, i.e. restricting to the smallest set of most probable tokens whose
-    cumulative probability is at least `top_p`.
-
-    Args:
-        top_p (`float`):
-            If set to < 1, only the smallest set of most probable tokens with probabilities that add up to `top_p` or
-            higher are kept for generation.
-        filter_value (`float`, *optional*, defaults to -inf):
-            All filtered values will be set to this float value.
-        min_tokens_to_keep (`int`, *optional*, defaults to 1):
-            Minimum number of tokens that cannot be filtered.
-    """
-
-    def __init__(self, top_p: float, filter_value: float = -float("Inf"), min_tokens_to_keep: int = 1):
-        if not isinstance(top_p, float) or (top_p < 0 or top_p > 1.0):
-            raise ValueError(f"`top_p` has to be a float > 0 and < 1, but is {top_p}")
-        if not isinstance(min_tokens_to_keep, int) or (min_tokens_to_keep < 1):
-            raise ValueError(f"`min_tokens_to_keep` has to be a positive integer, but is {min_tokens_to_keep}")
-
-        self.top_p = top_p
-        self.filter_value = filter_value
-        self.min_tokens_to_keep = min_tokens_to_keep
-
-    def __call__(self, input_ids: jnp.ndarray, scores: jnp.ndarray, cur_len: int) -> jnp.ndarray:
-        topk_scores, topk_indices = lax.top_k(scores, scores.shape[-1])
-
-        mask_scores = jnp.full_like(scores, self.filter_value)
-        cumulative_probs = jax.nn.softmax(topk_scores, axis=-1).cumsum(axis=-1)
-        score_mask = cumulative_probs < self.top_p
-
-        # include the token that is higher than top_p as well
-        score_mask = jnp.roll(score_mask, 1)
-        score_mask |= score_mask.at[:, 0].set(True)
-
-        # min tokens to keep
-        score_mask = score_mask.at[:, : self.min_tokens_to_keep].set(True)
-
-        topk_next_scores = jnp.where(score_mask, topk_scores, mask_scores)
-        next_scores = jax.lax.sort_key_val(topk_indices, topk_next_scores)[-1]
-
-        return next_scores
-
-
-class FlaxTopKLogitsWarper(FlaxLogitsWarper):
-    r"""
-    [`FlaxLogitsWarper`] that performs top-k, i.e. restricting to the k highest probability elements.
-
-    Args:
-        top_k (`int`):
-            The number of highest probability vocabulary tokens to keep for top-k-filtering.
-        filter_value (`float`, *optional*, defaults to -inf):
-            All filtered values will be set to this float value.
-        min_tokens_to_keep (`int`, *optional*, defaults to 1):
-            Minimum number of tokens that cannot be filtered.
- """ - - def __init__(self, top_k: int, filter_value: float = -float("Inf"), min_tokens_to_keep: int = 1): - if not isinstance(top_k, int) or top_k <= 0: - raise ValueError(f"`top_k` has to be a strictly positive integer, but is {top_k}") - - self.top_k = max(top_k, min_tokens_to_keep) - self.filter_value = filter_value - - def __call__(self, input_ids: jnp.ndarray, scores: jnp.ndarray, cur_len: int) -> jnp.ndarray: - batch_size, vocab_size = scores.shape - next_scores_flat = jnp.full(batch_size * vocab_size, self.filter_value) - - topk = min(self.top_k, scores.shape[-1]) # Safety check - topk_scores, topk_indices = lax.top_k(scores, topk) - shift = jnp.broadcast_to((jnp.arange(batch_size) * vocab_size)[:, None], (batch_size, topk)).flatten() - topk_scores_flat = topk_scores.flatten() - topk_indices_flat = topk_indices.flatten() + shift - - next_scores_flat = next_scores_flat.at[topk_indices_flat].set(topk_scores_flat) - next_scores = next_scores_flat.reshape(batch_size, vocab_size) - return next_scores - - -class FlaxForcedBOSTokenLogitsProcessor(FlaxLogitsProcessor): - r""" - [`FlaxLogitsProcessor`] that enforces the specified token as the first generated token. - - Args: - bos_token_id (`int`): - The id of the token to force as the first generated token. - """ - - def __init__(self, bos_token_id: int): - self.bos_token_id = bos_token_id - - def __call__(self, input_ids: jnp.ndarray, scores: jnp.ndarray, cur_len: int) -> jnp.ndarray: - new_scores = jnp.full(scores.shape, -float("inf")) - - apply_penalty = 1 - jnp.bool_(cur_len - 1) - - scores = jnp.where(apply_penalty, new_scores.at[:, self.bos_token_id].set(0), scores) - - return scores - - -class FlaxForcedEOSTokenLogitsProcessor(FlaxLogitsProcessor): - r""" - [`FlaxLogitsProcessor`] that enforces the specified token as the last generated token when `max_length` is reached. - - Args: - max_length (`int`): - The maximum length of the sequence to be generated. - eos_token_id (`int`): - The id of the token to force as the last generated token when `max_length` is reached. - """ - - def __init__(self, max_length: int, eos_token_id: int): - self.max_length = max_length - self.eos_token_id = eos_token_id - - def __call__(self, input_ids: jnp.ndarray, scores: jnp.ndarray, cur_len: int) -> jnp.ndarray: - new_scores = jnp.full(scores.shape, -float("inf")) - - apply_penalty = 1 - jnp.bool_(cur_len - self.max_length + 1) - - scores = jnp.where(apply_penalty, new_scores.at[:, self.eos_token_id].set(0), scores) - - return scores - - -class FlaxMinLengthLogitsProcessor(FlaxLogitsProcessor): - r""" - [`FlaxLogitsProcessor`] enforcing a min-length by setting EOS probability to 0. - - Args: - min_length (`int`): - The minimum length below which the score of `eos_token_id` is set to `-float("Inf")`. - eos_token_id (`int`): - The id of the *end-of-sequence* token. 
-    """
-
-    def __init__(self, min_length: int, eos_token_id: int):
-        if not isinstance(min_length, int) or min_length < 0:
-            raise ValueError(f"`min_length` has to be a positive integer, but is {min_length}")
-
-        if not isinstance(eos_token_id, int) or eos_token_id < 0:
-            raise ValueError(f"`eos_token_id` has to be a positive integer, but is {eos_token_id}")
-
-        self.min_length = min_length
-        self.eos_token_id = eos_token_id
-
-    def __call__(self, input_ids: jnp.ndarray, scores: jnp.ndarray, cur_len: int) -> jnp.ndarray:
-        # create boolean flag to decide if min length penalty should be applied
-        apply_penalty = 1 - jnp.clip(cur_len - self.min_length, 0, 1)
-
-        scores = jnp.where(apply_penalty, scores.at[:, self.eos_token_id].set(-float("inf")), scores)
-
-        return scores
-
-
-class FlaxSuppressTokensAtBeginLogitsProcessor(FlaxLogitsProcessor):
-    r"""
-    [`FlaxLogitsProcessor`] suppressing a list of tokens as soon as the `generate` function starts generating using
-    `begin_index` tokens. This should ensure that the tokens defined by `begin_suppress_tokens` are not sampled at the
-    beginning of the generation.
-
-    Args:
-        begin_suppress_tokens (`List[int]`):
-            Tokens to not sample.
-        begin_index (`int`):
-            Index where the tokens are suppressed.
-    """
-
-    def __init__(self, begin_suppress_tokens, begin_index):
-        self.begin_suppress_tokens = list(begin_suppress_tokens)
-        self.begin_index = begin_index
-
-    def __call__(self, input_ids, scores, cur_len: int):
-        apply_penalty = 1 - jnp.bool_(cur_len - self.begin_index)
-
-        scores = jnp.where(apply_penalty, scores.at[:, self.begin_suppress_tokens].set(-float("inf")), scores)
-
-        return scores
-
-
-class FlaxSuppressTokensLogitsProcessor(FlaxLogitsProcessor):
-    r"""
-    [`FlaxLogitsProcessor`] suppressing a list of tokens at each decoding step. The processor will set their log probs
-    to be `-inf` so they are not sampled.
-
-    Args:
-        suppress_tokens (`list`):
-            Tokens to not sample.
-    """
-
-    def __init__(self, suppress_tokens: list):
-        self.suppress_tokens = list(suppress_tokens)
-
-    def __call__(self, input_ids: jnp.ndarray, scores: jnp.ndarray, cur_len: int) -> jnp.ndarray:
-        scores = scores.at[..., self.suppress_tokens].set(-float("inf"))
-
-        return scores
-
-
-class FlaxForceTokensLogitsProcessor(FlaxLogitsProcessor):
-    r"""
-    [`FlaxLogitsProcessor`] that takes a list of pairs of integers which indicates a mapping from generation indices to
-    token indices that will be forced before sampling. The processor will set their log probs to 0 and all other tokens
-    to `-inf` so that they are sampled at their corresponding index.
-
-    Args:
-        force_token_map (`list`):
-            Map giving token ids and indices where they will be forced to be sampled.
-    """
-
-    def __init__(self, force_token_map):
-        force_token_map = dict(force_token_map)
-        # Converts the dictionary of format {index: token} containing the tokens to be forced to an array, where the
-        # index of the array corresponds to the index of the token to be forced, for XLA compatibility.
-        # Indexes without forced tokens will have a negative value.
- force_token_array = jnp.ones((max(force_token_map.keys()) + 1), dtype=jnp.int32) * -1 - for index, token in force_token_map.items(): - if token is not None: - force_token_array = force_token_array.at[index].set(token) - self.force_token_array = jnp.int32(force_token_array) - - def __call__(self, input_ids: jnp.ndarray, scores: jnp.ndarray, cur_len: int) -> jnp.ndarray: - def _force_token(generation_idx): - batch_size = scores.shape[0] - current_token = self.force_token_array[generation_idx] - - new_scores = jnp.ones_like(scores, dtype=scores.dtype) * -float("inf") - updates = jnp.zeros((batch_size, 1), dtype=scores.dtype) - new_scores = lax.dynamic_update_slice(new_scores, updates, (0, current_token)) - return new_scores - - scores = lax.cond( - cur_len >= self.force_token_array.shape[0], - # If the current length is geq than the length of force_token_array, the processor does nothing. - lambda: scores, - # Otherwise, it may force a certain token. - lambda: lax.cond( - self.force_token_array[cur_len] >= 0, - # Only valid (positive) tokens are forced - lambda: _force_token(cur_len), - # Otherwise, the processor does nothing. - lambda: scores, - ), - ) - return scores - - -class FlaxWhisperTimeStampLogitsProcessor(FlaxLogitsProcessor): - r""" - Whisper specific Processor. This processor can be used to force a list of tokens. The processor will set their log - probs to `inf` so that they are sampled at their corresponding index. - - Args: - generate_config (`GenerateConfig`): - The generate config used to generate the output. The following parameters are required: - eos_token_id (`int`, *optional*, defaults to 50257): - The id of the *end-of-sequence* token. - no_timestamps_token_id (`int`, *optional*, defaults to 50363): - The id of the `"<|notimestamps|>"` token. - max_initial_timestamp_index (`int`, *optional*, defaults to 1): - Used to set the maximum value of the initial timestamp. This is used to prevent the model from - predicting timestamps that are too far in the future. 
- """ - - def __init__(self, generate_config, model_config, decoder_input_length): - self.eos_token_id = generate_config.eos_token_id - self.no_timestamps_token_id = generate_config.no_timestamps_token_id - self.timestamp_begin = generate_config.no_timestamps_token_id + 1 - - self.begin_index = decoder_input_length + 1 - - if generate_config.is_multilingual: - # room for language token and task token - self.begin_index += 2 - if hasattr(generate_config, "max_initial_timestamp_index"): - self.max_initial_timestamp_index = generate_config.max_initial_timestamp_index - else: - self.max_initial_timestamp_index = model_config.vocab_size - if self.max_initial_timestamp_index is None: - self.max_initial_timestamp_index = model_config.vocab_size - - def __call__(self, input_ids, scores, cur_len): - # suppress <|notimestamps|> which is handled by without_timestamps - scores = scores.at[:, self.no_timestamps_token_id].set(-float("inf")) - - def handle_pairs(input_ids_k, scores_k): - last_was_timestamp = jnp.where((cur_len - self.begin_index) >= 1, True, False) - last_was_timestamp = jnp.where( - input_ids_k[cur_len - 1] >= self.timestamp_begin, - True and last_was_timestamp, - False, - ) - - penultimate_was_timestamp = jnp.where((cur_len - self.begin_index) < 2, True, False) - penultimate_was_timestamp = jnp.where( - input_ids_k[cur_len - 2] >= self.timestamp_begin, - True, - penultimate_was_timestamp, - ) - - return jnp.where( - last_was_timestamp, - jnp.where( - penultimate_was_timestamp > 0, - scores_k.at[self.timestamp_begin :].set(-float("inf")), - scores_k.at[: self.eos_token_id].set(-float("inf")), - ), - scores_k, - ) - - scores = jax.vmap(handle_pairs)(input_ids, scores) - - apply_max_initial_timestamp = jnp.where(cur_len == self.begin_index, True, False) - apply_max_initial_timestamp = jnp.where( - self.max_initial_timestamp_index is not None, - True and apply_max_initial_timestamp, - False, - ) - - last_allowed = self.timestamp_begin + self.max_initial_timestamp_index - - scores = jnp.where( - apply_max_initial_timestamp, - scores.at[:, last_allowed + 1 :].set(-float("inf")), - scores, - ) - - # if sum of probability over timestamps is above any other token, sample timestamp - logprobs = jax.nn.log_softmax(scores, axis=-1) - - def handle_cumulative_probs(logprobs_k, scores_k): - timestamp_logprob = jax.nn.logsumexp(logprobs_k[self.timestamp_begin :], axis=-1) - max_text_token_logprob = jnp.max(logprobs_k[: self.timestamp_begin]) - return jnp.where( - timestamp_logprob > max_text_token_logprob, - scores_k.at[: self.timestamp_begin].set(-float("inf")), - scores_k, - ) - - scores = jax.vmap(handle_cumulative_probs)(logprobs, scores) - - return scores diff --git a/spaces/yomo93/Tendon-search/README.md b/spaces/yomo93/Tendon-search/README.md deleted file mode 100644 index 45a0f44fa463764d86afd54002129a40d5f6414d..0000000000000000000000000000000000000000 --- a/spaces/yomo93/Tendon-search/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Tendon Search -emoji: 📚 -colorFrom: gray -colorTo: purple -sdk: gradio -sdk_version: 3.44.4 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ysharma/LangChain_GradioBot/README.md b/spaces/ysharma/LangChain_GradioBot/README.md deleted file mode 100644 index 0e978824f9dd4920b246147a57f432ccf38595ad..0000000000000000000000000000000000000000 --- a/spaces/ysharma/LangChain_GradioBot/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: 
LangChain GradioBot -emoji: 🏃 -colorFrom: yellow -colorTo: pink -sdk: gradio -sdk_version: 3.17.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/zeno-ml/translation-report/modeling.py b/spaces/zeno-ml/translation-report/modeling.py deleted file mode 100644 index 370aef8b95ce7ad63ad92dd611b67f9c2927324c..0000000000000000000000000000000000000000 --- a/spaces/zeno-ml/translation-report/modeling.py +++ /dev/null @@ -1,106 +0,0 @@ -"""Chatbots using API-based services.""" -from __future__ import annotations - -import os -import re -from dataclasses import dataclass - -import config - - -@dataclass(frozen=True) -class GptMtInstance: - """An instance from the GPT-MT dataset. - - Attributes: - data: The input sentence. - label: The output sentence. - doc_id: The document ID. - lang_pair: The language pair. - """ - - data: str - label: str - doc_id: str - lang_pair: str - - -def process_data( - input_dir: str, - lang_pairs: list[str], -) -> list[GptMtInstance]: - """Load data.""" - # Load the data - data: list[GptMtInstance] = [] - eval_dir = os.path.join(input_dir, "evaluation", "testset") - for lang_pair in lang_pairs: - src_lang, trg_lang = lang_pair[:2], lang_pair[2:] - src_file = os.path.join( - eval_dir, "wmt-testset", lang_pair, f"test.{src_lang}-{trg_lang}.{src_lang}" - ) - trg_file = os.path.join( - eval_dir, "wmt-testset", lang_pair, f"test.{src_lang}-{trg_lang}.{trg_lang}" - ) - doc_file = os.path.join( - eval_dir, - "wmt-testset-docids", - lang_pair, - f"test.{src_lang}-{trg_lang}.docids", - ) - with open(src_file, "r") as src_in, open(trg_file, "r") as trg_in, open( - doc_file, "r" - ) as doc_in: - for src_line, trg_line, doc_line in zip(src_in, trg_in, doc_in): - data.append( - GptMtInstance( - src_line.strip(), trg_line.strip(), doc_line.strip(), lang_pair - ) - ) - return data - - -def remove_leading_language(line: str) -> str: - """Remove a language at the beginning of the string. - - Some zero-shot models output the name of the language at the beginning of the - string. This is a manual post-processing function that removes the language name - (partly as an example of how you can do simple fixes to issues that come up during - analysis using Zeno). - - Args: - line: The line to process. - - Returns: - The line with the language removed. 
- """ - return re.sub( - r"^(English|Japanese|Chinese|Hausa|Icelandic|French|German|Russian|Ukranian): ", - "", - line, - ) - - -def process_output( - input_dir: str, - lang_pairs: list[str], - model_preset: str, -) -> list[str]: - """Load model outputs.""" - # Load the data - data: list[str] = [] - model_config = config.model_configs[model_preset] - model_path = model_config.path - system_dir = os.path.join(input_dir, "evaluation", "system-outputs", model_path) - for lang_pair in lang_pairs: - src_lang, trg_lang = lang_pair[:2], lang_pair[2:] - sys_file = os.path.join( - system_dir, lang_pair, f"test.{src_lang}-{trg_lang}.{trg_lang}" - ) - with open(sys_file, "r") as sys_in: - for sys_line in sys_in: - sys_line = sys_line.strip() - if model_config.post_processors is not None: - for postprocessor in model_config.post_processors: - sys_line = postprocessor(sys_line) - data.append(sys_line) - return data diff --git a/spaces/zhanghaohui/szu-gpt-academic/multi_language.py b/spaces/zhanghaohui/szu-gpt-academic/multi_language.py deleted file mode 100644 index 6c7259836e69d7bc5724a301883a9dbf1526589a..0000000000000000000000000000000000000000 --- a/spaces/zhanghaohui/szu-gpt-academic/multi_language.py +++ /dev/null @@ -1,510 +0,0 @@ -""" - Translate this project to other languages (experimental, please open an issue if there is any bug) - - - Usage: - 1. modify LANG - LANG = "English" - - 2. modify TransPrompt - TransPrompt = f"Replace each json value `#` with translated results in English, e.g., \"原始文本\":\"TranslatedText\". Keep Json format. Do not answer #." - - 3. Run `python multi_language.py`. - Note: You need to run it multiple times to increase translation coverage because GPT makes mistakes sometimes. - - 4. Find the translated program in `multi-language\English\*` - - P.S. - - - The translation mapping will be stored in `docs/translation_xxxx.json`, you can revised mistaken translation there. - - - If you would like to share your `docs/translation_xxxx.json`, (so that everyone can use the cached & revised translation mapping), please open a Pull Request - - - If there is any translation error in `docs/translation_xxxx.json`, please open a Pull Request - - - Welcome any Pull Request, regardless of language -""" - -import os -import json -import functools -import re -import pickle -import time - -CACHE_FOLDER = "gpt_log" -blacklist = ['multi-language', 'gpt_log', '.git', 'private_upload', 'multi_language.py'] - -# LANG = "TraditionalChinese" -# TransPrompt = f"Replace each json value `#` with translated results in Traditional Chinese, e.g., \"原始文本\":\"翻譯後文字\". Keep Json format. Do not answer #." - -# LANG = "Japanese" -# TransPrompt = f"Replace each json value `#` with translated results in Japanese, e.g., \"原始文本\":\"テキストの翻訳\". Keep Json format. Do not answer #." - -LANG = "English" -TransPrompt = f"Replace each json value `#` with translated results in English, e.g., \"原始文本\":\"TranslatedText\". Keep Json format. Do not answer #." - - -if not os.path.exists(CACHE_FOLDER): - os.makedirs(CACHE_FOLDER) - - -def lru_file_cache(maxsize=128, ttl=None, filename=None): - """ - Decorator that caches a function's return value after being called with given arguments. - It uses a Least Recently Used (LRU) cache strategy to limit the size of the cache. - maxsize: Maximum size of the cache. Defaults to 128. - ttl: Time-to-Live of the cache. If a value hasn't been accessed for `ttl` seconds, it will be evicted from the cache. - filename: Name of the file to store the cache in. 
If not supplied, the function name + ".cache" will be used. - """ - cache_path = os.path.join(CACHE_FOLDER, f"{filename}.cache") if filename is not None else None - - def decorator_function(func): - cache = {} - _cache_info = { - "hits": 0, - "misses": 0, - "maxsize": maxsize, - "currsize": 0, - "ttl": ttl, - "filename": cache_path, - } - - @functools.wraps(func) - def wrapper_function(*args, **kwargs): - key = str((args, frozenset(kwargs))) - if key in cache: - if _cache_info["ttl"] is None or (cache[key][1] + _cache_info["ttl"]) >= time.time(): - _cache_info["hits"] += 1 - print(f'Warning, reading cache, last read {(time.time()-cache[key][1])//60} minutes ago'); time.sleep(2) - cache[key][1] = time.time() - return cache[key][0] - else: - del cache[key] - - result = func(*args, **kwargs) - cache[key] = [result, time.time()] - _cache_info["misses"] += 1 - _cache_info["currsize"] += 1 - - if _cache_info["currsize"] > _cache_info["maxsize"]: - oldest_key = None - for k in cache: - if oldest_key is None: - oldest_key = k - elif cache[k][1] < cache[oldest_key][1]: - oldest_key = k - del cache[oldest_key] - _cache_info["currsize"] -= 1 - - if cache_path is not None: - with open(cache_path, "wb") as f: - pickle.dump(cache, f) - - return result - - def cache_info(): - return _cache_info - - wrapper_function.cache_info = cache_info - - if cache_path is not None and os.path.exists(cache_path): - with open(cache_path, "rb") as f: - cache = pickle.load(f) - _cache_info["currsize"] = len(cache) - - return wrapper_function - - return decorator_function - -def contains_chinese(string): - """ - Returns True if the given string contains Chinese characters, False otherwise. - """ - chinese_regex = re.compile(u'[\u4e00-\u9fff]+') - return chinese_regex.search(string) is not None - -def split_list(lst, n_each_req): - """ - Split a list into smaller lists, each with a maximum number of elements. 
- :param lst: the list to split - :param n_each_req: the maximum number of elements in each sub-list - :return: a list of sub-lists - """ - result = [] - for i in range(0, len(lst), n_each_req): - result.append(lst[i:i + n_each_req]) - return result - -def map_to_json(map, language): - dict_ = read_map_from_json(language) - dict_.update(map) - with open(f'docs/translate_{language.lower()}.json', 'w', encoding='utf8') as f: - json.dump(dict_, f, indent=4, ensure_ascii=False) - -def read_map_from_json(language): - if os.path.exists(f'docs/translate_{language.lower()}.json'): - with open(f'docs/translate_{language.lower()}.json', 'r', encoding='utf8') as f: - res = json.load(f) - res = {k:v for k, v in res.items() if v is not None and contains_chinese(k)} - return res - return {} - -def advanced_split(splitted_string, spliter, include_spliter=False): - splitted_string_tmp = [] - for string_ in splitted_string: - if spliter in string_: - splitted = string_.split(spliter) - for i, s in enumerate(splitted): - if include_spliter: - if i != len(splitted)-1: - splitted[i] += spliter - splitted[i] = splitted[i].strip() - for i in reversed(range(len(splitted))): - if not contains_chinese(splitted[i]): - splitted.pop(i) - splitted_string_tmp.extend(splitted) - else: - splitted_string_tmp.append(string_) - splitted_string = splitted_string_tmp - return splitted_string_tmp - -cached_translation = {} -cached_translation = read_map_from_json(language=LANG) - -def trans(word_to_translate, language, special=False): - if len(word_to_translate) == 0: return {} - from crazy_functions.crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency - from toolbox import get_conf, ChatBotWithCookies - proxies, WEB_PORT, LLM_MODEL, CONCURRENT_COUNT, AUTHENTICATION, CHATBOT_HEIGHT, LAYOUT, API_KEY = \ - get_conf('proxies', 'WEB_PORT', 'LLM_MODEL', 'CONCURRENT_COUNT', 'AUTHENTICATION', 'CHATBOT_HEIGHT', 'LAYOUT', 'API_KEY') - llm_kwargs = { - 'api_key': API_KEY, - 'llm_model': LLM_MODEL, - 'top_p':1.0, - 'max_length': None, - 'temperature':0.4, - } - import random - N_EACH_REQ = random.randint(16, 32) - word_to_translate_split = split_list(word_to_translate, N_EACH_REQ) - inputs_array = [str(s) for s in word_to_translate_split] - inputs_show_user_array = inputs_array - history_array = [[] for _ in inputs_array] - if special: # to English using CamelCase Naming Convention - sys_prompt_array = [f"Translate following names to English with CamelCase naming convention. Keep original format" for _ in inputs_array] - else: - sys_prompt_array = [f"Translate following sentences to {LANG}. E.g., You should translate sentences to the following format ['translation of sentence 1', 'translation of sentence 2']. Do NOT answer with Chinese!" 
for _ in inputs_array] - chatbot = ChatBotWithCookies(llm_kwargs) - gpt_say_generator = request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency( - inputs_array, - inputs_show_user_array, - llm_kwargs, - chatbot, - history_array, - sys_prompt_array, - ) - while True: - try: - gpt_say = next(gpt_say_generator) - print(gpt_say[1][0][1]) - except StopIteration as e: - result = e.value - break - translated_result = {} - for i, r in enumerate(result): - if i%2 == 1: - try: - res_before_trans = eval(result[i-1]) - res_after_trans = eval(result[i]) - if len(res_before_trans) != len(res_after_trans): - raise RuntimeError - for a,b in zip(res_before_trans, res_after_trans): - translated_result[a] = b - except: - # try: - # res_before_trans = word_to_translate_split[(i-1)//2] - # res_after_trans = [s for s in result[i].split("', '")] - # for a,b in zip(res_before_trans, res_after_trans): - # translated_result[a] = b - # except: - print('GPT answers with unexpected format, some words may not be translated, but you can try again later to increase translation coverage.') - res_before_trans = eval(result[i-1]) - for a in res_before_trans: - translated_result[a] = None - return translated_result - - -def trans_json(word_to_translate, language, special=False): - if len(word_to_translate) == 0: return {} - from crazy_functions.crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency - from toolbox import get_conf, ChatBotWithCookies - proxies, WEB_PORT, LLM_MODEL, CONCURRENT_COUNT, AUTHENTICATION, CHATBOT_HEIGHT, LAYOUT, API_KEY = \ - get_conf('proxies', 'WEB_PORT', 'LLM_MODEL', 'CONCURRENT_COUNT', 'AUTHENTICATION', 'CHATBOT_HEIGHT', 'LAYOUT', 'API_KEY') - llm_kwargs = { - 'api_key': API_KEY, - 'llm_model': LLM_MODEL, - 'top_p':1.0, - 'max_length': None, - 'temperature':0.1, - } - import random - N_EACH_REQ = random.randint(16, 32) - random.shuffle(word_to_translate) - word_to_translate_split = split_list(word_to_translate, N_EACH_REQ) - inputs_array = [{k:"#" for k in s} for s in word_to_translate_split] - inputs_array = [ json.dumps(i, ensure_ascii=False) for i in inputs_array] - - inputs_show_user_array = inputs_array - history_array = [[] for _ in inputs_array] - sys_prompt_array = [TransPrompt for _ in inputs_array] - chatbot = ChatBotWithCookies(llm_kwargs) - gpt_say_generator = request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency( - inputs_array, - inputs_show_user_array, - llm_kwargs, - chatbot, - history_array, - sys_prompt_array, - ) - while True: - try: - gpt_say = next(gpt_say_generator) - print(gpt_say[1][0][1]) - except StopIteration as e: - result = e.value - break - translated_result = {} - for i, r in enumerate(result): - if i%2 == 1: - try: - translated_result.update(json.loads(result[i])) - except: - print(result[i]) - print(result) - return translated_result - - -def step_1_core_key_translate(): - def extract_chinese_characters(file_path): - syntax = [] - with open(file_path, 'r', encoding='utf-8') as f: - content = f.read() - import ast - root = ast.parse(content) - for node in ast.walk(root): - if isinstance(node, ast.Name): - if contains_chinese(node.id): syntax.append(node.id) - if isinstance(node, ast.Import): - for n in node.names: - if contains_chinese(n.name): syntax.append(n.name) - elif isinstance(node, ast.ImportFrom): - for n in node.names: - if contains_chinese(n.name): syntax.append(n.name) - for k in node.module.split('.'): - if contains_chinese(k): syntax.append(k) - return syntax - - def 
extract_chinese_characters_from_directory(directory_path): - chinese_characters = [] - for root, dirs, files in os.walk(directory_path): - if any([b in root for b in blacklist]): - continue - for file in files: - if file.endswith('.py'): - file_path = os.path.join(root, file) - chinese_characters.extend(extract_chinese_characters(file_path)) - return chinese_characters - - directory_path = './' - chinese_core_names = extract_chinese_characters_from_directory(directory_path) - chinese_core_keys = [name for name in chinese_core_names] - chinese_core_keys_norepeat = [] - for d in chinese_core_keys: - if d not in chinese_core_keys_norepeat: chinese_core_keys_norepeat.append(d) - need_translate = [] - cached_translation = read_map_from_json(language=LANG) - cached_translation_keys = list(cached_translation.keys()) - for d in chinese_core_keys_norepeat: - if d not in cached_translation_keys: - need_translate.append(d) - - need_translate_mapping = trans(need_translate, language=LANG, special=True) - map_to_json(need_translate_mapping, language=LANG) - cached_translation = read_map_from_json(language=LANG) - cached_translation = dict(sorted(cached_translation.items(), key=lambda x: -len(x[0]))) - - chinese_core_keys_norepeat_mapping = {} - for k in chinese_core_keys_norepeat: - chinese_core_keys_norepeat_mapping.update({k:cached_translation[k]}) - chinese_core_keys_norepeat_mapping = dict(sorted(chinese_core_keys_norepeat_mapping.items(), key=lambda x: -len(x[0]))) - - # =============================================== - # copy - # =============================================== - def copy_source_code(): - - from toolbox import get_conf - import shutil - import os - try: shutil.rmtree(f'./multi-language/{LANG}/') - except: pass - os.makedirs(f'./multi-language', exist_ok=True) - backup_dir = f'./multi-language/{LANG}/' - shutil.copytree('./', backup_dir, ignore=lambda x, y: blacklist) - copy_source_code() - - # =============================================== - # primary key replace - # =============================================== - directory_path = f'./multi-language/{LANG}/' - for root, dirs, files in os.walk(directory_path): - for file in files: - if file.endswith('.py'): - file_path = os.path.join(root, file) - syntax = [] - # read again - with open(file_path, 'r', encoding='utf-8') as f: - content = f.read() - - for k, v in chinese_core_keys_norepeat_mapping.items(): - content = content.replace(k, v) - - with open(file_path, 'w', encoding='utf-8') as f: - f.write(content) - - -def step_2_core_key_translate(): - - # ================================================================================================= - # step2 - # ================================================================================================= - - def load_string(strings, string_input): - string_ = string_input.strip().strip(',').strip().strip('.').strip() - if string_.startswith('[Local Message]'): - string_ = string_.replace('[Local Message]', '') - string_ = string_.strip().strip(',').strip().strip('.').strip() - splitted_string = [string_] - # -------------------------------------- - splitted_string = advanced_split(splitted_string, spliter=",", include_spliter=False) - splitted_string = advanced_split(splitted_string, spliter="。", include_spliter=False) - splitted_string = advanced_split(splitted_string, spliter=")", include_spliter=False) - splitted_string = advanced_split(splitted_string, spliter="(", include_spliter=False) - splitted_string = advanced_split(splitted_string, spliter="(", 
include_spliter=False) - splitted_string = advanced_split(splitted_string, spliter=")", include_spliter=False) - splitted_string = advanced_split(splitted_string, spliter="<", include_spliter=False) - splitted_string = advanced_split(splitted_string, spliter=">", include_spliter=False) - splitted_string = advanced_split(splitted_string, spliter="[", include_spliter=False) - splitted_string = advanced_split(splitted_string, spliter="]", include_spliter=False) - splitted_string = advanced_split(splitted_string, spliter="【", include_spliter=False) - splitted_string = advanced_split(splitted_string, spliter="】", include_spliter=False) - splitted_string = advanced_split(splitted_string, spliter="?", include_spliter=False) - splitted_string = advanced_split(splitted_string, spliter=":", include_spliter=False) - splitted_string = advanced_split(splitted_string, spliter=":", include_spliter=False) - splitted_string = advanced_split(splitted_string, spliter=",", include_spliter=False) - splitted_string = advanced_split(splitted_string, spliter="#", include_spliter=False) - splitted_string = advanced_split(splitted_string, spliter="\n", include_spliter=False) - splitted_string = advanced_split(splitted_string, spliter=";", include_spliter=False) - splitted_string = advanced_split(splitted_string, spliter="`", include_spliter=False) - splitted_string = advanced_split(splitted_string, spliter=" ", include_spliter=False) - splitted_string = advanced_split(splitted_string, spliter="- ", include_spliter=False) - splitted_string = advanced_split(splitted_string, spliter="---", include_spliter=False) - - # -------------------------------------- - for j, s in enumerate(splitted_string): # .com - if '.com' in s: continue - if '\'' in s: continue - if '\"' in s: continue - strings.append([s,0]) - - - def get_strings(node): - strings = [] - # recursively traverse the AST - for child in ast.iter_child_nodes(node): - node = child - if isinstance(child, ast.Str): - if contains_chinese(child.s): - load_string(strings=strings, string_input=child.s) - elif isinstance(child, ast.AST): - strings.extend(get_strings(child)) - return strings - - string_literals = [] - directory_path = f'./multi-language/{LANG}/' - for root, dirs, files in os.walk(directory_path): - for file in files: - if file.endswith('.py'): - file_path = os.path.join(root, file) - syntax = [] - with open(file_path, 'r', encoding='utf-8') as f: - content = f.read() - # comments - comments_arr = [] - for code_sp in content.splitlines(): - comments = re.findall(r'#.*$', code_sp) - for comment in comments: - load_string(strings=comments_arr, string_input=comment) - string_literals.extend(comments_arr) - - # strings - import ast - tree = ast.parse(content) - res = get_strings(tree, ) - string_literals.extend(res) - - [print(s) for s in string_literals] - chinese_literal_names = [] - chinese_literal_names_norepeat = [] - for string, offset in string_literals: - chinese_literal_names.append(string) - chinese_literal_names_norepeat = [] - for d in chinese_literal_names: - if d not in chinese_literal_names_norepeat: chinese_literal_names_norepeat.append(d) - need_translate = [] - cached_translation = read_map_from_json(language=LANG) - cached_translation_keys = list(cached_translation.keys()) - for d in chinese_literal_names_norepeat: - if d not in cached_translation_keys: - need_translate.append(d) - - - up = trans_json(need_translate, language=LANG, special=False) - map_to_json(up, language=LANG) - cached_translation = read_map_from_json(language=LANG) - 
cached_translation = dict(sorted(cached_translation.items(), key=lambda x: -len(x[0]))) - - # =============================================== - # literal key replace - # =============================================== - directory_path = f'./multi-language/{LANG}/' - for root, dirs, files in os.walk(directory_path): - for file in files: - if file.endswith('.py'): - file_path = os.path.join(root, file) - syntax = [] - # read again - with open(file_path, 'r', encoding='utf-8') as f: - content = f.read() - - for k, v in cached_translation.items(): - if v is None: continue - if '"' in v: - v = v.replace('"', "`") - if '\'' in v: - v = v.replace('\'', "`") - content = content.replace(k, v) - - with open(file_path, 'w', encoding='utf-8') as f: - f.write(content) - - if file.strip('.py') in cached_translation: - file_new = cached_translation[file.strip('.py')] + '.py' - file_path_new = os.path.join(root, file_new) - with open(file_path_new, 'w', encoding='utf-8') as f: - f.write(content) - os.remove(file_path) - -step_1_core_key_translate() -step_2_core_key_translate()
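
The timestamp logits processor at the start of this section ends with a rule that is easy to miss inside the vmapped code: if the total probability mass assigned to timestamp tokens exceeds the probability of the most likely text token, every text token is suppressed so that a timestamp must be sampled next. The following is a minimal, self-contained sketch of that rule only (it is not part of any of the deleted files); the score values and the `timestamp_begin` index are made-up illustration inputs, and the function assumes a single 1-D score vector rather than a batch.

```python
import jax
import jax.numpy as jnp


def force_timestamp_if_dominant(scores: jnp.ndarray, timestamp_begin: int) -> jnp.ndarray:
    """Mask text tokens when timestamp tokens jointly outweigh the best text token."""
    logprobs = jax.nn.log_softmax(scores, axis=-1)
    # Total log-probability mass on all timestamp tokens.
    timestamp_logprob = jax.nn.logsumexp(logprobs[timestamp_begin:], axis=-1)
    # Log-probability of the single most likely text token.
    max_text_token_logprob = jnp.max(logprobs[:timestamp_begin])
    return jnp.where(
        timestamp_logprob > max_text_token_logprob,
        scores.at[:timestamp_begin].set(-float("inf")),  # force a timestamp
        scores,
    )


if __name__ == "__main__":
    # Hypothetical vocabulary of 5 tokens where indices 3 and 4 are "timestamps".
    scores = jnp.array([2.0, 1.0, 0.5, 3.0, 2.5])
    print(force_timestamp_if_dominant(scores, timestamp_begin=3))
```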
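
Both replacement passes in `multi_language.py` above sort the translation map by descending key length (`key=lambda x: -len(x[0])`) before substituting, so that a short Chinese key never clobbers part of a longer key that contains it. The sketch below is a hypothetical standalone illustration of that idea; the function names and the demo map are invented for this example and do not exist in the repository.

```python
import re


def contains_chinese(string: str) -> bool:
    # Same check as in multi_language.py: any CJK Unified Ideograph marks the string as Chinese.
    return re.search(u'[\u4e00-\u9fff]+', string) is not None


def apply_translation_map(content: str, translation_map: dict) -> str:
    # Replace longest keys first, mirroring the sort order used in step_1/step_2,
    # so "机器翻译" is substituted before its substring "翻译".
    for key, value in sorted(translation_map.items(), key=lambda kv: -len(kv[0])):
        if value is None or not contains_chinese(key):
            continue
        content = content.replace(key, value)
    return content


if __name__ == "__main__":
    demo_map = {"翻译": "translate", "机器翻译": "machine translation"}
    print(apply_translation_map("机器翻译与翻译", demo_map))
    # -> "machine translation与translate"
```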